[pgpool-general-jp: 1529] Re: About a case where automatic failover fails

Wataru Ikeda  ikeda.wataru @ gmail.com
Sun Apr  8 22:52:20 JST 2018


This is Ikeda.
It turned out this was not a problem with the shell script after all: the cause was that the postgres user was stuck waiting at the SSH host-key confirmation prompt when connecting to the local host on each server.
This came to light from the shell debug output on standard error, and after getting that confirmation out of the way I verified that the same scenario now promotes the standby correctly.
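
For reference, this is roughly what was needed; the commands and hosts below are an illustrative sketch, not a record of exactly what I ran. Running it once as the postgres user on each server lets the non-interactive ssh in failover.sh proceed without hanging at the host-key prompt.

=====
# Run as the postgres user on each server.
su - postgres
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Pre-register the host keys (localhost plus both nodes) so that
# "ssh -T postgres@<host> ..." no longer stops at
# "Are you sure you want to continue connecting (yes/no)?".
ssh-keyscan -H localhost 192.168.3.220 192.168.3.221 >> ~/.ssh/known_hosts
# Verify that a non-interactive connection now succeeds without any prompt:
ssh -T postgres@192.168.3.220 true
ssh -T postgres@192.168.3.221 true
=====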
Thank you very much for looking into this, Peng-san.

Sent from my iPhone

On Apr 8, 2018, at 21:01, Bo Peng <pengbo @ sraoss.co.jp> wrote:

> Ikeda-san,
> 
> This is Peng.
> 
> It looks like the execution of the failover command itself failed.
> Could you show me your failover.sh?
> 
> execute command: /etc/pgpool-II-95/failover.sh 1 1 192.168.3.220 /var/lib/pgsql/9.5/data
> 
> 
> Alternatively, could you configure the script to write its output to the log file /var/log/pgpool/failover.log as shown below,
> and then send me the result of a failover run?
> 
> =====
> log=/var/log/pgpool/failover.log
> 
> if [ $falling_node = $old_primary ]; then
>   if [ $UID -eq 0 ]
>   then
>       su postgres -c "ssh -T postgres@$new_primary $pghome/bin/pg_ctl promote -D $pgdata" >> $log 2>&1
>   else
>       ssh -T postgres@$new_primary $pghome/bin/pg_ctl promote -D $pgdata >> $log 2>&1
>   fi
>   exit 0;
> fi;
> =====
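> 
> For reference, here is a minimal sketch of how the variables used above are typically set from the arguments that pgpool-II passes via failover_command (this is an assumption based on the tutorial's layout, not your actual script; the pghome path is also assumed):
> 
> =====
> #!/bin/bash
> # Assuming failover_command = '/etc/pgpool-II-95/failover.sh %d %P %H %R',
> # the call seen in your log is: failover.sh 1 1 192.168.3.220 /var/lib/pgsql/9.5/data
> falling_node=$1   # %d: ID of the backend node that went down
> old_primary=$2    # %P: ID of the old primary node
> new_primary=$3    # %H: hostname of the new master node
> pgdata=$4         # %R: database cluster directory of the new master
> pghome=/usr/pgsql-9.5   # assumed PostgreSQL 9.5 installation path on CentOS 7
> =====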
> 
> Thank you in advance.
> 
> On Thu, 5 Apr 2018 12:00:07 +0900
> Wataru Ikeda <ikeda.wataru @ gmail.com> wrote:
> 
>> Hello, my name is Ikeda.
>> 
>> I am testing automatic failover with pgpool-II, following the tutorial below,
>> and I would like to ask about a case in which the failover fails.
>> http://www.pgpool.net/docs/pgpool-II-3.7.1/ja/html/example-cluster.html
>> 
>> [Environment]
>> A two-node setup, both virtual machines, running in Master-Slave mode with asynchronous streaming replication.
>> OS: CentOS7.4.1708
>> pgpool-II: 3.5.13
>> postgresql: 9.5.12
>> 
>> [Symptom]
>> Although the topology differs, the configuration follows the tutorial almost exactly. A and B are the two hosts.
>> 1. A: create the Master
>> 2. B: create the Slave (online recovery)
>> 3. Confirm that Master-Slave replication is working
>> 4. A: stop the Master
>> 5. B: automatic failover occurs; confirm that the old Slave is promoted to the new Master
>> 6. A: recreate the old Master as the new Slave (online recovery)
>> 7. Confirm that Master-Slave replication is working
>> 8. B: stop the Master
>> 9. A: automatic failover fails; the Slave is not promoted
>> 
>> Even after swapping the roles of the two hosts, automatic failover away from the old master fails in the same way. Are there any likely causes, or mistakes in my procedure? Thank you in advance.
>> The logs from steps 8 and 9 are shown below.
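>> 
>> For reference, this is how I check each backend's role after every step (a sketch; the pgpool-II port 9999 and the connecting user are assumptions based on the tutorial):
>> 
>> =====
>> # Ask pgpool-II which backend is currently primary/standby and whether it is up
>> psql -h 192.168.3.220 -p 9999 -U postgres -c "show pool_nodes"
>> =====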
>> 
>> Host B:
>> ==> /var/log/pg.log <==
>> Apr  5 10:46:27 dash postgres[17862]: [6-1] < 2018-04-05 10:46:27.383 JST
>>> LOG:  received immediate shutdown request
>> Apr  5 10:46:27 dash postgres[18105]: [6-1] < 2018-04-05 10:46:27.386 JST
>>> WARNING:  terminating connection because of crash of another server process
>> Apr  5 10:46:27 dash postgres[18105]: [6-2] < 2018-04-05 10:46:27.386 JST
>>> DETAIL:  The postmaster has commanded this server process to roll back the
>> current transaction and exit, because another server process exited
>> abnormally and possibly corrupted shared memory.
>> Apr  5 10:46:27 dash postgres[18105]: [6-3] < 2018-04-05 10:46:27.386 JST
>>> HINT:  In a moment you should be able to reconnect to the database and
>> repeat your command.
>> Apr  5 10:46:27 dash postgres[17949]: [6-1] < 2018-04-05 10:46:27.425 JST
>>> WARNING:  terminating connection because of crash of another server process
>> Apr  5 10:46:27 dash postgres[17949]: [6-2] < 2018-04-05 10:46:27.426 JST
>>> DETAIL:  The postmaster has commanded this server process to roll back the
>> current transaction and exit, because another server process exited
>> abnormally and possibly corrupted shared memory.
>> Apr  5 10:46:27 dash postgres[17949]: [6-3] < 2018-04-05 10:46:27.426 JST
>>> HINT:  In a moment you should be able to reconnect to the database and
>> repeat your command.
>> Apr  5 10:46:27 dash postgres[17862]: [7-1] < 2018-04-05 10:46:27.454 JST
>>> LOG:  archiver process (PID 17950) exited with exit code 1
>> 
>> ==> /var/log/pgpool.log <==
>> Apr  5 10:46:29 dash pgpool[17758]: [364-1] 2018-04-05 10:46:29: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:29 dash pgpool[17758]: [365-1] 2018-04-05 10:46:29: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:29 dash pgpool[17758]: [365-2] 2018-04-05 10:46:29: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:29 dash pgpool[17758]: [366-1] 2018-04-05 10:46:29: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:29 dash pgpool[17758]: [367-1] 2018-04-05 10:46:29: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:29 dash pgpool[17758]: [367-2] 2018-04-05 10:46:29: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:30 dash pgpool[17758]: [368-1] 2018-04-05 10:46:30: pid 17758:
>> LOG:  health checking retry count 1
>> Apr  5 10:46:30 dash pgpool[17758]: [369-1] 2018-04-05 10:46:30: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:30 dash pgpool[17758]: [370-1] 2018-04-05 10:46:30: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:30 dash pgpool[17758]: [370-2] 2018-04-05 10:46:30: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:31 dash pgpool[17758]: [371-1] 2018-04-05 10:46:31: pid 17758:
>> LOG:  health checking retry count 2
>> Apr  5 10:46:31 dash pgpool[17758]: [372-1] 2018-04-05 10:46:31: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:31 dash pgpool[17758]: [373-1] 2018-04-05 10:46:31: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:31 dash pgpool[17758]: [373-2] 2018-04-05 10:46:31: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:32 dash pgpool[17758]: [374-1] 2018-04-05 10:46:32: pid 17758:
>> LOG:  health checking retry count 3
>> Apr  5 10:46:32 dash pgpool[17758]: [375-1] 2018-04-05 10:46:32: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:32 dash pgpool[17758]: [376-1] 2018-04-05 10:46:32: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:32 dash pgpool[17758]: [376-2] 2018-04-05 10:46:32: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:33 dash pgpool[17758]: [377-1] 2018-04-05 10:46:33: pid 17758:
>> LOG:  health checking retry count 4
>> Apr  5 10:46:33 dash pgpool[17758]: [378-1] 2018-04-05 10:46:33: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:33 dash pgpool[17758]: [379-1] 2018-04-05 10:46:33: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:33 dash pgpool[17758]: [379-2] 2018-04-05 10:46:33: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:33 dash pgpool[18111]: [299-1] 2018-04-05 10:46:33: pid 18111:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:33 dash pgpool[18111]: [300-1] 2018-04-05 10:46:33: pid 18111:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:33 dash pgpool[18111]: [300-2] 2018-04-05 10:46:33: pid 18111:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:34 dash pgpool[17758]: [380-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  health checking retry count 5
>> Apr  5 10:46:34 dash pgpool[17758]: [381-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:34 dash pgpool[17758]: [382-1] 2018-04-05 10:46:34: pid 17758:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:34 dash pgpool[17758]: [382-2] 2018-04-05 10:46:34: pid 17758:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:34 dash pgpool[17758]: [383-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  setting backend node 1 status to NODE DOWN
>> Apr  5 10:46:34 dash pgpool[17758]: [384-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  received degenerate backend request for node_id: 1 from pid [17758]
>> Apr  5 10:46:34 dash pgpool[17759]: [74-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  new IPC connection received
>> Apr  5 10:46:34 dash pgpool[17759]: [75-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  failover request from IPC socket is forwarded to master watchdog node
>> "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:34 dash pgpool[17759]: [75-2] 2018-04-05 10:46:34: pid 17759:
>> DETAIL:  waiting for the reply from master node...
>> Apr  5 10:46:34 dash pgpool[17759]: [76-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  new IPC connection received
>> Apr  5 10:46:34 dash pgpool[17759]: [77-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  read from socket failed, remote end closed the connection
>> Apr  5 10:46:34 dash pgpool[17759]: [78-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  processing failover command lock request from IPC socket
>> Apr  5 10:46:34 dash pgpool[17759]: [79-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  failover command lock request from IPC socket is forwarded to master
>> watchdog node "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:34 dash pgpool[17759]: [79-2] 2018-04-05 10:46:34: pid 17759:
>> DETAIL:  waiting for the reply from master node...
>> Apr  5 10:46:34 dash pgpool[17758]: [385-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  starting degeneration. shutdown host 192.168.3.221(5432)
>> Apr  5 10:46:34 dash pgpool[17758]: [386-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  Restart all children
>> Apr  5 10:46:34 dash pgpool[17759]: [80-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  new IPC connection received
>> Apr  5 10:46:34 dash pgpool[17759]: [81-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  processing failover command lock request from IPC socket
>> Apr  5 10:46:34 dash pgpool[17759]: [82-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  failover command lock request from IPC socket is forwarded to master
>> watchdog node "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:34 dash pgpool[17759]: [82-2] 2018-04-05 10:46:34: pid 17759:
>> DETAIL:  waiting for the reply from master node...
>> Apr  5 10:46:34 dash pgpool[17759]: [83-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  new IPC connection received
>> Apr  5 10:46:34 dash pgpool[17759]: [84-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  processing failover command lock request from IPC socket
>> Apr  5 10:46:34 dash pgpool[17759]: [85-1] 2018-04-05 10:46:34: pid 17759:
>> LOG:  failover command lock request from IPC socket is forwarded to master
>> watchdog node "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:34 dash pgpool[17759]: [85-2] 2018-04-05 10:46:34: pid 17759:
>> DETAIL:  waiting for the reply from master node...
>> Apr  5 10:46:34 dash pgpool[17758]: [387-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  find_primary_node_repeatedly: waiting for finding a primary node
>> Apr  5 10:46:34 dash pgpool[17758]: [388-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:34 dash pgpool[17758]: [389-1] 2018-04-05 10:46:34: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:35 dash pgpool[17758]: [390-1] 2018-04-05 10:46:35: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:35 dash pgpool[17758]: [391-1] 2018-04-05 10:46:35: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:36 dash pgpool[17758]: [392-1] 2018-04-05 10:46:36: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:36 dash pgpool[17758]: [393-1] 2018-04-05 10:46:36: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:37 dash pgpool[17758]: [394-1] 2018-04-05 10:46:37: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:38 dash pgpool[17758]: [395-1] 2018-04-05 10:46:38: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:39 dash pgpool[17758]: [396-1] 2018-04-05 10:46:39: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:39 dash pgpool[17758]: [397-1] 2018-04-05 10:46:39: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:40 dash pgpool[17758]: [398-1] 2018-04-05 10:46:40: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:40 dash pgpool[17758]: [399-1] 2018-04-05 10:46:40: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:41 dash pgpool[17758]: [400-1] 2018-04-05 10:46:41: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:41 dash pgpool[17758]: [401-1] 2018-04-05 10:46:41: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:42 dash pgpool[17758]: [402-1] 2018-04-05 10:46:42: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:42 dash pgpool[17758]: [403-1] 2018-04-05 10:46:42: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:43 dash pgpool[17758]: [404-1] 2018-04-05 10:46:43: pid 17758:
>> LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:43 dash pgpool[17758]: [405-1] 2018-04-05 10:46:43: pid 17758:
>> LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:43 dash pgpool[18111]: [301-1] 2018-04-05 10:46:43: pid 18111:
>> ERROR:  Failed to check replication time lag
>> Apr  5 10:46:43 dash pgpool[18111]: [301-2] 2018-04-05 10:46:43: pid 18111:
>> DETAIL:  No persistent db connection for the node 1
>> Apr  5 10:46:43 dash pgpool[18111]: [301-3] 2018-04-05 10:46:43: pid 18111:
>> HINT:  check sr_check_user and sr_check_password
>> Apr  5 10:46:43 dash pgpool[18111]: [301-4] 2018-04-05 10:46:43: pid 18111:
>> CONTEXT:  while checking replication time lag
>> Apr  5 10:46:43 dash pgpool[18111]: [302-1] 2018-04-05 10:46:43: pid 18111:
>> LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:43 dash pgpool[18111]: [303-1] 2018-04-05 10:46:43: pid 18111:
>> ERROR:  failed to make persistent db connection
>> Apr  5 10:46:43 dash pgpool[18111]: [303-2] 2018-04-05 10:46:43: pid 18111:
>> DETAIL:  connection to host:"192.168.3.221:5432" failed
>> ...
>> 
>> Host A:
>> ==> /var/log/pg.log <==
>> Apr  5 10:46:33 violet postgres[17436]: [5-1] < 2018-04-05 10:46:33.254 JST
>>> FATAL:  could not connect to the primary server: could not connect to
>> server: Connection refused
>> Apr  5 10:46:33 violet postgres[17436]: [5-2] #011#011Is the server running
>> on host "dash.incredibles.family" (192.168.3.221) and accepting
>> Apr  5 10:46:33 violet postgres[17436]: [5-3] #011#011TCP/IP connections on
>> port 5432?
>> Apr  5 10:46:33 violet postgres[17436]: [5-4]
>> 
>> ==> /var/log/pgpool.log <==
>> Apr  5 10:46:33 violet pgpool[16810]: [378-1] 2018-04-05 10:46:33: pid
>> 16810: LOG:  health checking retry count 4
>> Apr  5 10:46:33 violet pgpool[16810]: [379-1] 2018-04-05 10:46:33: pid
>> 16810: LOG:  failed to connect to PostgreSQL server on "192.168.3.221:5432",
>> getsockopt() detected error "Connection refused"
>> Apr  5 10:46:33 violet pgpool[16810]: [380-1] 2018-04-05 10:46:33: pid
>> 16810: ERROR:  failed to make persistent db connection
>> Apr  5 10:46:33 violet pgpool[16810]: [380-2] 2018-04-05 10:46:33: pid
>> 16810: DETAIL:  connection to host:"192.168.3.221:5432" failed
>> Apr  5 10:46:34 violet pgpool[16811]: [145-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  watchdog received the failover command from remote pgpool-II
>> node "Linux_dash.incredibles.family_9999"
>> Apr  5 10:46:34 violet pgpool[16811]: [146-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  watchdog received failover command [DEGENERATE_BACKEND_REQUEST]
>> Apr  5 10:46:34 violet pgpool[16811]: [147-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  forwarding the failover request [DEGENERATE_BACKEND_REQUEST]
>> to all alive nodes
>> Apr  5 10:46:34 violet pgpool[16811]: [147-2] 2018-04-05 10:46:34: pid
>> 16811: DETAIL:  watchdog cluster currently has 1 remote connected nodes
>> Apr  5 10:46:34 violet pgpool[16811]: [148-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  failover request [DEGENERATE_BACKEND_REQUEST] is sent to 0
>> nodes
>> Apr  5 10:46:34 violet pgpool[16811]: [149-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  received degenerate backend request for node_id: 1 from pid
>> [16811]
>> Apr  5 10:46:34 violet pgpool[16810]: [381-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  Pgpool-II parent process has received failover request
>> Apr  5 10:46:34 violet pgpool[16811]: [150-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  new IPC connection received
>> Apr  5 10:46:34 violet pgpool[16811]: [151-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  remote pgpool-II node "Linux_dash.incredibles.family_9999" is
>> requesting to become a lock holder for failover ID: 0
>> Apr  5 10:46:34 violet pgpool[16811]: [152-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  request to become a lock holder is denied to remote pgpool-II
>> node "Linux_dash.incredibles.family_9999"
>> Apr  5 10:46:34 violet pgpool[16811]: [152-2] 2018-04-05 10:46:34: pid
>> 16811: DETAIL:  only master/coordinator can become a lock holder
>> Apr  5 10:46:34 violet pgpool[16811]: [153-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  processing failover command lock request from IPC socket
>> Apr  5 10:46:34 violet pgpool[16811]: [154-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  local pgpool-II node "Linux_violet.incredibles.family_9999" is
>> requesting to become a lock holder for failover ID: 77
>> Apr  5 10:46:34 violet pgpool[16811]: [155-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  local pgpool-II node "Linux_violet.incredibles.family_9999" is
>> the lock holder
>> Apr  5 10:46:34 violet pgpool[16810]: [382-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  starting degeneration. shutdown host 192.168.3.221(5432)
>> Apr  5 10:46:34 violet pgpool[16810]: [383-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  Restart all children
>> Apr  5 10:46:34 violet pgpool[16810]: [384-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  execute command: /etc/pgpool-II-95/failover.sh 1 1
>> 192.168.3.220 /var/lib/pgsql/9.5/data
>> Apr  5 10:46:34 violet pgpool[16811]: [156-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  remote pgpool-II node "Linux_dash.incredibles.family_9999" is
>> checking the status of [FAILOVER] lock for failover ID 0
>> Apr  5 10:46:34 violet pgpool[16811]: [157-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  FAILOVER lock is currently LOCKED
>> Apr  5 10:46:34 violet pgpool[16811]: [157-2] 2018-04-05 10:46:34: pid
>> 16811: DETAIL:  request was from remote pgpool-II node
>> "Linux_dash.incredibles.family_9999" and lock holder is local pgpool-II
>> node "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:34 violet pgpool[16811]: [158-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  new IPC connection received
>> Apr  5 10:46:34 violet pgpool[16811]: [159-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  processing failover command lock request from IPC socket
>> Apr  5 10:46:34 violet pgpool[16811]: [160-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  local pgpool-II node "Linux_violet.incredibles.family_9999" is
>> requesting to release [FAILOVER] lock for failover ID 77
>> Apr  5 10:46:34 violet pgpool[16811]: [161-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  local pgpool-II node "Linux_violet.incredibles.family_9999"
>> has released the [FAILOVER] lock for failover ID 77
>> Apr  5 10:46:34 violet pgpool[16810]: [385-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  find_primary_node_repeatedly: waiting for finding a primary
>> node
>> Apr  5 10:46:34 violet pgpool[16810]: [386-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:34 violet pgpool[16810]: [387-1] 2018-04-05 10:46:34: pid
>> 16810: LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:34 violet pgpool[16811]: [162-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  remote pgpool-II node "Linux_dash.incredibles.family_9999" is
>> checking the status of [FAILOVER] lock for failover ID 0
>> Apr  5 10:46:34 violet pgpool[16811]: [163-1] 2018-04-05 10:46:34: pid
>> 16811: LOG:  FAILOVER lock is currently FREE
>> Apr  5 10:46:34 violet pgpool[16811]: [163-2] 2018-04-05 10:46:34: pid
>> 16811: DETAIL:  request was from remote pgpool-II node
>> "Linux_dash.incredibles.family_9999" and lock holder is local pgpool-II
>> node "Linux_violet.incredibles.family_9999"
>> Apr  5 10:46:35 violet pgpool[16810]: [388-1] 2018-04-05 10:46:35: pid
>> 16810: LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:35 violet pgpool[16810]: [389-1] 2018-04-05 10:46:35: pid
>> 16810: LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:36 violet pgpool[16810]: [390-1] 2018-04-05 10:46:36: pid
>> 16810: LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:36 violet pgpool[16810]: [391-1] 2018-04-05 10:46:36: pid
>> 16810: LOG:  find_primary_node: checking backend no 1
>> Apr  5 10:46:37 violet pgpool[16810]: [392-1] 2018-04-05 10:46:37: pid
>> 16810: LOG:  find_primary_node: checking backend no 0
>> Apr  5 10:46:37 violet pgpool[16810]: [393-1] 2018-04-05 10:46:37: pid
>> 16810: LOG:  find_primary_node: checking backend no 1
>> ...
> 
> 
> -- 
> Bo Peng <pengbo @ sraoss.co.jp>
> SRA OSS, Inc. Japan
> 

