View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000817 | Pgpool-II | Bug | public | 2023-10-27 16:47 | 2023-11-14 15:21 |
| Reporter | jsoler | Assigned To | Muhammad Usama | | |
| Priority | normal | Severity | major | Reproducibility | sometimes |
| Status | assigned | Resolution | open | | |
| Platform | x86_64 | OS | linux | OS Version | rhel8 |
| Product Version | 4.4.4 | | | | |
| Summary | 0000817: watchdog chooses as leader a node with lower wd_priority | | | | |
| Description | Hi. This issue is related to #0000814 and #0000815, but we have since upgraded Pgpool-II from 4.4.2 to 4.4.4. We have set up a 3-node watchdog cluster (paqcxast01, paqcxast02 and paqcxast04), where paqcxast01 is placed in one datacenter and the other two nodes are located in a different DC. Two of the nodes (paqcxast01 and paqcxast02) also run database services. Due to network restrictions, we define a different delegate_ip on each database/pgpool server (paqcxast01 and paqcxast02) and leave delegate_ip empty on the third pgpool node (paqcxast04), because we only want two of the nodes to be able to become leader and route application traffic, leaving the third one to participate only in quorum elections. We use wd_priority to enforce that only the database nodes (paqcxast01 and paqcxast02) can become watchdog leader; that is, we set the lowest priority on the third node (paqcxast04) and higher priorities on paqcxast01 and paqcxast02. | | | | |
| Steps To Reproduce | NODE1 (paqcxast01): PGPOOL LEADER, POSTGRES PRIMARY. NODE2 (paqcxast02): PGPOOL STANDBY, POSTGRES STANDBY. NODE3 (paqcxast04): PGPOOL STANDBY, no Postgres. Test: stop NODE1 with a poweroff -ff command at "2023-10-26 11:49:59". We have shared with you the pgpool logs from every node, the output of pcp_watchdog_info executed from every node during the test, and the pgpool configuration (pgpool show all). This issue appears every time the quorum node (paqcxast04) detects the loss of the leader node before node2 (paqcxast02) does; in other words, if node2 realizes that node1 is lost before node3 does, node2 gets elected as leader. | | | | |
| Tags | consensus, vip, virtual ip, watchdog | | | | |
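The watchdog layout described above comes down to a few pgpool.conf settings per node. The sketch below is illustrative only: the hostnames are taken from the report, while the concrete wd_priority values and VIP addresses are assumptions (the reporter's actual values are not included here). In Pgpool-II, the node with the higher wd_priority value is preferred in leader elections.

```
# Sketch of the per-node watchdog settings described in this report.
# VIP addresses and priority values below are assumed, not the real ones.

# paqcxast01 (DC1, pgpool + PostgreSQL) -- preferred leader
delegate_ip = '192.0.2.10'      # assumed VIP reachable in DC1
wd_priority = 3

# paqcxast02 (DC2, pgpool + PostgreSQL) -- second choice
delegate_ip = '198.51.100.10'   # assumed VIP reachable in DC2
wd_priority = 2

# paqcxast04 (DC2, pgpool only, quorum vote only) -- should never lead
delegate_ip = ''                # empty on purpose: this node must not acquire a VIP
wd_priority = 1                 # lowest priority
```

The reported bug is that, despite this priority ordering, paqcxast04 can still end up as leader when it notices the loss of the old leader before paqcxast02 does.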
nodo3-pgpool-2023-10-26_113919.log (72,095 bytes)
```
2023-10-26 11:39:19: pid 762010: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:19: pid 762010: LOCATION: health_check.c:541
2023-10-26 11:39:19: pid 762010: LOG: memory cache initialized
2023-10-26 11:39:19: pid 762010: DETAIL: memcache blocks :64
2023-10-26 11:39:19: pid 762010: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:19: pid 762010: LOG: allocating (138460248) bytes of shared memory segment
2023-10-26 11:39:19: pid 762010: LOCATION: pgpool_main.c:3024
2023-10-26 11:39:19: pid 762010: LOG: allocating shared memory segment of size: 138460248
2023-10-26 11:39:19: pid 762010: LOCATION: pool_shmem.c:61
2023-10-26 11:39:19: pid 762010: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:19: pid 762010: LOCATION: health_check.c:541
2023-10-26 11:39:19: pid 762010: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:19: pid 762010: LOCATION: health_check.c:541
2023-10-26 11:39:19: pid 762010: LOG: memory cache initialized
2023-10-26 11:39:19: pid 762010: DETAIL: memcache blocks :64
2023-10-26 11:39:19: pid 762010: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:19: pid 762010: LOG: pool_discard_oid_maps: discarded memqcache oid maps
2023-10-26 11:39:19: pid 762010: LOCATION: pgpool_main.c:3108
2023-10-26 11:39:19: pid 762010: LOG: waiting for watchdog to initialize
2023-10-26 11:39:19: pid 762010: LOCATION: pgpool_main.c:428
2023-10-26 11:39:19: pid 762013: LOG: setting the local watchdog node name to "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:772
2023-10-26 11:39:19: pid 762013: LOG: watchdog cluster is configured with 2 remote nodes
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:782
2023-10-26 11:39:19: pid 762013: LOG: watchdog remote node:0 on paqcxast01.aaa.es:9000
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:799
2023-10-26 11:39:19: pid 762013: LOG: watchdog remote node:1 on paqcxast02.aaa.es:9000
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:799
2023-10-26 11:39:19: pid 762013: LOG: interface monitoring is disabled in watchdog
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:668
2023-10-26 11:39:19: pid 762013: LOG: watchdog node state changed from [DEAD] to [LOADING]
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:19: pid 762013: LOG: new outbound connection to paqcxast02.aaa.es:9000
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:3484
2023-10-26 11:39:19: pid 762013: LOG: new outbound connection to paqcxast01.aaa.es:9000
2023-10-26 11:39:19: pid 762013: LOCATION: watchdog.c:3484
2023-10-26 11:39:24: pid 762013: LOG: new watchdog node connection is received from "10.151.18.82:39534"
2023-10-26 11:39:24: pid 762013: LOCATION: watchdog.c:3405
2023-10-26 11:39:24: pid 762013: LOG: watchdog node state changed from [LOADING] to [JOINING]
2023-10-26 11:39:24: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:24: pid 762013: LOG: new node joined the cluster hostname:"paqcxast02.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:24: pid 762013: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:24: pid 762013: LOCATION: watchdog.c:1663
2023-10-26 11:39:28: pid 762013: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2023-10-26 11:39:28: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:28: pid 762013: LOG: new watchdog node connection is received from "10.23.18.111:56407"
2023-10-26 11:39:28: pid 762013: LOCATION: watchdog.c:3405
2023-10-26 11:39:28: pid 762013: LOG: new node joined the cluster hostname:"paqcxast01.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:28: pid 762013: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:28: pid 762013: LOCATION: watchdog.c:1663
2023-10-26 11:39:29: pid 762013: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]
2023-10-26 11:39:29: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:29: pid 762013: LOG: our stand for coordinator request is rejected by node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:29: pid 762013: DETAIL: we might be in partial network isolation and cluster already have a valid leader
2023-10-26 11:39:29: pid 762013: HINT: please verify the watchdog life-check and network is working properly
2023-10-26 11:39:29: pid 762013: LOCATION: watchdog.c:5919
2023-10-26 11:39:29: pid 762013: LOG: watchdog node state changed from [STANDING FOR LEADER] to [NETWORK ISOLATION]
2023-10-26 11:39:29: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:39: pid 762013: LOG: trying again to join the cluster
2023-10-26 11:39:39: pid 762013: LOCATION: watchdog.c:6514
2023-10-26 11:39:39: pid 762013: LOG: watchdog node state changed from [NETWORK ISOLATION] to [JOINING]
2023-10-26 11:39:39: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:39: pid 762013: LOG: setting the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" as watchdog cluster leader
2023-10-26 11:39:39: pid 762013: LOCATION: watchdog.c:7966
2023-10-26 11:39:39: pid 762013: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2023-10-26 11:39:39: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:40: pid 762013: LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
2023-10-26 11:39:40: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:39:40: pid 762013: LOG: signal_user1_to_parent_with_reason(1)
2023-10-26 11:39:40: pid 762013: LOCATION: pgpool_main.c:773
2023-10-26 11:39:40: pid 762013: LOG: successfully joined the watchdog cluster as standby node
2023-10-26 11:39:40: pid 762013: DETAIL: our join coordinator request is accepted by cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:40: pid 762013: LOCATION: watchdog.c:6887
2023-10-26 11:39:40: pid 762010: LOG: watchdog process is initialized
2023-10-26 11:39:40: pid 762010: DETAIL: watchdog messaging data version: 1.2
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:443
2023-10-26 11:39:40: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:39:40: pid 762010: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:1461
2023-10-26 11:39:40: pid 762099: LOG: 3 watchdog nodes are configured for lifecheck
2023-10-26 11:39:40: pid 762099: LOCATION: wd_lifecheck.c:493
2023-10-26 11:39:40: pid 762099: LOG: watchdog nodes ID:2 Name:"paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:39:40: pid 762099: DETAIL: Host:"paqcxast04.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:40: pid 762099: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:40: pid 762099: LOG: watchdog nodes ID:0 Name:"paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:40: pid 762099: DETAIL: Host:"paqcxast01.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:40: pid 762099: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:40: pid 762099: LOG: watchdog nodes ID:1 Name:"paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:40: pid 762099: DETAIL: Host:"paqcxast02.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:40: pid 762099: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:40: pid 762010: LOG: we have joined the watchdog cluster as STANDBY node
2023-10-26 11:39:40: pid 762010: DETAIL: syncing the backend states from the LEADER watchdog node
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:1468
2023-10-26 11:39:40: pid 762013: LOG: received the get data request from local pgpool-II on IPC interface
2023-10-26 11:39:40: pid 762013: LOCATION: watchdog.c:2944
2023-10-26 11:39:40: pid 762013: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:40: pid 762013: DETAIL: waiting for the reply...
2023-10-26 11:39:40: pid 762013: LOCATION: watchdog.c:2971
2023-10-26 11:39:40: pid 762010: LOG: leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" returned status for 2 backend nodes
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:3587
2023-10-26 11:39:40: pid 762010: LOG: backend:0 is set to UP status
2023-10-26 11:39:40: pid 762010: DETAIL: backend:0 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:40: pid 762010: LOG: backend:1 is set to UP status
2023-10-26 11:39:40: pid 762010: DETAIL: backend:1 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:40: pid 762010: LOG: unix_socket_directories[0]: /run/pgpool/.s.PGSQL.9999
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:4823
2023-10-26 11:39:40: pid 762010: LOG: listen address[0]: *
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:40: pid 762010: LOG: Setting up socket for 0.0.0.0:9999
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:984
2023-10-26 11:39:40: pid 762010: LOG: Setting up socket for :::9999
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:984
2023-10-26 11:39:40: pid 762010: LOG: listen address[0]: *
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:40: pid 762010: LOG: Setting up socket for 0.0.0.0:9898
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:984
2023-10-26 11:39:40: pid 762010: LOG: Setting up socket for :::9898
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:984
2023-10-26 11:39:40: pid 762204: LOG: PCP process: 762204 started
2023-10-26 11:39:40: pid 762204: LOCATION: pcp_child.c:160
2023-10-26 11:39:40: pid 762205: LOG: process started
2023-10-26 11:39:40: pid 762205: LOCATION: pgpool_main.c:890
2023-10-26 11:39:40: pid 762206: LOG: process started
2023-10-26 11:39:40: pid 762206: LOCATION: pgpool_main.c:890
2023-10-26 11:39:40: pid 762207: LOG: process started
2023-10-26 11:39:40: pid 762207: LOCATION: pgpool_main.c:890
2023-10-26 11:39:40: pid 762010: LOG: pgpool-II successfully started. version 4.4.4 (nurikoboshi)
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:647
2023-10-26 11:39:40: pid 762010: LOG: node status[0]: 0
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:658
2023-10-26 11:39:40: pid 762010: LOG: node status[1]: 0
2023-10-26 11:39:40: pid 762010: LOCATION: pgpool_main.c:658
2023-10-26 11:39:41: pid 762100: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:41: pid 762100: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:41: pid 762100: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:41: pid 762100: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:41: pid 762100: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:41: pid 762102: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:41: pid 762102: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:41: pid 762102: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:41: pid 762102: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:41: pid 762102: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:41: pid 762101: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:41: pid 762101: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:41: pid 762101: LOG: creating socket for sending heartbeat
2023-10-26 11:39:41: pid 762101: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:41: pid 762101: LOCATION: wd_heartbeat.c:148
2023-10-26 11:39:41: pid 762103: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:41: pid 762103: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:41: pid 762103: LOG: creating socket for sending heartbeat
2023-10-26 11:39:41: pid 762103: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:41: pid 762103: LOCATION: wd_heartbeat.c:148
2023-10-26 11:41:20: pid 762099: LOG: watchdog: lifecheck started
2023-10-26 11:41:20: pid 762099: LOCATION: wd_lifecheck.c:431
2023-10-26 11:49:44: pid 762204: LOG: forked new pcp worker, pid=764686 socket=8
2023-10-26 11:49:44: pid 762204: LOCATION: pcp_child.c:308
2023-10-26 11:49:44: pid 762204: LOG: PCP process with pid: 764686 exit with SUCCESS.
2023-10-26 11:49:44: pid 762204: LOCATION: pcp_child.c:364
2023-10-26 11:49:44: pid 762204: LOG: PCP process with pid: 764686 exits with status 0
2023-10-26 11:49:44: pid 762204: LOCATION: pcp_child.c:378
2023-10-26 11:50:12: pid 762205: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:12: pid 762205: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:14: pid 762206: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:14: pid 762206: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:14: pid 762206: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:14: pid 762206: LOCATION: health_check.c:218
2023-10-26 11:50:14: pid 762206: LOG: received degenerate backend request for node_id: 0 from pid [762206]
2023-10-26 11:50:14: pid 762206: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:14: pid 762013: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:14: pid 762013: DETAIL: waiting for the reply...
2023-10-26 11:50:14: pid 762013: LOCATION: watchdog.c:2884
2023-10-26 11:50:17: pid 762205: ERROR: Failed to check replication time lag
2023-10-26 11:50:17: pid 762205: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:17: pid 762205: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:17: pid 762205: CONTEXT: while checking replication time lag
2023-10-26 11:50:17: pid 762205: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:17: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:17: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:17: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:18: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:18: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:18: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:18: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:18: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:18: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:18: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:18: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:18: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:19: pid 762013: LOG: We are connected to leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" and another node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es" is trying to become a leader
2023-10-26 11:50:19: pid 762013: LOCATION: watchdog.c:6990
2023-10-26 11:50:19: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:19: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:19: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:19: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:19: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:19: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:19: pid 762013: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:19: pid 762013: DETAIL: requesting info message from leader node
2023-10-26 11:50:19: pid 762013: LOCATION: watchdog.c:7079
2023-10-26 11:50:20: pid 762013: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is not replying..
2023-10-26 11:50:20: pid 762013: DETAIL: marking the node as lost
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:4808
2023-10-26 11:50:20: pid 762013: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:5450
2023-10-26 11:50:20: pid 762013: LOG: watchdog cluster has lost the coordinator node
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:5457
2023-10-26 11:50:20: pid 762013: LOG: removing the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" from watchdog cluster leader
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:7961
2023-10-26 11:50:20: pid 762013: LOG: We have lost the cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:6948
2023-10-26 11:50:20: pid 762013: LOG: watchdog node state changed from [STANDBY] to [JOINING]
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:50:20: pid 762206: LOG: degenerate backend request for 1 node(s) from pid [762206] is canceled by other pgpool
2023-10-26 11:50:20: pid 762206: LOCATION: pool_internal_comms.c:221
2023-10-26 11:50:20: pid 762013: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2023-10-26 11:50:20: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:50:21: pid 762013: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]
2023-10-26 11:50:21: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:50:25: pid 762013: LOG: watchdog node state changed from [STANDING FOR LEADER] to [LEADER]
2023-10-26 11:50:25: pid 762013: LOCATION: watchdog.c:7227
2023-10-26 11:50:25: pid 762013: LOG: Setting failover command timeout to 5
2023-10-26 11:50:25: pid 762013: LOCATION: watchdog.c:8203
2023-10-26 11:50:25: pid 762013: LOG: I am announcing my self as leader/coordinator watchdog node
2023-10-26 11:50:25: pid 762013: LOCATION: watchdog.c:6027
2023-10-26 11:50:27: pid 762205: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:27: pid 762205: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:29: pid 762013: LOG: I am the cluster leader node
2023-10-26 11:50:29: pid 762013: DETAIL: our declare coordinator message is accepted by all nodes
2023-10-26 11:50:29: pid 762013: LOCATION: watchdog.c:6067
2023-10-26 11:50:29: pid 762013: LOG: setting the local node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" as watchdog cluster leader
2023-10-26 11:50:29: pid 762013: LOCATION: watchdog.c:7966
2023-10-26 11:50:29: pid 762013: LOG: signal_user1_to_parent_with_reason(1)
2023-10-26 11:50:29: pid 762013: LOCATION: pgpool_main.c:773
2023-10-26 11:50:29: pid 762013: LOG: I am the cluster leader node but we do not have enough nodes in cluster
2023-10-26 11:50:29: pid 762013: DETAIL: waiting for the quorum to start escalation process
2023-10-26 11:50:29: pid 762013: LOCATION: watchdog.c:6081
2023-10-26 11:50:29: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:29: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:29: pid 762010: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
2023-10-26 11:50:29: pid 762010: LOCATION: pgpool_main.c:1461
2023-10-26 11:50:30: pid 762013: LOG: adding watchdog node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es" to the standby list
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:8063
2023-10-26 11:50:30: pid 762013: LOG: quorum found
2023-10-26 11:50:30: pid 762013: DETAIL: starting escalation process
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:6161
2023-10-26 11:50:30: pid 762013: LOG: escalation process started with PID:764748
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:6742
2023-10-26 11:50:30: pid 764748: LOG: watchdog: escalation started
2023-10-26 11:50:30: pid 764748: LOCATION: wd_escalation.c:94
2023-10-26 11:50:30: pid 762013: LOG: signal_user1_to_parent_with_reason(3)
2023-10-26 11:50:30: pid 762013: LOCATION: pgpool_main.c:773
2023-10-26 11:50:30: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:30: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:30: pid 762010: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog
2023-10-26 11:50:30: pid 762010: LOCATION: pgpool_main.c:1422
2023-10-26 11:50:30: pid 762013: LOG: Setting failover command timeout to 5
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:8203
2023-10-26 11:50:30: pid 762010: LOG: watchdog cluster now holds the quorum
2023-10-26 11:50:30: pid 762010: DETAIL: updating the state of quarantine backend nodes
2023-10-26 11:50:30: pid 762010: LOCATION: pgpool_main.c:1429
sh: /etc/pgpool-II/escalation.sh: Permission denied
2023-10-26 11:50:30: pid 764748: WARNING: watchdog escalation command failed with exit status: 126
2023-10-26 11:50:30: pid 764748: LOCATION: wd_escalation.c:124
2023-10-26 11:50:30: pid 762013: LOG: watchdog escalation process with pid: 764748 exit with SUCCESS.
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:3269
2023-10-26 11:50:30: pid 762099: LOG: informing the node status change to watchdog
2023-10-26 11:50:30: pid 762099: DETAIL: node id :0 status = "NODE DEAD" message:"No heartbeat signal from node"
2023-10-26 11:50:30: pid 762099: LOCATION: wd_lifecheck.c:529
2023-10-26 11:50:30: pid 762013: LOG: received node status change ipc message
2023-10-26 11:50:30: pid 762013: DETAIL: No heartbeat signal from node
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:2274
2023-10-26 11:50:30: pid 762013: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost
2023-10-26 11:50:30: pid 762013: LOCATION: watchdog.c:5450
2023-10-26 11:50:32: pid 762205: ERROR: Failed to check replication time lag
2023-10-26 11:50:32: pid 762205: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:32: pid 762205: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:32: pid 762205: CONTEXT: while checking replication time lag
2023-10-26 11:50:32: pid 762205: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:33: pid 762013: LOG: watchdog received the failover command from remote pgpool-II node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:50:33: pid 762013: LOCATION: watchdog.c:2525
2023-10-26 11:50:33: pid 762013: LOG: watchdog is processing the failover command [DEGENERATE_BACKEND_REQUEST] received from paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es
2023-10-26 11:50:33: pid 762013: LOCATION: watchdog.c:2778
2023-10-26 11:50:33: pid 762013: LOG: failover requires the majority vote, waiting for consensus
2023-10-26 11:50:33: pid 762013: DETAIL: failover request noted
2023-10-26 11:50:33: pid 762013: LOCATION: watchdog.c:2639
2023-10-26 11:50:33: pid 762013: LOG: failover command [DEGENERATE_BACKEND_REQUEST] request from pgpool-II node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es" is queued, waiting for the confirmation from other nodes
2023-10-26 11:50:33: pid 762013: LOCATION: watchdog.c:2832
2023-10-26 11:50:33: pid 762013: LOG: signal_user1_to_parent_with_reason(4)
2023-10-26 11:50:33: pid 762013: LOCATION: pgpool_main.c:773
2023-10-26 11:50:33: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:33: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:33: pid 762010: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog
2023-10-26 11:50:33: pid 762010: LOCATION: pgpool_main.c:1437
2023-10-26 11:50:35: pid 762206: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:35: pid 762206: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:35: pid 762206: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:35: pid 762206: LOCATION: health_check.c:218
2023-10-26 11:50:35: pid 762206: LOG: received degenerate backend request for node_id: 0 from pid [762206]
2023-10-26 11:50:35: pid 762206: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:35: pid 762013: LOG: watchdog received the failover command from local pgpool-II on IPC interface
2023-10-26 11:50:35: pid 762013: LOCATION: watchdog.c:2857
2023-10-26 11:50:35: pid 762013: LOG: watchdog is processing the failover command [DEGENERATE_BACKEND_REQUEST] received from local pgpool-II on IPC interface
2023-10-26 11:50:35: pid 762013: LOCATION: watchdog.c:2778
2023-10-26 11:50:35: pid 762013: LOG: we have got the consensus to perform the failover
2023-10-26 11:50:35: pid 762013: DETAIL: 2 node(s) voted in the favor
2023-10-26 11:50:35: pid 762013: LOCATION: watchdog.c:2650
2023-10-26 11:50:35: pid 762206: LOG: signal_user1_to_parent_with_reason(0)
2023-10-26 11:50:35: pid 762206: LOCATION: pgpool_main.c:773
2023-10-26 11:50:35: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:35: pid 762010: LOG: Pgpool-II parent process has received failover request
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:1482
2023-10-26 11:50:35: pid 762013: LOG: received the failover indication from Pgpool-II on IPC interface
2023-10-26 11:50:35: pid 762013: LOCATION: watchdog.c:3003
2023-10-26 11:50:35: pid 762013: LOG: watchdog is informed of failover start by the main process
2023-10-26 11:50:35: pid 762013: LOCATION: watchdog.c:3077
2023-10-26 11:50:35: pid 762010: LOG: === Starting degeneration. shutdown host paqcxast01.aaa.es(5432) ===
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:4205
2023-10-26 11:50:35: pid 762010: LOG: Restart all children
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:4368
2023-10-26 11:50:35: pid 762010: LOG: execute command: /etc/pgpool-II/failover.sh 0 paqcxast01.aaa.es 5432 /var/lib/pgsql/data 1 paqcxast02.aaa.es 0 0 5432 /var/lib/pgsql/data paqcxast01.aaa.es 5432
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:2407
sh: /etc/pgpool-II/failover.sh: Permission denied
2023-10-26 11:50:35: pid 762010: LOG: find_primary_node_repeatedly: waiting for finding a primary node
2023-10-26 11:50:35: pid 762010: LOCATION: pgpool_main.c:2865
2023-10-26 11:50:42: pid 762205: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:42: pid 762205: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:43: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:43: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:43: pid 762010: LOG: reaper handler
2023-10-26 11:50:43: pid 762010: LOCATION: pgpool_main.c:1830
2023-10-26 11:50:44: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:44: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:45: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:45: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:46: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:46: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:47: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:47: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:48: pid 762013: LOG: watchdog received the failover command from remote pgpool-II node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:50:48: pid 762013: LOCATION: watchdog.c:2525
2023-10-26 11:50:48: pid 762013: LOG: watchdog is processing the failover command [DEGENERATE_BACKEND_REQUEST] received from paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es
2023-10-26 11:50:48: pid 762013: LOCATION: watchdog.c:2778
2023-10-26 11:50:48: pid 762013: LOG: failover requires the majority vote, waiting for consensus
2023-10-26 11:50:48: pid 762013: DETAIL: failover request noted
2023-10-26 11:50:48: pid 762013: LOCATION: watchdog.c:2639
2023-10-26 11:50:48: pid 762013: LOG: failover command [DEGENERATE_BACKEND_REQUEST] request from pgpool-II node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es" is queued, waiting for the confirmation from other nodes
2023-10-26 11:50:48: pid 762013: LOCATION: watchdog.c:2832
2023-10-26 11:50:48: pid 762013: LOG: signal_user1_to_parent_with_reason(4)
2023-10-26 11:50:48: pid 762013: LOCATION: pgpool_main.c:773
2023-10-26 11:50:48: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:48: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:48: pid 762010: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:48: pid 762010: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:48: pid 762010: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog
2023-10-26 11:50:48: pid 762010: LOCATION: pgpool_main.c:1437
2023-10-26 11:50:48: pid 762010: LOG: reaper handler
2023-10-26 11:50:48: pid 762010: LOCATION: pgpool_main.c:1830
2023-10-26 11:50:49: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:49: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:50: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:50: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:51: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:51: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:52: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:52: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:53: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:53: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:54: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:54: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:55: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:55: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:56: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:56: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:57: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:57: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:58: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:58: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:50:59: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:50:59: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:00: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:00: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:01: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:01: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:02: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:02: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:03: pid 762013: LOG: watchdog received the failover command from remote pgpool-II node "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:51:03: pid 762013: LOCATION: watchdog.c:2525
2023-10-26 11:51:03: pid 762013: LOG: watchdog is processing the failover command [DEGENERATE_BACKEND_REQUEST] received from paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es
2023-10-26 11:51:03: pid 762013: LOCATION: watchdog.c:2778
2023-10-26 11:51:03: pid 762013: LOG: Duplicate failover request from "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es" node
2023-10-26 11:51:03: pid 762013: DETAIL: request ignored
2023-10-26 11:51:03: pid 762013: LOCATION: watchdog.c:2702
2023-10-26 11:51:03: pid 762013: LOG: failover requires the majority vote, waiting for consensus
2023-10-26 11:51:03: pid 762013: DETAIL: failover request noted
2023-10-26 11:51:03: pid 762013: LOCATION: watchdog.c:2639
2023-10-26 11:51:03: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:03: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:04: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:04: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:05: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:05: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:06: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:06: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:07: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:07: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:08: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:08: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:09: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:09: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:10: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:10: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:11: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:11: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:12: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:51:12: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:51:13:
```
pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:13: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:15: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:15: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:16: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:16: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:17: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:17: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:18: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:18: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:19: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:19: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:20: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:20: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:21: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:21: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:22: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:22: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:23: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:23: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:24: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:24: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:25: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:25: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:26: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:26: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:27: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:27: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:28: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:28: pid 
762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:29: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:29: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:30: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:30: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:31: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:31: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:32: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:32: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:33: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:33: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:34: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:34: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:35: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:35: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:36: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:36: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:37: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:37: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:38: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:38: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:39: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:39: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:40: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:40: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:41: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:41: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:42: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:42: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:43: pid 762010: LOG: 
find_primary_node: standby node is 1 2023-10-26 11:51:43: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:44: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:44: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:44: pid 762204: LOG: forked new pcp worker, pid=765107 socket=8 2023-10-26 11:51:44: pid 762204: LOCATION: pcp_child.c:308 2023-10-26 11:51:44: pid 762204: LOG: PCP process with pid: 765107 exit with SUCCESS. 2023-10-26 11:51:44: pid 762204: LOCATION: pcp_child.c:364 2023-10-26 11:51:44: pid 762204: LOG: PCP process with pid: 765107 exits with status 0 2023-10-26 11:51:44: pid 762204: LOCATION: pcp_child.c:378 2023-10-26 11:51:45: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:45: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:46: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:46: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:47: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:47: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:48: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:48: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:49: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:49: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:50: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:50: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:51: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:51: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:52: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:52: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:53: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:53: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:54: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:54: pid 
762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:55: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:55: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:57: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:57: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:58: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:58: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:51:59: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:51:59: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:00: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:00: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:01: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:01: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:02: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:02: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:03: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:03: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:04: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:04: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:05: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:05: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:06: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:06: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:07: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:07: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:08: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:08: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:09: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:09: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:10: pid 762010: LOG: 
find_primary_node: standby node is 1 2023-10-26 11:52:10: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:11: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:11: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:12: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:12: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:13: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:13: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:14: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:14: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:15: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:15: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:16: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:16: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:17: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:17: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:18: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:18: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:19: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:19: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:20: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:20: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:21: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:21: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:22: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:22: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:23: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:23: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:24: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:24: pid 762010: LOCATION: 
pgpool_main.c:2790 2023-10-26 11:52:25: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:25: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:26: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:26: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:27: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:27: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:28: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:28: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:29: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:29: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:30: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:30: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:31: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:31: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:32: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:32: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:33: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:33: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:34: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:34: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:35: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:35: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:36: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:36: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:37: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:37: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:39: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:39: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:40: pid 762010: LOG: find_primary_node: 
standby node is 1 2023-10-26 11:52:40: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:41: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:41: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:42: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:42: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:43: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:43: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:44: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:44: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:45: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:45: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:46: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:46: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:47: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:47: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:48: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:48: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:49: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:49: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:50: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:50: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:51: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:51: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:52: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:52: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:53: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:53: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:54: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:54: pid 762010: LOCATION: pgpool_main.c:2790 
2023-10-26 11:52:55: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:55: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:56: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:56: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:57: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:57: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:58: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:58: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:52:59: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:52:59: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:00: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:00: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:01: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:01: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:02: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:02: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:03: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:03: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:04: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:04: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:05: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:05: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:06: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:06: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:07: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:07: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:08: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:08: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:09: pid 762010: LOG: find_primary_node: standby node is 1 
2023-10-26 11:53:09: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:10: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:10: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:11: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:11: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:12: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:12: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:13: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:13: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:14: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:14: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:15: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:15: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:16: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:16: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:17: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:17: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:18: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:18: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:19: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:19: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:20: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:20: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:21: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:21: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:23: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:23: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:24: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:24: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 
11:53:25: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:25: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:26: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:26: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:27: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:27: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:28: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:28: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:29: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:29: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:30: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:30: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:31: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:31: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:32: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:32: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:33: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:33: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:34: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:34: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:35: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:35: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:36: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:36: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:37: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:37: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:38: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:38: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:39: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 
11:53:39: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:40: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:40: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:41: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:41: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:42: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:42: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:43: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:43: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:44: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:44: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:45: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:45: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:46: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:46: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:47: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:47: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:48: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:48: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:49: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:49: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:50: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:50: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:51: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:51: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:52: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:52: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:53: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:53: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:54: pid 
762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:54: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:55: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:55: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:56: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:56: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:57: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:57: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:58: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:58: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:53:59: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:53:59: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:00: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:00: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:01: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:01: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:02: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:02: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:03: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:03: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:04: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:04: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:05: pid 762204: LOG: forked new pcp worker, pid=765404 socket=8 2023-10-26 11:54:05: pid 762204: LOCATION: pcp_child.c:308 2023-10-26 11:54:05: pid 762204: LOG: PCP process with pid: 765404 exit with SUCCESS. 
2023-10-26 11:54:05: pid 762204: LOCATION: pcp_child.c:364 2023-10-26 11:54:05: pid 762204: LOG: PCP process with pid: 765404 exits with status 0 2023-10-26 11:54:05: pid 762204: LOCATION: pcp_child.c:378 2023-10-26 11:54:06: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:06: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:07: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:07: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:08: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:08: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:09: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:09: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:10: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:10: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:11: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:11: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:12: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:12: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:13: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:13: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:14: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:14: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:15: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:15: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:16: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:16: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:17: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:17: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:18: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:18: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 
11:54:19: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:19: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:20: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:20: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:21: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:21: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:22: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:22: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:23: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:23: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:24: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:24: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:25: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:25: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:26: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:26: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:27: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:27: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:28: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:28: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:29: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:29: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:30: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:30: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:31: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:31: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:32: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 11:54:32: pid 762010: LOCATION: pgpool_main.c:2790 2023-10-26 11:54:33: pid 762010: LOG: find_primary_node: standby node is 1 2023-10-26 
2023-10-26 11:54:33: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:54:34: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:54:34: pid 762010: LOCATION: pgpool_main.c:2790
[... the same "find_primary_node: standby node is 1" LOG/LOCATION pair repeats roughly once per second until 11:55:34 ...]
2023-10-26 11:55:34: pid 762010: LOG: find_primary_node: standby node is 1
2023-10-26 11:55:34: pid 762010: LOCATION: pgpool_main.c:2790
2023-10-26 11:55:35: pid 762010: LOG: failed to find primary node
2023-10-26 11:55:35: pid 762010: DETAIL: find_primary_node_repeatedly: expired after 300 seconds
2023-10-26 11:55:35: pid 762010: LOCATION: pgpool_main.c:2886
2023-10-26 11:55:35: pid 762010: LOG: failover: set new primary node: -1
2023-10-26 11:55:35: pid 762010: LOCATION: pgpool_main.c:4595
2023-10-26 11:55:35: pid 762010: LOG: failover: set new main node: 1
2023-10-26 11:55:35: pid 762010: LOCATION: pgpool_main.c:4602
2023-10-26 11:55:35: pid 762205: LOG: worker process received restart request
2023-10-26 11:55:35: pid 762205: LOCATION: pool_worker_child.c:167
2023-10-26 11:55:35: pid 762013: LOG: received the failover indication from Pgpool-II on IPC interface
2023-10-26 11:55:35: pid 762013: LOCATION: watchdog.c:3003
2023-10-26 11:55:35: pid 762013: LOG: watchdog is informed of failover end by the main process
2023-10-26 11:55:35: pid 762013: LOCATION: watchdog.c:3105
2023-10-26 11:55:35: pid 762010: LOG: === Failover done. shutdown host paqcxast01.aaa.es(5432) ===
2023-10-26 11:55:35: pid 762010: LOCATION: pgpool_main.c:4740
2023-10-26 11:55:36: pid 762204: LOG: restart request received in pcp child process
2023-10-26 11:55:36: pid 762204: LOCATION: pcp_child.c:167
2023-10-26 11:55:36: pid 762010: LOG: PCP child 762204 exits with status 0 in failover()
2023-10-26 11:55:36: pid 762010: LOCATION: pgpool_main.c:4785
2023-10-26 11:55:36: pid 762010: LOG: fork a new PCP child pid 765715 in failover()
2023-10-26 11:55:36: pid 762010: LOCATION: pgpool_main.c:4789
2023-10-26 11:55:36: pid 762010: LOG: reaper handler
2023-10-26 11:55:36: pid 762010: LOCATION: pgpool_main.c:1830
2023-10-26 11:55:36: pid 762010: LOG: reaper handler: exiting normally
2023-10-26 11:55:36: pid 762010: LOCATION: pgpool_main.c:2050
2023-10-26 11:55:36: pid 765715: LOG: PCP process: 765715 started
2023-10-26 11:55:36: pid 765715: LOCATION: pcp_child.c:160
2023-10-26 11:55:36: pid 765716: LOG: process started
2023-10-26 11:55:36: pid 765716: LOCATION: pgpool_main.c:890

nodo2-pgpool-2023-10-26_113913.log (44,166 bytes)
2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: memory cache initialized
2023-10-26 11:39:13: pid 60567: DETAIL: memcache blocks :64
2023-10-26 11:39:13: pid 60567: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:13: pid 60567: LOG: allocating (138460248) bytes of shared memory segment
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:3024
2023-10-26 11:39:13: pid 60567: LOG: allocating shared memory segment of size: 138460248
2023-10-26 11:39:13: pid 60567: LOCATION: pool_shmem.c:61
2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: memory cache initialized
2023-10-26 11:39:13: pid 60567: DETAIL: memcache blocks :64
2023-10-26 11:39:13: pid 60567: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:13: pid 60567: LOG: pool_discard_oid_maps: discarded memqcache oid maps
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:3108
2023-10-26 11:39:13: pid 60567: LOG: waiting for watchdog to initialize
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:428
2023-10-26 11:39:13: pid 60570: LOG: setting the local watchdog node name to "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:772
2023-10-26 11:39:13: pid 60570: LOG: watchdog cluster is configured with 2 remote nodes
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:782
2023-10-26 11:39:13: pid 60570: LOG: watchdog remote node:0 on paqcxast01.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:799
2023-10-26 11:39:13: pid 60570: LOG: watchdog remote node:1 on paqcxast04.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:799
2023-10-26 11:39:13: pid 60570: LOG: interface monitoring is disabled in watchdog
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:668
2023-10-26 11:39:13: pid 60570: LOG: watchdog node state changed from [DEAD] to [LOADING]
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:13: pid 60570: LOG: new outbound connection to paqcxast01.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:3484
2023-10-26 11:39:17: pid 60570: LOG: watchdog node state changed from [LOADING] to [INITIALIZING]
2023-10-26 11:39:17: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: our stand for coordinator request is rejected by node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:5925
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [STANDING FOR LEADER] to [PARTICIPATING IN ELECTION]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [PARTICIPATING IN ELECTION] to [INITIALIZING]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: setting the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" as watchdog cluster leader
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7966
2023-10-26 11:39:19: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:19: pid 60570: LOG: signal_user1_to_parent_with_reason(1)
2023-10-26 11:39:19: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:39:19: pid 60570: LOG: successfully joined the watchdog cluster as standby node
2023-10-26 11:39:19: pid 60570: DETAIL: our join coordinator request is accepted by cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:6887
2023-10-26 11:39:19: pid 60567: LOG: watchdog process is initialized
2023-10-26 11:39:19: pid 60567: DETAIL: watchdog messaging data version: 1.2
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:443
2023-10-26 11:39:19: pid 60570: LOG: signal_user1_to_parent_with_reason(3)
2023-10-26 11:39:19: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1422
2023-10-26 11:39:19: pid 60567: LOG: watchdog cluster now holds the quorum
2023-10-26 11:39:19: pid 60567: DETAIL: updating the state of quarantine backend nodes
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1429
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1461
2023-10-26 11:39:19: pid 60567: LOG: we have joined the watchdog cluster as STANDBY node
2023-10-26 11:39:19: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1468
2023-10-26 11:39:19: pid 60576: LOG: 3 watchdog nodes are configured for lifecheck
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:493
2023-10-26 11:39:19: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:2944
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:1 Name:"paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast02.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:0 Name:"paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast01.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:2971
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:2 Name:"Not_Set"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast04.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60567: LOG: leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" returned status for 2 backend nodes
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3587
2023-10-26 11:39:19: pid 60567: LOG: backend:0 is set to UP status
2023-10-26 11:39:19: pid 60567: DETAIL: backend:0 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:19: pid 60567: LOG: backend:1 is set to UP status
2023-10-26 11:39:19: pid 60567: DETAIL: backend:1 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:19: pid 60567: LOG: unix_socket_directories[0]: /run/pgpool/.s.PGSQL.9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4823
2023-10-26 11:39:19: pid 60567: LOG: listen address[0]: *
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for 0.0.0.0:9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for :::9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: listen address[0]: *
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for 0.0.0.0:9898
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for :::9898
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60683: LOG: PCP process: 60683 started
2023-10-26 11:39:19: pid 60683: LOCATION: pcp_child.c:160
2023-10-26 11:39:19: pid 60684: LOG: process started
2023-10-26 11:39:19: pid 60684: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60685: LOG: process started
2023-10-26 11:39:19: pid 60685: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60686: LOG: process started
2023-10-26 11:39:19: pid 60686: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60567: LOG: pgpool-II successfully started. version 4.4.4 (nurikoboshi)
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:647
2023-10-26 11:39:19: pid 60567: LOG: node status[0]: 0
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:658
2023-10-26 11:39:19: pid 60567: LOG: node status[1]: 0
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:658
2023-10-26 11:39:19: pid 60570: LOG: new watchdog node connection is received from "10.151.18.84:576"
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:3405
2023-10-26 11:39:19: pid 60570: LOG: new node joined the cluster hostname:"paqcxast04.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:19: pid 60570: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:1663
2023-10-26 11:39:20: pid 60577: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60577: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60578: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60578: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60577: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:20: pid 60577: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60577: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:20: pid 60578: LOG: creating socket for sending heartbeat
2023-10-26 11:39:20: pid 60578: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60578: LOCATION: wd_heartbeat.c:148
2023-10-26 11:39:20: pid 60579: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60579: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60579: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:20: pid 60579: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60579: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:20: pid 60580: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60580: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60580: LOG: creating socket for sending heartbeat
2023-10-26 11:39:20: pid 60580: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60580: LOCATION: wd_heartbeat.c:148
2023-10-26 11:39:23: pid 60570: LOG: new watchdog node connection is received from "10.23.18.111:55435"
2023-10-26 11:39:23: pid 60570: LOCATION: watchdog.c:3405
2023-10-26 11:39:23: pid 60570: LOG: new node joined the cluster hostname:"paqcxast01.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:23: pid 60570: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:23: pid 60570: LOCATION: watchdog.c:1663
2023-10-26 11:39:24: pid 60570: LOG: new outbound connection to paqcxast04.aaa.es:9000
2023-10-26 11:39:24: pid 60570: LOCATION: watchdog.c:3484
2023-10-26 11:39:29: pid 60570: LOG: We are connected to leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" and another node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is trying to become a leader
2023-10-26 11:39:29: pid 60570: LOCATION: watchdog.c:6990
2023-10-26 11:40:59: pid 60576: LOG: watchdog: lifecheck started
2023-10-26 11:40:59: pid 60576: LOCATION: wd_lifecheck.c:431
2023-10-26 11:49:33: pid 60655: LOG: new connection received
2023-10-26 11:49:33: pid 60655: DETAIL: connecting host=127.0.0.1 port=31390
2023-10-26 11:49:33: pid 60655: LOCATION: child.c:1873
2023-10-26 11:49:35: pid 60655: LOG: frontend disconnection: session time: 0:00:02.308 user=usr_pg_pool database=postgres host=127.0.0.1 port=31390
2023-10-26 11:49:35: pid 60655: LOCATION: child.c:2089
2023-10-26 11:49:44: pid 60683: LOG: forked new pcp worker, pid=63280 socket=7
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:308
2023-10-26 11:49:44: pid 60683: LOG: PCP process with pid: 63280 exit with SUCCESS.
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:364
2023-10-26 11:49:44: pid 60683: LOG: PCP process with pid: 63280 exits with status 0
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:378
2023-10-26 11:50:11: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:11: pid 60684: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:13: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:13: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:13: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:13: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:50:13: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685]
2023-10-26 11:50:13: pid 60685: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:13: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:13: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:50:13: pid 60570: LOCATION: watchdog.c:2884
2023-10-26 11:50:16: pid 60684: ERROR: Failed to check replication time lag
2023-10-26 11:50:16: pid 60684: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:16: pid 60684: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:16: pid 60684: CONTEXT: while checking replication time lag
2023-10-26 11:50:16: pid 60684: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:16: pid 60570: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:16: pid 60570: DETAIL: requesting info message from leader node
2023-10-26 11:50:16: pid 60570: LOCATION: watchdog.c:7079
2023-10-26 11:50:18: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is not replying..
2023-10-26 11:50:18: pid 60570: DETAIL: marking the node as lost
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:4808
2023-10-26 11:50:18: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:5450
2023-10-26 11:50:18: pid 60570: LOG: watchdog cluster has lost the coordinator node
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:5457
2023-10-26 11:50:18: pid 60570: LOG: removing the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" from watchdog cluster leader
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7961
2023-10-26 11:50:18: pid 60570: LOG: We have lost the cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:6948
2023-10-26 11:50:18: pid 60570: LOG: watchdog node state changed from [STANDBY] to [JOINING]
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:18: pid 60685: LOG: degenerate backend request for 1 node(s) from pid [60685] is canceled by other pgpool
2023-10-26 11:50:18: pid 60685: LOCATION: pool_internal_comms.c:221
2023-10-26 11:50:18: pid 60570: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:19: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]
2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:19: pid 60570: LOG: our stand for coordinator request is rejected by node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:19: pid 60570: DETAIL: we might be in partial network isolation and cluster already have a valid leader
2023-10-26 11:50:19: pid 60570: HINT: please verify the watchdog life-check and network is working properly
2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:5919
2023-10-26 11:50:19: pid 60570: LOG: watchdog node state changed from [STANDING FOR LEADER] to [NETWORK ISOLATION]
2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:25: pid 60570: LOG: setting the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" as watchdog cluster leader
2023-10-26 11:50:25: pid 60570: LOCATION: watchdog.c:7966
2023-10-26 11:50:26: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:26: pid 60684: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:29: pid 60570: LOG: trying again to join the cluster
2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:6514
2023-10-26 11:50:29: pid 60570: LOG: watchdog node state changed from [NETWORK ISOLATION] to [JOINING]
2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:29: pid 60570: LOG: removing the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" from watchdog cluster leader
2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7961
2023-10-26 11:50:29: pid 60570: LOG: setting the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" as watchdog cluster leader
2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7966
2023-10-26 11:50:29: pid 60570: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:30: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:30: pid 60570: LOG: signal_user1_to_parent_with_reason(1)
2023-10-26 11:50:30: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:50:30: pid 60570: LOG: successfully joined the watchdog cluster as standby node
2023-10-26 11:50:30: pid 60570: DETAIL: our join coordinator request is accepted by cluster leader node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:6887
2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1461
2023-10-26 11:50:30: pid 60570: LOG: signal_user1_to_parent_with_reason(3)
2023-10-26 11:50:30: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:50:30: pid 60567: LOG: we have joined the watchdog cluster as STANDBY node
2023-10-26 11:50:30: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1468
2023-10-26 11:50:30: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface
2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:2944
2023-10-26 11:50:30: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:30: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:2971
2023-10-26 11:50:30: pid 60567: LOG: leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" returned status for 2 backend nodes
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3587
2023-10-26 11:50:30: pid 60567: LOG: backend:0 is set to UP status
2023-10-26 11:50:30: pid 60567: DETAIL: backend:0 is UP on cluster leader "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:50:30: pid 60567: LOG: backend:1 is set to UP status
2023-10-26 11:50:30: pid 60567: DETAIL: backend:1 is UP on cluster leader "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:50:30: pid 60567: LOG: backend nodes status remains same after the sync from "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3695
2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1422
2023-10-26 11:50:30: pid 60567: LOG: watchdog cluster now holds the quorum
2023-10-26 11:50:30: pid 60567: DETAIL: updating the state of quarantine backend nodes
2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1429
2023-10-26 11:50:31: pid 60684: ERROR: Failed to check replication time lag
2023-10-26 11:50:31: pid 60684: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:31: pid 60684: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:31: pid 60684: CONTEXT: while checking replication time lag
2023-10-26 11:50:31: pid 60684: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:33: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:33: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:33: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:50:33: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685]
2023-10-26 11:50:33: pid 60685: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:33: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:33: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:50:33: pid 60570: LOCATION: watchdog.c:2884
2023-10-26 11:50:33: pid 60570: LOG: remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is asking to inform about quarantined backend nodes
2023-10-26 11:50:33: pid 60570: LOCATION: watchdog.c:4202
2023-10-26 11:50:33: pid 60570: LOG: signal_user1_to_parent_with_reason(4)
2023-10-26 11:50:33: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:50:33: pid 60685: LOG: degenerate backend request for node_id: 0 from pid [60685], will be handled by watchdog, which is building consensus for request
2023-10-26 11:50:33: pid 60685: LOCATION: pool_internal_comms.c:208
2023-10-26 11:50:33: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:33: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:33: pid 60567: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog
2023-10-26 11:50:33: pid 60567: LOCATION: pgpool_main.c:1437
2023-10-26 11:50:39: pid 60576: LOG: informing the node status change to watchdog
2023-10-26 11:50:39: pid 60576: DETAIL: node id :0 status = "NODE DEAD" message:"No heartbeat signal from node"
2023-10-26 11:50:39: pid 60576: LOCATION: wd_lifecheck.c:529
2023-10-26 11:50:39: pid 60570: LOG: received node status change ipc message
2023-10-26 11:50:39: pid 60570: DETAIL: No heartbeat signal from node
2023-10-26 11:50:39: pid 60570: LOCATION: watchdog.c:2274
2023-10-26 11:50:39: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost
2023-10-26 11:50:39: pid 60570: LOCATION: watchdog.c:5450
2023-10-26 11:50:41: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:41: pid 60684: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:46: pid 60684: ERROR: Failed to check replication time lag
2023-10-26 11:50:46: pid 60684: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:46: pid 60684: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:46: pid 60684: CONTEXT: while checking replication time lag
2023-10-26 11:50:46: pid 60684: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:48: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:48: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:48: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:50:48: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685]
2023-10-26 11:50:48: pid 60685: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:48: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:50:48: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:50:48: pid 60570: LOCATION: watchdog.c:2884
2023-10-26 11:50:48: pid 60570: LOG: remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is asking to inform about quarantined backend nodes
2023-10-26 11:50:48: pid 60570: LOCATION: watchdog.c:4202
2023-10-26 11:50:48: pid 60570: LOG: signal_user1_to_parent_with_reason(4)
2023-10-26 11:50:48: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:50:48: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:50:48: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:50:48: pid 60567: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog
2023-10-26 11:50:48: pid 60567: LOCATION: pgpool_main.c:1437
2023-10-26 11:50:48: pid 60685: LOG: degenerate backend request for node_id: 0 from pid [60685], will be handled by watchdog, which is building consensus for request
2023-10-26 11:50:48: pid 60685: LOCATION: pool_internal_comms.c:208
2023-10-26 11:50:56: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:56: pid 60684: LOCATION: pool_connection_pool.c:661
2023-10-26 11:51:01: pid 60684: ERROR: Failed to check replication time lag
2023-10-26 11:51:01: pid 60684: DETAIL: No persistent db connection for the node 0
2023-10-26 11:51:01: pid 60684: HINT: check sr_check_user and sr_check_password
2023-10-26 11:51:01: pid 60684: CONTEXT: while checking replication time lag
2023-10-26 11:51:01: pid 60684: LOCATION: pool_worker_child.c:390
2023-10-26 11:51:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:51:03: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:51:03: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:51:03: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:51:03: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685]
2023-10-26 11:51:03: pid 60685: LOCATION: pool_internal_comms.c:147
2023-10-26 11:51:03: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:51:03: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:2884
2023-10-26 11:51:03: pid 60685: LOG: degenerate backend request for 1 node(s) from pid [60685], is changed to quarantine node request by watchdog
2023-10-26 11:51:03: pid 60685: DETAIL: watchdog is taking time to build consensus
2023-10-26 11:51:03: pid 60685: LOCATION: pool_internal_comms.c:201
2023-10-26 11:51:03: pid 60685: LOG: signal_user1_to_parent_with_reason(0)
2023-10-26 11:51:03: pid 60685: LOCATION: pgpool_main.c:773
2023-10-26 11:51:03: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:51:03: pid 60567: LOG: Pgpool-II parent process has received failover request
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:1482
2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface
2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3003
2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface, but only leader can do failover
2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3060
2023-10-26 11:51:03: pid 60567: LOG: === Starting quarantine. shutdown host paqcxast01.aaa.es(5432) ===
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4205
2023-10-26 11:51:03: pid 60567: LOG: Restart all children
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4368
2023-10-26 11:51:03: pid 60567: LOG: failover: set new primary node: -1
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4595
2023-10-26 11:51:03: pid 60567: LOG: failover: set new main node: 1
2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4602
2023-10-26 11:51:03: pid 60684: LOG: connect_inet_domain_socket: select() interrupted by certain signal. retrying...
2023-10-26 11:51:03: pid 60684: LOCATION: pool_connection_pool.c:726
2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface
2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3003
2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface, but only leader can do failover
2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3060
2023-10-26 11:51:03: pid 60567: LOG: === Quarantine done.
shutdown host paqcxast01.aaa.es(5432) === 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4740 2023-10-26 11:51:04: pid 60683: LOG: restart request received in pcp child process 2023-10-26 11:51:04: pid 60683: LOCATION: pcp_child.c:167 2023-10-26 11:51:04: pid 60567: LOG: PCP child 60683 exits with status 0 in failover() 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:4785 2023-10-26 11:51:04: pid 60567: LOG: fork a new PCP child pid 63689 in failover() 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:4789 2023-10-26 11:51:04: pid 60567: LOG: reaper handler 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:1830 2023-10-26 11:51:04: pid 60567: LOG: reaper handler: exiting normally 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:2050 2023-10-26 11:51:04: pid 63689: LOG: PCP process: 63689 started 2023-10-26 11:51:04: pid 63689: LOCATION: pcp_child.c:160 2023-10-26 11:51:13: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:13: pid 60684: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:18: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:51:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:51:18: pid 60684: LOG: worker process received restart request 2023-10-26 11:51:18: pid 60684: LOCATION: pool_worker_child.c:167 2023-10-26 11:51:18: pid 60567: LOG: reaper handler 2023-10-26 11:51:18: pid 60567: LOCATION: pgpool_main.c:1830 2023-10-26 11:51:18: pid 60567: LOG: reaper handler: exiting normally 2023-10-26 11:51:18: pid 60567: LOCATION: pgpool_main.c:2050 2023-10-26 11:51:18: pid 63712: LOG: process started 2023-10-26 11:51:18: pid 63712: LOCATION: pgpool_main.c:890 2023-10-26 11:51:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:33: pid 60685: DETAIL: ignoring.. 2023-10-26 11:51:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:51:41: pid 63689: LOG: forked new pcp worker, pid=63854 socket=7 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:308 2023-10-26 11:51:41: pid 63689: LOG: PCP process with pid: 63854 exit with SUCCESS. 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:364 2023-10-26 11:51:41: pid 63689: LOG: PCP process with pid: 63854 exits with status 0 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:378 2023-10-26 11:51:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:48: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:51:48: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:52:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:52:03: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:52:03: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:52:03: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:52:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:52:03: pid 60685: DETAIL: ignoring.. 2023-10-26 11:52:03: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:52:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:52:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:52:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:52:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:52:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:52:18: pid 60685: DETAIL: ignoring.. 2023-10-26 11:52:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:52:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:52:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:52:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:52:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:52:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:52:33: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:52:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:52:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:52:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:52:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:52:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:52:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:52:48: pid 60685: DETAIL: ignoring.. 2023-10-26 11:52:48: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:53:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:53:03: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:53:03: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:53:03: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:53:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:53:03: pid 60685: DETAIL: ignoring.. 2023-10-26 11:53:03: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:53:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:53:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:53:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:53:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:53:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:53:18: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:53:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:53:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:53:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:53:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:53:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:53:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:53:33: pid 60685: DETAIL: ignoring.. 2023-10-26 11:53:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:53:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:53:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:53:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:53:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:53:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:53:48: pid 60685: DETAIL: ignoring.. 2023-10-26 11:53:48: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:54:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:54:03: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:54:03: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:54:03: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:54:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:54:03: pid 60685: DETAIL: ignoring.. 2023-10-26 11:54:03: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:54:05: pid 63689: LOG: forked new pcp worker, pid=64533 socket=7 2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:308 2023-10-26 11:54:05: pid 63689: LOG: PCP process with pid: 64533 exit with SUCCESS. 
2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:364 2023-10-26 11:54:05: pid 63689: LOG: PCP process with pid: 64533 exits with status 0 2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:378 2023-10-26 11:54:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:54:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:54:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:54:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:54:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:54:18: pid 60685: DETAIL: ignoring.. 2023-10-26 11:54:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:54:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:54:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:54:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:54:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:54:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:54:33: pid 60685: DETAIL: ignoring.. 2023-10-26 11:54:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:54:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:54:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:54:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:54:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:54:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:54:48: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:54:48: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:55:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:55:03: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:55:03: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:55:03: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:55:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:55:03: pid 60685: DETAIL: ignoring.. 2023-10-26 11:55:03: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:55:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:55:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:55:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:55:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:55:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:55:18: pid 60685: DETAIL: ignoring.. 2023-10-26 11:55:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:55:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:55:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:55:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:55:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:55:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:55:33: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:55:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:55:35: pid 60570: LOG: signal_user1_to_parent_with_reason(2) 2023-10-26 11:55:35: pid 60570: LOCATION: pgpool_main.c:773 2023-10-26 11:55:35: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:55:35: pid 60567: LOG: Pgpool-II parent process received sync backend signal from watchdog 2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1446 2023-10-26 11:55:35: pid 60567: LOG: leader watchdog has performed failover 2023-10-26 11:55:35: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node 2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1453 2023-10-26 11:55:35: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface 2023-10-26 11:55:35: pid 60570: LOCATION: watchdog.c:2944 2023-10-26 11:55:35: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:55:35: pid 60570: DETAIL: waiting for the reply... 2023-10-26 11:55:35: pid 60570: LOCATION: watchdog.c:2971 2023-10-26 11:55:35: pid 60567: LOG: leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" returned status for 2 backend nodes 2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:3587 2023-10-26 11:55:35: pid 60567: LOG: backend nodes status remains same after the sync from "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:3695 2023-10-26 12:03:02: pid 63689: LOG: forked new pcp worker, pid=66713 socket=7 2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:308 2023-10-26 12:03:02: pid 63689: LOG: PCP process with pid: 66713 exit with SUCCESS. 
2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:364 2023-10-26 12:03:02: pid 63689: LOG: PCP process with pid: 66713 exits with status 0 2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:378 salida-comandos-nodo2.txt (37,733 bytes)
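For reference, the behavior we expect from the watchdog can be modeled very roughly as: among the surviving watchdog nodes, the one with the highest wd_priority should win the leader election. This is a hypothetical simplification with illustrative priority values (only node2's wd_priority = 5 appears in the dump below; the others are assumptions) — the real election in watchdog.c also depends on node state and message timing, which is exactly where this bug shows up when node3 notices the lost leader first:

```python
# Simplified, hypothetical model of the wd_priority-based leader choice.
# The real pgpool-II watchdog election (watchdog.c) also weighs election
# timing, which is what this report is about.

def expected_leader(nodes, lost):
    """Return the name of the surviving node with the highest wd_priority."""
    alive = [n for n in nodes if n["name"] not in lost]
    return max(alive, key=lambda n: n["priority"])["name"]

# Cluster from this report; priorities are illustrative except node2's
# wd_priority = 5, which is the value shown in the PGPOOL SHOW ALL dump.
cluster = [
    {"name": "paqcxast01", "priority": 10},  # DB node, DC 1
    {"name": "paqcxast02", "priority": 5},   # DB node, DC 2
    {"name": "paqcxast04", "priority": 1},   # quorum-only node, DC 2
]

# After node1 is powered off, node2 should become leader, never node3.
print(expected_leader(cluster, lost={"paqcxast01"}))  # paqcxast02
```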
[TST][paqcxast02].root:~ # su postgres -c 'psql -h localhost -p 9999 -U usr_pg_pool -d postgres -c "PGPOOL SHOW ALL;"'
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
item | value | description
--------------------------------------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------
backend_hostname0 | paqcxast01.aaa.es | hostname or IP address of PostgreSQL backend.
backend_port0 | 5432 | port number of PostgreSQL backend.
backend_weight0 | 1 | load balance weight of backend.
backend_data_directory0 | /var/lib/pgsql/data | data directory of the backend.
backend_application_name0 | paqcxast01 | application_name of the backend.
backend_flag0 | ALLOW_TO_FAILOVER | Controls various backend behavior.
backend_hostname1 | paqcxast02.aaa.es | hostname or IP address of PostgreSQL backend.
backend_port1 | 5432 | port number of PostgreSQL backend.
backend_weight1 | 1 | load balance weight of backend.
backend_data_directory1 | /var/lib/pgsql/data | data directory of the backend.
backend_application_name1 | paqcxast02 | application_name of the backend.
backend_flag1 | ALLOW_TO_FAILOVER | Controls various backend behavior.
hostname0 | paqcxast01.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port0 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port0 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
hostname1 | paqcxast02.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port1 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port1 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
hostname2 | paqcxast04.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port2 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port2 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
heartbeat_device0 | | Name of NIC device for sending heartbeat.
heartbeat_hostname0 | paqcxast01.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port0 | 9694 | Port for sending heartbeat.
heartbeat_device1 | | Name of NIC device for sending heartbeat.
heartbeat_hostname1 | paqcxast02.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port1 | 9694 | Port for sending heartbeat.
heartbeat_device2 | | Name of NIC device for sending heartbeat.
heartbeat_hostname2 | paqcxast04.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port2 | 9694 | Port for sending heartbeat.
health_check_period | 5 | Time interval in seconds between the health checks.
health_check_timeout | 20 | Backend node health check timeout value in seconds.
health_check_user | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password | ***** | Password for PostgreSQL backend health check database user.
health_check_database | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout | 10000 | Timeout in milliseconds before giving up connecting to backend.
health_check_period0 | 5 | Time interval in seconds between the health checks.
health_check_timeout0 | 20 | Backend node health check timeout value in seconds.
health_check_user0 | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password0 | ***** | Password for PostgreSQL backend health check database user.
health_check_database0 | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries0 | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay0 | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout0 | 10000 | Timeout in milliseconds before giving up connecting to backend.
health_check_period1 | 5 | Time interval in seconds between the health checks.
health_check_timeout1 | 20 | Backend node health check timeout value in seconds.
health_check_user1 | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password1 | ***** | Password for PostgreSQL backend health check database user.
health_check_database1 | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries1 | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay1 | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout1 | 10000 | Timeout in milliseconds before giving up connecting to backend.
allow_multiple_failover_requests_from_node | off | A Pgpool-II node can send multiple failover requests to build consensus.
dml_adaptive_object_relationship_list | | list of relationships between objects.
failover_if_affected_tuples_mismatch | off | Starts degeneration, If there's a data mismatch between primary and secondary.
primary_routing_query_pattern_list | | list of query patterns that should be sent to primary node.
app_name_redirect_preference_list | | redirect by application name.
memqcache_auto_cache_invalidation | on | Automatically deletes the cache related to the updated tables.
cache_unsafe_memqcache_table_list | | list of tables should not be cached.
database_redirect_preference_list | | redirect by database name.
enable_consensus_with_half_votes | off | apply majority rule for consensus and quorum computation at 50% of votes in a cluster with an even number of nodes.
wd_no_show_node_removal_timeout | 0 | Timeout in seconds to revoke the cluster membership of NO-SHOW watchdog nodes.
cache_safe_memqcache_table_list | | list of tables to be cached.
allow_clear_text_frontend_auth | off | allow to use clear text password authentication with clients, when pool_passwd does not contain the user password.
clear_memqcache_on_escalation | on | Clears the query cache in the shared memory when pgpool-II escalates to leader watchdog node.
client_idle_limit_in_recovery | 0 | Time limit is seconds for the child connection, before it is terminated during the 2nd stage recovery.
disable_load_balance_on_write | transaction | Load balance behavior when write query is received.
wd_monitoring_interfaces_list | | List of network device names, to be monitored by the watchdog process for the network link state.
statement_level_load_balance | off | Enables statement level load balancing
failover_on_backend_shutdown | on | Triggers fail over when backend is shutdown.
wd_lost_node_removal_timeout | 0 | Timeout in seconds to revoke the cluster membership of LOST watchdog nodes.
replication_stop_on_mismatch | off | Starts degeneration and stops replication, If there's a data mismatch between primary and secondary.
process_management_strategy | gentle | child process management strategy.
search_primary_node_timeout | 5min | Max time in seconds to search for primary node after failover.
failover_when_quorum_exists | on | Do failover only when cluster has the quorum.
recovery_2nd_stage_command | | Command to execute in second stage recovery.
ignore_leading_white_space | on | Ignores leading white spaces of each query string.
memqcache_cache_block_size | 1MB | Cache block size in bytes.
prefer_lower_delay_standby | off | If the load balance node is delayed over delay_threshold on SR, pgpool find another standby node which is lower delayed.
failover_require_consensus | on | Only do failover when majority aggrees.
recovery_1st_stage_command | recovery_1st_stage | Command to execute in first stage recovery.
failover_on_backend_error | on | Triggers fail over when reading/writing to backend socket fails.
listen_backlog_multiplier | 2 | length of connection queue from frontend to pgpool-II
ssl_prefer_server_ciphers | off | Use server's SSL cipher preferences, rather than the client's
wd_remove_shutdown_nodes | off | Revoke the cluster membership of properly shutdown watchdog nodes.
memqcache_memcached_port | 11211 | Port number of Memcached server.
wd_de_escalation_command | | Command to execute when watchdog node resigns from the cluster leader node.
memqcache_memcached_host | localhost | Hostname or IP address of memcached.
log_truncate_on_rotation | off | If on, an existing log file gets truncated on time based log rotation.
memqcache_max_num_cache | 1000000 | Total number of cache entries.
backend_clustering_mode | streaming_replication | backend clustering mode.
process_management_mode | static | child process management mode.
unix_socket_permissions | 0777 | The access permissions of the Unix domain sockets.
delay_threshold_by_time | 0 | standby delay threshold by time.
unix_socket_directories | /run/pgpool | The directories to create the UNIX domain sockets for accepting pgpool-II client connections.
read_only_function_list | | list of functions that does not writes to database.
trusted_server_command | ping -q -c3 %h | Command to excute when communicate with trusted server.
ssl_passphrase_command | | Path to the Diffie-Hellman parameters contained file
authentication_timeout | 1min | Time out value in seconds for client authentication.
log_per_node_statement | off | Logs per node detailed SQL statements.
follow_primary_command | | Command to execute in streaming replication mode after a primary node failover.
auto_failback_interval | 1min | min interval of executing auto_failback in seconds
wd_heartbeat_keepalive | 5s | Time interval in seconds between sending the heartbeat signal.
enable_shared_relcache | on | relation cache stored in memory cache.
child_max_connections | 0 | A pgpool-II child process will be terminated after this many connections from clients.
wd_lifecheck_password | ***** | Password for watchdog user in lifecheck.
wd_escalation_command | /etc/pgpool-II/escalation.sh | Command to execute when watchdog node becomes cluster leader node.
wd_heartbeat_deadtime | 30s | Deadtime interval in seconds for heartbeat signal.
relcache_query_target | primary | Target node to send relache queries.
reserved_connections | 0 | Number of reserved connections.
pcp_listen_addresses | * | hostname(s) or IP address(es) on which pcp will listen on.
memqcache_total_size | 64MB | Total memory size in bytes for storing memory cache.
detach_false_primary | off | Automatically detaches false primary node.
connection_life_time | 2min | Cached connections expiration time in seconds.
check_unlogged_table | on | Enables unlogged table check.
memory_cache_enabled | off | Enables the memory cache functionality.
write_function_list | | list of functions that writes to database.
log_client_messages | off | Logs any client messages in the pgpool logs.
wd_lifecheck_dbname | template1 | Database name to be used for by watchdog lifecheck.
log_error_verbosity | verbose | How much details about error should be emitted.
client_min_messages | notice | Which messages should be sent to client.
wd_lifecheck_method | heartbeat | method for watchdog lifecheck.
memqcache_maxcache | 400kB | Maximum SELECT result size in bytes.
ssl_dh_params_file | | Path to the Diffie-Hellman parameters contained file
allow_sql_comments | off | Ignore SQL comments, while judging if load balance or query cache is possible.
min_spare_children | 5 | Minimum number of spare child processes.
max_spare_children | 10 | Maximum number of spare child processes.
wd_lifecheck_query | SELECT 1 | SQL query to be used by watchdog lifecheck.
log_disconnections | on | Logs end of a session.
client_idle_limit | 0 | idle time in seconds to disconnects a client.
load_balance_mode | on | Enables load balancing of queries.
health_check_test | off | If on, enable health check testing.
logging_collector | on | Enable capturing of stderr into log files.
num_init_children | 100 | Maximim number of child processs to handle client connections.
log_standby_delay | if_over_threshold | When to log standby delay.
unix_socket_group | | The owning user of the sockets that always starts the server.
wd_ipc_socket_dir | /tmp | The directory to create the UNIX domain socket for accepting pgpool-II watchdog IPC connections.
wd_lifecheck_user | nobody | User name to be used for by watchdog lifecheck.
sr_check_password | ***** | The password for user to perform streaming replication delay check.
sr_check_database | postgres | The database name to perform streaming replication delay check.
log_rotation_size | 10MB | Automatic rotation of logfiles will happen after that much (kilobytes) log output.
recovery_password | ***** | Password for online recovery.
failback_command | | Command to execute when backend node is attached.
serialize_accept | off | whether to serialize accept() call to avoid thundering herd problem
failover_command | /etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S | Command to execute when backend node is detached.
log_min_messages | warning | Which messages should be emitted to server log.
recovery_timeout | 90s | Maximum time in seconds to wait for the recovering PostgreSQL node.
replicate_select | off | Replicate SELECT statements when load balancing is disabled.
memqcache_method | shmem | Cache store method. either shmem(shared memory) or Memcached. shmem by default.
log_rotation_age | 1d | Automatic rotation of logfiles will happen after that (minutes) time.
memqcache_oiddir | /var/log/pgpool/oiddir | Temporary directory to record table oids.
check_temp_table | catalog | Enables temporary table check.
reset_query_list | ABORT; DISCARD ALL | list of commands sent to reset the backend connection when user session exits.
listen_addresses | * | hostname(s) or IP address(es) on which pgpool will listen on.
replication_mode | off | Enables replication mode.
connection_cache | on | Caches connections to backends.
memqcache_expire | 0 | Memory cache entry life time specified in seconds.
child_life_time | 2min | pgpool-II child process life time in seconds.
trusted_servers | | List of servers to verify connectivity.
log_connections | on | Logs each successful connection.
sr_check_period | 5s | Time interval in seconds between the streaming replication delay checks.
syslog_facility | LOCAL0 | syslog local facility.
delay_threshold | 0 | standby delay threshold in bytes.
relcache_expire | 0 | Relation cache expiration time in seconds.
log_destination | stderr | destination of pgpool-II log
ssl_ca_cert_dir | | Directory containing CA root certificate(s).
log_line_prefix | %t: pid %p: | printf-style string to output at beginning of each log line.
lobj_lock_table | | Table name used for large object replication control.
enable_pool_hba | on | Use pool_hba.conf for client authentication.
ssl_ecdh_curve | prime256v1 | The curve to use in ECDH key exchange.
pcp_socket_dir | /run/pgpool | The directory to create the UNIX domain socket for accepting pgpool-II PCP connections.
log_file_mode | 384 | creation mode for log files.
wd_life_point | 3 | Maximum number of retries before failing the life check.
pid_file_name | /run/pgpool/pgpool.pid | Path to store pgpool-II pid file.
recovery_user | postgres | User name for online recovery.
log_statement | off | Logs all statements in the pgpool logs.
sr_check_user | usr_pg_pool | The User name to perform streaming replication delay check.
log_directory | /var/log/pgpool-II | directory where log files are written.
auto_failback | off | Enables nodes automatically reattach, when detached node continue streaming replication.
relcache_size | 256 | Number of relation cache entry.
ssl_crl_file | | SSL certificate revocation list file
use_watchdog | on | Enables the pgpool-II watchdog.
syslog_ident | pgpool | syslog program ident string.
log_filename | pgpool-%Y-%m-%d_%H%M%S.log | log file name pattern.
log_hostname | off | Logs the host name in the connection logs.
wd_interval | 10s | Time interval in seconds between life check.
insert_lock | on | Automatically locks table with INSERT to keep SERIAL data consistency
pool_passwd | ***** | File name of pool_passwd for md5 authentication.
ssl_ca_cert | | Single PEM format file containing CA root certificate(s).
delegate_ip | 10.151.18.11 | Delegate IP address to be used when pgpool node become a watchdog cluster leader.
ssl_ciphers | HIGH:MEDIUM:+3DES:!aNULL | Allowed SSL ciphers.
if_cmd_path | /usr/sbin/ | Path to interface command.
if_down_cmd | /usr/bin/sudo /usr/sbin/ip addr del $_IP_$/24 dev ens192 | Complete command to bring down virtual interface.
arping_path | /usr/sbin | path to arping command.
wd_priority | 5 | Watchdog node priority for leader election.
wd_authkey | | Authentication key to be used in watchdog communication.
arping_cmd | /usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I ens192 | arping command.
if_up_cmd | /usr/bin/sudo /usr/sbin/ip addr add $_IP_$/24 dev ens192 label ens192:0 | Complete command to bring UP virtual interface.
ping_path | /bin | path to ping command.
ssl_cert | | SSL public certificate file.
pcp_port | 9898 | tcp/IP port number on which pgpool PCP process will listen on.
max_pool | 2 | Maximum number of connection pools per child process.
ssl_key | | SSL private key file.
logdir | /var/log/pgpool-II/pgpool-status | PgPool status file logging directory.
port | 9999 | tcp/IP port number on which pgpool will listen on.
ssl | off | Enables SSL support for frontend and backend connections
(208 rows)
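The watchdog settings relevant to this report, collected from the two PGPOOL SHOW ALL captures (paqcxast02 above, paqcxast01 below), can be summarized as per-node pgpool.conf excerpts. paqcxast04's values are not captured anywhere in this report; the ones shown for it are assumptions based on the reporter's description (lowest priority, empty delegate_ip so it never takes the VIP):

```ini
# pgpool.conf excerpts per node (sketch reconstructed from this report)

# paqcxast01 (DB node, first datacenter) -- from its SHOW ALL capture
wd_priority = 7
delegate_ip = '10.23.18.118'

# paqcxast02 (DB node, second datacenter) -- from its SHOW ALL capture
wd_priority = 5
delegate_ip = '10.151.18.11'

# paqcxast04 (quorum-only node, second datacenter) -- NOT captured;
# values below are assumptions per the issue description
wd_priority = 1        # assumed: lowest priority in the cluster
delegate_ip = ''       # intentionally empty: node must never hold the VIP
```

With this layout the reporter expects only paqcxast01 or paqcxast02 to ever win the leader election; the captures below show paqcxast04 winning instead.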
[TST][paqcxast02].root:~ # less /var/log/pgpool-II/pgpool-2023-10-26_113913.log
[TST][paqcxast02].root:~ # su postgres -c 'pcp_watchdog_info -h localhost -p 9898 -U pgpool -w'
3 3 NO paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es paqcxast04.aaa.es
paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es paqcxast02.aaa.es 9999 9000 7 STANDBY 0 MEMBER
paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es paqcxast01.aaa.es 9999 9000 8 LOST 0 MEMBER
paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es paqcxast04.aaa.es 9999 9000 4 LEADER 0 MEMBER
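The plain (non-verbose) pcp_watchdog_info output above can be filtered to show which node currently holds leadership. This is a sketch run against the sample captured in this report rather than a live cluster; the field position follows the per-node lines above, where the 8th whitespace-separated field is the watchdog state name:

```shell
# Per-node lines exactly as captured above (paqcxast02's view after the
# paqcxast01 poweroff).
wd_info='paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es paqcxast02.aaa.es 9999 9000 7 STANDBY 0 MEMBER
paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es paqcxast01.aaa.es 9999 9000 8 LOST 0 MEMBER
paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es paqcxast04.aaa.es 9999 9000 4 LEADER 0 MEMBER'

# Field 8 is the watchdog state name; print the node reporting LEADER.
printf '%s\n' "$wd_info" | awk '$8 == "LEADER" { print $1 }'
# prints: paqcxast04.aaa.es:9999 -- the quorum-only node won the election
```

On a live node the same filter can be piped onto `su postgres -c 'pcp_watchdog_info -h localhost -p 9898 -U pgpool -w'`, the command used throughout this report.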
salida-comandos-nodo1.txt (42,002 bytes)
[TST][paqcxast01].root:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8f:f4:1d brd ff:ff:ff:ff:ff:ff
altname enp11s0
inet 10.23.18.111/24 brd 10.23.18.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet 10.23.18.118/24 scope global secondary ens192:0
valid_lft forever preferred_lft forever
[TST][paqcxast01].root:~ #
[TST][paqcxast01].root:~ #
[TST][paqcxast01].root:~ #
[TST][paqcxast01].root:~ # date
Thu Oct 26 11:49:13 CEST 2023
[TST][paqcxast01].root:~ # su postgres -c 'pcp_watchdog_info -h localhost -p 9898 -U pgpool -w'
3 3 YES paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es paqcxast01.aaa.es
paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es paqcxast01.aaa.es 9999 9000 4 LEADER 0 MEMBER
paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es paqcxast02.aaa.es 9999 9000 7 STANDBY 0 MEMBER
paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es paqcxast04.aaa.es 9999 9000 7 STANDBY 0 MEMBER
[TST][paqcxast01].root:~ # su postgres -c 'psql -h localhost -p 9999 -U usr_pg_pool -d postgres -c "show pool_nodes;"'
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+-----------------------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0 | paqcxast01.aaa.es | 5432 | up | up | 0.500000 | primary | primary | 423 | true | 0 | | | 2023-10-26 11:39:12
 1 | paqcxast02.aaa.es | 5432 | up | up | 0.500000 | standby | standby | 144 | false | 0 | | | 2023-10-26 11:39:12
(2 rows)
[TST][paqcxast01].root:~ # su postgres -c 'psql -h localhost -U usr_pg_pool -d postgres -c "select * from pg_stat_replication;"'
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state | reply_time
-------+----------+-----------------+------------------+--------------+-----------------+-------------+-------------------------------+--------------+-----------+------------+------------+------------+------------+-----------+-----------+------------+---------------+------------+-------------------------------
20301 | 221118 | usr_replication | walreceiver | 10.151.18.82 | | 56948 | 2023-10-26 09:36:01.110523+00 | | streaming | 1/A000C488 | 1/A000C488 | 1/A000C488 | 1/A000C488 | | | | 0 | async | 2023-10-26 09:49:14.440119+00
(1 row)
[TST][paqcxast01].root:~ # su postgres -c 'psql -h localhost -U usr_pg_pool -d postgres -c "select * from pg_replication_slots;"'
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn | wal_status | safe_wal_size
-----------+--------+-----------+--------+----------+-----------+--------+------------+--------+--------------+-------------+---------------------+------------+---------------
replica | | physical | | | f | t | 20301 | 351034 | | 1/A000C488 | | reserved |
(1 row)
[TST][paqcxast01].root:~ # su postgres -c 'psql -h localhost -p 9999 -U usr_pg_pool -d postgres -c "PGPOOL SHOW ALL;"'
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
psql: /usr/pgsql-15/lib/libpq.so.5: no version information available (required by psql)
could not change directory to "/root": Permission denied
item | value | description
--------------------------------------------+-------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------
backend_hostname0 | paqcxast01.aaa.es | hostname or IP address of PostgreSQL backend.
backend_port0 | 5432 | port number of PostgreSQL backend.
backend_weight0 | 1 | load balance weight of backend.
backend_data_directory0 | /var/lib/pgsql/data | data directory of the backend.
backend_application_name0 | paqcxast01 | application_name of the backend.
backend_flag0 | ALLOW_TO_FAILOVER | Controls various backend behavior.
backend_hostname1 | paqcxast02.aaa.es | hostname or IP address of PostgreSQL backend.
backend_port1 | 5432 | port number of PostgreSQL backend.
backend_weight1 | 1 | load balance weight of backend.
backend_data_directory1 | /var/lib/pgsql/data | data directory of the backend.
backend_application_name1 | paqcxast02 | application_name of the backend.
backend_flag1 | ALLOW_TO_FAILOVER | Controls various backend behavior.
hostname0 | paqcxast01.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port0 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port0 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
hostname1 | paqcxast02.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port1 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port1 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
hostname2 | paqcxast04.aaa.es | Hostname of pgpool node for watchdog connection.
pgpool_port2 | 9999 | tcp/ip pgpool port number of other pgpool node for watchdog connection.
wd_port2 | 9000 | tcp/ip watchdog port number of other pgpool node for watchdog connection..
heartbeat_device0 | | Name of NIC device for sending heartbeat.
heartbeat_hostname0 | paqcxast01.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port0 | 9694 | Port for sending heartbeat.
heartbeat_device1 | | Name of NIC device for sending heartbeat.
heartbeat_hostname1 | paqcxast02.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port1 | 9694 | Port for sending heartbeat.
heartbeat_device2 | | Name of NIC device for sending heartbeat.
heartbeat_hostname2 | paqcxast04.aaa.es | Hostname for sending heartbeat signal.
heartbeat_port2 | 9694 | Port for sending heartbeat.
health_check_period | 5 | Time interval in seconds between the health checks.
health_check_timeout | 20 | Backend node health check timeout value in seconds.
health_check_user | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password | ***** | Password for PostgreSQL backend health check database user.
health_check_database | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout | 10000 | Timeout in milliseconds before giving up connecting to backend.
health_check_period0 | 5 | Time interval in seconds between the health checks.
health_check_timeout0 | 20 | Backend node health check timeout value in seconds.
health_check_user0 | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password0 | ***** | Password for PostgreSQL backend health check database user.
health_check_database0 | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries0 | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay0 | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout0 | 10000 | Timeout in milliseconds before giving up connecting to backend.
health_check_period1 | 5 | Time interval in seconds between the health checks.
health_check_timeout1 | 20 | Backend node health check timeout value in seconds.
health_check_user1 | usr_pg_pool | User name for PostgreSQL backend health check.
health_check_password1 | ***** | Password for PostgreSQL backend health check database user.
health_check_database1 | | The database name to be used to perform PostgreSQL backend health check.
health_check_max_retries1 | 0 | The maximum number of times to retry a failed health check before giving up and initiating failover.
health_check_retry_delay1 | 2 | The amount of time in seconds to wait between failed health check retries.
connect_timeout1 | 10000 | Timeout in milliseconds before giving up connecting to backend.
allow_multiple_failover_requests_from_node | off | A Pgpool-II node can send multiple failover requests to build consensus.
dml_adaptive_object_relationship_list | | list of relationships between objects.
failover_if_affected_tuples_mismatch | off | Starts degeneration, If there's a data mismatch between primary and secondary.
primary_routing_query_pattern_list | | list of query patterns that should be sent to primary node.
app_name_redirect_preference_list | | redirect by application name.
memqcache_auto_cache_invalidation | on | Automatically deletes the cache related to the updated tables.
cache_unsafe_memqcache_table_list | | list of tables should not be cached.
database_redirect_preference_list | | redirect by database name.
enable_consensus_with_half_votes | off | apply majority rule for consensus and quorum computation at 50% of votes in a cluster with an even number of nodes.
wd_no_show_node_removal_timeout | 0 | Timeout in seconds to revoke the cluster membership of NO-SHOW watchdog nodes.
cache_safe_memqcache_table_list | | list of tables to be cached.
allow_clear_text_frontend_auth | off | allow to use clear text password authentication with clients, when pool_passwd does not contain the user password.
clear_memqcache_on_escalation | on | Clears the query cache in the shared memory when pgpool-II escalates to leader watchdog node.
client_idle_limit_in_recovery | 0 | Time limit is seconds for the child connection, before it is terminated during the 2nd stage recovery.
disable_load_balance_on_write | transaction | Load balance behavior when write query is received.
wd_monitoring_interfaces_list | | List of network device names, to be monitored by the watchdog process for the network link state.
statement_level_load_balance | off | Enables statement level load balancing
failover_on_backend_shutdown | on | Triggers fail over when backend is shutdown.
wd_lost_node_removal_timeout | 0 | Timeout in seconds to revoke the cluster membership of LOST watchdog nodes.
replication_stop_on_mismatch | off | Starts degeneration and stops replication, If there's a data mismatch between primary and secondary.
process_management_strategy | gentle | child process management strategy.
search_primary_node_timeout | 5min | Max time in seconds to search for primary node after failover.
failover_when_quorum_exists | on | Do failover only when cluster has the quorum.
recovery_2nd_stage_command | | Command to execute in second stage recovery.
ignore_leading_white_space | on | Ignores leading white spaces of each query string.
memqcache_cache_block_size | 1MB | Cache block size in bytes.
prefer_lower_delay_standby | off | If the load balance node is delayed over delay_threshold on SR, pgpool find another standby node which is lower delayed.
failover_require_consensus | on | Only do failover when majority aggrees.
recovery_1st_stage_command | recovery_1st_stage | Command to execute in first stage recovery.
failover_on_backend_error | on | Triggers fail over when reading/writing to backend socket fails.
listen_backlog_multiplier | 2 | length of connection queue from frontend to pgpool-II
ssl_prefer_server_ciphers | off | Use server's SSL cipher preferences, rather than the client's
wd_remove_shutdown_nodes | off | Revoke the cluster membership of properly shutdown watchdog nodes.
memqcache_memcached_port | 11211 | Port number of Memcached server.
wd_de_escalation_command | | Command to execute when watchdog node resigns from the cluster leader node.
memqcache_memcached_host | localhost | Hostname or IP address of memcached.
log_truncate_on_rotation | off | If on, an existing log file gets truncated on time based log rotation.
memqcache_max_num_cache | 1000000 | Total number of cache entries.
backend_clustering_mode | streaming_replication | backend clustering mode.
process_management_mode | static | child process management mode.
unix_socket_permissions | 0777 | The access permissions of the Unix domain sockets.
delay_threshold_by_time | 0 | standby delay threshold by time.
unix_socket_directories | /run/pgpool | The directories to create the UNIX domain sockets for accepting pgpool-II client connections.
read_only_function_list | | list of functions that does not writes to database.
trusted_server_command | ping -q -c3 %h | Command to excute when communicate with trusted server.
ssl_passphrase_command | | Path to the Diffie-Hellman parameters contained file
authentication_timeout | 1min | Time out value in seconds for client authentication.
log_per_node_statement | off | Logs per node detailed SQL statements.
follow_primary_command | | Command to execute in streaming replication mode after a primary node failover.
auto_failback_interval | 1min | min interval of executing auto_failback in seconds
wd_heartbeat_keepalive | 5s | Time interval in seconds between sending the heartbeat signal.
enable_shared_relcache | on | relation cache stored in memory cache.
child_max_connections | 0 | A pgpool-II child process will be terminated after this many connections from clients.
wd_lifecheck_password | ***** | Password for watchdog user in lifecheck.
wd_escalation_command | /etc/pgpool-II/escalation.sh | Command to execute when watchdog node becomes cluster leader node.
wd_heartbeat_deadtime | 30s | Deadtime interval in seconds for heartbeat signal.
relcache_query_target | primary | Target node to send relache queries.
reserved_connections | 0 | Number of reserved connections.
pcp_listen_addresses | * | hostname(s) or IP address(es) on which pcp will listen on.
memqcache_total_size | 64MB | Total memory size in bytes for storing memory cache.
detach_false_primary | off | Automatically detaches false primary node.
connection_life_time | 2min | Cached connections expiration time in seconds.
check_unlogged_table | on | Enables unlogged table check.
memory_cache_enabled | off | Enables the memory cache functionality.
write_function_list | | list of functions that writes to database.
log_client_messages | off | Logs any client messages in the pgpool logs.
wd_lifecheck_dbname | template1 | Database name to be used for by watchdog lifecheck.
log_error_verbosity | verbose | How much details about error should be emitted.
client_min_messages | notice | Which messages should be sent to client.
wd_lifecheck_method | heartbeat | method for watchdog lifecheck.
memqcache_maxcache | 400kB | Maximum SELECT result size in bytes.
ssl_dh_params_file | | Path to the Diffie-Hellman parameters contained file
allow_sql_comments | off | Ignore SQL comments, while judging if load balance or query cache is possible.
min_spare_children | 5 | Minimum number of spare child processes.
max_spare_children | 10 | Maximum number of spare child processes.
wd_lifecheck_query | SELECT 1 | SQL query to be used by watchdog lifecheck.
log_disconnections | on | Logs end of a session.
client_idle_limit | 0 | idle time in seconds to disconnects a client.
load_balance_mode | on | Enables load balancing of queries.
health_check_test | off | If on, enable health check testing.
logging_collector | on | Enable capturing of stderr into log files.
num_init_children | 100 | Maximim number of child processs to handle client connections.
log_standby_delay | if_over_threshold | When to log standby delay.
unix_socket_group | | The owning user of the sockets that always starts the server.
wd_ipc_socket_dir | /tmp | The directory to create the UNIX domain socket for accepting pgpool-II watchdog IPC connections.
wd_lifecheck_user | nobody | User name to be used for by watchdog lifecheck.
sr_check_password | ***** | The password for user to perform streaming replication delay check.
sr_check_database | postgres | The database name to perform streaming replication delay check.
log_rotation_size | 10MB | Automatic rotation of logfiles will happen after that much (kilobytes) log output.
recovery_password | ***** | Password for online recovery.
failback_command | | Command to execute when backend node is attached.
serialize_accept | off | whether to serialize accept() call to avoid thundering herd problem
failover_command | /etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S | Command to execute when backend node is detached.
log_min_messages | warning | Which messages should be emitted to server log.
recovery_timeout | 90s | Maximum time in seconds to wait for the recovering PostgreSQL node.
replicate_select | off | Replicate SELECT statements when load balancing is disabled.
memqcache_method | shmem | Cache store method. either shmem(shared memory) or Memcached. shmem by default.
log_rotation_age | 1d | Automatic rotation of logfiles will happen after that (minutes) time.
memqcache_oiddir | /var/log/pgpool/oiddir | Temporary directory to record table oids.
check_temp_table | catalog | Enables temporary table check.
reset_query_list | ABORT; DISCARD ALL | list of commands sent to reset the backend connection when user session exits.
listen_addresses | * | hostname(s) or IP address(es) on which pgpool will listen on.
replication_mode | off | Enables replication mode.
connection_cache | on | Caches connections to backends.
memqcache_expire | 0 | Memory cache entry life time specified in seconds.
child_life_time | 2min | pgpool-II child process life time in seconds.
trusted_servers | | List of servers to verify connectivity.
log_connections | on | Logs each successful connection.
sr_check_period | 5s | Time interval in seconds between the streaming replication delay checks.
syslog_facility | LOCAL0 | syslog local facility.
delay_threshold | 0 | standby delay threshold in bytes.
relcache_expire | 0 | Relation cache expiration time in seconds.
log_destination | stderr | destination of pgpool-II log
ssl_ca_cert_dir | | Directory containing CA root certificate(s).
log_line_prefix | %t: pid %p: | printf-style string to output at beginning of each log line.
lobj_lock_table | | Table name used for large object replication control.
enable_pool_hba | on | Use pool_hba.conf for client authentication.
ssl_ecdh_curve | prime256v1 | The curve to use in ECDH key exchange.
pcp_socket_dir | /run/pgpool | The directory to create the UNIX domain socket for accepting pgpool-II PCP connections.
log_file_mode | 384 | creation mode for log files.
wd_life_point | 3 | Maximum number of retries before failing the life check.
pid_file_name | /run/pgpool/pgpool.pid | Path to store pgpool-II pid file.
recovery_user | postgres | User name for online recovery.
log_statement | off | Logs all statements in the pgpool logs.
sr_check_user | usr_pg_pool | The User name to perform streaming replication delay check.
log_directory | /var/log/pgpool-II | directory where log files are written.
auto_failback | off | Enables nodes automatically reattach, when detached node continue streaming replication.
relcache_size | 256 | Number of relation cache entry.
ssl_crl_file | | SSL certificate revocation list file
use_watchdog | on | Enables the pgpool-II watchdog.
syslog_ident | pgpool | syslog program ident string.
log_filename | pgpool-%Y-%m-%d_%H%M%S.log | log file name pattern.
log_hostname | off | Logs the host name in the connection logs.
wd_interval | 10s | Time interval in seconds between life check.
insert_lock | on | Automatically locks table with INSERT to keep SERIAL data consistency
pool_passwd | ***** | File name of pool_passwd for md5 authentication.
ssl_ca_cert | | Single PEM format file containing CA root certificate(s).
delegate_ip | 10.23.18.118 | Delegate IP address to be used when pgpool node become a watchdog cluster leader.
ssl_ciphers | HIGH:MEDIUM:+3DES:!aNULL | Allowed SSL ciphers.
if_cmd_path | /usr/sbin/ | Path to interface command.
if_down_cmd | /usr/bin/sudo /usr/sbin/ip addr del $_IP_$/24 dev ens192 | Complete command to bring down virtual interface.
arping_path | /usr/sbin | path to arping command.
wd_priority | 7 | Watchdog node priority for leader election.
wd_authkey | | Authentication key to be used in watchdog communication.
arping_cmd | /usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I ens192 | arping command.
if_up_cmd | /usr/bin/sudo /usr/sbin/ip addr add $_IP_$/24 dev ens192 label ens192:0 | Complete command to bring UP virtual interface.
ping_path | /bin | path to ping command.
ssl_cert | | SSL public certificate file.
pcp_port | 9898 | tcp/IP port number on which pgpool PCP process will listen on.
max_pool | 2 | Maximum number of connection pools per child process.
ssl_key | | SSL private key file.
logdir | /var/log/pgpool-II/pgpool-status | PgPool status file logging directory.
port | 9999 | tcp/IP port number on which pgpool will listen on.
ssl | off | Enables SSL support for frontend and backend connections
(208 rows)
[TST][paqcxast01].root:~ # poweroff -ff
Powering off.
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2023-10-27 16:47 | jsoler | New Issue | |
| 2023-10-27 16:47 | jsoler | Tag Attached: consensus | |
| 2023-10-27 16:47 | jsoler | Tag Attached: vip | |
| 2023-10-27 16:47 | jsoler | Tag Attached: virtual ip | |
| 2023-10-27 16:47 | jsoler | Tag Attached: watchdog | |
| 2023-10-27 16:47 | jsoler | File Added: nodo3-pgpool-2023-10-26_113919.log | |
| 2023-10-27 16:47 | jsoler | File Added: nodo2-pgpool-2023-10-26_113913.log | |
| 2023-10-27 16:47 | jsoler | File Added: nodo1-pgpool-2023-10-26_113912.log | |
| 2023-10-27 16:47 | jsoler | File Added: salida-comandos-nodo2.txt | |
| 2023-10-27 16:47 | jsoler | File Added: salida-comandos-nodo1.txt | |
| 2023-11-14 15:21 | pengbo | Assigned To | => Muhammad Usama |
| 2023-11-14 15:21 | pengbo | Status | new => assigned |