2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: memory cache initialized
2023-10-26 11:39:13: pid 60567: DETAIL: memcache blocks :64
2023-10-26 11:39:13: pid 60567: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:13: pid 60567: LOG: allocating (138460248) bytes of shared memory segment
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:3024
2023-10-26 11:39:13: pid 60567: LOG: allocating shared memory segment of size: 138460248
2023-10-26 11:39:13: pid 60567: LOCATION: pool_shmem.c:61
2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-26 11:39:13: pid 60567: LOCATION: health_check.c:541
2023-10-26 11:39:13: pid 60567: LOG: memory cache initialized
2023-10-26 11:39:13: pid 60567: DETAIL: memcache blocks :64
2023-10-26 11:39:13: pid 60567: LOCATION: pool_memqcache.c:2061
2023-10-26 11:39:13: pid 60567: LOG: pool_discard_oid_maps: discarded memqcache oid maps
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:3108
2023-10-26 11:39:13: pid 60567: LOG: waiting for watchdog to initialize
2023-10-26 11:39:13: pid 60567: LOCATION: pgpool_main.c:428
2023-10-26 11:39:13: pid 60570: LOG: setting the local watchdog node name to "paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:772
2023-10-26 11:39:13: pid 60570: LOG: watchdog cluster is configured with 2 remote nodes
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:782
2023-10-26 11:39:13: pid 60570: LOG: watchdog remote node:0 on paqcxast01.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:799
2023-10-26 11:39:13: pid 60570: LOG: watchdog remote node:1 on paqcxast04.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:799
2023-10-26 11:39:13: pid 60570: LOG: interface monitoring is disabled in watchdog
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:668
2023-10-26 11:39:13: pid 60570: LOG: watchdog node state changed from [DEAD] to [LOADING]
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:13: pid 60570: LOG: new outbound connection to paqcxast01.aaa.es:9000
2023-10-26 11:39:13: pid 60570: LOCATION: watchdog.c:3484
2023-10-26 11:39:17: pid 60570: LOG: watchdog node state changed from [LOADING] to [INITIALIZING]
2023-10-26 11:39:17: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: our stand for coordinator request is rejected by node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:5925
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [STANDING FOR LEADER] to [PARTICIPATING IN ELECTION]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: watchdog node state changed from [PARTICIPATING IN ELECTION] to [INITIALIZING]
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:18: pid 60570: LOG: setting the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" as watchdog cluster leader
2023-10-26 11:39:18: pid 60570: LOCATION: watchdog.c:7966
2023-10-26 11:39:19: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:39:19: pid 60570: LOG: signal_user1_to_parent_with_reason(1)
2023-10-26 11:39:19: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:39:19: pid 60570: LOG: successfully joined the watchdog cluster as standby node
2023-10-26 11:39:19: pid 60570: DETAIL: our join coordinator request is accepted by cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:6887
2023-10-26 11:39:19: pid 60567: LOG: watchdog process is initialized
2023-10-26 11:39:19: pid 60567: DETAIL: watchdog messaging data version: 1.2
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:443
2023-10-26 11:39:19: pid 60570: LOG: signal_user1_to_parent_with_reason(3)
2023-10-26 11:39:19: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1422
2023-10-26 11:39:19: pid 60567: LOG: watchdog cluster now holds the quorum
2023-10-26 11:39:19: pid 60567: DETAIL: updating the state of quarantine backend nodes
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1429
2023-10-26 11:39:19: pid 60567: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1461
2023-10-26 11:39:19: pid 60567: LOG: we have joined the watchdog cluster as STANDBY node
2023-10-26 11:39:19: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:1468
2023-10-26 11:39:19: pid 60576: LOG: 3 watchdog nodes are configured for lifecheck
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:493
2023-10-26 11:39:19: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:2944
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:1 Name:"paqcxast02.aaa.es:9999 Linux paqcxast02.aaa.es"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast02.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:0 Name:"paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast01.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:2971
2023-10-26 11:39:19: pid 60576: LOG: watchdog nodes ID:2 Name:"Not_Set"
2023-10-26 11:39:19: pid 60576: DETAIL: Host:"paqcxast04.aaa.es" WD Port:9000 pgpool-II port:9999
2023-10-26 11:39:19: pid 60576: LOCATION: wd_lifecheck.c:501
2023-10-26 11:39:19: pid 60567: LOG: leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" returned status for 2 backend nodes
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3587
2023-10-26 11:39:19: pid 60567: LOG: backend:0 is set to UP status
2023-10-26 11:39:19: pid 60567: DETAIL: backend:0 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:19: pid 60567: LOG: backend:1 is set to UP status
2023-10-26 11:39:19: pid 60567: DETAIL: backend:1 is UP on cluster leader "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:3629
2023-10-26 11:39:19: pid 60567: LOG: unix_socket_directories[0]: /run/pgpool/.s.PGSQL.9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4823
2023-10-26 11:39:19: pid 60567: LOG: listen address[0]: *
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for 0.0.0.0:9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for :::9999
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: listen address[0]: *
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:4855
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for 0.0.0.0:9898
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60567: LOG: Setting up socket for :::9898
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:984
2023-10-26 11:39:19: pid 60683: LOG: PCP process: 60683 started
2023-10-26 11:39:19: pid 60683: LOCATION: pcp_child.c:160
2023-10-26 11:39:19: pid 60684: LOG: process started
2023-10-26 11:39:19: pid 60684: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60685: LOG: process started
2023-10-26 11:39:19: pid 60685: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60686: LOG: process started
2023-10-26 11:39:19: pid 60686: LOCATION: pgpool_main.c:890
2023-10-26 11:39:19: pid 60567: LOG: pgpool-II successfully started. version 4.4.4 (nurikoboshi)
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:647
2023-10-26 11:39:19: pid 60567: LOG: node status[0]: 0
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:658
2023-10-26 11:39:19: pid 60567: LOG: node status[1]: 0
2023-10-26 11:39:19: pid 60567: LOCATION: pgpool_main.c:658
2023-10-26 11:39:19: pid 60570: LOG: new watchdog node connection is received from "10.151.18.84:576"
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:3405
2023-10-26 11:39:19: pid 60570: LOG: new node joined the cluster hostname:"paqcxast04.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:19: pid 60570: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:19: pid 60570: LOCATION: watchdog.c:1663
2023-10-26 11:39:20: pid 60577: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60577: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60578: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60578: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60577: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:20: pid 60577: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60577: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:20: pid 60578: LOG: creating socket for sending heartbeat
2023-10-26 11:39:20: pid 60578: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60578: LOCATION: wd_heartbeat.c:148
2023-10-26 11:39:20: pid 60579: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60579: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60579: LOG: creating watchdog heartbeat receive socket.
2023-10-26 11:39:20: pid 60579: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60579: LOCATION: wd_heartbeat.c:231
2023-10-26 11:39:20: pid 60580: LOG: set SO_REUSEPORT option to the socket
2023-10-26 11:39:20: pid 60580: LOCATION: wd_heartbeat.c:691
2023-10-26 11:39:20: pid 60580: LOG: creating socket for sending heartbeat
2023-10-26 11:39:20: pid 60580: DETAIL: set SO_REUSEPORT
2023-10-26 11:39:20: pid 60580: LOCATION: wd_heartbeat.c:148
2023-10-26 11:39:23: pid 60570: LOG: new watchdog node connection is received from "10.23.18.111:55435"
2023-10-26 11:39:23: pid 60570: LOCATION: watchdog.c:3405
2023-10-26 11:39:23: pid 60570: LOG: new node joined the cluster hostname:"paqcxast01.aaa.es" port:9000 pgpool_port:9999
2023-10-26 11:39:23: pid 60570: DETAIL: Pgpool-II version:"4.4.4" watchdog messaging version: 1.2
2023-10-26 11:39:23: pid 60570: LOCATION: watchdog.c:1663
2023-10-26 11:39:24: pid 60570: LOG: new outbound connection to paqcxast04.aaa.es:9000
2023-10-26 11:39:24: pid 60570: LOCATION: watchdog.c:3484
2023-10-26 11:39:29: pid 60570: LOG: We are connected to leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" and another node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is trying to become a leader
2023-10-26 11:39:29: pid 60570: LOCATION: watchdog.c:6990
2023-10-26 11:40:59: pid 60576: LOG: watchdog: lifecheck started
2023-10-26 11:40:59: pid 60576: LOCATION: wd_lifecheck.c:431
2023-10-26 11:49:33: pid 60655: LOG: new connection received
2023-10-26 11:49:33: pid 60655: DETAIL: connecting host=127.0.0.1 port=31390
2023-10-26 11:49:33: pid 60655: LOCATION: child.c:1873
2023-10-26 11:49:35: pid 60655: LOG: frontend disconnection: session time: 0:00:02.308 user=usr_pg_pool database=postgres host=127.0.0.1 port=31390
2023-10-26 11:49:35: pid 60655: LOCATION: child.c:2089
2023-10-26 11:49:44: pid 60683: LOG: forked new pcp worker, pid=63280 socket=7
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:308
2023-10-26 11:49:44: pid 60683: LOG: PCP process with pid: 63280 exit with SUCCESS.
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:364
2023-10-26 11:49:44: pid 60683: LOG: PCP process with pid: 63280 exits with status 0
2023-10-26 11:49:44: pid 60683: LOCATION: pcp_child.c:378
2023-10-26 11:50:11: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:11: pid 60684: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:13: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:50:13: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:50:13: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:50:13: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:50:13: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685]
2023-10-26 11:50:13: pid 60685: LOCATION: pool_internal_comms.c:147
2023-10-26 11:50:13: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:13: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:50:13: pid 60570: LOCATION: watchdog.c:2884
2023-10-26 11:50:16: pid 60684: ERROR: Failed to check replication time lag
2023-10-26 11:50:16: pid 60684: DETAIL: No persistent db connection for the node 0
2023-10-26 11:50:16: pid 60684: HINT: check sr_check_user and sr_check_password
2023-10-26 11:50:16: pid 60684: CONTEXT: while checking replication time lag
2023-10-26 11:50:16: pid 60684: LOCATION: pool_worker_child.c:390
2023-10-26 11:50:16: pid 60570: WARNING: we have not received a beacon message from leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:16: pid 60570: DETAIL: requesting info message from leader node
2023-10-26 11:50:16: pid 60570: LOCATION: watchdog.c:7079
2023-10-26 11:50:18: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is not replying..
2023-10-26 11:50:18: pid 60570: DETAIL: marking the node as lost
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:4808
2023-10-26 11:50:18: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:5450
2023-10-26 11:50:18: pid 60570: LOG: watchdog cluster has lost the coordinator node
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:5457
2023-10-26 11:50:18: pid 60570: LOG: removing the remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" from watchdog cluster leader
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7961
2023-10-26 11:50:18: pid 60570: LOG: We have lost the cluster leader node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es"
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:6948
2023-10-26 11:50:18: pid 60570: LOG: watchdog node state changed from [STANDBY] to [JOINING]
2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7227
2023-10-26 11:50:18: pid 60685: LOG: degenerate backend request for 1 node(s) from pid [60685] is canceled by other pgpool
2023-10-26 11:50:18: pid 60685: LOCATION: pool_internal_comms.c:221
2023-10-26 11:50:18: pid 60570: LOG: watchdog node state changed from [JOINING] to [INITIALIZING] 2023-10-26 11:50:18: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:19: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER] 2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:19: pid 60570: LOG: our stand for coordinator request is rejected by node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:19: pid 60570: DETAIL: we might be in partial network isolation and cluster already have a valid leader 2023-10-26 11:50:19: pid 60570: HINT: please verify the watchdog life-check and network is working properly 2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:5919 2023-10-26 11:50:19: pid 60570: LOG: watchdog node state changed from [STANDING FOR LEADER] to [NETWORK ISOLATION] 2023-10-26 11:50:19: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:25: pid 60570: LOG: setting the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" as watchdog cluster leader 2023-10-26 11:50:25: pid 60570: LOCATION: watchdog.c:7966 2023-10-26 11:50:26: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:50:26: pid 60684: LOCATION: pool_connection_pool.c:661 2023-10-26 11:50:29: pid 60570: LOG: trying again to join the cluster 2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:6514 2023-10-26 11:50:29: pid 60570: LOG: watchdog node state changed from [NETWORK ISOLATION] to [JOINING] 2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:29: pid 60570: LOG: removing the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" from watchdog cluster leader 2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7961 2023-10-26 11:50:29: pid 60570: LOG: setting the remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" as watchdog cluster leader 2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7966 2023-10-26 
11:50:29: pid 60570: LOG: watchdog node state changed from [JOINING] to [INITIALIZING] 2023-10-26 11:50:29: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:30: pid 60570: LOG: watchdog node state changed from [INITIALIZING] to [STANDBY] 2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:7227 2023-10-26 11:50:30: pid 60570: LOG: signal_user1_to_parent_with_reason(1) 2023-10-26 11:50:30: pid 60570: LOCATION: pgpool_main.c:773 2023-10-26 11:50:30: pid 60570: LOG: successfully joined the watchdog cluster as standby node 2023-10-26 11:50:30: pid 60570: DETAIL: our join coordinator request is accepted by cluster leader node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:6887 2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received watchdog state change signal from watchdog 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1461 2023-10-26 11:50:30: pid 60570: LOG: signal_user1_to_parent_with_reason(3) 2023-10-26 11:50:30: pid 60570: LOCATION: pgpool_main.c:773 2023-10-26 11:50:30: pid 60567: LOG: we have joined the watchdog cluster as STANDBY node 2023-10-26 11:50:30: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1468 2023-10-26 11:50:30: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface 2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:2944 2023-10-26 11:50:30: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:30: pid 60570: DETAIL: waiting for the reply... 
2023-10-26 11:50:30: pid 60570: LOCATION: watchdog.c:2971 2023-10-26 11:50:30: pid 60567: LOG: leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" returned status for 2 backend nodes 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3587 2023-10-26 11:50:30: pid 60567: LOG: backend:0 is set to UP status 2023-10-26 11:50:30: pid 60567: DETAIL: backend:0 is UP on cluster leader "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3629 2023-10-26 11:50:30: pid 60567: LOG: backend:1 is set to UP status 2023-10-26 11:50:30: pid 60567: DETAIL: backend:1 is UP on cluster leader "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3629 2023-10-26 11:50:30: pid 60567: LOG: backend nodes status remains same after the sync from "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:3695 2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:50:30: pid 60567: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1422 2023-10-26 11:50:30: pid 60567: LOG: watchdog cluster now holds the quorum 2023-10-26 11:50:30: pid 60567: DETAIL: updating the state of quarantine backend nodes 2023-10-26 11:50:30: pid 60567: LOCATION: pgpool_main.c:1429 2023-10-26 11:50:31: pid 60684: ERROR: Failed to check replication time lag 2023-10-26 11:50:31: pid 60684: DETAIL: No persistent db connection for the node 0 2023-10-26 11:50:31: pid 60684: HINT: check sr_check_user and sr_check_password 2023-10-26 11:50:31: pid 60684: CONTEXT: while checking replication time lag 2023-10-26 11:50:31: pid 60684: LOCATION: pool_worker_child.c:390 2023-10-26 11:50:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed 
out 2023-10-26 11:50:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:50:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:50:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:50:33: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685] 2023-10-26 11:50:33: pid 60685: LOCATION: pool_internal_comms.c:147 2023-10-26 11:50:33: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:33: pid 60570: DETAIL: waiting for the reply... 2023-10-26 11:50:33: pid 60570: LOCATION: watchdog.c:2884 2023-10-26 11:50:33: pid 60570: LOG: remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is asking to inform about quarantined backend nodes 2023-10-26 11:50:33: pid 60570: LOCATION: watchdog.c:4202 2023-10-26 11:50:33: pid 60570: LOG: signal_user1_to_parent_with_reason(4) 2023-10-26 11:50:33: pid 60570: LOCATION: pgpool_main.c:773 2023-10-26 11:50:33: pid 60685: LOG: degenerate backend request for node_id: 0 from pid [60685], will be handled by watchdog, which is building consensus for request 2023-10-26 11:50:33: pid 60685: LOCATION: pool_internal_comms.c:208 2023-10-26 11:50:33: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:50:33: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:50:33: pid 60567: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog 2023-10-26 11:50:33: pid 60567: LOCATION: pgpool_main.c:1437 2023-10-26 11:50:39: pid 60576: LOG: informing the node status change to watchdog 2023-10-26 11:50:39: pid 60576: DETAIL: node id :0 status = "NODE DEAD" message:"No heartbeat signal from node" 2023-10-26 11:50:39: pid 60576: LOCATION: wd_lifecheck.c:529 2023-10-26 11:50:39: pid 60570: LOG: received node status change ipc message 2023-10-26 11:50:39: pid 60570: DETAIL: No heartbeat signal from node 
2023-10-26 11:50:39: pid 60570: LOCATION: watchdog.c:2274 2023-10-26 11:50:39: pid 60570: LOG: remote node "paqcxast01.aaa.es:9999 Linux paqcxast01.aaa.es" is lost 2023-10-26 11:50:39: pid 60570: LOCATION: watchdog.c:5450 2023-10-26 11:50:41: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:50:41: pid 60684: LOCATION: pool_connection_pool.c:661 2023-10-26 11:50:46: pid 60684: ERROR: Failed to check replication time lag 2023-10-26 11:50:46: pid 60684: DETAIL: No persistent db connection for the node 0 2023-10-26 11:50:46: pid 60684: HINT: check sr_check_user and sr_check_password 2023-10-26 11:50:46: pid 60684: CONTEXT: while checking replication time lag 2023-10-26 11:50:46: pid 60684: LOCATION: pool_worker_child.c:390 2023-10-26 11:50:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:50:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:50:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:50:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:50:48: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685] 2023-10-26 11:50:48: pid 60685: LOCATION: pool_internal_comms.c:147 2023-10-26 11:50:48: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:50:48: pid 60570: DETAIL: waiting for the reply... 
2023-10-26 11:50:48: pid 60570: LOCATION: watchdog.c:2884 2023-10-26 11:50:48: pid 60570: LOG: remote node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" is asking to inform about quarantined backend nodes 2023-10-26 11:50:48: pid 60570: LOCATION: watchdog.c:4202 2023-10-26 11:50:48: pid 60570: LOG: signal_user1_to_parent_with_reason(4) 2023-10-26 11:50:48: pid 60570: LOCATION: pgpool_main.c:773 2023-10-26 11:50:48: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:50:48: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:50:48: pid 60567: LOG: Pgpool-II parent process received inform quarantine nodes signal from watchdog 2023-10-26 11:50:48: pid 60567: LOCATION: pgpool_main.c:1437 2023-10-26 11:50:48: pid 60685: LOG: degenerate backend request for node_id: 0 from pid [60685], will be handled by watchdog, which is building consensus for request 2023-10-26 11:50:48: pid 60685: LOCATION: pool_internal_comms.c:208 2023-10-26 11:50:56: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:50:56: pid 60684: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:01: pid 60684: ERROR: Failed to check replication time lag 2023-10-26 11:51:01: pid 60684: DETAIL: No persistent db connection for the node 0 2023-10-26 11:51:01: pid 60684: HINT: check sr_check_user and sr_check_password 2023-10-26 11:51:01: pid 60684: CONTEXT: while checking replication time lag 2023-10-26 11:51:01: pid 60684: LOCATION: pool_worker_child.c:390 2023-10-26 11:51:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:03: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:03: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:03: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:03: pid 60685: LOG: received degenerate backend request for node_id: 0 from pid [60685] 2023-10-26 11:51:03: pid 60685: LOCATION: 
pool_internal_comms.c:147 2023-10-26 11:51:03: pid 60570: LOG: failover request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" 2023-10-26 11:51:03: pid 60570: DETAIL: waiting for the reply... 2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:2884 2023-10-26 11:51:03: pid 60685: LOG: degenerate backend request for 1 node(s) from pid [60685], is changed to quarantine node request by watchdog 2023-10-26 11:51:03: pid 60685: DETAIL: watchdog is taking time to build consensus 2023-10-26 11:51:03: pid 60685: LOCATION: pool_internal_comms.c:201 2023-10-26 11:51:03: pid 60685: LOG: signal_user1_to_parent_with_reason(0) 2023-10-26 11:51:03: pid 60685: LOCATION: pgpool_main.c:773 2023-10-26 11:51:03: pid 60567: LOG: Pgpool-II parent process received SIGUSR1 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:1417 2023-10-26 11:51:03: pid 60567: LOG: Pgpool-II parent process has received failover request 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:1482 2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface 2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3003 2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface, but only leader can do failover 2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3060 2023-10-26 11:51:03: pid 60567: LOG: === Starting quarantine. 
shutdown host paqcxast01.aaa.es(5432) === 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4205 2023-10-26 11:51:03: pid 60567: LOG: Restart all children 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4368 2023-10-26 11:51:03: pid 60567: LOG: failover: set new primary node: -1 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4595 2023-10-26 11:51:03: pid 60567: LOG: failover: set new main node: 1 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4602 2023-10-26 11:51:03: pid 60684: LOG: connect_inet_domain_socket: select() interrupted by certain signal. retrying... 2023-10-26 11:51:03: pid 60684: LOCATION: pool_connection_pool.c:726 2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface 2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3003 2023-10-26 11:51:03: pid 60570: LOG: received the failover indication from Pgpool-II on IPC interface, but only leader can do failover 2023-10-26 11:51:03: pid 60570: LOCATION: watchdog.c:3060 2023-10-26 11:51:03: pid 60567: LOG: === Quarantine done. 
shutdown host paqcxast01.aaa.es(5432) === 2023-10-26 11:51:03: pid 60567: LOCATION: pgpool_main.c:4740 2023-10-26 11:51:04: pid 60683: LOG: restart request received in pcp child process 2023-10-26 11:51:04: pid 60683: LOCATION: pcp_child.c:167 2023-10-26 11:51:04: pid 60567: LOG: PCP child 60683 exits with status 0 in failover() 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:4785 2023-10-26 11:51:04: pid 60567: LOG: fork a new PCP child pid 63689 in failover() 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:4789 2023-10-26 11:51:04: pid 60567: LOG: reaper handler 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:1830 2023-10-26 11:51:04: pid 60567: LOG: reaper handler: exiting normally 2023-10-26 11:51:04: pid 60567: LOCATION: pgpool_main.c:2050 2023-10-26 11:51:04: pid 63689: LOG: PCP process: 63689 started 2023-10-26 11:51:04: pid 63689: LOCATION: pcp_child.c:160 2023-10-26 11:51:13: pid 60684: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:13: pid 60684: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:18: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:18: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:18: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:18: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:51:18: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:51:18: pid 60684: LOG: worker process received restart request 2023-10-26 11:51:18: pid 60684: LOCATION: pool_worker_child.c:167 2023-10-26 11:51:18: pid 60567: LOG: reaper handler 2023-10-26 11:51:18: pid 60567: LOCATION: pgpool_main.c:1830 2023-10-26 11:51:18: pid 60567: LOG: reaper handler: exiting normally 2023-10-26 11:51:18: pid 60567: LOCATION: pgpool_main.c:2050 2023-10-26 11:51:18: pid 63712: LOG: process started 2023-10-26 11:51:18: pid 63712: LOCATION: pgpool_main.c:890 2023-10-26 11:51:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:33: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:33: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:33: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:33: pid 60685: DETAIL: ignoring.. 2023-10-26 11:51:33: pid 60685: LOCATION: health_check.c:224 2023-10-26 11:51:41: pid 63689: LOG: forked new pcp worker, pid=63854 socket=7 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:308 2023-10-26 11:51:41: pid 63689: LOG: PCP process with pid: 63854 exit with SUCCESS. 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:364 2023-10-26 11:51:41: pid 63689: LOG: PCP process with pid: 63854 exits with status 0 2023-10-26 11:51:41: pid 63689: LOCATION: pcp_child.c:378 2023-10-26 11:51:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out 2023-10-26 11:51:48: pid 60685: LOCATION: pool_connection_pool.c:661 2023-10-26 11:51:48: pid 60685: LOG: health check failed on node 0 (timeout:0) 2023-10-26 11:51:48: pid 60685: LOCATION: health_check.c:218 2023-10-26 11:51:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0) 2023-10-26 11:51:48: pid 60685: DETAIL: ignoring.. 
2023-10-26 11:51:48: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:52:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:52:03: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:52:03: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:52:03: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:52:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:52:03: pid 60685: DETAIL: ignoring..
2023-10-26 11:52:03: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:52:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:52:18: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:52:18: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:52:18: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:52:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:52:18: pid 60685: DETAIL: ignoring..
2023-10-26 11:52:18: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:52:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:52:33: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:52:33: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:52:33: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:52:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:52:33: pid 60685: DETAIL: ignoring..
2023-10-26 11:52:33: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:52:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:52:48: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:52:48: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:52:48: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:52:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:52:48: pid 60685: DETAIL: ignoring..
2023-10-26 11:52:48: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:53:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:53:03: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:53:03: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:53:03: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:53:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:53:03: pid 60685: DETAIL: ignoring..
2023-10-26 11:53:03: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:53:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:53:18: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:53:18: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:53:18: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:53:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:53:18: pid 60685: DETAIL: ignoring..
2023-10-26 11:53:18: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:53:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:53:33: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:53:33: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:53:33: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:53:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:53:33: pid 60685: DETAIL: ignoring..
2023-10-26 11:53:33: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:53:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:53:48: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:53:48: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:53:48: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:53:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:53:48: pid 60685: DETAIL: ignoring..
2023-10-26 11:53:48: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:54:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:54:03: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:54:03: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:54:03: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:54:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:54:03: pid 60685: DETAIL: ignoring..
2023-10-26 11:54:03: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:54:05: pid 63689: LOG: forked new pcp worker, pid=64533 socket=7
2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:308
2023-10-26 11:54:05: pid 63689: LOG: PCP process with pid: 64533 exit with SUCCESS.
2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:364
2023-10-26 11:54:05: pid 63689: LOG: PCP process with pid: 64533 exits with status 0
2023-10-26 11:54:05: pid 63689: LOCATION: pcp_child.c:378
2023-10-26 11:54:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:54:18: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:54:18: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:54:18: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:54:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:54:18: pid 60685: DETAIL: ignoring..
2023-10-26 11:54:18: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:54:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:54:33: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:54:33: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:54:33: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:54:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:54:33: pid 60685: DETAIL: ignoring..
2023-10-26 11:54:33: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:54:48: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:54:48: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:54:48: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:54:48: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:54:48: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:54:48: pid 60685: DETAIL: ignoring..
2023-10-26 11:54:48: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:55:03: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:55:03: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:55:03: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:55:03: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:55:03: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:55:03: pid 60685: DETAIL: ignoring..
2023-10-26 11:55:03: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:55:18: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:55:18: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:55:18: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:55:18: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:55:18: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:55:18: pid 60685: DETAIL: ignoring..
2023-10-26 11:55:18: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:55:33: pid 60685: LOG: failed to connect to PostgreSQL server on "paqcxast01.aaa.es:5432", timed out
2023-10-26 11:55:33: pid 60685: LOCATION: pool_connection_pool.c:661
2023-10-26 11:55:33: pid 60685: LOG: health check failed on node 0 (timeout:0)
2023-10-26 11:55:33: pid 60685: LOCATION: health_check.c:218
2023-10-26 11:55:33: pid 60685: LOG: health check failed on quarantine node 0 (timeout:0)
2023-10-26 11:55:33: pid 60685: DETAIL: ignoring..
2023-10-26 11:55:33: pid 60685: LOCATION: health_check.c:224
2023-10-26 11:55:35: pid 60570: LOG: signal_user1_to_parent_with_reason(2)
2023-10-26 11:55:35: pid 60570: LOCATION: pgpool_main.c:773
2023-10-26 11:55:35: pid 60567: LOG: Pgpool-II parent process received SIGUSR1
2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1417
2023-10-26 11:55:35: pid 60567: LOG: Pgpool-II parent process received sync backend signal from watchdog
2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1446
2023-10-26 11:55:35: pid 60567: LOG: leader watchdog has performed failover
2023-10-26 11:55:35: pid 60567: DETAIL: syncing the backend states from the LEADER watchdog node
2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:1453
2023-10-26 11:55:35: pid 60570: LOG: received the get data request from local pgpool-II on IPC interface
2023-10-26 11:55:35: pid 60570: LOCATION: watchdog.c:2944
2023-10-26 11:55:35: pid 60570: LOG: get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:55:35: pid 60570: DETAIL: waiting for the reply...
2023-10-26 11:55:35: pid 60570: LOCATION: watchdog.c:2971
2023-10-26 11:55:35: pid 60567: LOG: leader watchdog node "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es" returned status for 2 backend nodes
2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:3587
2023-10-26 11:55:35: pid 60567: LOG: backend nodes status remains same after the sync from "paqcxast04.aaa.es:9999 Linux paqcxast04.aaa.es"
2023-10-26 11:55:35: pid 60567: LOCATION: pgpool_main.c:3695
2023-10-26 12:03:02: pid 63689: LOG: forked new pcp worker, pid=66713 socket=7
2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:308
2023-10-26 12:03:02: pid 63689: LOG: PCP process with pid: 66713 exit with SUCCESS.
2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:364
2023-10-26 12:03:02: pid 63689: LOG: PCP process with pid: 66713 exits with status 0
2023-10-26 12:03:02: pid 63689: LOCATION: pcp_child.c:378