[pgpool-hackers: 4166] Re: Allow pgpool_setup to test sample scripts and config files included in RPMs
Tatsuo Ishii
ishii at sraoss.co.jp
Tue Jun 7 15:58:53 JST 2022
> Currently, pgpool_setup generates scripts (e.g. failover.sh, follow_primary.sh)
> to test pgpool.
>
> The attached patch adds a new option "-c" to src/test/regression/regress.sh
> and allows pgpool_setup to test sample scripts and config files included in RPMs.
> This patch also changes "backend_hostnameX = '/tmp'" to "backend_hostnameX = 'localhost'".
>
> In addition, for automated RPMs testing, the following repository
> has been updated to automatically create RPMs and use the created
> RPMs to run regression tests.
>
> https://github.com/pgpool/pgpool-II-buildfarm
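
For reference, the backend_hostnameX change described above would look like this in pgpool.conf (a sketch based on the patch description; node 0 shown as an example):

```ini
# Before: connect to the backend over a Unix-domain socket in /tmp
backend_hostname0 = '/tmp'

# After: connect over TCP, so the sample scripts/configs shipped in the
# RPMs can be exercised as-is
backend_hostname0 = 'localhost'
```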
Thank you for the patch. Unfortunately, after applying it, the
055.backend_all_down test consistently fails with a timeout
error. pgpool.log is attached. This is on the master branch.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
-------------- next part --------------
2022-06-07 14:23:24.994: main pid 645313: LOG: Backend status file /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/log/pgpool_status discarded
2022-06-07 14:23:24.994: main pid 645313: LOG: health_check_stats_shared_memory_size: requested size: 12288
2022-06-07 14:23:24.994: main pid 645313: LOG: memory cache initialized
2022-06-07 14:23:24.994: main pid 645313: DETAIL: memcache blocks :64
2022-06-07 14:23:24.994: main pid 645313: LOG: allocating (136981824) bytes of shared memory segment
2022-06-07 14:23:24.994: main pid 645313: LOG: allocating shared memory segment of size: 136981824
2022-06-07 14:23:25.070: main pid 645313: LOG: health_check_stats_shared_memory_size: requested size: 12288
2022-06-07 14:23:25.070: main pid 645313: LOG: health_check_stats_shared_memory_size: requested size: 12288
2022-06-07 14:23:25.070: main pid 645313: LOG: memory cache initialized
2022-06-07 14:23:25.070: main pid 645313: DETAIL: memcache blocks :64
2022-06-07 14:23:25.072: main pid 645313: LOG: pool_discard_oid_maps: discarded memqcache oid maps
2022-06-07 14:23:25.077: main pid 645313: LOG: listen address[0]: localhost
2022-06-07 14:23:25.077: main pid 645313: LOG: Setting up socket for 127.0.0.1:11000
2022-06-07 14:23:25.080: main pid 645313: LOG: find_primary_node_repeatedly: waiting for finding a primary node
2022-06-07 14:23:25.080: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:25.080: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:25.080: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:25.080: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:25.080: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:25.080: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
2022-06-07 14:23:26.082: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:26.082: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:26.082: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:26.082: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:26.082: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:26.082: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
2022-06-07 14:23:27.083: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:27.083: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:27.083: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:27.084: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:27.084: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:27.084: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
2022-06-07 14:23:28.085: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:28.085: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:28.085: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:28.085: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:28.085: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:28.085: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
2022-06-07 14:23:29.087: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:29.087: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:29.087: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:29.087: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:29.087: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:29.087: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 1
2022-06-07 14:23:30.088: main pid 645313: LOG: listen address[0]: localhost
2022-06-07 14:23:30.088: main pid 645313: LOG: Setting up socket for 127.0.0.1:11001
2022-06-07 14:23:30.089: pcp_main pid 645367: LOG: PCP process: 645367 started
2022-06-07 14:23:30.089: sr_check_worker pid 645368: LOG: process started
2022-06-07 14:23:30.090: health_check pid 645369: LOG: process started
2022-06-07 14:23:30.090: sr_check_worker pid 645368: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:30.090: sr_check_worker pid 645368: DETAIL: Operation now in progress
2022-06-07 14:23:30.090: health_check pid 645370: LOG: process started
2022-06-07 14:23:30.090: sr_check_worker pid 645368: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:30.090: sr_check_worker pid 645368: DETAIL: Operation now in progress
2022-06-07 14:23:30.090: health_check0 pid 645369: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:30.090: health_check0 pid 645369: DETAIL: Operation now in progress
2022-06-07 14:23:30.090: health_check0 pid 645369: LOG: health check retrying on DB node: 0 (round:1)
2022-06-07 14:23:30.090: health_check1 pid 645370: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:30.090: health_check1 pid 645370: DETAIL: Operation now in progress
2022-06-07 14:23:30.090: health_check1 pid 645370: LOG: health check retrying on DB node: 1 (round:1)
2022-06-07 14:23:30.097: main pid 645313: LOG: pgpool-II successfully started. version 4.4devel (nurikoboshi)
2022-06-07 14:23:30.097: main pid 645313: LOG: node status[0]: 0
2022-06-07 14:23:30.097: main pid 645313: LOG: node status[1]: 0
2022-06-07 14:23:31.091: health_check0 pid 645369: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:31.091: health_check0 pid 645369: DETAIL: Operation now in progress
2022-06-07 14:23:31.091: health_check0 pid 645369: LOG: health check retrying on DB node: 0 (round:2)
2022-06-07 14:23:31.091: health_check1 pid 645370: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:31.091: health_check1 pid 645370: DETAIL: Operation now in progress
2022-06-07 14:23:31.091: health_check1 pid 645370: LOG: health check retrying on DB node: 1 (round:2)
2022-06-07 14:23:32.091: health_check1 pid 645370: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:32.091: health_check1 pid 645370: DETAIL: Operation now in progress
2022-06-07 14:23:32.091: health_check0 pid 645369: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:32.091: health_check0 pid 645369: DETAIL: Operation now in progress
2022-06-07 14:23:32.091: health_check1 pid 645370: LOG: health check retrying on DB node: 1 (round:3)
2022-06-07 14:23:32.091: health_check0 pid 645369: LOG: health check retrying on DB node: 0 (round:3)
2022-06-07 14:23:33.092: health_check1 pid 645370: LOG: failed to connect to PostgreSQL server on "localhost:11003", getsockopt() failed
2022-06-07 14:23:33.092: health_check1 pid 645370: DETAIL: Operation now in progress
2022-06-07 14:23:33.092: health_check0 pid 645369: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:33.092: health_check0 pid 645369: DETAIL: Operation now in progress
2022-06-07 14:23:33.092: health_check1 pid 645370: LOG: health check failed on node 1 (timeout:0)
2022-06-07 14:23:33.092: health_check0 pid 645369: LOG: health check failed on node 0 (timeout:0)
2022-06-07 14:23:33.092: health_check0 pid 645369: LOG: received degenerate backend request for node_id: 0 from pid [645369]
2022-06-07 14:23:33.092: health_check1 pid 645370: LOG: received degenerate backend request for node_id: 1 from pid [645370]
2022-06-07 14:23:33.092: health_check1 pid 645370: LOG: signal_user1_to_parent_with_reason(0)
2022-06-07 14:23:33.092: health_check0 pid 645369: LOG: signal_user1_to_parent_with_reason(0)
2022-06-07 14:23:33.092: main pid 645313: LOG: Pgpool-II parent process received SIGUSR1
2022-06-07 14:23:33.092: main pid 645313: LOG: Pgpool-II parent process has received failover request
2022-06-07 14:23:33.092: main pid 645313: LOG: === Starting degeneration. shutdown host localhost(11003) ===
2022-06-07 14:23:33.124: main pid 645313: LOG: Do not restart children because we are switching over node id 1 host: localhost port: 11003 and we are in streaming replication mode
2022-06-07 14:23:33.124: main pid 645313: LOG: execute command: /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/etc/failover.sh 1 localhost 11003 /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/data1 0 localhost 0 -1 11002 /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/data0 "" ""
pg_ctl: PID file "/home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/data0/postmaster.pid" does not exist
Is server running?
2022-06-07 14:23:35.134: main pid 645313: LOG: find_primary_node_repeatedly: waiting for finding a primary node
2022-06-07 14:23:35.135: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:35.135: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:35.135: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:35.135: main pid 645313: LOG: reaper handler
2022-06-07 14:23:36.135: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:36.135: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:36.135: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:37.136: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:37.136: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:37.136: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:38.138: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:38.138: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:38.138: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:39.139: main pid 645313: LOG: failed to connect to PostgreSQL server on "localhost:11002", getsockopt() failed
2022-06-07 14:23:39.139: main pid 645313: DETAIL: Operation now in progress
2022-06-07 14:23:39.139: main pid 645313: LOG: find_primary_node: make_persistent_db_connection_noerror failed on node 0
2022-06-07 14:23:40.140: main pid 645313: LOG: failover: set new primary node: -1
2022-06-07 14:23:40.141: main pid 645313: LOG: failover: set new main node: 0
2022-06-07 14:23:40.141: main pid 645313: LOG: === Failover done. shutdown host localhost(11003) ===
2022-06-07 14:23:40.141: main pid 645313: LOG: === Starting degeneration. shutdown host localhost(11002) ===
2022-06-07 14:23:40.141: main pid 645313: WARNING: All the DB nodes are in down status and skip writing status file.
2022-06-07 14:23:40.141: sr_check_worker pid 645368: LOG: worker process received restart request
2022-06-07 14:23:40.141: main pid 645313: LOG: failover: no valid backend node found
2022-06-07 14:23:40.141: main pid 645313: LOG: Restart all children
2022-06-07 14:23:40.145: main pid 645313: LOG: execute command: /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/etc/failover.sh 0 localhost 11002 /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/data0 -1 "" 0 -1 "" "" "" ""
2022-06-07 14:23:40.148: main pid 645313: LOG: execute command: /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/etc/failover.sh 1 localhost 11003 /home/t-ishii/work/Pgpool-II/current/pgpool2/src/test/regression/tests/055.backend_all_down/testdir/data1 -1 "" 0 -1 "" "" "" ""
2022-06-07 14:23:40.150: main pid 645313: LOG: find_primary_node_repeatedly: all of the backends are down. Giving up finding primary node
2022-06-07 14:23:40.150: main pid 645313: LOG: failover: set new primary node: -1
2022-06-07 14:23:40.150: child pid 645413: LOG: failover or failback event detected
2022-06-07 14:23:40.150: child pid 645413: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645414: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645414: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645415: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645415: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645416: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645416: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645417: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645417: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645418: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645418: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645419: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645419: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645420: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645420: DETAIL: restarting myself
2022-06-07 14:23:40.151: child pid 645421: LOG: failover or failback event detected
2022-06-07 14:23:40.151: child pid 645421: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645422: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645422: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645423: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645423: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645424: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645424: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645425: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645425: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645426: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645426: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645427: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645427: DETAIL: restarting myself
2022-06-07 14:23:40.152: child pid 645428: LOG: failover or failback event detected
2022-06-07 14:23:40.152: child pid 645428: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645429: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645429: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645430: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645430: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645431: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645431: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645432: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645432: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645433: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645433: DETAIL: restarting myself
2022-06-07 14:23:40.153: child pid 645434: LOG: failover or failback event detected
2022-06-07 14:23:40.153: child pid 645434: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645435: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645435: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645436: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645436: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645437: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645437: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645438: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645438: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645439: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645439: DETAIL: restarting myself
2022-06-07 14:23:40.154: child pid 645440: LOG: failover or failback event detected
2022-06-07 14:23:40.154: child pid 645440: DETAIL: restarting myself
2022-06-07 14:23:40.155: child pid 645441: LOG: failover or failback event detected
2022-06-07 14:23:40.155: child pid 645441: DETAIL: restarting myself
2022-06-07 14:23:40.155: child pid 645442: LOG: failover or failback event detected
2022-06-07 14:23:40.155: child pid 645442: DETAIL: restarting myself
2022-06-07 14:23:40.155: main pid 645313: LOG: === Failover done. shutdown host localhost(11002) ===
2022-06-07 14:23:40.155: child pid 645443: LOG: failover or failback event detected
2022-06-07 14:23:40.155: child pid 645443: DETAIL: restarting myself
2022-06-07 14:23:40.155: child pid 645444: LOG: failover or failback event detected
2022-06-07 14:23:40.155: child pid 645444: DETAIL: restarting myself
2022-06-07 14:23:41.155: pcp_main pid 645367: LOG: restart request received in pcp child process
2022-06-07 14:23:41.156: main pid 645313: LOG: PCP child 645367 exits with status 0 in failover()
2022-06-07 14:23:41.157: main pid 645313: LOG: fork a new PCP child pid 645448 in failover()
2022-06-07 14:23:41.157: main pid 645313: LOG: reaper handler
2022-06-07 14:23:41.157: pcp_main pid 645448: LOG: PCP process: 645448 started
2022-06-07 14:23:41.158: sr_check_worker pid 645449: LOG: process started
2022-06-07 14:23:41.163: main pid 645313: LOG: reaper handler: exiting normally