View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000451 | Pgpool-II | Bug | public | 2018-12-24 11:31 | 2018-12-26 17:54 |
| Reporter | ruicao | Assigned To | pengbo | ||
| Priority | normal | Severity | minor | Reproducibility | always |
| Status | closed | Resolution | open | ||
| Product Version | 3.6.0 | ||||
| Summary | 0000451: pgpool can only detect backend_hostname0 after failover switch | ||||
**Description**

Hello, I am new to Pgpool-II and wanted to test its failover handling. Initially, 172.31.19.222 hosted the PostgreSQL primary and Pgpool-II, and 172.31.23.220 hosted the standby. Everything went well: Pgpool-II detected both servers as alive. When I stopped the primary, the failover configuration in pgpool.conf caused the trigger file to be created on the standby, and the standby was promoted. I then used pg_rewind to recover the old primary, which became the new standby. That is when the problem appeared:

```
postgres=# show pool_nodes;
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
       0 | 172.31.19.222 | 5432 | up     | 0.500000  | standby |          0 | true              | 0
       1 | 172.31.23.220 | 5432 | down   | 0.500000  | standby |          0 | false             | 0
(2 rows)
```

But in fact the old primary was running fine as a standby:

```
[postgres@ip-172-31-19-222 ~]$ ps -ef | grep post
root       938     1  0 Dec21 ?  00:00:00 /usr/libexec/postfix/master -w
postfix    940   938  0 Dec21 ?  00:00:00 qmgr -l -t unix -u
postgres  1388     1  0 Dec21 ?  00:00:00 /usr/local/pgsql/bin/postgres -D /data/data
postgres  1389  1388  0 Dec21 ?  00:00:00 postgres: logger
postgres  1390  1388  0 Dec21 ?  00:00:02 postgres: startup   recovering 00000002000000000000001F
postgres  1391  1388  0 Dec21 ?  00:00:01 postgres: checkpointer
postgres  1392  1388  0 Dec21 ?  00:00:02 postgres: background writer
postgres  1393  1388  0 Dec21 ?  00:00:00 postgres: stats collector
postgres  1632  1388  0 Dec21 ?  00:02:27 postgres: walreceiver   streaming 0/1F011E80
postfix   8709   938  0 01:32 ?  00:00:00 pickup -l -t unix -u
```

I restarted Pgpool-II many times, but it did not help, and I could not detach or attach the node reported as down. I then swapped backend_hostname0 and backend_hostname1 in pgpool.conf, and the result was strange:

```
postgres=# show pool_nodes;
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
       0 | 172.31.23.220 | 5432 | up     | 0.500000  | primary |          0 | true              | 0
       1 | 172.31.19.222 | 5432 | down   | 0.500000  | standby |          0 | false             | 0
```

This suggests that Pgpool-II only detects backend_hostname0. Is this a bug, or a mistake in my configuration?
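For reference, the recovery step the reporter describes can be sketched as follows. This is not the reporter's exact procedure: the connection parameters and recovery setup are assumptions, and only the data directory `/data/data` is taken from the ps output above.

```shell
# On the old primary (172.31.19.222), after the standby on 172.31.23.220
# has been promoted: stop PostgreSQL, rewind the data directory against
# the new primary, then restart the node as a standby.
# Credentials and connection options are assumptions; adjust to your setup.
pg_ctl -D /data/data stop -m fast

pg_rewind --target-pgdata=/data/data \
          --source-server='host=172.31.23.220 port=5432 user=postgres dbname=postgres'

# Configure recovery (recovery.conf on PostgreSQL <= 11) so the rewound
# node streams from the new primary, then start it:
pg_ctl -D /data/data start
```

Note that pg_rewind only fixes the data directory of the old primary; as the discussion below shows, Pgpool-II's own view of the node status is a separate matter.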
**Tags:** failover
**Notes**
**pengbo** (2018-12-25): Could you share the Pgpool-II debug log? Start pgpool with the "-d" option to output a debug log.
**ruicao** (2018-12-25): Yes, I am sharing it now (see the attached pgpool.log).
**pengbo** (2018-12-26): Pgpool-II reads the status of the backend nodes from the pgpool_status file at startup. If you recover a backend node yourself, without using the pcp_recovery_node command, you have to attach the node to Pgpool-II afterwards; otherwise Pgpool-II will continue to regard the node as "down". To reset the status of the backend nodes, start Pgpool-II with the "-D" option, which tells it to ignore the pgpool_status file.
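A minimal sketch of the two remedies described above. The pcp port 9898, the pcp user "pgpool", and the node id are assumptions; adjust them to your environment and your pcp.conf credentials.

```shell
# Option 1: re-attach the recovered node so Pgpool-II marks it "up" again.
# Node id 1 corresponds to the node shown as "down" in show pool_nodes;
# port 9898 and user "pgpool" are assumptions.
pcp_attach_node -h localhost -p 9898 -U pgpool -n 1

# Option 2: discard the cached node status entirely. Stop Pgpool-II and
# restart it with -D, which makes it ignore the pgpool_status file.
pgpool -m fast stop
pgpool -D
```

Option 1 is the less disruptive of the two, since it acts on a single node while Pgpool-II keeps running.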
**ruicao** (2018-12-26): Thank you for your answer. You can close this issue.
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2018-12-24 11:31 | ruicao | New Issue | |
| 2018-12-24 11:31 | ruicao | Tag Attached: failover | |
| 2018-12-24 11:32 | ruicao | File Added: pgpool.conf | |
| 2018-12-25 13:55 | pengbo | Note Added: 0002307 | |
| 2018-12-25 15:08 | pengbo | Note Edited: 0002307 | |
| 2018-12-25 18:18 | ruicao | File Added: pgpool.log | |
| 2018-12-25 18:18 | ruicao | Note Added: 0002308 | |
| 2018-12-26 11:40 | pengbo | Note Added: 0002309 | |
| 2018-12-26 12:47 | ruicao | Note Added: 0002310 | |
| 2018-12-26 17:54 | administrator | Assigned To | => pengbo |
| 2018-12-26 17:54 | administrator | Status | new => closed |