View Issue Details

ID:              0000417
Project:         Pgpool-II
Category:        Bug
View Status:     public
Last Update:     2018-08-27 10:31
Reporter:        akash
Assigned To:
Priority:        normal
Severity:        major
Reproducibility: always
Status:          closed
Resolution:      open
Product Version: 3.6.2
Summary:         0000417: show pool_nodes shows inconsistent roles (primary, standby) of PostgreSQL nodes
Description
We have the following setup:
    2 PostgreSQL VMs with version 9.6.7
    3 Pgpool VMs with version 3.6.2
    
mode: master-slave streaming replication.

node_id0: primary node
node_id1: slave node
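
For reference, the relevant Pgpool-II configuration for this mode looks roughly like the excerpt below; the values shown are assumptions based on our topology, not a copy of our pgpool.conf:

# pgpool.conf excerpt (illustrative values)
master_slave_mode = on             # master/slave mode
master_slave_sub_mode = 'stream'   # streaming replication sub-mode
sr_check_period = 10               # streaming replication check interval (sec)
backend_hostname0 = '10.11.32.208' # node_id0
backend_port0 = 5432
backend_weight0 = 0.5
backend_hostname1 = '10.11.32.209' # node_id1
backend_port1 = 5432
backend_weight1 = 0.5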

show pool_nodes output:
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0 | 10.11.32.208 | 5432 | up | 0.500000 | primary | 0 | true | 0
 1 | 10.11.32.209 | 5432 | up | 0.500000 | standby | 0 | false | 0


We have a bi-weekly maintenance window during which we update the PostgreSQL VMs. If the master PostgreSQL VM (node_id0) goes down during the update, we promote the slave node (node_id1) as the new master. This step is handled entirely on the PostgreSQL VMs; Pgpool-II has no involvement.
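
For context, the promotion happens roughly as in the sketch below, run directly on the PostgreSQL VMs. The data directory path and the recovery.conf contents are illustrative, not our exact procedure:

# On node_id1 (the slave being promoted), as the postgres user:
pg_ctl -D /var/lib/pgsql/9.6/data promote

# On node_id0 (the old master), restart it as a slave following the new
# master (in practice this may also require pg_rewind or a fresh base backup):
cat > /var/lib/pgsql/9.6/data/recovery.conf <<EOF
standby_mode = 'on'
primary_conninfo = 'host=10.11.32.209 port=5432 user=replicator'
EOF
pg_ctl -D /var/lib/pgsql/9.6/data restart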

Now, when we run select pg_is_in_recovery(), we get the following output:

node_id0 - t
node_id1 - f

That means node_id1 is the master and node_id0 is the slave, as expected. However, when we run show pool_nodes, we get the following output:

node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0 | 10.11.32.208 | 5432 | up | 0.500000 | primary | 0 | true | 0
 1 | 10.11.32.209 | 5432 | up | 0.500000 | standby | 0 | false | 0


Pgpool-II is not able to detect that the roles have changed. It does not update the node state and remains in the stale state unless we restart all 3 Pgpool-II instances.
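
The only workaround we have found is a full restart of every Pgpool-II instance, e.g. (assuming pgpool is started directly from the binary; a service-manager setup would use its own restart command):

# On each of the 3 Pgpool-II VMs:
pgpool -m fast stop
pgpool -n -D &    # -D (--discard-status) drops the pgpool_status file so
                  # backend status and roles are re-detected at startup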
Steps To Reproduce
1. Set node_id0 as primary and node_id1 as slave
2. Run "select pg_is_in_recovery()" on the PostgreSQL nodes
 On node_id0:
postgres=# SELECT pg_is_in_recovery();
pg_is_in_recovery
-------------------
 f
(1 row)
 
 On node_id1:

postgres=# SELECT pg_is_in_recovery();
pg_is_in_recovery
-------------------
 t
(1 row)

 On pgpool VMs
postgres=# SHOW pool_nodes;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0 | 10.11.32.208 | 5432 | up | 0.500000 | primary | 0 | true | 0
 1 | 10.11.32.209 | 5432 | up | 0.500000 | standby | 0 | false | 0

3. Promote node_id1 as the new master and restart node_id0 as a slave
4. Now run "select pg_is_in_recovery()" on the PostgreSQL nodes and "show pool_nodes;" on the Pgpool-II nodes
On node_id0:
postgres=# SELECT pg_is_in_recovery();
pg_is_in_recovery
-------------------
 t
(1 row)
 
 On node_id1:

postgres=# SELECT pg_is_in_recovery();
pg_is_in_recovery
-------------------
 f
(1 row)

 On pgpool VMs
postgres=# SHOW pool_nodes;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0 | 10.11.32.208 | 5432 | up | 0.500000 | primary | 0 | true | 0
 1 | 10.11.32.209 | 5432 | up | 0.500000 | standby | 0 | false | 0
Tags: No tags attached.

Activities

pengbo

2018-07-27 10:11

developer   ~0002126

If you execute promote and recovery directly, without using Pgpool-II's failover and online recovery, Pgpool-II will not detect the change of status.

Restart Pgpool-II to detect the correct status of the backend nodes, or promote and recover the backend nodes by using Pgpool-II's failover and online recovery functions.
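
For example, the role change can be driven through the PCP interface instead; in the sketch below the host, PCP port (9898), and user are placeholders:

# Trigger promotion through Pgpool-II so it updates its own view of the roles:
pcp_promote_node -h <pgpool_host> -p 9898 -U pcp_user -n 1

# Re-attach the old primary (node 0) as a standby via online recovery
# (runs the configured recovery scripts on the backend):
pcp_recovery_node -h <pgpool_host> -p 9898 -U pcp_user -n 0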

pengbo

2018-08-13 09:42

developer   ~0002149

If the issue is resolved, may I close this issue?

Issue History

Date Modified Username Field Change
2018-07-26 20:47 akash New Issue
2018-07-27 10:11 pengbo Note Added: 0002126
2018-08-13 09:42 pengbo Note Added: 0002149
2018-08-27 10:31 administrator Status new => closed