View Issue Details

ID: 0000259
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2016-11-10 14:22
Reporter: nawazid
Assigned To: t-ishii
Priority: normal
Severity: major
Reproducibility: always
Status: closed
Resolution: open
Product Version: 3.5.4
Summary: 0000259: status of nodes in load balancing mode
Description

Hi Team,

Please feel free to downgrade the severity if you think this issue is not a "major" incident.

I have a setup of two Linux VMs on the same network, each with a PostgreSQL installation and a pgpool installation configured to run in load balancing mode. For clarity, I am naming the PostgreSQL and pgpool installations as shown below.

Master PostgreSQL: postgres_m
Master pgPool: pgpool_m (configured for postgres_m)

Slave PostgreSQL: postgres_s
Slave pgPool: pgpool_s (configured for postgres_s)


Version of pgpool-II is 3.5.4
Version of PostgreSQL is 9.5.2
Version of both the Linux VMs is CentOS Linux release 7.2.1511 (Core)

Here are the steps I followed which break pgpool.

1) Master and Slave databases are up along with their respective pgpools (pgpool_m and pgpool_s)

2) Shut down the slave DB while pgpool_s was still active; the pgpool_status file reflects the status accordingly.

3) Started the slave DB and made a connection via pgpool_s, but the status (in both pgpool_status and show pool_nodes) still does not reflect the actual state, even after a reload of pgpool_s.

4) Shut down both master and slave databases while their respective pgpools were still running; the true status is reflected in the pgpool_status file after a couple of seconds.

5) Restarted both master and slave databases and tried connecting via the pgpools, but it throws the error below on both master and slave database instances:

./psql -p 9999 postgres
psql: ERROR: pgpool is not accepting any new connections
DETAIL: all backend nodes are down, pgpool requires at least one valid node
HINT: repair the backend nodes and restart pgpool

6) Tried to resolve it with a reload of the pgpools on both nodes, to no avail.

7) Restarted the pgpools on both nodes; the true status is reflected in pgpool_status and connections go through as well.
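For anyone reproducing this, the state pgpool holds for each backend can be inspected as in the following sketch (port 9999 is taken from the report; the pgpool_status path depends on the logdir setting in pgpool.conf and is an assumption here):

```shell
# Ask pgpool how it currently sees each backend node
psql -p 9999 -c "show pool_nodes" postgres

# Inspect the on-disk status file; in pgpool-II 3.x it contains one
# line per backend: "up", "down" or "unused"
cat /var/log/pgpool/pgpool_status
```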



Now, my question is: why does pgpool not reflect the true state of the database instance? There is also the issue in step 5, where both database instances have been restarted but a connection via pgpool still does not succeed.

I made sure that replication was still intact during the whole process by creating a database on the master and confirming that it replicated to the slave.

I am sorry I could not attach the pgpool log, but I have attached the pgpool.conf configuration files of both nodes (pgpool_m.conf, pgpool_s.conf). Please let me know if you require any more details.

Thanks & Regards.
Steps To Reproduce: The same steps pasted above can be followed to reproduce the issue.
Tags: pgpool in load balancing mode

Activities

nawazid

2016-10-31 17:28

reporter  

pgpool_conf.zip (13,578 bytes)

t-ishii

2016-11-01 10:12

developer   ~0001144

It seems Pgpool-II works as expected. The points are:

- Once a PostgreSQL node goes down and Pgpool-II recognizes it,
  the node will not come back online unless it is manually
  attached with pcp_attach_node (or Pgpool-II is restarted with
  the -D option). Reloading Pgpool-II does not help.

- The reason is that automatically attaching a PostgreSQL node
  (making it online from Pgpool-II's point of view) may not be a
  safe operation. For example, the PostgreSQL node may not be
  following the streaming replication primary node; it may even
  be following a different primary node.
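The manual attach mentioned above could look like the following sketch (the pcp port 9898, the user name, and node id 0 are placeholders for this setup; pgpool-II 3.5 pcp commands take option-style arguments):

```shell
# Re-attach backend node 0 once replication has been verified
# (host, port, user and node id are placeholders for this setup)
pcp_attach_node -h localhost -p 9898 -U pgpool -n 0

# Or restart Pgpool-II with -D so the saved pgpool_status file is
# discarded and backend states are re-detected at startup
pgpool -m fast stop
pgpool -D
```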

nawazid

2016-11-07 12:48

reporter   ~0001153

Hi,

Thanks for the response. I have some more follow up questions.

1) "The reason is that automatically attaching a PostgreSQL node
  (making it online from Pgpool-II's point of view) may not be a
  safe operation. For example, the PostgreSQL node may not be
  following the streaming replication primary node; it may even
  be following a different primary node."

What if we have a mechanism that confirms replication is intact, and we only need pgpool to reflect the true status of the postgres node after it is manually restarted; is that not possible?

2) I would like more information about your explanation above. Do you mean that in a two-node master-slave configuration, with each node having its own pgpool, manually bringing the standby node back up will still not update its status in pgpool, so as not to show as online a standby node that MAY not be replicating from the master?

3) "Reloading Pgpool-II does not help": is this why pgpool did not show the true state of the standby node after a reload in step 3 above?

4) What is pcp_attach_node? Is it a parameter (I could not see it in pgpool.conf), or one of the background processes that starts when pgpool is started?

5) So when a PostgreSQL node goes down and is brought back online manually, pgpool should always be restarted (rather than reloaded) to pick up the true status of the node; is that correct?

6) Is there a particular order to follow when stopping and starting two PostgreSQL nodes (master-slave), each with its own pgpool, as in the configuration discussed above?

7) Could you please point me to documentation on the different pgpool modes, points to keep in mind when configuring each mode, expected behavior, etc.?
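As a side note on questions 3 and 5: in pgpool-II 3.x the pgpool_status file is plain text with one status word per backend, and pgpool reads it only at startup (unless -D discards it), which is consistent with the reload in step 3 having no effect. A minimal sketch with a fabricated two-node file (the /tmp path is an assumption, not where pgpool keeps the real file):

```shell
# Write an example pgpool_status file for a two-backend setup:
# node 0 is up, node 1 was recorded as down
printf 'up\ndown\n' > /tmp/pgpool_status

# Print each backend id alongside its recorded state
nl -ba -v0 /tmp/pgpool_status
```

Because the file is only re-read at startup, a reload leaves stale entries in place; pcp_attach_node (or a restart with -D) is what actually refreshes the node state.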

Best Regards,
Nawaz

t-ishii

2016-11-07 14:16

developer   ~0001154

Please move to the mailing list. This forum is not for Questions and Answers.

t-ishii

2016-11-10 14:22

developer   ~0001157

Issue closed.

Issue History

Date Modified Username Field Change
2016-10-31 17:28 nawazid New Issue
2016-10-31 17:28 nawazid File Added: pgpool_conf.zip
2016-10-31 17:28 nawazid Tag Attached: pgpool in load balancing mode
2016-11-01 10:01 t-ishii Assigned To => t-ishii
2016-11-01 10:01 t-ishii Status new => assigned
2016-11-01 10:12 t-ishii Note Added: 0001144
2016-11-01 10:12 t-ishii Status assigned => feedback
2016-11-07 12:48 nawazid Note Added: 0001153
2016-11-07 12:48 nawazid Status feedback => assigned
2016-11-07 14:16 t-ishii Note Added: 0001154
2016-11-10 14:22 t-ishii Note Added: 0001157
2016-11-10 14:22 t-ishii Status assigned => closed