View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0000511||Pgpool-II||Bug||public||2019-05-14 13:29||2019-05-20 15:22|
|Target Version||Fixed in Version|
|Summary||0000511: Pgpool-II initiates more than (max_pool * num_init_children) + superuser_reserved_connections connections when the PG slave is coming up|
|Description||We have a PG cluster with streaming replication running on Pgpool-II 3.7.5 and PostgreSQL 9.6.|
We have the below config on pgpool, which decides the number of concurrent connections to the backend nodes:
num_init_children = 980
max_pool = 1
Now, the number of concurrent connections to the PG master/PG slave should not be more than 980.
When I bring the PG slave up and join it back to pgpool, I see 1000 to 1005 connections to the PG master.
Since we have max_connections set to 1000, the PG master is flipping because of "too many clients" errors.
I can reproduce this issue every time I bring up the PG slave. Once the 980 connections from pgpool to the PG slave are established (sitting in wait state), the connections from pgpool to the PG master come down to 980 to 985, which is expected.
Please let me know if any other info is needed.
I would like to know in what cases pgpool can initiate more than (max_pool * num_init_children) + superuser_reserved_connections connections. I feel this is a bug in pgpool.
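For reference, the sizing rule the reporter is relying on can be checked with a quick calculation. This is a sketch; the value 3 for superuser_reserved_connections is PostgreSQL's default and an assumption here, not something stated in the report:

```python
# Sizing rule: PostgreSQL's max_connections must cover every backend
# connection pgpool can open, plus the superuser reserve.
num_init_children = 980
max_pool = 1
superuser_reserved_connections = 3  # PostgreSQL default, assumed

required = num_init_children * max_pool + superuser_reserved_connections
print(required)  # 983, which fits within max_connections = 1000 at steady state
```

With these settings the steady-state demand (983) fits under max_connections = 1000 with little headroom, which is why a transient overshoot of 20 or so connections is enough to hit the limit.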
|Steps To Reproduce||1. Have Pgpool-II with the below config:|
num_init_children = 980
max_pool = 1
2. Set max_connections to 1000 on PG Master and PG Slave.
3. Take a PG base backup from the PG slave and start the services on the PG slave.
4. The PG master will have gone down by now; run netstat -an to check the established connections from pgpool.
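The netstat check in step 4 can be scripted. The sketch below counts ESTABLISHED connections to the backend in `netstat -an` output; port 5432 is PostgreSQL's default and an assumption, as is the helper name:

```python
# Count ESTABLISHED connections to a backend port in `netstat -an` output.
def count_established(netstat_output: str, port: int = 5432) -> int:
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        # netstat -an rows: proto recv-q send-q local-addr foreign-addr state
        if len(fields) >= 6 and fields[5] == "ESTABLISHED" \
                and fields[4].endswith(f":{port}"):
            count += 1
    return count

sample = """\
tcp        0      0 10.0.0.1:40001          10.0.0.2:5432           ESTABLISHED
tcp        0      0 10.0.0.1:40002          10.0.0.2:5432           ESTABLISHED
tcp        0      0 10.0.0.1:40003          10.0.0.2:5433           TIME_WAIT
"""
print(count_established(sample))  # 2
```

Running this against the real output while the slave rejoins would show the count climbing past num_init_children.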
|Tags||No tags attached.|
I think what happens here is:
1) There are a few idle Pgpool-II processes (that is, the number of existing connections to Pgpool-II is slightly lower than num_init_children).
2) One of them accepts a new connection from a client.
3) It finds that the requested user and/or database pair does not match the connection cache.
4) So it closes the existing backend connection.
5) The kernel is busy and takes time to actually close the existing connection.
6) Pgpool-II opens a new connection to PostgreSQL.
7) PostgreSQL hits the max_connections limit.
Unfortunately there's nothing Pgpool-II can do here. You need to increase max_connections or decrease num_init_children.
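The race described in steps 4-7 can be illustrated with a toy model (an illustration only, not pgpool's actual code): a child that swaps its cached connection briefly holds two backend slots, the old connection still tearing down and the new one already open, so the instantaneous total can exceed num_init_children. The number of children caught mid-swap at once (25 below) is an assumed figure:

```python
# Toy model of the overshoot: with max_pool = 1, each child normally holds
# one backend connection. During a user/database mismatch it opens a new
# connection before the kernel has finished closing the old one, so it
# transiently holds two.
num_init_children = 980
children_swapping = 25  # children caught mid-swap at the same instant (assumed)

steady_state = num_init_children * 1           # max_pool = 1
peak = steady_state + children_swapping        # old + new connection per swapping child
print(steady_state, peak)  # 980 1005
```

A peak of 1005 matches the 1000 to 1005 connections the reporter observed, and anything above 1000 trips max_connections on the master.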
Thanks for the response.
Doesn't pgpool maintain a counter so it knows when it has hit the num_init_children * max_pool connection limit? Even when the kernel is busy, that core accounting should still be honored.
Even though the kernel takes time to close the existing connection, we shouldn't open a new connection until the old one is really closed.
I think this is something we should fix from pgpool.
> Doesn't pgpool maintain a counter since it has hit the num_init_children
I don't get what you mean. There are num_init_children processes, and each process opens one connection to PostgreSQL, so there are no more than num_init_children connections to PostgreSQL from Pgpool-II's point of view.
> we shouldn't open a new connection until it really closes it.
I don't know how to do this. Pgpool-II issues close(2) and it returns immediately. Is there any way for Pgpool-II to confirm that the connection has actually closed?
Looks like pgpool issues close(2), assumes the connection is closed, and opens a new connection, as each of the num_init_children processes does.
This answers my query. Thanks for your support.
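As an aside on the question above, whether a process can confirm that the peer has closed: one general socket-level technique is a graceful shutdown, half-closing our side with shutdown(2) and then reading until the peer's EOF arrives before calling close(2). This is a sketch of the general pattern, not something pgpool implements, demonstrated here on a local socket pair:

```python
import socket

# Graceful close: half-close our side, then wait for the peer's EOF
# before fully closing. Demonstrated with a local socket pair.
a, b = socket.socketpair()

a.shutdown(socket.SHUT_WR)    # tell the peer we are done sending
assert b.recv(1024) == b""    # peer sees EOF for a's direction
b.close()                     # peer closes its side
assert a.recv(1024) == b""    # we now see the peer's EOF: both directions closed
a.close()
```

Even so, in the PostgreSQL frontend/backend protocol the client sends a Terminate ('X') message and closes, and close(2) returning still does not guarantee the server has already released its backend slot, which is consistent with the race described in this report.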
May I close this issue?
|2019-05-14 13:29||amar||New Issue|
|2019-05-17 08:53||administrator||Assigned To||=> t-ishii|
|2019-05-17 08:53||administrator||Status||new => assigned|
|2019-05-17 17:23||t-ishii||Note Added: 0002597|
|2019-05-17 17:23||t-ishii||Status||assigned => feedback|
|2019-05-17 21:53||amar||Note Added: 0002599|
|2019-05-17 21:53||amar||Status||feedback => assigned|
|2019-05-17 22:17||t-ishii||Note Added: 0002600|
|2019-05-18 01:29||amar||Note Added: 0002601|
|2019-05-20 10:00||t-ishii||Note Added: 0002602|
|2019-05-20 10:00||t-ishii||Status||assigned => feedback|
|2019-05-20 14:50||amar||Note Added: 0002603|
|2019-05-20 14:50||amar||Status||feedback => assigned|
|2019-05-20 15:22||t-ishii||Status||assigned => resolved|