View Issue Details
ID: 0000130
Project: Pgpool-II
Category: Bug
View Status: public
Date Submitted: 2015-02-20 01:03
Last Update: 2015-04-18 00:06
Reporter: arnold_s
Assigned To: Muhammad Usama
Target Version:
Fixed in Version:
Summary: 0000130: failover command triggered, when pg backend reaches max_connections
Description: One of our pgpool instances triggered the automatic failover command.
The only error on the backend PostgreSQL server was
something like "max_connections is reached".
I understand that pgpool can be configured so that
max_connections is never reached, as long as all connections go through pgpool.
In our case, however, there are also connections to the server that
are not managed by pgpool.
We could perhaps work around this by extending the failover_command script,
but in my opinion this behaviour is undesirable in general, isn't it?
Could this be fixed, perhaps by adding a parameter that treats reaching max_connections as normal? That would be great.
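For the case where all clients do go through pgpool, the sizing rule from the pgpool-II documentation is that pgpool may open up to num_init_children * max_pool backend connections, so that product must stay below the backend's max_connections. A minimal pgpool.conf sketch (all values are illustrative, not the reporter's configuration):

```
# pgpool.conf (illustrative values)
# pgpool can cache up to num_init_children * max_pool backend connections,
# so keep that product below PostgreSQL's max_connections, leaving headroom
# for clients that bypass pgpool (as in this report) and for superusers.
num_init_children = 32
max_pool = 2
# 32 * 2 = 64 backend connections at most; with max_connections = 100 on the
# backend, roughly 36 slots remain for non-pgpool clients.
```

As the reporter notes, this sizing only helps when every client connects via pgpool; direct connections can still exhaust the backend's slots.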
Additional Information: Although a backend PostgreSQL server that has reached max_connections
rejects new connections, it should still be considered operational.
This happens, for example, when Moodle clients lose SAN connectivity:
afterwards transactions are never committed, no connections are closed,
but more keep being opened. That is a client-side problem; the DB server itself is fine.
There is definitely no need to fail over. Otherwise we are left without options in
a real failover case...
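The workaround mentioned above, extending the failover_command script, could be sketched as follows. This is a hypothetical script, not anything shipped with pgpool-II; pgpool does substitute placeholders such as %h (failed host) and %p (failed port) into failover_command, but the probe logic and messages below are assumptions. It distinguishes a saturated backend (PostgreSQL's "sorry, too many clients already" error, SQLSTATE 53300) from one that is actually unreachable:

```shell
#!/bin/sh
# Hypothetical guarded failover_command for pgpool-II.
# pgpool would invoke it as:  failover.sh %h %p   (failed host, failed port)

# Decide from a probe's stderr whether the backend is truly down.
# "Too many clients/connections" (SQLSTATE 53300) means the server is
# alive but full, which should not trigger a failover.
should_failover() {
    case "$1" in
        *"too many clients already"*|*"too many connections"*)
            return 1 ;;  # backend saturated, not dead: skip failover
        *)
            return 0 ;;  # unreachable or other failure: fail over
    esac
}

# When pgpool passes the failed host and port, probe it once with psql.
if [ "$#" -ge 2 ]; then
    probe_err=$(psql -h "$1" -p "$2" -c 'SELECT 1' 2>&1 >/dev/null)
    if should_failover "$probe_err"; then
        echo "backend $1:$2 is down; proceeding with failover"
        # ... actual promotion logic (e.g. creating the standby's
        #     trigger file) would go here ...
    else
        echo "backend $1:$2 only hit max_connections; skipping failover"
    fi
fi
```

This keeps the decision in the operator's hands rather than in pgpool's health check, which is exactly the downside the reporter points out: it is a per-installation workaround, not a general fix.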
Tags: No tags attached.