[pgpool-general: 7124] Error with pg_dump

Giancarlo Celli giancarlo.celli at flottaweb.com
Tue Jul 7 00:21:35 JST 2020

Recently I have been having problems with pg_dump through pgpool. The 
master node server crashes and a failover to the standby occurs. Here 
are the relevant lines from pgpool.log:

LOG:  pool_read_kind: error message from master backend:sorry, too many 
clients already
ERROR:  unable to read message kind
DETAIL:  kind does not match between master(45) slot[1] (52)
LOG:  watchdog received the failover command from remote pgpool-II node 
"xx.xx.xx.xx:5432 Linux server1"
LOG:  watchdog is processing the failover command 
[DEGENERATE_BACKEND_REQUEST] received from xx.xx.xx.xx:5432 Linux 
LOG:  remote pgpool-II node "xx.xx.xx.xx:5432 Linux server1" is 
requesting to become a lock holder for failover ID: 0
LOG:  lock holder request denied to remote pgpool-II node 
"xx.xx.xx.xx:5432 Linux server1"
DETAIL:  local pgpool-II node "yy.yy.yy.yy:5432 Linux server2" is 
already holding the locks
LOG:  received the failover command lock request from remote pgpool-II 
node "xx.xx.xx.xx:5432 Linux server1"
LOG:  remote pgpool-II node "xx.xx.xx.xx:5432 Linux server1" is checking 
the status of [FAILOVER] lock for failover ID 0

It seems to be caused by exceeding the connection limit (judging from 
the string that appears in the log: "sorry, too many clients already"). 
I already have max_connections = 300. Is there a way to verify that 
this was the problem?
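A minimal way to check, assuming superuser access on the backend (host names and ports below are placeholders, not taken from the original setup), is to compare the live connection count against the limit while pg_dump is running. Note that the "sorry, too many clients already" message comes from PostgreSQL itself, so it is the backend's max_connections that was hit, not pgpool's own client limit:

```shell
# Run these directly against the PostgreSQL master backend (not via pgpool)
# while the pg_dump is in progress.

# Current number of backend connections:
psql -h master-host -U postgres -c "SELECT count(*) FROM pg_stat_activity;"

# The configured limit, for comparison:
psql -h master-host -U postgres -c "SHOW max_connections;"

# Breakdown by application, to see who is holding the connections:
psql -h master-host -U postgres \
     -c "SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;"
```

Also worth checking: pgpool can open up to roughly num_init_children * max_pool backend connections on its own, so if that product (plus any direct connections and superuser_reserved_connections) approaches 300, an extra client such as pg_dump can push the backend over the limit.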
If I ran pg_dump directly against the standby PostgreSQL server on port 
5433, bypassing pgpool, could I avoid the problem, in your opinion?
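For reference, such a direct dump would look like the sketch below (host and database names are illustrative). Since it connects straight to the standby's PostgreSQL port, it only consumes one of the standby's own connection slots and does not go through pgpool at all:

```shell
# Hypothetical example: dump database "mydb" directly from the standby,
# bypassing pgpool by using the backend's own port (5433 here).
pg_dump -h standby-host -p 5433 -U postgres -Fc mydb > mydb.dump
```

One caveat: a long-running pg_dump on a hot standby can be cancelled by replication conflicts unless settings such as max_standby_streaming_delay or hot_standby_feedback are tuned accordingly.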
