[pgpool-general: 7139] Re: Error with pg_dump

Giancarlo Celli giancarlo.celli at flottaweb.com
Wed Jul 8 22:25:24 JST 2020

I'm sending you pgpool.conf

I have 2 nodes:

eno1:         inet xx.xx.xx.xx
eno2:         inet

eno1:         inet yy.yy.yy.yy
eno1:0:      inet zz.zz.zz.zz
eno2:         inet

the two servers are also connected on the second network interface 
directly through a cross cable.
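As background for the "idle" connections mentioned in the quoted thread below: per-state session counts can be inspected directly on the backend with the standard pg_stat_activity view (a sketch; run it via psql against the PostgreSQL backend itself rather than through pgpool, so the query is not itself refused when the limit is hit):

```sql
-- Count backend sessions by state (idle, active, idle in transaction, ...).
-- Comparing the total against max_connections shows how close the server
-- is to "sorry, too many clients already".
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```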

------ Original message ------
From: "Bo Peng" <pengbo at sraoss.co.jp>
To: "Giancarlo Celli" <giancarlo.celli at flottaweb.com>
Cc: pgpool-general at pgpool.net
Sent: 07/07/2020 15:01:59
Subject: Re: [pgpool-general: 7125] Re: Error with pg_dump

>Could you share your pgpool.conf and provide a scenario to reproduce this issue?
>On Mon, 06 Jul 2020 15:31:09 +0000
>"Giancarlo Celli" <giancarlo.celli at flottaweb.com> wrote:
>>  I should add that I checked the active connections and saw numerous
>>  queries in idle state with the words:
>>  Why?
>>  ------ Original message ------
>>  From: "Giancarlo Celli" <giancarlo.celli at flottaweb.com>
>>  To: pgpool-general at pgpool.net
>>  Sent: 06/07/2020 17:21:35
>>  Subject: [pgpool-general: 7124] Error with pg_dump
>>  >Hi,
>>  >recently I have been having problems with pg_dump through pgpool. The
>>  >master node server crashes and a failover to the standby occurs. Here
>>  >are the relevant lines from pgpool.log:
>>  >
>>  >LOG:  pool_read_kind: error message from master backend:sorry, too many
>>  >clients already
>>  >ERROR:  unable to read message kind
>>  >DETAIL:  kind does not match between master(45) slot[1] (52)
>>  >LOG:  watchdog received the failover command from remote pgpool-II node
>>  >"xx.xx.xx.xx:5432 Linux server1"
>>  >LOG:  watchdog is processing the failover command
>>  >[DEGENERATE_BACKEND_REQUEST] received from xx.xx.xx.xx:5432 Linux
>>  >server1
>>  >LOG:  remote pgpool-II node "xx.xx.xx.xx:5432 Linux server1" is
>>  >requesting to become a lock holder for failover ID: 0
>>  >LOG:  lock holder request denied to remote pgpool-II node
>>  >"xx.xx.xx.xx:5432 Linux server1"
>>  >DETAIL:  local pgpool-II node "yy.yy.yy.yy:5432 Linux server2" is
>>  >already holding the locks
>>  >LOG:  received the failover command lock request from remote pgpool-II
>>  >node "xx.xx.xx.xx:5432 Linux server1"
>>  >LOG:  remote pgpool-II node "xx.xx.xx.xx:5432 Linux server1" is
>>  >checking the status of [FAILOVER] lock for failover ID 0
>>  >
>>  >It seems to be caused by exceeding the allowed number of connections
>>  >(given the string that appears in the log: "sorry, too many clients
>>  >already"). I already have max_connections = 300. Is there a way to
>>  >verify that this was the problem?
>>  >If I ran pg_dump directly against the standby PostgreSQL server on
>>  >port 5433, bypassing pgpool, could I avoid the problem, in your
>>  >opinion?
>Bo Peng <pengbo at sraoss.co.jp>
>SRA OSS, Inc. Japan
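One check worth making against the attached pgpool.conf: pgpool-II itself can hold up to num_init_children × max_pool backend connections, and the pgpool-II documentation recommends keeping that product within the backend's max_connections. A sketch of the arithmetic with illustrative values (these numbers are hypothetical, not taken from the attached file):

```ini
# pgpool.conf (illustrative values only, not the attached file)
num_init_children = 32   # concurrent client sessions pgpool accepts
max_pool          = 4    # cached backend connections per pgpool child

# Backend connections pgpool may hold:
#   num_init_children * max_pool = 32 * 4 = 128
# The pgpool-II documentation recommends:
#   max_pool * num_init_children <= max_connections - superuser_reserved_connections
# If other clients (or pg_dump itself) push the backend past max_connections,
# PostgreSQL refuses new sessions with "sorry, too many clients already".
```

As for the last question in the quoted message: running pg_dump directly against the standby on port 5433 bypasses pgpool entirely, so it would not consume a pgpool slot, though it still uses one backend connection on the standby.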
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pgpool.conf
Type: application/octet-stream
Size: 4157 bytes
Desc: not available
URL: <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200708/e369b6a3/attachment.obj>
