[pgpool-general: 7241] Re: Node status "lost" not recognized by standby PgPool

Anssi Kanninen anssi at iki.fi
Tue Sep 1 01:56:06 JST 2020


Yes, the lifecheck worked on the PRIMARY node, but the standby node didn't get any information about it.
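
A quick way I'd check this on the standby is grepping the log (just a sketch; the log path depends on how pgpool logging is set up):

$ grep -Ei 'lifecheck|heartbeat|lost' /var/log/pgpool/pgpool.log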

On 31 August 2020 19:01:20 EEST, Bo Peng <pengbo at sraoss.co.jp> wrote:
>Hi,
>
>On Mon, 31 Aug 2020 15:38:29 +0300 (FLE Daylight Time)
>Anssi Kanninen <anssi at iki.fi> wrote:
>
>> > How did you shutdown pgpool node?
>> 
>> As I said, by powering it straight off. In my case, I powered off the
>> virtual machine without shutting it down properly.
>
>Could you check the pgpool.log to see if the "lifecheck" process
>worked?
>"lifecheck" process performs watchdog lifecheck every "wd_interval"
>seconds.
>
>If the interval since the last message was received exceeds
>"wd_heartbeat_deadtime", 
>pgpool will consider the node to be lost.
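>
>For example, a minimal heartbeat lifecheck configuration could look
>like this (a sketch only, using the pre-4.2 parameter names; the
>hostnames follow your setup and the timing values are illustrative):
>
>=============
># pgpool.conf, watchdog lifecheck section
>wd_lifecheck_method = 'heartbeat'
>wd_interval = 10                # lifecheck interval (seconds)
>wd_heartbeat_keepalive = 2      # heartbeat send interval (seconds)
>wd_heartbeat_deadtime = 30      # mark a remote node "lost" after this
>                                # many seconds without its heartbeat
>heartbeat_destination0 = 'centos8i2-int'
>heartbeat_destination_port0 = 9694
>heartbeat_destination1 = 'centos8i3-int'
>heartbeat_destination_port1 = 9694
>=============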
>
>
>You can see the log such as:
>
>=============
>DEBUG:  watchdog life checking by heartbeat
>DETAIL:  checking pgpool 2 (192.168.154.102:9999)
>DEBUG:  watchdog checking if pgpool is alive using heartbeat
>DETAIL:  the last heartbeat from "192.168.154.102:9999" received 38 seconds ago
>...
>LOG:  remote node "192.168.154.102:9999 Linux server2" is lost
>=============
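>
>(Note: the DEBUG lines above only show up if debug logging is enabled,
>e.g. log_min_messages = 'DEBUG1' in pgpool.conf or starting pgpool
>with -d. The final "is lost" message is emitted at LOG level, so it
>should appear even with default logging.)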
>
>> On Mon, 31 Aug 2020, Bo Peng wrote:
>> 
>> > Hello,
>> >
>> > On Fri, 28 Aug 2020 12:27:48 +0300 (FLE Daylight Time)
>> > Anssi Kanninen <anssi at iki.fi> wrote:
>> >
>> >> Hi everyone!
>> >>
>> >> I'm having a problem with information exchange between PgPool
>> >> instances. I have 3 nodes, each containing one DB backend instance
>> >> and one PgPool instance.
>> >>
>> >> If I shut down one standby node cleanly, everything seems to go ok.
>> >> The master PgPool notices that and informs the remaining standby
>> >> PgPool about it.
>> >>
>> >> But the situation changes if a standby node just vanishes from the
>> >> network by powering it off without a clean shutdown. The master
>> >> PgPool marks the node as "lost", but the remaining standby PgPool
>> >> still thinks there is another standby PgPool. It doesn't get any
>> >> information about the lost node.
>> >
>> > How did you shutdown pgpool node?
>> > Could you share the pgpool.log of each node?
>> >
>> >> Here it goes. In the example I'm checking the statuses by
>> >> connecting to each node with pcp_watchdog_info. I have sorted the
>> >> results by node hostname.
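>> >>
>> >> (Reading guide, as I understand the pcp_watchdog_info output: the
>> >> first line shows the total number of watchdog nodes, whether the
>> >> queried node holds the virtual IP, and the master's node name and
>> >> host; each node line ends with the pgpool port, the watchdog port,
>> >> and the watchdog state as a numeric code plus label, e.g.
>> >> "4 MASTER", "7 STANDBY", "8 LOST", "10 SHUTDOWN".)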
>> >>
>> >> Nodes are:
>> >> * ID 0 (centos8i1-int)
>> >> * ID 1 (centos8i2-int)
>> >> * ID 2 (centos8i3-int)
>> >>
>> >> ***** INITIAL SETUP *****
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i1-int
>> >> 3 YES centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i2-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i3-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> ***** SHUTDOWN node ID 1 *****
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i1-int
>> >> 3 YES centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 10 SHUTDOWN
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i3-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 10 SHUTDOWN
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> ***** RESTART node ID 1 *****
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i1-int
>> >> 3 YES centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i2-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i3-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> ***** POWER OFF node ID 1 *****
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i1-int
>> >> 3 YES centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 8 LOST
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >> $ pcp_watchdog_info -w -h centos8i3-int
>> >> 3 NO centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int
>> >>
>> >> centos8i1-int:5432 Linux centos8i1.localdomain centos8i1-int 5432 9000 4 MASTER
>> >> centos8i2-int:5432 Linux centos8i2.localdomain centos8i2-int 5432 9000 7 STANDBY
>> >> centos8i3-int:5432 Linux centos8i3.localdomain centos8i3-int 5432 9000 7 STANDBY
>> >>
>> >>
>> >> Best regards,
>> >> Anssi Kanninen
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> anssi at iki.fi
>> >
>> >
>> > -- 
>> > Bo Peng <pengbo at sraoss.co.jp>
>> > SRA OSS, Inc. Japan
>> >
>> 
>> -- 
>> anssi at iki.fi
>
>
>-- 
>Bo Peng <pengbo at sraoss.co.jp>
>SRA OSS, Inc. Japan
>_______________________________________________
>pgpool-general mailing list
>pgpool-general at pgpool.net
>http://www.pgpool.net/mailman/listinfo/pgpool-general