[pgpool-hackers: 4138] Re: Possible misleading info in pcp_watchdog_info

Muhammad Usama m.usama at gmail.com
Sat Feb 26 00:11:31 JST 2022


On Tue, Feb 22, 2022 at 10:31 AM Bo Peng <pengbo at sraoss.co.jp> wrote:

> Hello,
>
> > Thank you, Peng, for the verification.
> > I have pushed the changes to the master branch.
> > What do you think, should we backport it to the released branches?
>
> Thank you, Usama.
> Sorry for the late response.
> Yes. I think we should backport it to the released branches.
>

Thanks, I have backported it to all supported branches.

>
> > Best regards
> > Muhammad Usama
> >
> > On Thu, Jan 13, 2022 at 7:06 AM Bo Peng <pengbo at sraoss.co.jp> wrote:
> >
> > > Thank you, Usama.
> > >
> > > > "VIP is up on current node" is in fact misleading when the VIP is not
> > > > set in pgpool.conf, while the quorum status is already reported in the
> > > > "Quorum state" field. Actually, the value of "Is VIP up on current node"
> > > > comes from the 'node->escalated' field, which gives the execution status
> > > > of the wd_escalation process. So I think the field name should be
> > > > changed from "VIP up on node" to "node escalation".
> > > >
> > > > I have cooked up a quick patch for that. Please let me know what you
> > > > think about it.
> > >
> > > Your patch looks good.
> > > Could you commit it?
> > >
> > > On Tue, 21 Dec 2021 11:30:49 +0500
> > > Muhammad Usama <m.usama at gmail.com> wrote:
> > >
> > > > Hi Ishii-San,
> > > >
> > > > Sorry for a very delayed response, and thank you for pointing out the
> > > > issue.
> > > >
> > > > On Tue, Dec 7, 2021 at 10:55 AM Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
> > > >
> > > > > Hi Usama,
> > > > >
> > > > > pcp_watchdog_info displays watchdog information something like this:
> > > > >
> > > > > $ pcp_watchdog_info -w -p 50001
> > > > > 4 4 YES localhost:50000 Linux tishii-CFSV9-2 localhost
> > > > >
> > > > > localhost:50000 Linux tishii-CFSV9-2 localhost 50000 50002 4 LEADER 0 MEMBER
> > > > > localhost:50004 Linux tishii-CFSV9-2 localhost 50004 50006 7 STANDBY 0 MEMBER
> > > > > localhost:50008 Linux tishii-CFSV9-2 localhost 50008 50010 7 STANDBY 0 MEMBER
> > > > > localhost:50012 Linux tishii-CFSV9-2 localhost 50012 50014 7 STANDBY 0 MEMBER
> > > > >
> > > > > The "YES" in the very first line means that VIP is up on this node
> > > > > according to the manual. But actually this is not correct because I
> > > > > have created the watchdog cluster using watchdog_setup and it never
> > > > > enables VIP. I guess it actually means whether the quorum exists or
> > > > > not.
> > > > >
> > > >
> > > > "VIP is up on current node" is in fact misleading when the VIP is not
> > > > set in pgpool.conf, while the quorum status is already reported in the
> > > > "Quorum state" field. Actually, the value of "Is VIP up on current node"
> > > > comes from the 'node->escalated' field, which gives the execution status
> > > > of the wd_escalation process. So I think the field name should be
> > > > changed from "VIP up on node" to "node escalation".
> > > >
> > > > I have cooked up a quick patch for that. Please let me know what you
> > > > think about it.
> > > >
> > > > Best regards
> > > > Muhammad Usama
> > > >
> > > >
> > > >
> > > > >
> > > > > Anyway this is misleading and should be fixed IMO.
> > > > >
> > > > > Note that "-v" (verbose) output also gives wrong information.
> > > > >
> > > > > $ pcp_watchdog_info -w -p 50001 -v
> > > > > Watchdog Cluster Information
> > > > > Total Nodes              : 4
> > > > > Remote Nodes             : 3
> > > > > Member Remote Nodes      : 3
> > > > > Alive Remote Nodes       : 3
> > > > > Nodes required for quorum: 3
> > > > > Quorum state             : QUORUM EXIST
> > > > > VIP up on local node     : YES
> > > > > Leader Node Name         : localhost:50000 Linux tishii-CFSV9-2
> > > > > Leader Host Name         : localhost
> > > > >
> > > > > Watchdog Node Information
> > > > > Node Name         : localhost:50000 Linux tishii-CFSV9-2
> > > > > Host Name         : localhost
> > > > > Delegate IP       : Not_Set
> > > > > Pgpool port       : 50000
> > > > > Watchdog port     : 50002
> > > > > Node priority     : 4
> > > > > Status            : 4
> > > > > Status Name       : LEADER
> > > > > Membership Status : MEMBER
> > > > >
> > > > > Node Name         : localhost:50004 Linux tishii-CFSV9-2
> > > > > Host Name         : localhost
> > > > > Delegate IP       : Not_Set
> > > > > Pgpool port       : 50004
> > > > > Watchdog port     : 50006
> > > > > Node priority     : 3
> > > > > Status            : 7
> > > > > Status Name       : STANDBY
> > > > > Membership Status : MEMBER
> > > > >
> > > > > Node Name         : localhost:50008 Linux tishii-CFSV9-2
> > > > > Host Name         : localhost
> > > > > Delegate IP       : Not_Set
> > > > > Pgpool port       : 50008
> > > > > Watchdog port     : 50010
> > > > > Node priority     : 2
> > > > > Status            : 7
> > > > > Status Name       : STANDBY
> > > > > Membership Status : MEMBER
> > > > >
> > > > > Node Name         : localhost:50012 Linux tishii-CFSV9-2
> > > > > Host Name         : localhost
> > > > > Delegate IP       : Not_Set
> > > > > Pgpool port       : 50012
> > > > > Watchdog port     : 50014
> > > > > Node priority     : 1
> > > > > Status            : 7
> > > > > Status Name       : STANDBY
> > > > > Membership Status : MEMBER
> > > > >
> > > > > Best regards,
> > > > > --
> > > > > Tatsuo Ishii
> > > > > SRA OSS, Inc. Japan
> > > > > English: http://www.sraoss.co.jp/index_en.php
> > > > > Japanese: http://www.sraoss.co.jp
> > > > > _______________________________________________
> > > > > pgpool-hackers mailing list
> > > > > pgpool-hackers at pgpool.net
> > > > > http://www.pgpool.net/mailman/listinfo/pgpool-hackers
> > > > >
> > >
> > >
> > > --
> > > Bo Peng <pengbo at sraoss.co.jp>
> > > SRA OSS, Inc. Japan
> > > http://www.sraoss.co.jp/
> > >
>
>
> --
> Bo Peng <pengbo at sraoss.co.jp>
> SRA OSS, Inc. Japan
> http://www.sraoss.co.jp/
>