[pgpool-general: 4493] Re: Understanding node info and watchdog info

Muhammad Usama m.usama at gmail.com
Thu Feb 25 21:26:56 JST 2016


On Wed, Feb 24, 2016 at 6:08 PM, Jose Baez <pepote at gmail.com> wrote:

> > And when pgpool-II is configured to use watchdog, all pgpool-II nodes
> > connected through watchdog exchange the node status whenever any
> > backend node status is changed.
>
>
> If I use 2 pgpool instances *without watchdog* (only 1 of them is active
> and running), could I use an NFS network folder to save the "pgpool_status"
> file?
> So when the second instance is running on a second machine, will it read
> the same pgpool_status file (with the same owner, permissions, and so on)?
>

Yes, theoretically you can do that. But since pgpool-II only reads the
node status information at start-up, this will not be of much use for
syncing the runtime node statuses among multiple pgpool-II nodes.
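
For reference, pgpool-II writes the pgpool_status file into the directory
given by the logdir parameter, so a shared location would amount to a
pgpool.conf excerpt like this on both machines (the NFS mount point is an
illustrative assumption):

```ini
# pgpool.conf on both hosts; /mnt/nfs/pgpool is a hypothetical
# NFS mount shared by the two machines.
logdir = '/mnt/nfs/pgpool'
# pgpool-II then reads /mnt/nfs/pgpool/pgpool_status at start-up.
```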

Regards
Muhammad Usama

>
> Thanks.
>
>
>
>
>
>
> On 15 February 2016 at 18:39, Muhammad Usama <m.usama at gmail.com> wrote:
>
>> On Sun, Aug 2, 2015 at 7:20 PM, Thomas Bach <t.bach at ilexius.de> wrote:
>> >
>> > Hi there,
>> >
>> > I configured pgpool-II version 3.3.2 (tokakiboshi) and
>> > Postgres to run on two separate machines in replication mode. I just
>> > recently configured watchdog. At some point the two databases diverged,
>> > which is OK because I am currently in the testing phase.
>> >
>> > Now on host02 I query watchdog and obtain
>> >
>> > host02 # pcp_watchdog_info -v 1 host01 9898 root ****
>> > Hostname     : host01
>> > Pgpool port  : 5432
>> > Watchdog port: 9000
>> > Status       : 2
>> > host02 # pcp_watchdog_info -v 1 host02 9898 root ****
>> > Hostname     : host02
>> > Pgpool port  : 5432
>> > Watchdog port: 9000
>> > Status       : 3
>> >
>> > So host02 is active. I obtain the exact same results when issuing
>> > these queries from host01.
>>
>> The pcp_watchdog_info utility displays the information of a pgpool-II
>> watchdog node. So if the pgpool-II watchdog cluster is configured and
>> working without any problem, executing pcp_watchdog_info on any
>> pgpool-II node should always display the same results for all
>> pgpool-II nodes in the cluster.
>> >
>> > Anyway, when querying pcp_node_info I obtain the following:
>> >
>> > host02 # pcp_node_info -v 1 host01 9898 root **** 0
>> > Hostname: host01
>> > Port    : 5433
>> > Status  : 1          <----------------------
>> > Weight  : 1.000000
>> > host02 # pcp_node_info -v 1 host01 9898 root **** 1
>> > Hostname: host02
>> > Port    : 5433
>> > Status  : 1
>> > Weight  : 0.000000
>> > host02 # pcp_node_info -v 1 host02 9898 root **** 0
>> > Hostname: host01
>> > Port    : 5433
>> > Status  : 3         <----------------------
>> > Weight  : 0.000000
>> > host02 # pcp_node_info -v 1 host02 9898 root **** 1
>> > Hostname: host02
>> > Port    : 5433
>> > Status  : 1
>> > Weight  : 1.000000
>> >
>>
>> The pcp_node_info utility displays the status of the backend
>> (PostgreSQL) nodes connected to pgpool-II. When pgpool-II is
>> configured to use watchdog, all pgpool-II nodes connected through
>> watchdog exchange the node status whenever any backend node status is
>> changed. So again, when the pgpool-II watchdog is configured and
>> working correctly, all pgpool-IIs in the cluster should report the
>> same status for each backend node. Since pgpool-II on host01 and
>> host02 are showing different statuses for node 0, which should never
>> happen, there is some problem either with your setup or in pgpool-II.
>> Analyzing the logs of both pgpool-II instances would help to find out
>> why pgpool-II on host02 thinks PostgreSQL node 0 is down, and what
>> error caused this status not to be replicated to pgpool-II on host01.
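
For reference, the numeric Status column printed by pcp_node_info maps to
backend states (per the pgpool-II documentation: 1 = up, no connections
yet; 2 = up, connections pooled; 3 = down; 0 is only used internally
during initialization). A small sketch of that mapping:

```shell
# Decode the numeric Status column printed by pcp_node_info.
decode_backend_status() {
  case "$1" in
    1) echo "up, no connections yet" ;;
    2) echo "up, connections pooled" ;;
    3) echo "down" ;;
    *) echo "unknown" ;;
  esac
}

decode_backend_status 3   # prints "down"
```

So in the output above, host02 believes node 0 is down (status 3), while
host01 reports it as up.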
>>
>> > So, as host02 sees it, host01 is down. But for host01 both itself as
>> > well as host02 are perfectly sane.
>> >
>> > 1) Is such a state intended? I.e. is pgpool-II fully operational in
>> > this mode?
>>
This is not intended, and it indicates a problem either in pgpool-II or
in the setup. The pgpool-II logs would be helpful to locate the actual
problem.
>> >
>> > 2) What will happen when I do an insert via host02? Will it replicate
>> > to host01?
>> >
>> > 3) And even more interesting: what will happen when I do an insert via
>> > host01?
>> >
>>
Both pgpool-II instances (on host01 and host02) will behave according to
their local view of the PostgreSQL backend node statuses.
>>
>> > 4) What ways do I have at hand to resolve this issue?
>>
>> As described above, analyzing the pgpool-II log will give more details
>> about the cause of the problem and a way to rectify it.
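
If the logs show that the PostgreSQL node is actually healthy and only
pgpool-II's view of it is stale, the node can usually be re-attached with
pcp_attach_node. A sketch, using the same pre-3.5 pcp argument order as
the pcp_node_info calls in this thread (timeout, host, pcp port, user,
password, node id); run it only after verifying the backend is really up:

```
host02 # pcp_attach_node 1 host02 9898 root '****' 0
```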
>>
>>
>> Thanks
>> Kind regards
>> Muhammad Usama
>>
>> >
>> > Regards
>> >
>> >         Thomas.
>> >
>> > - --
>> > ilexius GmbH
>> > Thomas Bach
>> > Unter den Eichen 5
>> > Haus i
>> > 65195 Wiesbaden
>> > Fon: +49-(0)611 - 180 33 49
>> > Fax: +49-(0)611 - 236 80 84 29
>> > - -------------------------------------
>> > ilexius GmbH
>> > vertreten durch die Geschäftsleitung:
>> > Thomas Schlüter und Sebastian Koch
>> > Registergericht: Wiesbaden
>> > Handelsregister: HRB 21723
>> > Steuernummer: 040 236 22640
>> > Ust-IdNr.: DE240822836
>> > - ------------------------------------
>> >
>> > _______________________________________________
>> > pgpool-general mailing list
>> > pgpool-general at pgpool.net
>> > http://www.pgpool.net/mailman/listinfo/pgpool-general
>> >
>
>
>

