[pgpool-general: 6676] Re: Query

Tatsuo Ishii ishii at sraoss.co.jp
Tue Aug 20 11:57:30 JST 2019

> On Sat, Aug 17, 2019 at 12:28 PM Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
>> > Hi Pgpool Team,
>> >
>> > *We are nearing a production release and have run into the issue
>> > below.*
>> > A prompt reply would be greatly appreciated. Please let us know how
>> > to resolve the issue.
>> >
>> > We have a 3-node pgpool + PostgreSQL cluster: M1, M2, M3. The
>> > pgpool.conf is attached.
>> >
>> > *Case I:*
>> > M1 - pgpool master + PostgreSQL master
>> > M2, M3 - pgpool slave + PostgreSQL slave
>> >
>> > - M1 goes out of the network; it is marked as LOST in the pgpool
>> > cluster.
>> > - M2 becomes the PostgreSQL master.
>> > - M3 becomes the pgpool master.
>> > - When M1 comes back onto the network, pgpool is able to resolve the
>> > split brain. However, it changes the PostgreSQL master back to M1,
>> > logging the statement "LOG:  primary node was chenged after the sync
>> > from new master" [sic]. Since M2 was already the PostgreSQL master
>> > (and its trigger file was not touched), it is not able to sync to the
>> > new master.
>> > *I want to avoid this PostgreSQL master change. Please let us know if
>> > there is a way to avoid it.*
>> Sorry, but I don't know how to prevent this. Probably, when the former
>> watchdog master recovers from a network outage and there is already a
>> PostgreSQL primary server, the watchdog master should not sync the
>> state. What do you think, Usama?
> Yes, that's true: there is no functionality in Pgpool-II to disable the
> backend node status sync. In fact, it would be hazardous if we somehow
> disabled the node status syncing.
> Having said that, in the scenario you mention, when M1 comes back and
> joins the watchdog cluster, Pgpool-II should have kept M2 as the true
> master while resolving the split brain. The algorithm used to resolve
> the true master considers quite a few parameters, and for the scenario
> you explained, M2 should have kept the master node status while M1
> should have resigned after rejoining the cluster. Effectively, M1
> should have been syncing its status from M2 (keeping the proper primary
> node), not the other way around.
> Can you please share the Pgpool-II log files so that I can have a look
> at what went wrong in this case?


Ok, the scenario (two PostgreSQL primaries in the end) should not have
happened. That's good news.


Can you please provide the Pgpool-II log files as Usama requested?
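When gathering the requested information, the watchdog and backend state can be cross-checked with the standard tools. A sketch of diagnostic command fragments; hostnames, ports, users, and the journald unit name are placeholders and depend on your installation:

```shell
# Watchdog view: which pgpool node is currently the master (9898 = default pcp port)
pcp_watchdog_info -h M1 -p 9898 -U pcp_user -v

# Backend view as seen through pgpool (9999 = default pgpool port)
psql -h M1 -p 9999 -U postgres -c "show pool_nodes"

# Ground truth on each PostgreSQL server: is it really a standby?
psql -h M2 -p 5432 -U postgres -c "SELECT pg_is_in_recovery()"

# Collect logs from each node around the incident (assumes a journald-managed
# pgpool service; adjust the unit name or use your log file path instead)
for host in M1 M2 M3; do
    ssh "$host" 'journalctl -u pgpool2 --no-pager' > "pgpool_${host}.log"
done
```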

Best regards,
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
