[pgpool-general: 6564] Re: Pgpool query

Lakshmi Raghavendra lakshmiym108 at gmail.com
Tue May 21 23:32:56 JST 2019


Hi,

   I had a couple of queries, as below:

On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com> wrote:

>
>
> On Fri, May 10, 2019 at 6:38 PM Lakshmi Raghavendra <
> lakshmiym108 at gmail.com> wrote:
>
>> Hi,
>> I am trying pgpool for automatic failover on my PostgreSQL cluster using
>> the watchdog feature.
>> Wanted to know a couple of things:
>> 1. Are there any hooks for when pgpool re-elects the next pgpool master? I wanted
>> to run some customization at that point.
>>
>
> Pgpool executes user-provided commands at the time the master pgpool node
> acquires or releases the virtual IP. You can configure the
> wd_escalation_command and wd_de_escalation_command configuration
> parameters to provide a custom command or script.
> A pgpool node executes wd_escalation_command when it gets elected as
> master and performs the escalation; similarly, when the node resigns as
> master, wd_de_escalation_command gets executed.
>
> http://www.pgpool.net/docs/latest/en/html/runtime-watchdog-config.html#CONFIG-WATCHDOG-ESCALATION-DE-ESCALATION
>
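For reference, these hooks are plain pgpool.conf settings; a minimal sketch (the script paths below are only assumed examples for illustration):

    # pgpool.conf -- watchdog hook commands (illustrative paths)
    wd_escalation_command    = '/etc/pgpool-II/escalation.sh'     # run on the node elected as watchdog master
    wd_de_escalation_command = '/etc/pgpool-II/de_escalation.sh'  # run on the node resigning as watchdog master

The scripts themselves are free-form commands; pgpool simply executes them on the elected or resigning node, so things like DNS updates or application notifications can go there.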

1. I tried using the above wd_escalation_command and wd_de_escalation_command.
I have a 3-node cluster, and observed that the escalation command is only
triggered if there are at least 2 nodes alive in the pgpool cluster.
If only the master pgpool is alive, with no slave nodes, the command is
never initiated. Is this behavior expected?

2. I wanted to understand the significance of the VIP. Is there any issue
if I don't use the VIP in a 3-node pgpool cluster?

Please let me know.

Thanks and Regards,

   Lakshmi Y M

On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com> wrote:

>> 2. Will the VIP get assigned only if there is more than 1 node present in
>> the pgpool cluster? I had 3 nodes where I had pgpool running. When the 1st
>> and 2nd nodes' pgpool was shut down, I was expecting the 3rd node to acquire
>> the VIP, but it didn't happen. If my understanding is right, I was thinking
>> of using the VIP in my database connection string (since it will always be
>> with the pgpool master, which can connect to my PostgreSQL primary). Now, if
>> the 3rd node does not acquire the VIP, I cannot use it in my connection string.
>> Correct me if my understanding is wrong.
>>
>
> The master pgpool only acquires the VIP when the quorum exists (a minimum
> of 50% of nodes are reachable). This is done by Pgpool to guard against
> split-brain syndrome, which could otherwise happen in case of network
> partitioning. So if you have 3 Pgpool nodes configured, then the VIP will
> only get assigned on the master node when at least 2 Pgpool nodes are
> alive and reachable.
> But in a 2-node configuration, only 1 node is required to ensure the
> quorum, and in that case, even if only a single node is alive, it will
> get the VIP.
>
> Thanks
> Best Regards
> Muhammad Usama
>
>>
>>
>> Thanks in advance
>>
>>    Lakshmi Y M
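To make the quorum rule above concrete for a 3-node watchdog cluster, a majority (2 of 3 nodes) must be reachable before the master brings up the VIP. The VIP itself is set with delegate_IP and the related interface commands in pgpool.conf; a minimal sketch with an assumed address and interface name:

    # pgpool.conf -- virtual IP handled by the watchdog master (illustrative values)
    delegate_IP = '192.168.1.100'                                 # VIP held only while quorum (>= 2 of 3 nodes) exists
    if_up_cmd   = 'ip addr add $_IP_$/24 dev eth0 label eth0:0'   # bring the VIP up on escalation
    if_down_cmd = 'ip addr del $_IP_$/24 dev eth0'                # take the VIP down on de-escalation
    arping_cmd  = 'arping -U $_IP_$ -w 1'                         # advertise the VIP's new location

With this in place, client connection strings can point at the VIP, which follows whichever pgpool node currently holds the watchdog master role, as long as quorum holds.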