[pgpool-general: 6570] Re: Pgpool query

Lakshmi Raghavendra lakshmiym108 at gmail.com
Fri May 24 13:41:30 JST 2019


1. failover_command : this will usually be a shell script. The script will
do a standby promote on the new master, so there is little reason for it to
fail, and I believe it is over-engineering to do more than that. But since
this is a script, you are free to put whatever you want in it : if you want
to check the result of the standby promote and, if it fails, try on the
third node, that is possible.
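For example, a failover_command script with that kind of retry-over-candidates logic might be sketched as below. This is only an illustration, not a standard pgpool script: the host names are hypothetical, and the actual promote command (in practice something like `ssh postgres@<host> pg_ctl -D $PGDATA promote`) is injectable via PROMOTE_CMD so the retry logic can be exercised without a live cluster.

```shell
#!/bin/sh
# Hypothetical failover_command sketch: try to promote the designated new
# primary; if the promote fails, fall back to the next candidate node.
# PROMOTE_CMD is a command taking a host name and returning 0 on success.

promote_with_fallback() {
    # $@: ordered list of candidate hosts for promotion
    for host in "$@"; do
        if ${PROMOTE_CMD:-false} "$host"; then
            echo "promoted $host"
            return 0
        fi
        echo "promote failed on $host, trying next candidate" >&2
    done
    echo "no candidate could be promoted" >&2
    return 1
}
```

Pgpool would invoke such a script with the failed and new-master node details substituted via the %-placeholders configured in failover_command.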


> Yep, understood, thanks. But I was looking for more of an automatic retry:
if pgpool cannot promote a new postgres master for some reason and the
failover fails, I was expecting it to promote one of the other nodes
available in the quorum.



2. pgpool does not support switchover; you would have to stop the primary
postgres database, let pgpool trigger the failover, and then run
pcp_recovery_node on the old primary postgres node.
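Concretely, that manual switchover would look something like the following command sequence (paths, the PCP port, and the node ID are examples; here node 0 is assumed to be the current primary):

```shell
# On the current primary (node 0): stop PostgreSQL so Pgpool-II detects
# the failure and runs failover_command to promote a standby.
#   pg_ctl -D /var/lib/pgsql/data -m fast stop
#
# Once a standby has been promoted, re-attach the old primary as a new
# standby via online recovery (PCP host/port/user are examples):
#   pcp_recovery_node -h <pgpool-host> -p 9898 -U pgpool -n 0
```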



> Ok. The reason behind the ask was: if pgpool is not going to retry the
promote on other nodes, I was thinking I could just trigger a forced promote
there, whereby the failover and follow_master would be run by pgpool itself,
and I would not have to do the failover myself and run pcp_recovery_node on
all the nodes.



Regarding 5. It's normal. Don't worry.

When a standby PostgreSQL node is added, existing connections to
Pgpool-II are kept (and thus do not use the new standby for load
balancing). After the session ends, the Pgpool-II child process checks
whether a failback has happened. If it has, the process exits itself so
that a new process is spawned to reflect the fact that a new standby
node has been added (and thus it can use the new standby for load
balancing).




> Thanks. Per my understanding, the restart happens only when a new standby
is added, is that right? I ask because I see these frequent restarts
happening even when there is no failback.

Also, could you please shed some light on when the failback_command gets
executed? I have never seen the command being called.


Thanks And Regards,

   Lakshmi Y M

On Tue, May 21, 2019 at 8:02 PM Lakshmi Raghavendra <lakshmiym108 at gmail.com>
wrote:

> Hi,
>
>    I had couple of queries as below :
>
> On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com> wrote:
>
>>
>>
>> On Fri, May 10, 2019 at 6:38 PM Lakshmi Raghavendra <
>> lakshmiym108 at gmail.com> wrote:
>>
>>> Hi,
>>> I am trying pgpool for automatic failover on my postgresql cluster using
>>> the watchdog feature. I wanted to know a couple of things:
>>> 1. Are there any hooks for when pgpool re-elects the next pgpool master?
>>> I wanted to run some customization during this time.
>>>
>>
>> Pgpool executes user-provided commands at the time of acquiring and
>> releasing the virtual IP by the master pgpool node. You can
>> configure the wd_escalation_command and wd_de_escalation_command
>> configuration parameters to provide a custom command or script.
>> Pgpool executes the wd_escalation_command when it gets elected as
>> master and performs the escalation; similarly, when the node resigns
>> as master, the wd_de_escalation_command gets executed.
>>
>
> 1. I tried using the above wd_escalation_command and wd_de_escalation_command.
> I have a 3-node cluster and observed that the escalation command is only
> triggered if there are at least 2 nodes alive in the pgpool cluster.
> If only the master pgpool is alive, with no slave nodes, the command is
> never initiated. Is this behavior expected?
>
> 2. I wanted to understand the significance of the VIP. Is there any issue
> caused if I don't use the VIP in a 3-node pgpool cluster?
>
> Please let me know.
>
> Thanks And Regards,
>
>    Lakshmi Y M
>
> On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com> wrote:
>
>>
>>
>> On Fri, May 10, 2019 at 6:38 PM Lakshmi Raghavendra <
>> lakshmiym108 at gmail.com> wrote:
>>
>>> Hi,
>>> I am trying pgpool for automatic failover on my postgresql cluster using
>>> the watchdog feature. I wanted to know a couple of things:
>>> 1. Are there any hooks for when pgpool re-elects the next pgpool master?
>>> I wanted to run some customization during this time.
>>>
>>
>> Pgpool executes user-provided commands at the time of acquiring and
>> releasing the virtual IP by the master pgpool node. You can
>> configure the wd_escalation_command and wd_de_escalation_command
>> configuration parameters to provide a custom command or script.
>> Pgpool executes the wd_escalation_command when it gets elected as
>> master and performs the escalation; similarly, when the node resigns
>> as master, the wd_de_escalation_command gets executed.
>>
>>
>> http://www.pgpool.net/docs/latest/en/html/runtime-watchdog-config.html#CONFIG-WATCHDOG-ESCALATION-DE-ESCALATION
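(As a concrete illustration, the two hooks are wired up in pgpool.conf roughly as below; the script paths are hypothetical examples, not defaults:)

```
# pgpool.conf -- watchdog section (script paths are examples)
wd_escalation_command = '/etc/pgpool-II/escalation.sh'
wd_de_escalation_command = '/etc/pgpool-II/de-escalation.sh'
```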
>>
>>
>>> 2. Will the VIP get assigned only if there is more than 1 node present
>>> in the pgpool cluster? I had 3 nodes where I had pgpool running. When
>>> the 1st and 2nd nodes' pgpool was shut down, I was expecting the 3rd
>>> node to acquire the VIP, but it didn't happen. If my understanding is
>>> right, I was thinking of using the VIP in my database connection string
>>> (since it will always be with the pgpool master, which can connect to my
>>> postgresql primary). Now, if the 3rd node does not acquire the VIP, I
>>> cannot use it in my connection string. Correct me if my understanding is
>>> wrong.
>>>
>>
>> The master pgpool only acquires the VIP when the quorum exists (at least
>> 50% of the nodes are reachable). This is done by Pgpool to guard against
>> split-brain syndrome, which could otherwise happen in case of network
>> partitioning. So if you have 3 Pgpool nodes configured, the VIP will only
>> get assigned to the master node when at least 2 Pgpool nodes are alive
>> and reachable.
>> But in a 2-node configuration, only 1 node is required to ensure the
>> quorum, and in that case even a single alive node will get the VIP.
>>
>> Thanks
>> Best Regards
>> Muhammad Usama
>>
>>>
>>>
>>> Thanks in advance
>>>
>>>    Lakshmi Y M
>>> _______________________________________________
>>> pgpool-general mailing list
>>> pgpool-general at pgpool.net
>>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>>>
>>

