[pgpool-general: 6576] Re: Pgpool query

Muhammad Usama m.usama at gmail.com
Thu May 30 18:30:12 JST 2019


Hi

These log messages report the replication lag between the primary and the
standby nodes (backend nodes 1 and 2 in your case).

As you know, replication lag is how far a replica is behind the primary,
measured either in time or in bytes of WAL. The time it takes to copy data
from the primary to a replica and apply the changes can vary based on a
number of factors, including network latency, replication configuration, and
the amount of activity on both the primary and the replicas.
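
If you want to measure the current lag yourself, one way (assuming streaming
replication and PostgreSQL 10 or later, which uses the *_lsn column names in
pg_stat_replication) is to run something like this on the primary:

    SELECT application_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;

The byte counts Pgpool-II prints in the log above come from essentially the
same kind of comparison between the primary's current WAL position and each
standby's position.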

You can adjust how the replication delay is logged via the log_standby_delay
setting in pgpool.conf. And to reduce the replication delay itself between
the primary and the standbys, you need to look at your PostgreSQL
installation and your network.
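
For example, a sketch of the relevant pgpool.conf settings (the parameter
names are the standard Pgpool-II ones; the threshold value here is only an
illustration and should be tuned for your workload) could look like:

    # how often (in seconds) Pgpool-II runs the streaming replication check
    sr_check_period = 10
    # WAL lag in bytes beyond which a standby is considered delayed
    delay_threshold = 10000000
    # 'none', 'always', or 'if_over_threshold'
    log_standby_delay = 'if_over_threshold'

With log_standby_delay = 'if_over_threshold', the lag message is only emitted
when the delay exceeds delay_threshold, which keeps the log quieter during
normal operation.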

Thanks
Best regards
Muhammad Usama

On Mon, May 27, 2019 at 3:45 PM Lakshmi Raghavendra <lakshmiym108 at gmail.com>
wrote:

> Hi,
>
>    I have configured Pgpool-II for streaming replication and am using the
> watchdog feature for failovers.
> I often see the below messages in the log:
>
> 2019-05-27T10:34:51+00:00  process[6714]: [15-1] 2019-05-27 10:34:51: pid
> 6714: LOG:  Replication of node:1 is behind 671088848 bytes from the
> primary server (node:0)
> 2019-05-27T10:34:51+00:00  process[6714]: [15-2] 2019-05-27 10:34:51: pid
> 6714: CONTEXT:  while checking replication time lag
> 2019-05-27T10:34:51+00:00  process[6714]: [16-1] 2019-05-27 10:34:51: pid
> 6714: LOG:  Replication of node:2 is behind 1124073680 bytes from the
> primary server (node:0)
> 2019-05-27T10:34:51+00:00  process[6714]: [16-2] 2019-05-27 10:34:51: pid
> 6714: CONTEXT:  while checking replication time lag
> 2019-05-27T10:35:01+00:00  process[6714]: [17-1] 2019-05-27 10:35:01: pid
> 6714: LOG:  Replication of node:1 is behind 671088848 bytes from the
> primary server (node:0)
> 2019-05-27T10:35:01+00:00  process[6714]: [17-2] 2019-05-27 10:35:01: pid
> 6714: CONTEXT:  while checking replication time lag
> 2019-05-27T10:35:01+00:00  process[6714]: [18-1] 2019-05-27 10:35:01: pid
> 6714: LOG:  Replication of node:2 is behind 1124073680 bytes from the
> primary server (node:0)
> 2019-05-27T10:35:01+00:00  process[6714]: [18-2] 2019-05-27 10:35:01: pid
> 6714: CONTEXT:  while checking replication time lag
>
> Questions :
>
> 1. Does the above message mean that when a failover happens and node 1 or 2
> is chosen as the new primary Postgres master, some data would be lost
> because of the replication lag?
>
> 2. Please let me know how to overcome this?
>
> Thanks And Regards,
>
>    Lakshmi Y M
>
>
> On Fri, May 24, 2019 at 12:18 PM Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
>
>> > Ok. So the reason behind the ask was: if pgpool is not going to retry the
>> > promote on other nodes, I was thinking I can just trigger a forced promote
>> > there via failover, and follow master will be run by pgpool itself, and I
>> > don't have to do a failover by myself and run pcp_recovery on all the nodes.
>>
>> Probably you misunderstand what Pgpool-II is expected to do. Pgpool-II
>> does nothing with PostgreSQL promotion. It's completely the script
>> writer's responsibility to handle the promotion retrying if
>> necessary.
>>
>> > Regarding 5. It's normal. Don't worry.
>> >
>> > When a standby PostgreSQL node is added, existing connections to
>> > Pgpool-II are kept (and thus do not use the new standby for load
>> > balancing). After the session ends, the Pgpool-II child process checks
>> > whether a failback has happened. If it has, the process exits itself so
>> > that a new process is spawned to reflect the fact that a new standby node
>> > has been added (and thus it can use the new standby for load balancing).
>> >
>> >
>> >
>> >
>> > Thanks. Per my understanding, the restart happens only when a new standby
>> > is added, is that right? Because I see these frequent restarts happening
>> > even when there is no failback.
>>
>> The failback event detection could happen long after the failback
>> happened if a user keeps a connection to Pgpool-II open.
>>
>> > Also could you please shed some light on when the failback_command gets
>> > executed? I have never seen the command being called.
>>
>> It is called when pcp_attach_node or pcp_recovery_node gets
>> executed. It could also happen when follow_master_command gets
>> executed, if it executes pcp_recovery_node inside the command.
>>
>> >
>> > Thanks And Regards,
>> >
>> >    Lakshmi Y M
>> >
>> > On Tue, May 21, 2019 at 8:02 PM Lakshmi Raghavendra <
>> lakshmiym108 at gmail.com>
>> > wrote:
>> >
>> >> Hi,
>> >>
>> >>    I had a couple of queries as below:
>> >>
>> >> On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com>
>> wrote:
>> >>
>> >>>
>> >>>
>> >>> On Fri, May 10, 2019 at 6:38 PM Lakshmi Raghavendra <
>> >>> lakshmiym108 at gmail.com> wrote:
>> >>>
>> >>>> Hi,
>> >>>> I am trying pgpool for automatic failover on my PostgreSQL cluster
>> >>>> using the watchdog feature.
>> >>>> Wanted to know a couple of things:
>> >>>> 1. Are there any hooks when pgpool re-elects the next pgpool master? I
>> >>>> wanted to run some customization during this time.
>> >>>>
>> >>>
>> >>> Pgpool executes user-provided commands at the time the master pgpool
>> >>> node acquires and releases the virtual IP. You can configure the
>> >>> wd_escalation_command and wd_de_escalation_command configuration
>> >>> parameters to provide the custom command or script.
>> >>> Pgpool executes the wd_escalation_command when it gets elected as
>> >>> master and performs the escalation, and similarly when the node
>> >>> resigns as master the wd_de_escalation_command gets executed.
>> >>>
>> >>
>> >> 1. I tried using the above wd_escalation_command and
>> >> wd_de_escalation_command.
>> >> I have a 3-node cluster and observed that the escalation command will be
>> >> triggered only if there are at least 2 nodes alive in the pgpool cluster.
>> >> If only the master pgpool is alive with no slave nodes, the command is
>> >> never initiated. Is this behavior expected?
>> >>
>> >> 2. I wanted to understand the significance of the VIP. Is there any issue
>> >> caused if I don't use the VIP in a 3-node pgpool cluster?
>> >>
>> >> Please let me know.
>> >>
>> >> Thanks And Regards,
>> >>
>> >>    Lakshmi Y M
>> >>
>> >> On Thu, May 16, 2019 at 2:17 AM Muhammad Usama <m.usama at gmail.com>
>> wrote:
>> >>
>> >>>
>> >>>
>> >>> On Fri, May 10, 2019 at 6:38 PM Lakshmi Raghavendra <
>> >>> lakshmiym108 at gmail.com> wrote:
>> >>>
>> >>>> Hi,
>> >>>> I am trying pgpool for automatic failover on my PostgreSQL cluster
>> >>>> using the watchdog feature.
>> >>>> Wanted to know a couple of things:
>> >>>> 1. Are there any hooks when pgpool re-elects the next pgpool master? I
>> >>>> wanted to run some customization during this time.
>> >>>>
>> >>>
>> >>> Pgpool executes user-provided commands at the time the master pgpool
>> >>> node acquires and releases the virtual IP. You can configure the
>> >>> wd_escalation_command and wd_de_escalation_command configuration
>> >>> parameters to provide the custom command or script.
>> >>> Pgpool executes the wd_escalation_command when it gets elected as
>> >>> master and performs the escalation, and similarly when the node
>> >>> resigns as master the wd_de_escalation_command gets executed.
>> >>>
>> >>>
>> >>>
>> http://www.pgpool.net/docs/latest/en/html/runtime-watchdog-config.html#CONFIG-WATCHDOG-ESCALATION-DE-ESCALATION
>> >>>
>> >>>
>> >>>> 2. Will the VIP get assigned only if there is more than 1 node present
>> >>>> in the pgpool cluster? I had 3 nodes where I had pgpool running. When
>> >>>> the 1st and 2nd node's pgpool was shut down, I was expecting the 3rd
>> >>>> node to acquire the VIP, but it didn't happen. And if my understanding
>> >>>> was right, I was thinking of using the VIP in my database connection
>> >>>> string (since it will always be with the pgpool master, which can
>> >>>> connect to my PostgreSQL primary). Now if the 3rd node is not acquiring
>> >>>> the VIP, I could not use it in my connection string. Correct me if my
>> >>>> understanding is wrong.
>> >>>>
>> >>>
>> >>> The master pgpool only acquires the VIP when the quorum exists (a
>> >>> minimum of 50% of the nodes are reachable). This is done by Pgpool to
>> >>> guard against split-brain syndrome, which could otherwise happen in
>> >>> case of network partitioning. So if you have 3 Pgpool nodes configured,
>> >>> then the VIP will only get assigned to the master node when at least
>> >>> 2 Pgpool nodes are alive and reachable.
>> >>> But in the case of a 2-node configuration, only 1 node is required to
>> >>> ensure the quorum, and in that case even if a single node is alive, it
>> >>> will get the VIP.
>> >>>
>> >>> Thanks
>> >>> Best Regards
>> >>> Muhammad Usama
>> >>>
>> >>>>
>> >>>>
>> >>>> Thanks in advance
>> >>>>
>> >>>>    Lakshmi Y M
>> >>>> _______________________________________________
>> >>>> pgpool-general mailing list
>> >>>> pgpool-general at pgpool.net
>> >>>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>> >>>>
>> >>>
>>
>

