[pgpool-general: 2209] Re: Pgpool 3.3.1 heartbeat error

Granthana Biswas granthana at zedo.com
Tue Oct 22 19:34:09 JST 2013


Hi Yugo,

Thank you for your quick response. It is working fine now after changing
wd_heartbeat_keepalive to 2. :)

Regards,
Granthana


On Tue, Oct 22, 2013 at 3:59 PM, Yugo Nagata <nagata at sraoss.co.jp> wrote:

> Hi,
>
> There is a problem in your pgpool.conf.
>
>   wd_heartbeat_keepalive = 200ms
>
> pgpool-II does not recognize the 'ms' suffix as milliseconds, so it
> interprets this setting as wd_heartbeat_keepalive = 200 seconds. As a
> result, pgpool sends heartbeat signals only every 200 seconds, and each
> pgpool regards the other as down, since wd_heartbeat_deadtime is set to
> 20 seconds.
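>
> For example, a corrected configuration could look like the following
> (a minimal sketch: only wd_heartbeat_keepalive = 2 and
> wd_heartbeat_deadtime = 20 come from this thread, the comments are
> explanatory):
>
>   # watchdog heartbeat settings -- plain integers in seconds,
>   # unit suffixes such as 'ms' are not parsed
>   wd_heartbeat_keepalive = 2    # send a heartbeat every 2 seconds
>   wd_heartbeat_deadtime = 20    # regard the other pgpool as down after
>                                 # 20 seconds without a heartbeat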
>
> On Fri, 18 Oct 2013 17:17:29 +0530
> Granthana Biswas <granthana at zedo.com> wrote:
>
> > Hello All,
> >
> > I am testing out the new pgpool heartbeat mode, but I am running into
> > an issue where both the slave and the master pgpool show themselves as
> > the active master. The log says:
> >
> > 2013-10-17 14:46:15 LOG:   pid 2757: wd_chk_setuid all commands have
> setuid
> > bit
> > 2013-10-17 14:46:15 LOG:   pid 2757: watchdog might call network commands
> > which using setuid bit.
> > 2013-10-17 14:46:15 LOG:   pid 2757: read_status_file: 1 th backend is
> set
> > to down status
> > 2013-10-17 14:46:15 LOG:   pid 2757: wd_init: start watchdog
> > 2013-10-17 14:46:15 LOG:   pid 2757: pgpool-II successfully started.
> > version 3.3.1 (tokakiboshi)
> > 2013-10-17 14:46:15 LOG:   pid 2757: find_primary_node: primary node id
> is 0
> > 2013-10-17 14:49:36 LOG:   pid 2763: watchdog: lifecheck started
> > 2013-10-17 14:49:36 LOG:   pid 2763: check_pgpool_status_by_hb: lifecheck
> > failed. pgpool 1 (X.X.X.46:9999) seems not to be working
> > 2013-10-17 14:49:36 LOG:   pid 2763: pgpool_down: X.X.X.46:9999 is going
> > down
> > 2013-10-17 14:49:36 LOG:   pid 2763: pgpool_down: I'm oldest so standing
> > for master
> > 2013-10-17 14:49:36 LOG:   pid 2763: wd_escalation: escalating to master
> > pgpool
> > WARNING: interface is ignored: Operation not permitted
> > 2013-10-17 14:49:38 LOG:   pid 2763: wd_escalation: escalated to master
> > pgpool successfully
> > 2013-10-17 14:49:43 LOG:   pid 2763: check_pgpool_status_by_hb: pgpool 1
> > (X.X.X.46:9999) is in down status
> >
> >
> > I have attached the master and slave pgpool.conf files for reference.
> > Can anyone shed some light on this? I have exhausted all the resources
> > I could find on Google.
> >
> >
> > Regards,
> > Granthana
>
>
> --
> Yugo Nagata <nagata at sraoss.co.jp>
>

