[pgpool-general: 2726] Re: pgpool 3.3.3 watchdog problem
Yugo Nagata
nagata at sraoss.co.jp
Tue Apr 8 10:17:07 JST 2014
Hi,
When the heartbeat connection between the pgpools breaks, each pgpool
concludes that the other is down, and the VIP is brought up on both nodes.
However, I can't determine the cause from the information given.
Could you please show your pgpool.conf and full log messages?
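For reference, the heartbeat behaviour is governed by the watchdog section of pgpool.conf. Below is a minimal sketch of the relevant settings as they might look on the 10.0.90.11 node; only the IPs and port 9694 come from the report and logs above, the remaining values are illustrative defaults, not taken from your configuration:

```
# pgpool.conf watchdog excerpt (sketch for node 10.0.90.11; the peer's file
# would point heartbeat_destination0 back at 10.0.90.11 instead).
use_watchdog = on
delegate_IP = '10.0.90.1'              # the VIP from the report above
wd_lifecheck_method = 'heartbeat'
wd_heartbeat_port = 9694               # matches the wd_hb_sender log lines
wd_heartbeat_keepalive = 2             # send interval in seconds; illustrative
wd_heartbeat_deadtime = 30             # peer declared down after this; illustrative
heartbeat_destination0 = '10.0.90.12'
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0'
```

Once wd_heartbeat_deadtime passes with no packet received from the peer, each node declares the other down, which is exactly the state where both bring up delegate_IP.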
On Mon, 7 Apr 2014 13:15:10 -0700
Alexandru Cardaniuc <cardaniuc at gmail.com> wrote:
> Hi,
>
>
> Is pgpool 3.3.3 having a watchdog problem?
>
> I have a 2 node cluster.
> pgpool on 10.0.90.11
> pgpool on 10.0.90.12
> delegate_IP = 10.0.90.1, and it was brought up on the primary pgpool (10.0.90.11).
> Now both pgpools have the delegate_IP up:
>
> # ifconfig
> eth0 Link encap:Ethernet HWaddr 00:1D:55:14:B1:BD
> inet addr:10.0.90.11 Bcast:10.0.255.255 Mask:255.255.0.0
> inet6 addr: fe80::21d:55ff:fe14:b1bd/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:26828863 errors:0 dropped:0 overruns:0 frame:0
> TX packets:32509057 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:2808044025 (2.6 GiB) TX bytes:4026576497 (3.7 GiB)
>
> eth0:0 Link encap:Ethernet HWaddr 00:1D:55:14:B1:BD
> inet addr:10.0.90.1 Bcast:10.0.255.255 Mask:255.255.0.0
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>
> # ifconfig
> eth0 Link encap:Ethernet HWaddr 00:1D:55:34:D0:86
> inet addr:10.0.90.12 Bcast:10.0.255.255 Mask:255.255.0.0
> inet6 addr: fe80::21d:55ff:fe34:d086/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:16619236 errors:0 dropped:0 overruns:0 frame:0
> TX packets:15740439 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:1676092603 (1.5 GiB) TX bytes:2112486773 (1.9 GiB)
>
> eth0:0 Link encap:Ethernet HWaddr 00:1D:55:34:D0:86
> inet addr:10.0.90.1 Bcast:10.0.255.255 Mask:255.255.0.0
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>
> 10.0.90.1 should be up only on 10.0.90.11 at this point.
>
> It looks like the watchdog became confused earlier today.
> On 10.0.90.11, from pgpool.log:
> 2014-04-07 11:42:31 DEBUG: pid 11380: wd_hb_receiver: received heartbeat
> signal from 10.0.90.12:9999
> 2014-04-07 11:42:32 DEBUG: pid 11382: check_pgpool_status_by_hb: checking
> pgpool 0 (10.0.90.11:9999)
> 2014-04-07 11:42:32 DEBUG: pid 11382: check_pgpool_status_by_hb: OK; status
> 3
> 2014-04-07 11:42:32 DEBUG: pid 11382: check_pgpool_status_by_hb: checking
> pgpool 1 (10.0.90.12:9999)
> 2014-04-07 11:42:32 LOG: pid 11382: check_pgpool_status_by_hb: pgpool 1 (
> 10.0.90.12:9999) is in down status
> 2014-04-07 11:42:32 DEBUG: pid 11381: wd_hb_send: send 224 byte packet
> 2014-04-07 11:42:32 DEBUG: pid 11381: wd_hb_sender: send heartbeat signal
> to 10.0.90.12:9694
> 2014-04-07 11:42:33 DEBUG: pid 11380: wd_hb_recv: received 224 byte packet
>
> The same appears on 10.0.90.12:
> 2014-04-07 11:15:44 DEBUG: pid 12975: check_pgpool_status_by_hb: checking
> pgpool 1 (10.0.90.11:9999)
> 2014-04-07 11:15:44 LOG: pid 12975: check_pgpool_status_by_hb: pgpool 1 (
> 10.0.90.11:9999) is in down status
>
> Using pgpool 3.3.3 with replication, on PostgreSQL 8.4.4.
>
>
> --
> Sincerely yours,
> Alexandru Cardaniuc
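The split brain can be confirmed mechanically from the interface output. A small sketch, using the eth0:0 lines captured in the report above (the sample strings below reproduce what both nodes printed; after a clean recovery exactly one node should hold the address):

```shell
# Count how many nodes hold the delegate IP 10.0.90.1, based on the
# eth0:0 alias lines reported above. Both samples show 10.0.90.1,
# i.e. the split-brain state described in this thread.
node11_alias='inet addr:10.0.90.1  Bcast:10.0.255.255  Mask:255.255.0.0'
node12_alias='inet addr:10.0.90.1  Bcast:10.0.255.255  Mask:255.255.0.0'

holders=0
for alias in "$node11_alias" "$node12_alias"; do
    case $alias in
        # Trailing space in the pattern keeps 10.0.90.11/.12 from matching.
        *"inet addr:10.0.90.1 "*) holders=$((holders + 1)) ;;
    esac
done
echo "nodes holding 10.0.90.1: $holders"   # prints: nodes holding 10.0.90.1: 2
```

As an interim manual recovery (an assumption about your setup, not something the watchdog did here), the duplicate alias on the node that should not hold the VIP can be dropped with `ifconfig eth0:0 down`; pgpool itself normally does this via the if_down_cmd configured in pgpool.conf.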
--
Yugo Nagata <nagata at sraoss.co.jp>