[pgpool-general: 2796] Re: ask pgpool for status

Yugo Nagata nagata at sraoss.co.jp
Thu Apr 24 11:26:57 JST 2014


On Wed, 23 Apr 2014 14:42:06 +0200
Attila Heidrich <attila.heidrich at gmail.com> wrote:

> Thanks, "SHOW pool_nodes" showed up for me, that the script I use tells me
> the replication status of the backend, and it is always "Master" since I
> don't use streaming replication.
> 
> My problem is still the difference in the results when I request the same
> information from the pool nodes:
> 
> Let's ask both servers for pool status. Never mind the script, it just
> invokes pcp on localhost.
> 
> -------- code fraction of the script : pool ----------
> ...
> # PCP configuration
> pcp_host="127.0.0.1"
> pcp_port="9898"
> pcp_username="pg_admin"
> pcp_password="Password"
> pcp_timeout="10"
> 
> # Health check uses psql to connect to each backend server. Specify the
> # options required to connect here
> psql_healthcheck_opts="-U pg_admin template1"
> 
> # Default options to send to pcp commands
> pcp_cmd_preamble="$pcp_timeout $pcp_host $pcp_port $pcp_username $pcp_password"
> ...
> -------- /pool/ ----------
> 
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run "pool status"
> postgres-2:
>     Node: 0
>     Host: postgres-1
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool and connected (2)
>     Role: Master
> 
>     Node: 1
>     Host: postgres-2
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool and connected (2)
>     Role: Master
> postgres-1:
>     Node: 0
>     Host: postgres-1
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, detached from pool (3)
>     Role: Master
> 
>     Node: 1
>     Host: postgres-2
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool (1)
>     Role: Master
> 
> The server named "postgres-1", which is currently the standby node, reports
> the first backend as "detached". This means that pcp_node_info returned 3.
> 
> I would like to know whether this is acceptable, or an error that I should
> investigate. (After restarting the nodes, all backends usually show as
> attached again - see below.)

It is possible for two pgpools to see a backend in different statuses.
However, this is odd and should not happen, because the pgpools notify each
other of backend status changes when the watchdog is enabled. If you can
reproduce this reliably, could you please tell me the steps? The problem
might be in pgpool itself, in your operation, or somewhere else.
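
By the way, if a backend is merely marked detached (status 3) while
PostgreSQL itself is still running, it can usually be re-attached with
pcp_attach_node rather than by restarting the nodes. A minimal sketch,
reusing the pcp settings from the script fragment above (timeout, host,
port, user, and password are your values, not verified here):

    # Re-attach backend node 0 via the local pcp port.
    # Arguments: timeout, host, port, username, password, node id.
    pcp_attach_node 10 127.0.0.1 9898 pg_admin Password 0

    # Verify: the status column should return to 1 or 2 ("up").
    pcp_node_info 10 127.0.0.1 9898 pg_admin Password 0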

> 
> If the pool script were configured with pcp_host="10.6.14.15", pcp would
> always ask the same server (well, unless both servers held the HA IP
> address) regardless of which server I ran the script on, so in theory no
> difference could occur at all.
> 
> At this very minute the IP address assignment is correct: postgres-2 is the
> active pgpool frontend.
> 
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run "ip add"
> postgres-2:
>     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>         inet 127.0.0.1/8 scope host lo
>            valid_lft forever preferred_lft forever
>         inet6 ::1/128 scope host
>            valid_lft forever preferred_lft forever
>     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> qlen 1000
>         link/ether 00:50:56:8f:7e:7b brd ff:ff:ff:ff:ff:ff
>         inet 10.6.14.11/24 brd 10.6.14.255 scope global eth0
>            valid_lft forever preferred_lft forever
>         inet 10.6.14.15/24 scope global secondary eth0
>            valid_lft forever preferred_lft forever
>         inet6 fe80::250:56ff:fe8f:7e7b/64 scope link
>            valid_lft forever preferred_lft forever
> postgres-1:
>     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>         inet 127.0.0.1/8 scope host lo
>            valid_lft forever preferred_lft forever
>         inet6 ::1/128 scope host
>            valid_lft forever preferred_lft forever
>     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> qlen 1000
>         link/ether 00:50:56:8f:58:ab brd ff:ff:ff:ff:ff:ff
>         inet 10.6.14.10/24 brd 10.6.14.255 scope global eth0
>            valid_lft forever preferred_lft forever
>         inet6 fe80::250:56ff:fe8f:58ab/64 scope link
>            valid_lft forever preferred_lft forever
> 
> In short: is it OK to query the standby pgpool2 node, or should I always
> ask the currently active one?

If the watchdog works correctly, the backend status should be the same on the
active and standby nodes, so normally you only need to check the active one.
However, the two can differ depending on your operations, so checking the
status of both pgpools gives you more confidence.
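
For routine checks, a small loop over both pgpool hosts makes the comparison
explicit. A sketch, assuming the pcp port 9898 is reachable from wherever you
run it and reusing the credentials from your script (in the output, status
1 = up and in pool, 2 = up and connected, 3 = detached):

    # Ask each pgpool for its view of both backend nodes and compare.
    for host in postgres-1 postgres-2; do
        echo "== $host =="
        for node in 0 1; do
            pcp_node_info 10 $host 9898 pg_admin Password $node
        done
    done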

> 
> Anyway, here is the relevant information from our system, in case anyone
> finds it useful:
> 
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run 'for i in $(eval echo
> {0..$(($(pcp node_count 10) - 1))}); do pcp node_info 10 $i; done'
> postgres-2:
>     postgres-1 5433 1 0.500000
>     postgres-2 5433 1 0.500000
> postgres-1:
>     postgres-1 5433 2 0.500000
>     postgres-2 5433 2 0.500000
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run "pcp watchdog_info"
> postgres-1:
>     postgres-1 5432 9000 3
> postgres-2:
>     postgres-2 5432 9000 2
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run "pool status"
> postgres-2:
>     Node: 0
>     Host: postgres-1
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool (1)
>     Role: Master
> 
>     Node: 1
>     Host: postgres-2
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool (1)
>     Role: Master
> postgres-1:
>     Node: 0
>     Host: postgres-1
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool and connected (2)
>     Role: Master
> 
>     Node: 1
>     Host: postgres-2
>     Port: 5433
>     Weight: 0.500000
>     Status: Up, in pool and connected (2)
>     Role: Master
> wlab at control-1:~/salt$ sudo salt postgres\* cmd.run "psql -U pg_admin -p
> 5432 -c 'show pool_nodes' template1"
> postgres-2:
>      node_id |  hostname  | port | status | lb_weight |  role
>     ---------+------------+------+--------+-----------+--------
>      0       | postgres-1 | 5433 | 2      | 0.500000  | master
>      1       | postgres-2 | 5433 | 2      | 0.500000  | slave
>     (2 rows)
> postgres-1:
>      node_id |  hostname  | port | status | lb_weight |  role
>     ---------+------------+------+--------+-----------+--------
>      0       | postgres-1 | 5433 | 2      | 0.500000  | master
>      1       | postgres-2 | 5433 | 2      | 0.500000  | slave
>     (2 rows)
> 
> The command "salt" in this case works like "ssh" for all servers named
> "postgres*".
> 
> Regards,
> 
> Attila
> 
> 
> 2014-04-23 13:36 GMT+02:00 Yugo Nagata <nagata at sraoss.co.jp>:
> 
> > On Wed, 23 Apr 2014 11:37:46 +0200
> > Attila Heidrich <attila.heidrich at gmail.com> wrote:
> >
> > > Dear guys!
> > >
> > > What command or script do you use to ask for pgpool's current status?
> >
> > I mainly use pcp_node_info, pcp_watchdog_info, and "SHOW pool_nodes".
> >
> > > I have written a small wrapper for pcp. Since pcp is available on all
> > > nodes running pgpool2, I have set the target IP to 127.0.0.1.
> > >
> > > Am I right that I should get the same result when asking the same thing
> > > on all the nodes? Or should I use the common address, not the local one?
> > > It's a
> >
> > Sorry, I'm confused and not sure what the difference is. Could you show
> > some use cases?
> >
> > > good way to get consistent results (mostly) all the time, for sure, but
> > > what's recommended?
> > >
> > >
> > > regards,
> > > Attila
> >
> >
> > --
> > Yugo Nagata <nagata at sraoss.co.jp>
> >


-- 
Yugo Nagata <nagata at sraoss.co.jp>

