[pgpool-general: 5448] Re: FW: show pool_nodes status 0

Lucas Luengas lucasluengas at gmail.com
Thu May 11 03:09:28 JST 2017


Hello.

I am not running your pgpool-II version; I am using pgpool-II 3.4.6.

You can check the pgpool status file. With pgpool-II 3.4.6 the file is
/tmp/pgpool_status by default. Check the "logdir" setting in your
pgpool.conf.
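
For example (the pgpool.conf path here is an assumption; adjust it to your
installation):

$ grep logdir /etc/pgpool-II/pgpool.conf   # the status file is written to $logdir/pgpool_status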

You can try the following (a command sketch follows this list):
- Stop pgpool.
- Remove the pgpool status file (/tmp/pgpool_status by default).
- Start pgpool.
- Check the node status.
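
A minimal sketch of those steps, assuming the default status file path and
pgpool port, and that pgpool is run from the binary directly rather than a
service manager:

$ pgpool -m fast stop                               # stop pgpool
$ rm /tmp/pgpool_status                             # remove the cached backend status file
$ pgpool                                            # start pgpool again
$ psql -h localhost -p 9999 -c 'show pool_nodes;'   # check node status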

Kind regards.


On Tue, May 9, 2017 at 12:00 PM, Gabriel Fernández Martínez <
gfernandez at hotelbeds.com> wrote:

> Hi,
>
>
>
> I am evaluating whether pgpool can be useful to support the business
> activity of one application. For testing purposes, I have deployed a 3.5.2
> pgpool release on the same server where I am running the primary
> PostgreSQL 9.5.
>
> I have deployed two PostgreSQL nodes using streaming replication and I am
> trying to configure pgpool as a load balancer, but I am stuck with the
> configuration because the standby server is in status 0 and no SELECTs
> are run on this server.
>
>
>
> ******-web=> show pool_nodes;
>  node_id |       hostname       | port | status | lb_weight |  role   | select_cnt
> ---------+----------------------+------+--------+-----------+---------+------------
>  0       | *************196.*** | 5*** | 2      | 0.500000  | primary | 0
>  1       | *************198.*** | 5*** | 0      | 0.500000  | standby | 0
> (2 rows)
>
>
>
> I used the template pgpool.conf.sample-stream and modified the following
> parameters to adapt it to my environment:
>
> - listen_addresses = '*'
>
> - backend_hostname0 = '**********196.***'
> - backend_port0 = 5***
> - backend_weight0 = 1
> - backend_data_directory0 = '/data/******1'
> - backend_flag0 = 'ALLOW_TO_FAILOVER'
>
> - backend_hostname1 = '**********198.***'
> - backend_port1 = 5***
> - backend_weight1 = 1
> - backend_data_directory1 = '/data/******2'
> - backend_flag1 = 'ALLOW_TO_FAILOVER'
>
> - enable_pool_hba = on
>
> - logdir = '/var/log/pgpool-II-95'
> - pid_file_name = '/data/node1/pgpool-II-95/pgpool.pid'
>
> - sr_check_user = 'pgpool'
> - sr_check_password = '*********'
>
>
>
> I have created a user called pgpool in the PostgreSQL cluster and I have
> checked that I can connect with this user to the Postgres database running
> on the 196 & 198 nodes.
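>
> For reference, those two steps look roughly like this (host, port and
> password are placeholders for the masked values):
>
> -- on the primary, as a superuser
> CREATE ROLE pgpool LOGIN PASSWORD 'secret';
>
> $ psql -h <node-196> -p <port> -U pgpool -d postgres -c 'SELECT 1;'
> $ psql -h <node-198> -p <port> -U pgpool -d postgres -c 'SELECT 1;'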
>
>
>
> From the pgpool startup log, I can see that the primary is properly
> identified, but there is no explanation of why the standby shows status 0:
>
> 2017-05-08 18:05:05: pid 17334: LOG:  pgpool-II successfully started. version 3.5.2 (ekieboshi)
> 2017-05-08 18:05:05: pid 17334: LOG:  find_primary_node: checking backend no 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  I am 17368
> 2017-05-08 18:05:05: pid 17368: DEBUG:  initializing backend status
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 13 bytes from backend 0
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate kind = 5
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:p
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_read: read 13 bytes from backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate kind = 5
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_write: to backend: kind:p
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_read: read 327 bytes from backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate kind = 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate backend: key data received
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 327 bytes from backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate backend: transaction state: I
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate kind = 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_write: to backend: kind:X
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate backend: key data received
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate backend: transaction state: I
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: extended:0 query:"SELECT pg_is_in_recovery()"
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:Q
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 75 bytes from backend 0
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'T'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received ROW DESCRIPTION ('T')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: row description: num_fileds: 1
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'D'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received DATA ROW ('D')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'C'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received COMMAND COMPLETE ('C')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'Z'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received READY FOR QUERY ('Z')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:X
> 2017-05-08 18:05:05: pid 17334: LOG:  find_primary_node: primary node id is 0
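>
> For reference, the same pg_is_in_recovery() check that pgpool runs
> internally can be issued by hand against each backend (host and port are
> placeholders for the masked values); the primary should return f and the
> standby t:
>
> $ psql -h <node-196> -p <port> -U pgpool -c 'SELECT pg_is_in_recovery();'  # f (primary)
> $ psql -h <node-198> -p <port> -U pgpool -c 'SELECT pg_is_in_recovery();'  # t (standby)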
>
>
>
> I have run pgbench to verify the load balancing between the nodes, and I
> can see that only the master is handling load:
>
> $ pgbench -h ********196.*** -p 9999 -c 10 -S -T 10 -U ******-web test
> Password:
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 1
> query mode: simple
> number of clients: 10
> number of threads: 1
> duration: 10 s
> number of transactions actually processed: 218728
> latency average: 0.457 ms
> tps = 21871.048129 (including connections establishing)
> tps = 21879.150205 (excluding connections establishing)
>
>
>
> ******-web=> show pool_nodes;
>  node_id |       hostname       | port | status | lb_weight |  role   | select_cnt
> ---------+----------------------+------+--------+-----------+---------+------------
>  0       | *************196.*** | 5*** | 2      | 0.500000  | primary | 218560
>  1       | *************198.*** | 5*** | 0      | 0.500000  | standby | 0
> (2 rows)
>
>
>
>
>
> I have configured PCP management so that I can use pcp_node_info, and I
> get the same output (as expected):
>
>
>
> [postgres@********196 ~]$ pcp_node_info -U pgpool -h ip-10-222-64-196.ods --node=0 -v
> Password:
> Hostname: *******196.***
> Port    : 5***
> Status  : 2
> Weight  : 0.500000
>
> [postgres@********196 ~]$ pcp_node_info -U pgpool -h ip-10-222-64-196.ods --node=1 -v
> Password:
> Hostname: ********198.***
> Port    : 5322
> Status  : 0
> Weight  : 0.500000
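>
> For reference, these pcp commands only need a pcp.conf entry of the form
> username:md5hash; a minimal sketch, assuming pcp.conf lives at
> /etc/pgpool-II/pcp.conf and using a placeholder password:
>
> $ pg_md5 secretpassword                      # prints the md5 hash of the password
> $ echo "pgpool:<hash printed above>" >> /etc/pgpool-II/pcp.conf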
>
>
>
>
>
> Any suggestion is welcome. I have tried to go through the documentation
> and I cannot find any clue that would help me resolve the issue.
>
>
>
> Best regards
>
>
>
> Gabriel Fernández
> Technology Architecture
>
> tel: (+34) 971 189 188
> Ed. Mirall, Cami de Son Fangos 100, B-2
> E-07007, Palma de Mallorca, Spain
> gfernandez at hotelbeds.com
>
>
>
>
>
>
> _______________________________________________
> pgpool-general mailing list
> pgpool-general at pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-general
>
>

