[pgpool-general: 5470] Re: FW: show pool_nodes status 0

Lucas Luengas lucasluengas at gmail.com
Tue May 16 03:21:39 JST 2017


Hi Gabriel.

I think pgpool uses a status file for recording node status. I only manage
pgpool using pcp commands, but I once got status 0 for a node and resolved
the situation by removing the status file.
I think it's a pgpool internal file.

For pgpool version 2.3 (link:
http://www.pgpool.net/docs/pgpool-II-2.3/pgpool-en.html ) I found this
information about the status file. I hope this helps you.

############################################

pgpool records the backend status in the [logdir]/pgpool_status file. When pgpool
restarts, it reads this file and restores the backend status. This prevents
data differences among DB nodes that might be caused by the following
scenario:

   1. One of the backends suddenly stops and pgpool executes a failover
   2. An update is made to one of the active DBs through pgpool
   3. Our administrator decides to stop pgpool
   4. Someone decides to restart the stopped DB without letting our admin
   know
   5. Our poor administrator restarts pgpool

If, for some reason, the stopped DB has been synced with the active
DB by hand, you can safely remove pgpool_status before starting pgpool.

###############################################
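The status file the docs describe is plain text, with one status word per
backend line (the values "up" and "unused" seen later in this thread). A
minimal sketch of inspecting and resetting it, using a scratch directory in
place of the configured logdir (all paths here are illustrative):

```shell
# pgpool_status holds one status word per backend, in backend-id order.
# A scratch directory stands in for the configured logdir.
logdir=$(mktemp -d)
printf 'up\nunused\n' > "$logdir/pgpool_status"

cat "$logdir/pgpool_status"          # inspect the recorded statuses

# Once the stopped backend has been re-synced by hand, remove the file
# so pgpool starts with a fresh view of the backends:
rm "$logdir/pgpool_status"
test ! -e "$logdir/pgpool_status" && echo "status file removed"
```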


Kind regards.



On Mon, May 15, 2017 at 2:08 PM, PVY <balroga3 at yandex.ru> wrote:

> Hello Gabriel.
>
> You can also start pgpool with the "-D" command-line option; it tells pgpool
> to discard the status file on startup.
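A toy way to picture what "-D" does: the saved status file is discarded
before startup. Only the flag's meaning comes from the thread; the wrapper
function and paths below are made up for illustration and are not pgpool
itself:

```shell
# Hypothetical stand-in for pgpool, only to illustrate the "-D" effect.
logdir=$(mktemp -d)
printf 'up\nunused\n' > "$logdir/pgpool_status"    # stale saved status

start_pgpool() {
    if [ "$1" = "-D" ]; then
        rm -f "$logdir/pgpool_status"    # what -D effectively does
    fi
    echo "pgpool started"
}

start_pgpool -D
test ! -e "$logdir/pgpool_status" && echo "status discarded"
```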
>
>
> Hi Lucas,
>
> Thank you so much for your response, it works!
>
> In the status file, before following your sequence, we found the following
> values:
> up
> unused
>
> Once I applied your suggested action plan, it now shows:
> up
> up
>
> Please, can you let me know where I can get more details about the usage of
> this file? I can see in some documents that this file is read during
> start-up, and I can see that the file is not removed after the
> application shuts down (I do not know if this is the expected behavior).
>
> Regards
>
> *From:* Lucas Luengas [mailto:lucasluengas at gmail.com]
> *Sent:* Wednesday, May 10, 2017 20:09
> *To:* Gabriel Fernández Martínez <gfernandez at hotelbeds.com>
> *Cc:* pgpool-general at pgpool.net
> *Subject:* Re: [pgpool-general: 5445] FW: show pool_nodes status 0
>
> Hello.
>
> I don't use your pgpool-II version. I am using pgpool-II version 3.4.6.
>
> You can check the pgpool status file. With pgpool-II version 3.4.6 the file is
> /tmp/pgpool_status by default. Check the "logdir" setting in your
> pgpool.conf.
>
> You can try:
> - Stop pgpool.
> - Remove pgpool status file (/tmp/pgpool_status by default)
> - Start pgpool.
> - Check node status.
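The steps above, sketched as a script. The pgpool and psql invocations are
left as comments to adapt to your installation, and a scratch file stands
in for /tmp/pgpool_status so the removal step can be demonstrated safely:

```shell
STATUS_FILE=$(mktemp)                 # stand-in for /tmp/pgpool_status
printf 'up\nunused\n' > "$STATUS_FILE"

# pgpool -m fast stop                 # 1. stop pgpool
rm -f "$STATUS_FILE"                  # 2. remove the status file
# pgpool -n &                         # 3. start pgpool
# psql -p 9999 -c 'show pool_nodes;'  # 4. check node status

test ! -e "$STATUS_FILE" && echo "status file removed"
```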
>
> Kind regards.
>
>
> On Tue, May 9, 2017 at 12:00 PM, Gabriel Fernández Martínez <
> gfernandez at hotelbeds.com> wrote:
> Hi,
>
> I am evaluating whether pgpool can be useful to support the business activity
> of one application. For testing purposes, I have deployed pgpool release 3.5.2
> on the same server where I am running the primary PostgreSQL 9.5.
>
> I have deployed two PostgreSQL nodes using streaming replication and I am
> trying to configure pgpool as a load balancer, but I am stuck with the
> configuration because the standby server is in status 0 and no selects
> are run on this server.
>
> ******-web=> show pool_nodes;
>  node_id |       hostname       | port | status | lb_weight |  role   | select_cnt
> ---------+----------------------+------+--------+-----------+---------+------------
>  0       | *************196.*** | 5*** | 2      | 0.500000  | primary | 0
>  1       | *************198.*** | 5*** | 0      | 0.500000  | standby | 0
> (2 rows)
>
> I used the template pgpool.conf.sample-stream and modified the
> following parameters to adapt to my environment:
> -          listen_addresses = '*'
>
> -          backend_hostname0 = '**********196.***'
> -          backend_port0 = 5***
> -          backend_weight0 = 1
> -          backend_data_directory0 = '/data/******1'
> -          backend_flag0 = 'ALLOW_TO_FAILOVER'
>
> -          backend_hostname1 = '**********198.***'
> -          backend_port1 = 5***
> -          backend_weight1 = 1
> -          backend_data_directory1 = '/data/******2'
> -          backend_flag1 = 'ALLOW_TO_FAILOVER'
>
> -          enable_pool_hba = on
>
> -          logdir = '/var/log/pgpool-II-95'
> -          pid_file_name = '/data/node1/pgpool-II-95/pgpool.pid'
>
> -          sr_check_user = 'pgpool'
> -          sr_check_password = '*********'
>
> I have created a user called pgpool in the PostgreSQL cluster and I have
> checked that I can connect with this user to the Postgres databases running
> on the 196 & 198 nodes.
>
> From the pgpool startup log, I can see that the primary is properly
> identified, but there is no explanation of why the standby shows
> status 0:
> 2017-05-08 18:05:05: pid 17334: LOG:  pgpool-II successfully started.
> version 3.5.2 (ekieboshi)
> 2017-05-08 18:05:05: pid 17334: LOG:  find_primary_node: checking backend
> no 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  I am 17368
> 2017-05-08 18:05:05: pid 17368: DEBUG:  initializing backend status
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 13 bytes from
> backend 0
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate kind = 5
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:p
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_read: read 13 bytes from
> backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate kind = 5
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_write: to backend: kind:p
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_read: read 327 bytes from
> backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate kind = 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate backend: key data
> received
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 327 bytes from
> backend 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  authenticate backend: transaction
> state: I
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate kind = 0
> 2017-05-08 18:05:05: pid 17368: DEBUG:  pool_write: to backend: kind:X
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate backend: key data
> received
> 2017-05-08 18:05:05: pid 17334: DEBUG:  authenticate backend: transaction
> state: I
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: extended:0 query:"SELECT
> pg_is_in_recovery()"
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:Q
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_read: read 75 bytes from
> backend 0
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'T'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received ROW DESCRIPTION
> ('T')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: row description:
> num_fileds: 1
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'D'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received DATA ROW ('D')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'C'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received COMMAND
> COMPLETE ('C')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: kind: 'Z'
> 2017-05-08 18:05:05: pid 17334: DEBUG:  do_query: received READY FOR QUERY
> ('Z')
> 2017-05-08 18:05:05: pid 17334: DEBUG:  pool_write: to backend: kind:X
> 2017-05-08 18:05:05: pid 17334: LOG:  find_primary_node: primary node id
> is 0
>
> I have run pgbench to verify load balancing between the nodes, and I can
> see that only the master is handling load:
> $ pgbench -h ********196.*** -p 9999 -c 10 -S -T 10 -U ******-web test
> Password:
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 1
> query mode: simple
> number of clients: 10
> number of threads: 1
> duration: 10 s
> number of transactions actually processed: 218728
> latency average: 0.457 ms
> tps = 21871.048129 (including connections establishing)
> tps = 21879.150205 (excluding connections establishing)
>
> ******-web=> show pool_nodes;
>  node_id |       hostname       | port | status | lb_weight |  role   | select_cnt
> ---------+----------------------+------+--------+-----------+---------+------------
>  0       | *************196.*** | 5*** | 2      | 0.500000  | primary | 218560
>  1       | *************198.*** | 5*** | 0      | 0.500000  | standby | 0
> (2 rows)
>
>
> I have configured pcp management in order to be able to use
> pcp_node_info, and I got the same output (as expected):
>
> [postgres@********196 ~]$ pcp_node_info -U pgpool -h ip-10-222-64-196.ods
> --node=0 -v
> Password:
> Hostname: *******196.***
> Port    : 5***
> Status  : 2
> Weight  : 0.500000
> [postgres@********196 ~]$ pcp_node_info -U pgpool -h ip-10-222-64-196.ods
> --node=1 -v
> Password:
> Hostname: ********198.***
> Port    : 5322
> Status  : 0
> Weight  : 0.500000
>
>
> Any suggestion is welcome; I have tried to go through the documentation and
> I cannot find any clue to help me resolve the issue.
>
> Best regards
>
> Gabriel Fernández
>
> *Technology *Architecture
>
> tel: (+34) 971 189 188
> Ed. Mirall Cami de Son Fangos 100, B-2
> E-07007, Palma de Mallorca, Spain
> gfernandez at hotelbeds.com
>
> <http://group.hotelbeds.com/>
>
>
> _______________________________________________
> pgpool-general mailing list
> pgpool-general at pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-general
>
>
>
>
>
>
> -- Best regards, PVY  <balroga3 at yandex.ru>
>
> _______________________________________________
> pgpool-general mailing list
> pgpool-general at pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-general
>
>

