[pgpool-general: 7204] Re: One node does not comes to status up

Rinaldo Akio Uehara rinaldo.uehara at gmail.com
Tue Aug 18 21:25:50 JST 2020


I looked into the pgpool log from node 1 (which is the master now) and attached
part of it.
From what I can see, it performs both stage 1 and stage 2, but the failback
request is canceled by the other pgpool.
I don't know why this is happening.
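For anyone following along, a quick way to spot which backend pgpool considers down is to parse "show pool_nodes" in unaligned mode. This is only a sketch: the sample rows are inlined from the output quoted below, and in practice they would come from the psql call shown in the comment.

```shell
# Sketch: list the node ids that pgpool reports as "down".
# Sample rows are inlined from the "show pool_nodes" output quoted below;
# in practice they would come from:
#   psql -h 192.168.21.60 -p 5000 -U pgpool postgres -At -c "show pool_nodes"
# (-A unaligned output, -t tuples only; unaligned mode separates columns
# with "|" by default).
pool_nodes='0|spcdmvm8019|5432|up|0.333333|primary|76254|true|0|||2020-08-18 00:34:14
1|spcdmvm8020|5432|up|0.333333|standby|52491|false|0|streaming|async|2020-08-18 00:34:14
2|spcdmvm8021|5432|down|0.333333|standby|0|false|0|streaming|async|2020-08-17 23:21:39'

# Column 4 is "status"; print the node_id (column 1) of any down node.
down_nodes=$(printf '%s\n' "$pool_nodes" | awk -F'|' '$4 == "down" { print $1 }')
echo "$down_nodes"
```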

On Tue, Aug 18, 2020 at 08:59, Rinaldo Akio Uehara <
rinaldo.uehara at gmail.com> wrote:

> After a maintenance window in which I needed to add a disk, I started
> PostgreSQL and pgpool on each node (3 nodes total).
> The first two came back OK after running pcp_recovery_node.
> The third one never comes back with status "up", even though its
> PostgreSQL status is fine.
>
> [postgres at spcdmvm8021 ~]$ psql -h 192.168.21.60 -p 5000 -U pgpool
> postgres -c "show pool_nodes"
> Password for user pgpool:
>  node_id |  hostname   | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
>  0       | spcdmvm8019 | 5432 | up     | 0.333333  | primary | 76254      | true              | 0                 |                   |                        | 2020-08-18 00:34:14
>  1       | spcdmvm8020 | 5432 | up     | 0.333333  | standby | 52491      | false             | 0                 | streaming         | async                  | 2020-08-18 00:34:14
>  2       | spcdmvm8021 | 5432 | down   | 0.333333  | standby | 0          | false             | 0                 | streaming         | async                  | 2020-08-17 23:21:39
> (3 rows)
>
> [postgres at spcdmvm8021 ~]$ pcp_recovery_node -h 192.168.21.60 -p 9898 -U
> pgpool -n 2
> Password:
> pcp_recovery_node -- Command Successful
> [postgres at spcdmvm8021 ~]$ psql -h 192.168.21.60 -p 5000 -U pgpool
> postgres -c "show pool_nodes"
> Password for user pgpool:
>  node_id |  hostname   | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
>  0       | spcdmvm8019 | 5432 | up     | 0.333333  | primary | 76556      | false             | 0                 |                   |                        | 2020-08-18 00:34:14
>  1       | spcdmvm8020 | 5432 | up     | 0.333333  | standby | 52634      | true              | 0                 | streaming         | async                  | 2020-08-18 00:34:14
>  2       | spcdmvm8021 | 5432 | down   | 0.333333  | standby | 0          | false             | 0                 | streaming         | async                  | 2020-08-17 23:21:39
> (3 rows)
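Since pcp_recovery_node reports success here but the node stays down, one thing worth trying (a hedged suggestion, not something from the thread) is asking pgpool to simply re-attach the backend with pcp_attach_node. A minimal sketch, shown as a dry run that only echoes the command; drop the echo to run it for real:

```shell
# Sketch: re-attach backend node 2 without re-running full recovery.
# Host, PCP port, and node id are taken from the session quoted above.
# The command is echoed so it can be reviewed first; remove "echo" to
# actually execute it (it will prompt for the PCP password).
PCP_HOST=192.168.21.60
PCP_PORT=9898
NODE_ID=2
cmd="pcp_attach_node -h $PCP_HOST -p $PCP_PORT -U pgpool -n $NODE_ID"
echo "$cmd"
```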
>
> Not sure why this happened.
> I increased the log verbosity on the faulty node to see if I could spot
> any error, but didn't notice anything.
>
> I attached the log of the server at fault and the 3 config files.
>
> Thanks for any help.
>
> --
> Rinaldo Akio Uehara
>


-- 
Rinaldo Akio Uehara
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pgpool-1.log
Type: text/x-log
Size: 11097 bytes
Desc: not available
URL: <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200818/a13757e5/attachment.bin>
