[pgpool-general: 7207] Re: Query regarding failover and recovery

Praveen Kumar K S praveenssit at gmail.com
Wed Aug 19 18:19:54 JST 2020


Hello,

Thanks for the clarification. I'm trying to execute it and I'm getting the
error below. I'm attaching my configs for reference. Can you please help?

postgres at pgp1:/etc/pgpool2/4.0.9$ psql -U postgres -h localhost -p 9999 --pset pager=off -c "show pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | pg1      | 5432 | down   | 0.500000  | standby | 0          | false             | 0                 | 2020-08-19 09:02:46
 1       | pg2      | 5432 | up     | 0.500000  | primary | 0          | true              | 0                 | 2020-08-19 09:02:46
(2 rows)

postgres at pgp1:/etc/pgpool2/4.0.9$ pcp_recovery_node -h localhost -p 9898 -n 0
Password:
FATAL:  authentication failed for user "postgres"
DETAIL:  username and/or password does not match

postgres at pgp1:/etc/pgpool2/4.0.9$
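[Editor's note: the FATAL error above comes from pcp authentication, which is checked against pcp.conf rather than against PostgreSQL. Each pcp.conf line has the form "username:md5hash", where the hash is a plain MD5 of the password (the same value `pg_md5 <password>` prints). A minimal Python sketch for generating such an entry; the password "secret" is a placeholder, not from this thread:]

```python
import hashlib

def pcp_conf_entry(user, password):
    """Build a pcp.conf line ("username:md5hash").

    pcp tools such as pcp_recovery_node authenticate against pcp.conf,
    not against PostgreSQL, so the password here must match what you
    pass (or type) to the pcp command, hashed with plain MD5.
    """
    digest = hashlib.md5(password.encode()).hexdigest()
    return f"{user}:{digest}"

# "secret" is a placeholder password for illustration only.
print(pcp_conf_entry("postgres", "secret"))
```

After updating pcp.conf, restart pgpool (or reload, depending on version) so the new entry takes effect; a `~/.pcppass` file (`host:port:user:password`, mode 0600) can be used to avoid the interactive prompt.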


On Wed, Aug 19, 2020 at 10:22 AM Tatsuo Ishii <ishii at sraoss.co.jp> wrote:

> > I have 3 servers: two running PostgreSQL (9.6) and one running pgpool
> > (4.0.9). PostgreSQL is configured with streaming replication.
> > When I manually stop the postgres service on the primary node, failover
> > happens successfully.
> > Now I have started the postgres service on the old primary node, which I
> > expected to be converted to a standby, but pgpool is not triggering
> > recovery_1st_stage_command = 'recovery_1st_stage.sh'.
> > May I know what the reason could be?
>
> That is expected behavior. A node that was previously brought down is
> left as "down" by pgpool. This is intentional. You need to run
> pcp_recovery_node against that node (the previous primary in your
> case) to bring it back online.
>
> When a node goes down, there may be a reason: for example, the
> hardware needed repair. So in general it is not safe to automatically
> restart a previously downed node.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
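[Editor's note: under the setup described above, the suggested recovery sequence would look roughly like this. This is a sketch assuming pgpool listens on localhost ports 9999 (SQL) and 9898 (pcp) as in the earlier output; it requires a running cluster and a matching pcp.conf entry:]

```shell
# Reattach node 0 (the old primary) via online recovery, then verify
# that it shows up again in "show pool_nodes".
pcp_recovery_node -h localhost -p 9898 -U postgres -n 0
psql -U postgres -h localhost -p 9999 -c "show pool_nodes"
```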


-- 
Regards,
K S Praveen Kumar
M: +91-9986855625
-------------- next part --------------
Attachments scrubbed by the mailing list archive:
  pool_passwd   (44 bytes):    <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200819/cac393f5/attachment-0004.obj>
  pool_hba.conf (3369 bytes):  <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200819/cac393f5/attachment-0005.obj>
  pcp.conf      (949 bytes):   <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200819/cac393f5/attachment-0006.obj>
  pgpool.conf   (40795 bytes): <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20200819/cac393f5/attachment-0007.obj>