[pgpool-general: 6498] Re: Reset old data in pgpool

Dmitry Medvedev dm.dm.medvedev at gmail.com
Thu Apr 4 17:31:13 JST 2019


It seems I found the reason for this behaviour:

health_check_database = 'test'

but that database was absent.
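
Once health_check_database points at a database that actually exists on every
backend, the errors should go away. A minimal pgpool.conf sketch of the
relevant lines; the user and password values below are placeholders, not
taken from my setup:

    health_check_period   = 10           # seconds between health checks
    health_check_user     = 'pgpool'     # placeholder: a role allowed to log in to the DB below
    health_check_password = 'secret'     # placeholder
    health_check_database = 'postgres'   # must exist on every backend; 'test' did not
    log_min_messages      = debug1       # optional: the extra verbosity Pierre mentioned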

On Wed, 3 Apr 2019 at 21:30, pierre timmermans <ptim007 at yahoo.com> wrote:

> Still the same error in the log? In the previous log you have the error
> ‘unable to read data from db’; it must be a wrong config.
> You can increase the log verbosity.
>
> Pierre
>
>
> On 3 Apr 2019 at 17:27, Dmitry Medvedev <dm.dm.medvedev at gmail.com>
> wrote:
>
> File removed.
> I can connect via psql from the "temp" server to both temp2 and temp3...
> Still no effect.
>
> On Wed, 3 Apr 2019 at 18:07, Pierre Timmermans <ptim007 at yahoo.com> wrote:
>
>> You should remove the file /tmp/pgpool_status before starting pgpool,
>> because node 0 is recorded as down in it.
>>
>> Also, it looks like pgpool cannot connect to DB node 1; make sure the
>> firewall port is open and that you can connect via psql from the temp
>> server to both temp2 and temp3.
>>
>>
>> Pierre
>>
>>
>> On Wednesday, April 3, 2019, 4:51:57 PM GMT+2, Dmitry Medvedev <
>> dm.dm.medvedev at gmail.com> wrote:
>>
>>
>> I am using pgpool 4.0.4
>> 3 virtual machines:
>> temp 172.28.30.5 - with pgpool, no PostgreSQL
>> temp2 172.28.30.6 - primary PostgreSQL
>> temp3 172.28.30.7 - standby PostgreSQL
>>
>> The query select pg_is_in_recovery(); returns "f" on temp2 and "t" on temp3.
>>
>> [root at temp ~]# journalctl --unit pgpool.service
>> -- Logs begin at Wed 2019-04-03 16:47:13 MSK, end at Wed 2019-04-03
>> 17:50:42 MSK. --
>> Apr 03 16:55:32 temp systemd[1]: Started Pgpool-II.
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:32: pid 4696: LOG:
>> reading status file: 0 th backend is set to down status
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:32: pid 4696: LOG:
>> Setting up socket for 0.0.0.0:9999
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:32: pid 4696: LOG:
>> Setting up socket for :::9999
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:33: pid 4696: LOG:
>> find_primary_node_repeatedly: waiting for finding a primary node
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:33: pid 4696: ERROR:
>> unable to read data from DB node 1
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:33: pid 4696:
>> DETAIL:  EOF encountered with backend
>> Apr 03 16:55:33 temp pgpool[4696]: 2019-04-03 16:55:33: pid 4696: LOG:
>> find_primary_node: make_persistent_db_connection_noerror failed on
>> Apr 03 16:55:34 temp pgpool[4696]: 2019-04-03 16:55:34: pid 4696: ERROR:
>> unable to read data from DB node 1
>> Apr 03 16:55:34 temp pgpool[4696]: 2019-04-03 16:55:34: pid 4696:
>> DETAIL:  EOF encountered with backend
>> Apr 03 16:55:34 temp pgpool[4696]: 2019-04-03 16:55:34: pid 4696: LOG:
>> find_primary_node: make_persistent_db_connection_noerror failed on
>> Apr 03 16:55:35 temp pgpool[4696]: 2019-04-03 16:55:35: pid 4696: ERROR:
>> unable to read data from DB node 1
>> Apr 03 16:55:35 temp pgpool[4696]: 2019-04-03 16:55:35: pid 4696:
>> DETAIL:  EOF encountered with backend
>> ...and so on...
>>
>> ср, 3 апр. 2019 г. в 17:43, Tatsuo Ishii <ishii at sraoss.co.jp>:
>>
>> > Hello everyone. I've spent a couple of days trying to understand how
>> > pgpool-II works.
>> >
>> > After some cruel experiments I've broken my pgpool cluster (1 primary
>> > and 1 standby node) :-)
>> >
>> > I've re-configured it and launched pgpool again, and now all nodes have
>> > the "standby" role no matter what I do. Is there any way to reset
>> > pgpool? Without pgpool the nodes operate correctly: one is the primary,
>> > the other is the standby.
>>
>> It is likely that Pgpool-II failed to detect the primary. Which
>> version of Pgpool-II are you using? Can you share the Pgpool-II debug
>> log from right after starting it up? It detects the primary at startup.
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>>
>> _______________________________________________
>> pgpool-general mailing list
>> pgpool-general at pgpool.net
>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>>
>
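
P.S. For the archives, the earlier checks boil down to roughly the following
(the pgpool.service unit name is taken from my journalctl output above; the
port 5432 and the postgres user are assumptions, adjust them to your setup):

    # on temp (172.28.30.5): stop pgpool, clear the cached node status, start it again
    systemctl stop pgpool.service
    rm -f /tmp/pgpool_status
    systemctl start pgpool.service

    # from temp: verify that both backends are reachable and report the expected role
    psql -h 172.28.30.6 -p 5432 -U postgres -c 'select pg_is_in_recovery();'   # expect f (primary)
    psql -h 172.28.30.7 -p 5432 -U postgres -c 'select pg_is_in_recovery();'   # expect t (standby)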

