[pgpool-general: 410] Re: pgpool2 status info

Videanu Adrian videanuadrian at yahoo.com
Fri May 4 22:37:44 JST 2012

Hi Ruben,
I have modified wal_keep_segments to 5000, but I guess this should be parametrized depending on how long you plan to keep the slave down and how much traffic you have towards the database. Thanks for the hint.

I'm planning to have a slave node on a server that is powered off every night, so just before the machine shuts down I will detach the node from the cluster and then stop the postgresql process. Then in the morning I will start the postgresql process and, after one minute, attach the node back to the cluster.
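That nightly routine could be scripted along these lines. This is only a sketch: the pcp port (9898), the credentials, the node id 1, and the data directory are assumptions, and pgpool-II 3.1's pcp tools take positional arguments rather than flags.

```shell
#!/bin/sh
# Evening: detach the slave (assumed node id 1) from pgpool, then stop PostgreSQL.
# pcp syntax in pgpool 3.1: pcp_detach_node <timeout> <host> <port> <username> <password> <nodeid>
pcp_detach_node 10 localhost 9898 pgpool secret 1
pg_ctl -D /var/lib/postgresql/9.1/main stop -m fast

# Morning: start PostgreSQL, give it a minute to reconnect to the primary,
# then re-attach the node so pgpool resumes sending read queries to it.
pg_ctl -D /var/lib/postgresql/9.1/main start
sleep 60
pcp_attach_node 10 localhost 9898 pgpool secret 1
```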

Regarding the second question, maybe we can find someone to confirm our suppositions :)

Adrian Videanu
--- On Fri, 5/4/12, Lazaro Ruben Garcia Martinez <lgarciam at uci.cu> wrote:

From: Lazaro Ruben Garcia Martinez <lgarciam at uci.cu>
Subject: Re: [pgpool-general: 408] pgpool2 status info
To: "Videanu Adrian" <videanuadrian at yahoo.com>
Cc: pgpool-general at pgpool.net
Date: Friday, May 4, 2012, 3:39 PM

Hello Videanu.

About question one, it seems to me that this is a replication problem. In postgresql.conf there is a parameter called wal_keep_segments; this is what the PostgreSQL documentation says about it:

    Specifies the minimum number of past log file segments kept in the pg_xlog directory,
    in case a standby server needs to fetch them for streaming replication. Each segment is
    normally 16 megabytes. If a standby server connected to the primary falls behind by more than
    wal_keep_segments segments, the primary might remove a WAL segment still needed by the
    standby, in which case the replication connection will be terminated.

I recommend you increase this parameter to a value of 4000 or 5000.
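For example, as a postgresql.conf fragment on the primary (note the disk cost: 5000 segments at 16 MB each is roughly 80 GB reserved in pg_xlog, so size it to the disk you actually have):

```
# postgresql.conf on the primary
wal_keep_segments = 5000   # up to 5000 * 16 MB = ~80 GB of WAL kept for lagging standbys
```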

About question 2, I see the same status, but I think the answer is the one you gave (that postgresql is started on the slave node but it is not attached to the cluster). If you need to attach a node you can use pcp_attach_node.
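A minimal invocation might look like this (again a sketch with the pgpool 3.1 positional pcp syntax; host, port, credentials, and node id 1 are assumptions):

```shell
# Check the node's status as pgpool sees it, then re-attach it.
# Syntax: pcp_node_info <timeout> <host> <port> <username> <password> <nodeid>
pcp_node_info 10 localhost 9898 pgpool secret 1
pcp_attach_node 10 localhost 9898 pgpool secret 1
```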


Hi all, 
I have pgpool2 3.1.2 and 2 postgresql 9.1 nodes with streaming replication. I also use pgpoolAdmin. The failover and recovery scenarios work just fine, but I have a few questions regarding slave recovery:
1. Let's say that I have stopped the slave for some administrative reason. First I detached it from the cluster and then stopped the postgresql process on the slave node. If I start this node after one day I get this kind of message in my logs:
FATAL:  timeline 36 of the primary does not match recovery target timeline 34.
Do I have to perform a full base backup on this node?

2. If I see this in pgpoolAdmin:
Down Running as standby server
what does this mean? That postgresql is started on the slave node but it is not attached to the cluster (so no read queries will be sent to this node), and that if I want to attach it I have to run pcp_attach_node?

3. If I press disconnect (in pgpoolAdmin) on the master node, the slave does not become master unless I kill/stop the postgresql process on the master machine. Is this the normal behavior?

Adrian Videanu
pgpool-general mailing list
pgpool-general at pgpool.net



