[pgpool-general: 28] Re: [Pgpool-general] pgpool limitations

Sandeep Thakkar sandeeptt at yahoo.com
Mon Dec 5 18:09:25 JST 2011


Oh, I see. Why, then, does it behave randomly?


I did the following test:
the number of pgpool client processes are 32 and only one of them is connected to psql client (one session), and I add new node (take basebackup, create recovery.conf, start new server, get the client PIDs using pcp_proc_count, edit pgpool.conf, reload pgpool.conf, pcp_attach_node, get the client PIDs again using pcp_proc_count).. and I found that when the psql client exits, only one pgpool client gets restarted and now has new PID... rest of the idle pgpool client processes had the same PIDs after attaching the node.




________________________________
 From: Tatsuo Ishii <ishii at postgresql.org>
To: sandeeptt at yahoo.com 
Cc: pgpool-general at pgpool.net 
Sent: Monday, December 5, 2011 11:59 AM
Subject: Re: [pgpool-general: 8] Re: [Pgpool-general] pgpool limitations
 
Good catch. I forgot about this. Since pgpool-II 3.1, in streaming
replication mode, existing sessions are no longer disconnected after a
failback event. However, after the session exits, the pgpool child
restarts so that it picks up the failback node information and can, for
example, use the node for load balancing.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> I just see some additional statements like "failback event found. restart myself"...
> 
> 2011-11-30 10:39:46 LOG:   pid 7398: find_primary_node_repeatedly: waiting for finding a primary node
> 2011-11-30 10:39:46 LOG:   pid 7398: find_primary_node: primary node id is 1
> 2011-11-30 10:39:46 LOG:   pid 7398: failover: set new primary node: 1
> 2011-11-30 10:39:46 LOG:   pid 7398: failover: set new master node: 0
> 2011-11-30 10:39:46 LOG:   pid 7398: failback done. reconnect host localhost(5447)
> 2011-11-30 10:39:46 LOG:   pid 7532: worker process received restart request
> 2011-11-30 10:39:47 LOG:   pid 7565: pcp child process received restart request
> 2011-11-30 10:39:47 LOG:   pid 7398: worker child 7532 exits with status 256
> 2011-11-30 10:39:47 LOG:   pid 7398: fork a new worker child pid 7648
> 2011-11-30 10:44:10 LOG:   pid 7533: do_child: failback event found. restart myself.
> 2011-11-30 10:44:10 LOG:   pid 7534: do_child: failback event found. restart myself.
> ....
> ....
> 
>  
> 
> 
> ________________________________
>  From: Tatsuo Ishii <ishii at postgresql.org>
> To: sandeeptt at yahoo.com 
> Cc: pgpool-general at pgpool.net 
> Sent: Tuesday, November 29, 2011 3:24 PM
> Subject: Re: [pgpool-general: 8] Re: [Pgpool-general] pgpool limitations
>  
> I can't think of any other reason. Can you find anything special in
> the pgpool log when a pgpool child exits?
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
> 
>> client_idle_limit is set to '0'. Here is the other related settings:
>> ....
>> pcp_timeout = 10
>> num_init_children = 32
>> max_pool = 4
>> child_life_time = 300
>> connection_life_time = 0
>> child_max_connections = 0
>> client_idle_limit = 0
>> ....
>>  
>> 
>> 
>> ________________________________
>>  From: Tatsuo Ishii <ishii at sraoss.co.jp>
>> To: sandeeptt at yahoo.com 
>> Cc: singh.gurjeet at gmail.com; pgpool-general at pgfoundry.org; pgpool-hackers at pgfoundry.org 
>> Sent: Wednesday, November 23, 2011 8:21 PM
>> Subject: Re: [Pgpool-general] pgpool limitations
>>  
>> One possibility is client_idle_limit.
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>> 
>>> I have found that sometimes the client connections are disconnected and new ones are established. What I do is get the PIDs using "pcp_proc_count" before running "pcp_attach_node", and then run "pcp_proc_count" again to check whether the PIDs remain the same. The behaviour appears to be random. When can this happen?
>>> 
>>> 
>>> ________________________________
>>>  From: Tatsuo Ishii <ishii at sraoss.co.jp>
>>> To: singh.gurjeet at gmail.com 
>>> Cc: pgpool-general at pgfoundry.org; pgpool-hackers at pgfoundry.org 
>>> Sent: Thursday, August 11, 2011 6:11 AM
>>> Subject: Re: [Pgpool-general] pgpool limitations
>>>  
>>>>> > > Is there something in the works to enable this, or is this feature
>>>>> still in
>>>>> > > design phase? If it is already being/been developed, I wish to know if
>>>>> this
>>>>> > > can be back-patched to a point release of pgpool 3.0.x.
>>>>> >
>>>>> > It is already in the pgpool-II 3.1 alpha version.
>>>>> > Currently there is no plan to back-patch it to 3.0.x.
>>>>>
>>>>> I certainly hope we won't backpatch a new feature. That would be insane.
>>>>>
>>>> 
>>>> I don't consider this a new feature. I'd say this is an unexpected
>>>> side-effect (a.k.a. a bug) of pcp_attach_node, since nowhere in the docs
>>>> does it say that invoking pcp_attach_node would drop all client connections.
>>> 
>>> This behavior has not changed since pcp_attach_node was born in
>>> 2006. Moreover, the enhancement in 3.1 is only for streaming
>>> replication mode. Other modes, including replication mode, do not take
>>> advantage of it.
>>> --
>>> Tatsuo Ishii
>>> SRA OSS, Inc. Japan
>>> English: http://www.sraoss.co.jp/index_en.php
>>> Japanese: http://www.sraoss.co.jp
>>> _______________________________________________
>>> Pgpool-general mailing list
>>> Pgpool-general at pgfoundry.org
>>> http://pgfoundry.org/mailman/listinfo/pgpool-general