[Pgpool-general] The forgotten question

Marcelo Martins pglists at zeroaccess.org
Thu Dec 18 19:13:53 UTC 2008


Ok, in that case, would it hurt to have 3 attached nodes and then,
say, 3 more that are always in a detached state? They would only be
attached when needed.
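
Something along these lines in pgpool.conf, I suppose (the spare IPs
below are made up):

backend_hostname0 = '172.16.10.10'   # active
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/8.3/main'
# ... backends 1 and 2 declared the same way ...

backend_hostname3 = '172.16.10.13'   # spare, attached only when needed
backend_port3 = 5432
backend_weight3 = 1
backend_data_directory3 = '/var/lib/postgresql/8.3/main'
# ... backends 4 and 5 declared the same way ...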



Marcelo
PostgreSQL DBA
Linux/Solaris System Administrator
http://www.zeroaccess.org

On Dec 18, 2008, at 1:03 PM, Daniel.Crespo at l-3com.com wrote:

> What I understood about this is that you have to know in advance
> which backends might or will be part of the pool.
>
> So, let's say you have only two backends, but you wish to add a 3rd.
>
> Initially, you have:
>
> backend_hostname0 = '172.16.10.10'
> backend_port0 = 5432
> backend_weight0 = 1
> backend_data_directory0 = '/var/lib/postgresql/8.3/main'
>
> backend_hostname1 = '172.16.10.11'
> backend_port1 = 5432
> backend_weight1 = 1
> backend_data_directory1 = '/var/lib/postgresql/8.3/main'
>
> But you would have to add:
>
> backend_hostname2 = '172.16.10.12'
> backend_port2 = 5432
> backend_weight2 = 1
> backend_data_directory2 = '/var/lib/postgresql/8.3/main'
>
> Even if it is not there yet.
>
> When pgpool restarts, it will try to add all 3 nodes, but the last
> one (let's say) is not ready yet.
>
> After the restart, you will be able to call the attach command,
> specifying node 2 (node IDs are zero-based).
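>
> For example (the timeout, pcp port and credentials here are
> hypothetical):
>
> pcp_attach_node 10 localhost 9898 pcpuser pcppasswd 2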
>
> Daniel
>
> -----Original Message-----
> From: pgpool-general-bounces at pgfoundry.org
> [mailto:pgpool-general-bounces at pgfoundry.org] On Behalf Of Marcelo
> Martins
> Sent: Thursday, December 18, 2008 1:54 PM
> To: Tatsuo Ishii
> Cc: pgpool-general at pgfoundry.org
> Subject: [Pgpool-general] The forgotten question
>
> On the pgpool page there is a section under online recovery that says "A
> recovery target node must have detached before doing online recovery.
> If you wish to add PostgreSQL server dynamically, add backend_*
> parameters and reload pgpool.conf. pgpool-II registers a new node as a
> detached node. ".
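>
> (For the reload step, something like the following should do; the
> config path is assumed:)
>
> pgpool -f /usr/local/etc/pgpool.conf reload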
>
> How should that "backend_*" be configured in the pgpool.conf file so
> that I can add nodes dynamically when needed? Say I already have 3
> backend nodes configured and I would like the option to attach new
> nodes dynamically without having to edit the pgpool.conf file and
> reload it.
>
> I assume that is what the comment above says is possible; am I
> interpreting it wrong?
>
> So should I have something like the below for that to work ?
>
> backend_hostname0 = '172.16.10.10'
> backend_port0 = 5432
> backend_weight0 = 1
> backend_data_directory0 = '/var/lib/postgresql/8.3/main'
>
> backend_hostname1 = '172.16.10.11'
> backend_port1 = 5432
> backend_weight1 = 1
> backend_data_directory1 = '/var/lib/postgresql/8.3/main'
>
> backend_*
>
>
>
> Marcelo
> PostgreSQL DBA
> Linux/Solaris System Administrator
> http://www.zeroaccess.org
>
> On Dec 18, 2008, at 12:21 PM, Marcelo Martins wrote:
>
>> Hi Tatsuo,
>>
>> I understand that pgpool does pooling by saving the connections to PG
>> and reusing them when the same user/database pair is used, and indeed
>> I see some pcp procs being reused 40+ times. What I'm trying to figure
>> out is: does pgpool just pass a new query it receives through the same
>> backend connection that was opened previously, so that it is reused by
>> the new request coming from the same user to the same database?
>>
>> How does pgpool queue its incoming connections when it starts to
>> receive more connections than num_init_children allows? I'm pretty
>> sure the "child_life_time" setting is the one responsible for freeing
>> up a pgpool child so that a queued connection can obtain access to PG
>> through pgpool and execute its query, correct?
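>>
>> For reference, the knobs in question look something like this in
>> pgpool.conf (values are illustrative, not from this setup):
>>
>> num_init_children = 32      # max concurrent pgpool child processes
>> max_pool = 4                # cached backend connection pairs per child
>> child_life_time = 300       # recycle an idle child after 300 seconds
>> connection_life_time = 0    # cached backend connections never expire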
>>
>> As regards load balancing, that can indeed be very helpful,
>> especially since the master node is usually the one with the higher
>> load. I'm pretty sure this may not be possible right now, but it would
>> be pretty cool if pgpool only opened a connection to the backend it
>> chooses to run a SELECT query against. I'm pretty sure this would be
>> complicated to implement, if it is possible at all (which it may not
>> be), since it would affect how pgpool handles connections.
>>
>>
>> Also, you were right about the online recovery scripts. If I skip the
>> second base backup it is 30-50% faster in most cases. What takes the
>> longest is the checkpoint that pg_start_backup has to perform while a
>> lot of writes are being done to the DB. But the new online recovery
>> setting makes things work nicely, since the client just keeps trying
>> to send data over and, once the 2nd stage is over, the rest of the
>> data is sent.
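>>
>> (For context, a minimal sketch of the kind of 1st-stage script being
>> discussed; the script name, paths and rsync options are assumptions,
>> not the actual script:)
>>
>> #!/bin/sh
>> # 1st stage: base backup taken while the master keeps serving writes
>> DATADIR=$1     # master's database cluster directory
>> DEST_HOST=$2   # host of the node being recovered
>> DEST_DIR=$3    # its data directory
>> psql -c "SELECT pg_start_backup('pgpool-recovery')" postgres
>> rsync -a --delete --exclude=postmaster.pid $DATADIR/ $DEST_HOST:$DEST_DIR/
>> psql -c "SELECT pg_stop_backup()" postgres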
>>
>> Can't remember the other questions right now, sorry :)
>>
>>
>>
>>
>> Marcelo
>> PostgreSQL DBA
>> Linux/Solaris System Administrator
>> http://www.zeroaccess.org
>>
>> _______________________________________________
>> Pgpool-general mailing list
>> Pgpool-general at pgfoundry.org
>> http://pgfoundry.org/mailman/listinfo/pgpool-general
>
> _______________________________________________
> Pgpool-general mailing list
> Pgpool-general at pgfoundry.org
> http://pgfoundry.org/mailman/listinfo/pgpool-general


