[Pgpool-general] Replication problem

Marcelo Martins pglists at zeroaccess.org
Fri Dec 19 19:15:58 UTC 2008


Hi Łukasz,

I'm not sure I understood you correctly, but let me see if I can help.
I hope some of the things below make sense to you; I'm sure others may
have better ideas.


On Dec 18, 2008, at 2:59 PM, Łukasz Jagiełło wrote:

> 2008/12/18 Marcelo Martins <pglists at zeroaccess.org>:
>> Pretty sure you don't need it anymore, but I thought about posting it
>> on the list even though it is not really related to pgpool.
>> It's just a script I created for making a hotcopy of the base. It's
>> written for Debian, so you might want to look into the script and make
>> path changes.
>
> What do you mean, I don't need that anymore?

hmm, this pg_hotsync.sh script I posted is not really related to pgpool
online recovery; it's just something I use for keeping a backup server
sync'd with the production one. I do a hotbackup using that script every
2 hours.
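(The every-2-hours schedule is just a cron entry; the install path below is illustrative, not from my actual setup:)

```shell
# crontab entry: run the hotcopy script at the top of every 2nd hour
0 */2 * * * /usr/local/bin/pg_hotsync.sh >/dev/null 2>&1
```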

> At the moment I use this script on Fedora:
> (pgpool_recovery)
> #v+
> #!/bin/sh
>
> if [ $# -ne 3 ]
> then
>    echo "pgpool_recovery datadir remote_host remote_datadir"
>    exit 1
> fi
>
> datadir=$1
> DEST=$2
> DESTDIR=$3
>
> rsync -qavz --delete --exclude recovery.conf --exclude postmaster.opts \
>     --exclude postmaster.pid $datadir/ root@$DEST:$DESTDIR/
> wait
> #v-
> (pgpool_remote_start)
> #v+
> #! /bin/sh
>
> if [ $# -ne 2 ]
> then
>    echo "pgpool_remote_start remote_host remote_datadir"
>    exit 1
> fi
>
> DEST=$1
> DESTDIR=$2
> PGCTL=/etc/init.d/postgresql
>
> ssh -T root@${DEST} $PGCTL restart 2>/dev/null 1>/dev/null < /dev/null &
> #v-
>
> In the second script there are unused parameters, but I don't want to
> search for where those parameters are defined in pgpool.

I don't quite get the question above.


> What is the correct way to create a new backend in replication mode,
> other than a hotcopy?

I'm not sure there is a single correct way; it really depends on what
works best for your setup. In my case I like issuing pg_start_backup()
and then rsyncing to the destination node that will be recovered.
Also, by doing an rsync, the files created under PGDATA/base are kept
the same between all backends.
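As a sketch, that pg_start_backup + rsync approach looks roughly like the function below. The function name, paths, and the use of psql against template1 are my own illustration, not from this thread; the pg_start_backup()/pg_stop_backup() pair is what makes the copy consistent on 8.x-era PostgreSQL.

```shell
# base_copy: hypothetical sketch -- run on the node holding the live data.
#   $1 = local datadir   $2 = remote host   $3 = remote datadir
base_copy() {
    datadir=$1
    dest=$2
    destdir=$3

    # Put the server into backup mode (forces a checkpoint and writes a
    # backup label) so the file-level copy is consistent.
    psql -c "SELECT pg_start_backup('base_copy')" template1

    # Copy the cluster, skipping files that must not reach the new node.
    rsync -az --delete \
        --exclude recovery.conf \
        --exclude postmaster.opts \
        --exclude postmaster.pid \
        "$datadir"/ root@"$dest":"$destdir"/

    # Leave backup mode.
    psql -c "SELECT pg_stop_backup()" template1
}
```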

If I were to bring a new node into the pool, I would probably first sync
the new node with one of the PostgreSQL nodes by doing a hotcopy; that
way, when I run the online recovery, it takes less time since a base
copy has already been done.

Then I would bring the new node into the pool using the online recovery
pcp command. The "recovery_1st_stage_command" script would take care of
doing another base copy, which should be faster, and then, once that is
done, bring the new server online.
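Sketched out, that last step is a few pgpool.conf settings plus one pcp call. The values below (user, password, ports, node id) are placeholders, and the pcp argument order is the one I know from pgpool-II 2.x-era tools; check your version's docs.

```shell
# pgpool.conf on the pgpool host -- the stage script itself must live in
# the primary's $PGDATA, where pgpool runs it:
#   recovery_user              = 'postgres'
#   recovery_password          = 'secret'
#   recovery_1st_stage_command = 'pgpool_recovery'
#   recovery_2nd_stage_command = ''

# Trigger online recovery of backend node 1 via pcp
# (args: timeout  pgpool_host  pcp_port  pcp_user  pcp_password  node_id):
pcp_recovery_node 60 localhost 9898 postgres secret 1
```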

I'm sure there are other ways that others have implemented.

>
> -- 
> Łukasz Jagiełło
> G-Forces Web Management Polska
>
> T: +44 (0) 845 055 9040
> F: +44 (0) 845 055 9038
> E: lukasz.jagiello at gforces.pl
> W: www.gforces.co.uk
>


