[Pgpool-general] Question about Online Recovery

DM dm.aeqa at gmail.com
Thu Apr 9 00:17:48 UTC 2009


I really didn't test this scenario on my boxes. From my understanding, one
location should be good. Pgpool experts, please answer Harold's question.

Thanks
Deepak
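
On the location question: each backend is an independently initialized PostgreSQL cluster, so their WAL segments carry identical file names, and in a shared ~/exchange/wal/ directory one node's archives can overwrite another's. If that is a concern, one option is a per-host subdirectory. This is only a sketch based on the archive_command quoted below; it assumes the per-host directories already exist on XXXX and that `hostname` returns something unique on each node:

    # variant of Harold's archive_command, archiving into a per-host subdirectory
    archive_command = 'rsync -e "ssh -o StrictHostKeyChecking=no -i /opt/PostgreSQL/8.3/data/id_dsa -l harold" %p harold@XXXX:~/exchange/wal/`hostname`/%f < /dev/null'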
On Wed, Apr 8, 2009 at 4:32 PM, Harold Lim <rold_50 at yahoo.com> wrote:

>
> Hi Deepak,
>
> Another related question: if I enable the WAL archive on many machines, is
> it OK for the WAL location to be the same for all of my machines? Or should
> they be saved at different locations?
>
> Currently my archive_command is like this:
>
> 'rsync -e "ssh -o StrictHostKeyChecking=no -i
> /opt/PostgreSQL/8.3/data/id_dsa -l harold" %p harold at XXXX:~/exchange/wal/%f
>  < /dev/null'
>
> Will that cause conflicts, since all of my machines archive to
> harold at XXXX:~/exchange/wal/%f?
>
>
> Thanks,
> Harold
>
>
> --- On Tue, 4/7/09, DM <dm.aeqa at gmail.com> wrote:
>
> > From: DM <dm.aeqa at gmail.com>
> > Subject: Re: [Pgpool-general] Question about Online Recovery
> > To: rold_50 at yahoo.com, pgpool-general at pgfoundry.org
> > Date: Tuesday, April 7, 2009, 12:07 PM
> > Harold,
> >
> > WAL archiving can be enabled on one machine or on many machines; it
> > depends on what you want. Assuming you have 2 systems, one primary and one
> > standby, in a real scenario either system can go down. It's better to
> > enable WAL archiving on both systems so that if one fails you can recover
> > from the other.
> >
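
Enabling archiving on both backends, as suggested above, just means setting the same pair of parameters in each node's postgresql.conf. A minimal sketch for PostgreSQL 8.3, reusing the rsync destination from Harold's archive_command quoted earlier in this thread (the destination and key path are whatever fits your setup, nothing fixed):

    # postgresql.conf on each backend node (8.3-era parameters)
    archive_mode = on
    archive_command = 'rsync -e "ssh -o StrictHostKeyChecking=no -i /opt/PostgreSQL/8.3/data/id_dsa -l harold" %p harold@XXXX:~/exchange/wal/%f < /dev/null'

Note that changing archive_mode in 8.3 requires a server restart; archive_command alone can be changed with a reload.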
> > For your issue with recovery, make sure that you have added both of your
> > systems' IP addresses or host names to the pgpool_hba.conf file, and try
> > executing your scripts one by one; that way you should be able to debug it.
> >
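
One way to "execute your scripts one by one", as suggested above, is to run by hand the same recovery call that pgpool issues (it appears verbatim in the log further down) and look at the script's output directly. The port and database below are assumptions; the usual online-recovery setup installs the pgpool_recovery() function into template1, so adjust to your installation:

    # on the master node, connecting as the recovery_user configured in pgpool.conf
    psql -p 5432 -c "SELECT pgpool_recovery('pgpool_recovery_pitr', '172.16.63.10', '/opt/PostgreSQL/8.3/data');" template1

If that fails the same way, run the pgpool_recovery_pitr script in the master's data directory by hand; its error output is exactly what the pgpool log does not show.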
> > Also copy the scripts used to recover the database to both machines.
> >
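
Copying the scripts over can be as simple as an scp of the two scripts named in the log below into the other node's database cluster directory, since pgpool expects the recovery scripts to live inside the data directory. The target user here is an assumption; the host and paths are the ones from this thread:

    # hypothetical one-liner; adjust user and paths to your setup
    scp /opt/PostgreSQL/8.3/data/copy_base_backup /opt/PostgreSQL/8.3/data/pgpool_recovery_pitr postgres@172.16.63.10:/opt/PostgreSQL/8.3/data/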
> > I can send you my recovery steps if you want. They are the same as Gerd's,
> > with a few small modifications.
> >
> > - Deepak
> > -----------------------------------------------------------
> >
> > > Message: 2
> > > Date: Mon, 6 Apr 2009 15:32:34 -0700 (PDT)
> > > From: Harold Lim <rold_50 at yahoo.com>
> > > Subject: [Pgpool-general] Question about Online Recovery
> > > To: pgpool-general at pgfoundry.org
> > > Message-ID: <5188.66741.qm at web51003.mail.re2.yahoo.com>
> > > Content-Type: text/plain; charset=us-ascii
> > >
> > >
> > > Hi All,
> > >
> > > I'm trying to set up online recovery. I'm following the
> > > tutorial/beginner's guide written by Gerd.
> > >
> > > I am getting an error in the 2nd stage. Any idea what the problem might be?
> > >
> > >
> > > Below is the log file:
> > >
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: pcp_child: authentication OK
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: pcp_child: received PCP packet type of service 'O'
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: pcp_child: start online recovery
> > > 2009-04-06 18:28:14 LOG:   pid 25867: starting recovering node 1
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: exec_checkpoint: start checkpoint
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: exec_checkpoint: finish checkpoint
> > > 2009-04-06 18:28:14 LOG:   pid 25867: CHECKPOINT in the 1st stage done
> > > 2009-04-06 18:28:14 LOG:   pid 25867: starting recovery command: "SELECT pgpool_recovery('copy_base_backup', '172.16.63.10', '/opt/PostgreSQL/8.3/data')"
> > > 2009-04-06 18:28:14 DEBUG: pid 25867: exec_recovery: start recovery
> > > 2009-04-06 18:28:22 DEBUG: pid 25834: starting health checking
> > > 2009-04-06 18:28:22 DEBUG: pid 25834: health_check: 0 th DB node status: 1
> > > 2009-04-06 18:28:22 DEBUG: pid 25834: health_check: 1 th DB node status: 3
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: exec_recovery: finish recovery
> > > 2009-04-06 18:28:33 LOG:   pid 25867: 1st stage is done
> > > 2009-04-06 18:28:33 LOG:   pid 25867: starting 2nd stage
> > > 2009-04-06 18:28:33 LOG:   pid 25867: all connections from clients have been closed
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: exec_checkpoint: start checkpoint
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: exec_checkpoint: finish checkpoint
> > > 2009-04-06 18:28:33 LOG:   pid 25867: CHECKPOINT in the 2nd stage done
> > > 2009-04-06 18:28:33 LOG:   pid 25867: starting recovery command: "SELECT pgpool_recovery('pgpool_recovery_pitr', '172.16.63.10', '/opt/PostgreSQL/8.3/data')"
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: exec_recovery: start recovery
> > > 2009-04-06 18:28:33 ERROR: pid 25867: exec_recovery: pgpool_recovery_pitr command failed at 2nd stage
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: exec_recovery: finish recovery
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: pcp_child: received PCP packet type of service 'X'
> > > 2009-04-06 18:28:33 DEBUG: pid 25867: pcp_child: client disconnecting. close connection
> > >
> > > Thanks!
> > > Harold
> > >
> > >
> > >
> > >
> > >
> > >
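
For reference, a 2nd-stage script of this kind is usually a very small shell script in the master's data directory that does little more than force a WAL switch, so that the last segment reaches the archive before the freshly copied node replays it. The sketch below is not necessarily what Gerd's guide ships; the port and the assumption that psql is on the server user's PATH are mine:

    #!/bin/sh
    # pgpool_recovery_pitr -- minimal 2nd-stage sketch, placed in the master's data directory.
    # pgpool_recovery() invokes it with arguments (see the SELECT in the log above),
    # but this minimal version does not use them.
    # Force a WAL segment switch so the latest changes are archived (8.3 function name).
    psql -p 5432 -c "SELECT pg_switch_xlog();" postgres

Because the pgpool log only reports that the command failed, not why, it is also worth checking that the script is executable by the PostgreSQL server user and that everything it calls is on that user's PATH.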
> > > ------------------------------
> > >
> > > Message: 3
> > > Date: Mon, 6 Apr 2009 16:42:20 -0700 (PDT)
> > > From: Harold Lim <rold_50 at yahoo.com>
> > > Subject: [Pgpool-general] Online recovery + WAL archiving
> > > To: pgpool-general at pgfoundry.org
> > > Message-ID: <858505.36310.qm at web51010.mail.re2.yahoo.com>
> > > Content-Type: text/plain; charset=us-ascii
> > >
> > >
> > > Hi,
> > >
> > > I'm currently looking at the pgpool-II beginner's guide. I'm mainly
> > > interested in online recovery (e.g., dynamically adding a new PostgreSQL
> > > node).
> > >
> > > Do I have to enable WAL archiving for all of my nodes, or just for my
> > > first node?
> > >
> > >
> > >
> > > Thanks!
> > > Harold
> > >
> > >
> > >
> > >
> > >
> > >
>
>
>
>