[pgpool-general: 3237] Re: Postgres, Pgpool2 for Autoscaling in private cloud (OpenStack)

Job Cespedes jobcespedes at gmail.com
Sat Oct 18 09:56:47 JST 2014


Understood. I won't use pcp_promote_node then, but one of the two options
you mentioned.


By any chance, do you know of an alternative that achieves zero downtime
with Postgres while scaling?


Regards,

2014-10-17 18:19 GMT-06:00 Tatsuo Ishii <ishii at postgresql.org>:

> > Hi,
> >
> > I'm currently researching HA, failover, and autoscaling for applications
> > in private clouds. I consider pgpool-II and Postgres a viable option for
> > the DB layer, but I have several questions about whether to use
> > horizontal or vertical scaling. So far I think vertical scaling would be
> > the way to go for the DB layer: adding more nodes in a master/slave
> > configuration doesn't seem right performance-wise, and it also seems
> > more complex. Besides, I think I could only add more slave nodes. But
> > maybe someone out there knows better.
> >
> >
> > Anyway my question is the following:
> >
> >
> > Is the promotion of a slave to master transparent for the client
> > connected to pgpool, or is there a short connection loss (data loss)?
>
> No, it's not transparent. Connections from clients to pgpool-II will be
> closed. Clients need to reconnect to pgpool-II after the change of
> master.
>
> > The scenario I have in mind is: for vertical scaling, I could start by
> > shutting down a slave node, provisioning more resources, booting it
> > again, and promoting it to master with the pcp_promote_node command.
> > After that I could do the same with the former master, now a slave, and
> > then do an online recovery. However, I'm not sure this is completely
> > transparent for clients, or whether it has zero downtime.
>
> So the process is not completely transparent for clients.
>
> BTW, I do not recommend using the pcp_promote_node command. It just
> changes the state in pgpool's memory and does nothing to
> PostgreSQL. You'd better either 1) shut down the master and trigger
> failover to promote the standby, or 2) use the pcp_detach_node command
> to trigger failover to promote the standby, then shut down the master
> manually. After #1 or #2, you do online recovery to resync the old
> master to the new master.
>
> After recovering the old master (as a new standby), you can attach it
> with pcp_attach_node. That process is transparent to clients: existing
> connections from clients to pgpool will be kept.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
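The detach/shutdown/recover/attach sequence described above could be scripted
roughly as below. This is only a sketch: the positional pcp argument order
(timeout, host, port, user, password, node ID) follows pgpool-II 3.x, and all
host names, paths, credentials, and node IDs are placeholders, not values from
this thread.

```shell
#!/bin/sh
# Sketch of option 2: detach the master to trigger failover, then
# resync the old master as a standby via online recovery.
# All hosts, credentials, and node IDs below are assumptions.

PCP_HOST=pgpool.example.com   # host running pgpool-II
PCP_PORT=9898                 # default pcp port
PCP_USER=postgres
PCP_PASS=secret
OLD_MASTER_ID=0               # pgpool node ID of the current master
TIMEOUT=10

# 1) Detach the master; pgpool runs its configured failover_command,
#    which should promote the standby.
pcp_detach_node $TIMEOUT $PCP_HOST $PCP_PORT $PCP_USER $PCP_PASS $OLD_MASTER_ID

# 2) Shut down PostgreSQL on the old master manually.
ssh old-master.example.com 'pg_ctl -D /var/lib/pgsql/data stop -m fast'

# 3) Online recovery: resync the old master from the new master.
#    Requires recovery_1st_stage_command etc. to be configured in pgpool.
pcp_recovery_node $TIMEOUT $PCP_HOST $PCP_PORT $PCP_USER $PCP_PASS $OLD_MASTER_ID

# 4) Attach the recovered node as the new standby; existing client
#    connections to pgpool are kept.
pcp_attach_node $TIMEOUT $PCP_HOST $PCP_PORT $PCP_USER $PCP_PASS $OLD_MASTER_ID
```

This is an operational fragment against live pgpool-II and PostgreSQL servers;
check the pcp command syntax against the manual for your pgpool-II version, as
later releases switched from positional arguments to option flags.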



-- 
Job Cespedes

