[pgpool-general: 3239] Re: Postgres, Pgpool2 for Autoscaling in private cloud (OpenStack)

Job Cespedes jobcespedes at gmail.com
Sat Oct 18 09:59:20 JST 2014


Thanks, Christophe, I certainly will.

2014-10-17 18:57 GMT-06:00 Christophe Pettus <xof at thebuild.com>:

> Hi, Job,
>
> For your application, you might want to consider Postgres-XC or -XL;
> that might be a closer fit to your model than a cluster managed by pgpool.
>
> On Oct 17, 2014, at 5:56 PM, Job Cespedes <jobcespedes at gmail.com> wrote:
>
> > Hi Sergey, glad you pointed out IO. I haven't resolved how to deal with
> scaling storage in the DB layer. Shared file systems might scale well, but
> I am not sure about their IO performance.
> >
> > However, what I would like to accomplish is a good degree of adaptability
> in the application, scaling in or out (elasticity) with zero downtime,
> rather than the ability to grow bigger and bigger. I certainly should
> consider it, though.
> >
> > Thanks,
> >
> >
> > 2014-10-17 18:03 GMT-06:00 Sergey Melekhin <cpro29a at gmail.com>:
> > Hi, Job!
> > Open transactions will fail on master failover.
> > Vertical scaling is naturally limited, especially in a cloud environment.
> IO is usually the bottleneck. After upgrading the master to SSD storage, or
> even RAID10 SSD, you don't really have many cost-effective options for
> speeding it up, considering you have enough RAM.
> > If you expect to scale big, you should consider sharding at some point.
> And it will almost inevitably involve modifying application logic.
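> >
> > (A sketch to make the first point concrete, not from this thread: an
> application connecting through pgpool needs retry logic along these lines
> to ride out the short failover window; the host, port, and credentials
> below are placeholders.)
> >
> >   # Retry until pgpool accepts connections again after a failover.
> >   # Any transaction open at failover time is lost and must be redone.
> >   for i in $(seq 1 30); do
> >       if psql -h pgpool.example.com -p 9999 -U app -d appdb \
> >               -c "SELECT 1" >/dev/null 2>&1; then
> >           echo "connection re-established"
> >           break
> >       fi
> >       sleep 1   # back off while pgpool promotes the new master
> >   done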
> >
> > With best regards, Sergey Melekhin
> >
> > 2014-10-18 10:41 GMT+11:00 Job Cespedes <jobcespedes at gmail.com>:
> > Hi,
> >
> > I'm currently researching HA, failover, and autoscaling for applications
> in private clouds. I consider Pgpool2 and Postgres a viable option for the
> DB layer, but I have several questions about whether to use horizontal or
> vertical scaling. So far I think vertical scaling would be the way to go
> for the DB layer. Adding more nodes in a master/slave configuration doesn't
> seem right performance-wise, and it also seems more complex. Besides, I
> think I could only add more slave nodes. But maybe someone out there knows
> better.
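> >
> > (For reference, a minimal sketch of what a two-backend master/slave
> setup looks like in pgpool.conf, assuming streaming replication; the
> hostnames are placeholders, not from this thread.)
> >
> >   # pgpool.conf sketch: one master, one streaming-replication slave
> >   master_slave_mode = on
> >   master_slave_sub_mode = 'stream'
> >   load_balance_mode = on                       # spread SELECTs over slaves
> >   backend_hostname0 = 'db-master.example.com'  # current master
> >   backend_port0 = 5432
> >   backend_weight0 = 1
> >   backend_hostname1 = 'db-slave.example.com'   # streaming slave
> >   backend_port1 = 5432
> >   backend_weight1 = 1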
> >
> > Anyway, my question is the following:
> >
> > Is the promotion of a slave to master transparent for clients connected
> to pgpool, or is there a short connection loss (and possibly data loss)?
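> >
> > (One way to see what clients actually experience, as a sketch: pgpool
> answers the pseudo-SQL command "show pool_nodes" on its own listener port,
> so you can watch node status while testing a promotion; host, port, and
> user are placeholders.)
> >
> >   # Poll backend status through pgpool while exercising a failover;
> >   # "show pool_nodes" is answered by pgpool itself, not by PostgreSQL.
> >   watch -n 1 "psql -h pgpool.example.com -p 9999 -U app -c 'show pool_nodes'"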
> >
> > The scenario I have in mind for vertical scaling is: I could start by
> shutting down a slave node, provisioning more resources, booting it again,
> and promoting it to master with the command pcp_promote_node; after that I
> could do the same with the former master, now a slave, and then do an
> online recovery. However, I'm not sure this is completely transparent for
> clients and whether or not it has zero downtime.
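> >
> > (To make that sequence concrete, a sketch with the pcp tools; the node
> IDs, host, and credentials are placeholders, and the argument order
> assumed here is the pgpool-II 3.x one: timeout, host, pcp port, user,
> password, node id.)
> >
> >   # Node 1 is the slave being resized; node 0 is the current master.
> >   # 1. Detach the slave, resize it in OpenStack, boot it, re-attach:
> >   pcp_detach_node   10 pgpool.example.com 9898 admin secret 1
> >   #    ... resize the instance, restart PostgreSQL, wait for catch-up ...
> >   pcp_attach_node   10 pgpool.example.com 9898 admin secret 1
> >   # 2. Promote the resized slave; connections to the old master drop here:
> >   pcp_promote_node  10 pgpool.example.com 9898 admin secret 1
> >   # 3. Rebuild the former master as a slave via online recovery:
> >   pcp_recovery_node 10 pgpool.example.com 9898 admin secret 0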
> >
> > Thanks for any piece of advice,
> >
> > --
> > Job Cespedes
> >
>
> --
> -- Christophe Pettus
>    xof at thebuild.com
>


-- 
Job Cespedes