[pgpool-general: 4487] Re: Dynamic configuration of pgpool

Yuri Niyazov yuri at academia.edu
Wed Feb 24 12:19:52 JST 2016


Any more comments on this?

On Fri, Feb 19, 2016 at 1:24 PM, Yuri Niyazov <yuri at academia.edu> wrote:

> We will never have 127 standbys at the same time.
>
> What we will have is, over time, standbys dying, and then new standbys
> coming up with new hostnames and IP addresses. We will add the new
> standbys as new backend_hostname<number> entries and then reload the
> pgpool config. We cannot update old entries without restarting pgpool
> and breaking connections, so that means we are allowed at most 127
> failed standbys before we have to restart pgpool.
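>
> To make that concrete, a rough sketch of the add-and-reload cycle we
> have in mind (the config path, node number, and port below are
> illustrative, not our actual setup):
>
> # pgpool.conf: node 1 (host A) has died; append its replacement B as
> # a brand-new node number instead of editing the dead entry in place.
> backend_hostname2 = 'B'
> backend_port2 = 5432
> backend_weight2 = 1
>
> # Re-read the configuration without dropping client connections:
> pgpool -f /etc/pgpool-II/pgpool.conf reload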
>
> On Fri, Feb 19, 2016 at 1:10 AM, Tatsuo Ishii <ishii at postgresql.org>
> wrote:
>
>> Have you actually tried with 127 standbys?
>>
>> I don't think the configuration is usable because of too much load on
>> the primary master.
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese:http://www.sraoss.co.jp
>>
>> > How do I manage read replicas that appear and disappear dynamically
>> > and unpredictably in an Amazon Web Services environment? If I
>> > understand the docs correctly, at most 127 read replicas can appear
>> > and disappear before we absolutely must restart pgpool, correct?
>> >
>> > Some background:
>> >
>> >   We are running PostgreSQL in AWS. We have an always-on master
>> > database, and a number of read-only replicas using streaming
>> > replication. At the moment, we do not use pgpool - we just hardcode
>> > the hostnames of the master and the read-only replicas in our app
>> > code.
>> >
>> >   We want to dynamically scale up and down our read replicas based
>> > on daily usage patterns, as well as use the cheaper spot EC2
>> > instances that Amazon provides - the caveat being that the latter
>> > can be shut down if the market price goes above what we are willing
>> > to pay. When this happens, we would either wait for the spot price
>> > to come down again, or bring up new instances with a higher price.
>> >
>> >   The following part from the manual for the "backend_hostname"
>> > configuration setting suggests that using pgpool to accomplish this
>> > will be hard: "New nodes can be added in this parameter by reloading
>> > a configuration file. However, values cannot be updated so you must
>> > restart pgpool-II in that case."
>> >
>> > So, if we have, in the config:
>> >
>> > backend_hostname1=A
>> >
>> > and A goes down, then when its replacement B comes up, we can change the
>> > config in the following way:
>> >
>> > backend_hostname1=A
>> > backend_hostname2=B
>> >
>> > and tell pgpool to reload the config. I guess we can also remove the
>> > backend_hostname1 line completely, like this:
>> >
>> > backend_hostname2=B
>> >
>> > We *cannot* edit backend_hostname1 like this and expect it to work:
>> >
>> > backend_hostname1=B
>> >
>> > If I understand the code correctly, the number of backends is capped
>> > by MAX_NUM_BACKENDS, which is 128, so the highest suffix allowed in
>> > backend_hostname<backend_number> is 127.
>> >
>> > This means that after we add backend_hostname127=ZZZ, we must drop
>> > back to backend_hostname0=ZZZA and restart pgpool. Is there a way to
>> > avoid this?
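>> >
>> > Concretely, the wraparound we are trying to avoid would look
>> > something like this (the stop/start invocations are a sketch; exact
>> > flags depend on how pgpool is launched):
>> >
>> > # All 128 node slots (0..127) are used up; rewrite pgpool.conf from
>> > # node 0, listing only the currently-live backends, e.g.:
>> > backend_hostname0 = 'ZZZA'
>> >
>> > # A reload cannot pick up changed entries, so a full restart -
>> > # breaking client connections - is required:
>> > pgpool -m fast stop
>> > pgpool -n &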
>>
>
>