[pgpool-general: 5007] Re: Avoiding downtime when pgpool changes require a restart

Jacobo García López de Araujo jacobo.garcia at gmail.com
Wed Sep 21 23:27:55 JST 2016


Hello,

I compiled PgPool from a git clone pointing at the particular commit with
the fix for the issue I raised. I deployed the new version and got the
following error when trying to start PgPool:

Sep 21 14:16:37 srv0.net pgpool: *** stack smashing detected ***:
/usr/sbin/pgpool terminated
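
In case it helps with debugging, I can try to get a backtrace from a core
dump. A minimal sketch, assuming core dumps are enabled and the binary is
built with debug symbols:

    ulimit -c unlimited
    pgpool -n -f /etc/pgpool2/pgpool.conf    # foreground mode, reproduces the crash
    gdb /usr/sbin/pgpool core                # then: bt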

These are my build dependencies:
'libpam0g-dev', 'libssl-dev', 'libmemcached-dev', 'libpq-dev'
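
These are Debian package names; on a Debian/Ubuntu system they are
installed with:

    apt-get install libpam0g-dev libssl-dev libmemcached-dev libpq-dev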

These are my build options:
./configure \
      --prefix=/usr \
      --bindir=/usr/sbin \
      --includedir=/usr/include/pgpool2 \
      --sysconfdir=/etc/pgpool2 \
      --disable-rpath \
      --with-openssl \
      --with-pam \
      --with-memcached=/usr/include/libmemcached
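
The build itself went roughly like this (the clone URL is my assumption,
derived from the gitweb link quoted below; the commit is the one with the
fix):

    git clone https://git.postgresql.org/git/pgpool2.git
    cd pgpool2
    git checkout a38fa0910f94dfc5314fe34bd8ad86dc7dfb594e
    ./configure ...        # with the options above
    make && make install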

It seems there is a bug in the fix.

Many thanks for your time.

J.




On Mon, Sep 19, 2016 at 10:57 PM Jacobo García López de Araujo <
jacobo.garcia at gmail.com> wrote:

> Muhammad, many thanks for your patch. I'll deploy a new PgPool build with
> this patch and post the results of my tests here.
>
> As stated above, thanks.
>
>
> On Mon, Sep 19, 2016 at 10:38 PM Muhammad Usama <m.usama at gmail.com> wrote:
>
>> On Thu, Sep 15, 2016 at 9:43 PM, Jacobo García López de Araujo <
>> jacobo.garcia at gmail.com> wrote:
>>
>>> I believe this is a bug, but I'd like to understand it better before I
>>> file a report in the PgPool bug tracker, in case I'm missing something.
>>> After one more day of tests, I have been unable to restart a 2-node
>>> PgPool watchdog cluster without incurring a few seconds of downtime.
>>>
>>> I'll be grateful for any help or information about this issue.
>>>
>>> Many thanks,
>>>
>>> Jacobo García.
>>>
>>>
>>>
>> Hi
>>
>> Your use case is valid, and pgpool-II should not produce a FATAL error if
>> the configurations of the nodes differ. I have pushed the fix for that to
>> the pgpool-II 3.5 and master branches. You can try building from the
>> source code to check whether your problem is fixed:
>>
>>
>> https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commitdiff;h=a38fa0910f94dfc5314fe34bd8ad86dc7dfb594e
>>
>> Thanks
>> Best regards
>> Muhammad Usama
>>
>>>
>>>
>>> On Wed, Sep 14, 2016 at 4:12 PM Jacobo García López de Araujo <
>>> jacobo.garcia at gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I am trying to set load_balance_mode = off in a 2-node test cluster. The
>>>> option is currently set to on, and it is a setting that requires a full
>>>> restart in order to be changed.
>>>>
>>>> I haven't found a way to change it that does not cause downtime in my
>>>> setup.
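>>>>
>>>> The change itself is a single line in pgpool.conf:
>>>>
>>>>     load_balance_mode = off    # was: on; only read at startup, not reloadable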
>>>>
>>>> If I restart pgpool on the master node, the watchdog fails over to the
>>>> secondary, and the master then refuses to rejoin the cluster, with the
>>>> pgpool logs emitting the following error:
>>>>
>>>> FATAL:  configuration error. The configurations on master node is
>>>> different
>>>>
>>>> The master then shuts down, and every time I start the now-old master it
>>>> refuses to join the cluster because the settings differ. At that point I
>>>> have only one node running pgpool, so if I restart that node it stops
>>>> accepting connections through the virtual IP and downtime occurs.
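>>>>
>>>> Concretely, the sequence I am attempting looks roughly like this (a
>>>> sketch; in practice I go through the init scripts, and the paths are
>>>> illustrative):
>>>>
>>>>     # on the master, which currently holds the virtual IP:
>>>>     pgpool -f /etc/pgpool2/pgpool.conf -m fast stop
>>>>     # edit pgpool.conf (load_balance_mode = off), then start again:
>>>>     pgpool -f /etc/pgpool2/pgpool.conf
>>>>     # -> fails with the FATAL configuration error shown above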
>>>>
>>>> The other strategy I tried was restarting pgpool on the secondary node
>>>> first. In that case I got the same error, and the secondary node refused
>>>> to join the cluster too.
>>>>
>>>> I'd like to know the recommended procedure for changing one of these
>>>> settings without incurring downtime.
>>>>
>>>> Many thanks for your time.
>>>>
>>>> Jacobo García.
>>>>
>>>>
>>>>
>>>> --
>>>> Jacobo García López de Araujo.
>>>>
>>> --
>>> Jacobo García López de Araujo.
>>>
>>> _______________________________________________
>>> pgpool-general mailing list
>>> pgpool-general at pgpool.net
>>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>>>
>>>
>> --
> Jacobo García López de Araujo.
>
-- 
Jacobo García López de Araujo.