[pgpool-general: 8560] Re: Issues taking a node out of a cluster

Emond Papegaaij emond.papegaaij at gmail.com
Thu Jan 26 00:24:51 JST 2023


>
> < cut steps for adding a new backend >
>> 7) optionally you can remove backend 1 configuration
>>    parameters. The "status" column of show pool_nodes will be "unused" after
>>    restarting pgpool.
>
>
> We've tried leaving holes in the numbering initially, but something didn't
> work out as expected. Unfortunately, I don't remember the exact problem.
> Maybe it had to do with each node also running a pgpool instance and gaps
> were not allowed in the watchdog config (like hostname0)? I'll try a build
> without the renumbering and report back with the findings. If we can indeed
> leave gaps in the backend numbering, that would probably fix the issue for
> us. I'm not yet sure what to do with the watchdogs though.
>

We've just completed a full test run using stable numbering (possibly
with gaps) for the backends and contiguous numbering for the watchdogs,
and all tests pass. This does have the disadvantage that a watchdog may
get a different number in the configuration than the backend running on
the same node, but that is mostly cosmetic. This resolves the issue for
us.
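For reference, a minimal sketch of the scheme we settled on (parameter names as in recent pgpool-II releases; the hostnames, ports, and which backend was removed are placeholders): backend 1 has been taken out, leaving a gap in the backend numbering, while the watchdog section is renumbered contiguously:

```
# backends: stable numbering, backend 1 removed (the gap is intentional)
backend_hostname0 = 'node-a'
backend_port0 = 5432
backend_hostname2 = 'node-c'
backend_port2 = 5432

# watchdog: contiguous numbering, renumbered after the node was removed,
# so watchdog 1 here runs on the same host as backend 2
hostname0 = 'node-a'
wd_port0 = 9000
pgpool_port0 = 9999
hostname1 = 'node-c'
wd_port1 = 9000
pgpool_port1 = 9999
```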

The other question still remains open though: why does auto_failback not
reattach backend 1 when it detects that the database is up and
streaming? Could this be related to the inconsistent numbering of the
backends? I was under the impression that pgpool always reattaches a
detached backend once it is streaming from the primary.
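For context, these are the auto_failback-related settings we have in mind (a sketch only; as far as I understand, auto_failback depends on the streaming replication check being enabled, and the values below are examples, not our exact configuration):

```
auto_failback = on
auto_failback_interval = 60   # minimum seconds between auto failback attempts

# auto_failback relies on the streaming replication check to see
# that the detached standby is streaming from the primary again
sr_check_period = 10
sr_check_user = 'pgpool'
```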

Best regards,
Emond

