[pgpool-general: 4938] Re: Load balancing issue

Vlad Novikov xou.slackware at gmail.com
Mon Aug 22 23:51:07 JST 2016


I also see the following in the logs:

Aug 22 14:39:03 pgpool1 pgpool[38062]: [48129-1] LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 31484 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31971
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 33897 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31972
Aug 22 14:39:03 pgpool1 pgpool: LOG:  failback event detected
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50130-1] LOG:  child process with pid: 7590 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50131-1] LOG:  fork a new child process with pid: 31973
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50132-1] LOG:  child process with pid: 38062 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50133-1] LOG:  fork a new child process with pid: 31974
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50134-1] LOG:  child process with pid: 48024 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50135-1] LOG:  fork a new child process with pid: 31975
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50136-1] LOG:  child process with pid: 62956 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50137-1] LOG:  fork a new child process with pid: 31976
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50138-1] LOG:  child process with pid: 10546 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool[14707]: [50139-1] LOG:  fork a new child process with pid: 31977
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 7590 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31973
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 38062 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31974
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 48024 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31975
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 62956 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31976
Aug 22 14:39:03 pgpool1 pgpool: LOG:  child process with pid: 10546 exits with status 256
Aug 22 14:39:03 pgpool1 pgpool: LOG:  fork a new child process with pid: 31977
Aug 22 14:40:02 pgpool1 systemd: Reloaded PostgreSQL 9.4 database server.
Aug 22 14:41:01 pgpool1 systemd: Reloaded PostgreSQL 9.4 database server.
Aug 22 14:42:02 pgpool1 systemd: Reloaded PostgreSQL 9.4 database server.
Aug 22 14:43:01 pgpool1 systemd: Reloaded PostgreSQL 9.4 database server.
Aug 22 14:43:35 pgpool1 pgpool[11307]: [47897-1] LOG:  failback event detected
Aug 22 14:43:35 pgpool1 pgpool[7289]: [47241-1] LOG:  selecting backend connection
Aug 22 14:43:35 pgpool1 pgpool[7453]: [47401-1] LOG:  selecting backend connection
Aug 22 14:43:35 pgpool1 pgpool: LOG:  failback event detected

On Mon, Aug 22, 2016 at 7:40 AM, Vlad Novikov <xou.slackware at gmail.com>
wrote:

> Hello,
>
> I attach a node only when I start the failover node (PostgreSQL) after the
> primary is up and running. E.g. I start PostgreSQL on pgpool1, then I start
> pgpool-II and it detects the backend. Then, when I start pgpool2, I need to
> attach it manually, so pgpool-II would know that the backend is online,
> right?
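>
> (For reference, the manual attach is done with pcp_attach_node; a minimal
> sketch, where the PCP port 9898 and the pcp_admin user are placeholders
> for my actual setup:
>
> pcp_attach_node -h localhost -p 9898 -U pcp_admin -n 1
>
> with -n 1 being the node id of the pgpool2 backend.)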
>
> Now for the node status. At this moment I see the following:
> node_id  hostname  port  status  lb_weight  role     select_cnt
> 0        pgpool1   5432  2       0.500000   primary  66133198
> 1        pgpool2   5432  2       0.500000   standby  0
>
> I have a sneaky feeling that lb_weight has something to do with what is
> going on. However, if you take a look at the pgpool.conf attached to the
> initial message, you'll find the backends configured like this:
> backend_hostname0 = 'pgpool1'
> backend_port0 = 5432
> backend_weight0 = 1
> backend_data_directory0 = '/var/lib/pgsql/9.4/data'
> backend_flag0 = 'ALLOW_TO_FAILOVER'
>
> backend_hostname1 = 'pgpool2'
> backend_port1 = 5432
> backend_weight1 = 1
> backend_data_directory1 = '/var/lib/pgsql/9.4/data'
> backend_flag1 = 'ALLOW_TO_FAILOVER'
>
> In particular, backend_weight is set to 1 on both nodes. Also, as far as I
> understand, this setting should not matter when load_balance_mode=off.
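> (As I understand it, pgpool normalizes the configured backend_weight
> values so that they sum to 1, which would explain the 0.500000 lb_weight
> reported for each node. And the line in the attached pgpool.conf that
> should disable balancing altogether is simply:
>
> load_balance_mode = off
>
> so the weights should never come into play at all.)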
>
> Regards,
> Vlad
>
>
> On Sat, Aug 20, 2016 at 4:37 PM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
>
>> > Hi Lucas,
>> >
>> > I checked the log and found no failover entries. Here's how that happens:
>> > I start two postgres backends (master-slave streaming replication) and a
>> > pgpool-II instance. Then I attach both of the nodes, and initially all
>> > the clients get connected to the master only.
>>
>> Why do you need to attach the backends? Pgpool-II automatically attaches
>> all backends that are valid in pgpool.conf.
>>
>> > I see that with ps ax | grep postgres. After some time, new clients start
>> > getting connected to the hot standby node while older clients are still
>> > connected to the master. Again, I see that with ps ax | grep postgres. In
>> > that case both the master and the hot standby have pgpool-II connected.
>> > That's what concerns me the most. If there had been a failover event, the
>> > master would've been detached and there would be no pgpool-II connections
>> > there.
>>
>> Can you connect to pgpool using psql and then issue "show pool_nodes" when
>> pgpool starts to behave like this? This should show which node is the
>> primary (role) and which node queries are routed to (load_balance_node).
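>>
>> A minimal example, assuming pgpool listens on its default port 9999
>> (adjust the host, port and user to your setup):
>>
>> psql -h pgpool1 -p 9999 -U postgres -c 'show pool_nodes'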
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>>
>> > Vlad
>> >
>> > On Sat, Aug 20, 2016 at 1:25 AM, Lucas Luengas <lucasluengas at gmail.com>
>> > wrote:
>> >
>> >> Hello.
>> >>
>> >> Have you checked the pgpool log file? Maybe a failover happened?
>> >>
>> >> On Fri, Aug 19, 2016 at 10:48 PM, Vlad Novikov <xou.slackware at gmail.com>
>> >> wrote:
>> >>
>> >>> Hello,
>> >>>
>> >>> I have set up pgpool-II with 2 backends in streaming mode (see the
>> >>> configuration file attached). In particular, I have load_balance_mode =
>> >>> off to make sure that in this pool all connections will be established
>> >>> to the streaming master only. However, after some time I see pgpool-II
>> >>> establishing connections to the hot standby server. As a result, client
>> >>> applications start failing as they cannot write to the database they're
>> >>> connected to. So far the only solution for me is to keep the hot
>> >>> standby detached (which is not a good idea in terms of automated
>> >>> failover).
>> >>> Pgpool-II starts behaving like this with about 100 clients connected.
>> >>> PostgreSQL max_connections is set to 900, and with the hot standby
>> >>> detached there are no connection issues reported (all clients can
>> >>> connect to the backend with no issues).
>> >>> I use pgpool-II and PostgreSQL provided by the official PostgreSQL
>> >>> repository:
>> >>> PostgreSQL 9.4.9
>> >>> pgpool-II 3.5.3
>> >>> OS: CentOS 7.2
>> >>>
>> >>> Is there anything I need to change in the configuration file to make
>> >>> all clients connect to the master only when both backends are attached?
>> >>> From what I understand, that is the expected behavior with
>> >>> load_balance_mode = off.
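>> >>>
>> >>> (A sketch of what I believe the relevant lines look like in a pgpool-II
>> >>> 3.5 streaming setup; master_slave_mode and master_slave_sub_mode are my
>> >>> assumption of what "streaming mode" corresponds to here:
>> >>>
>> >>> load_balance_mode = off
>> >>> master_slave_mode = on
>> >>> master_slave_sub_mode = 'stream')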
>> >>>
>> >>> Regards,
>> >>> Vlad
>> >>>
>> >>> _______________________________________________
>> >>> pgpool-general mailing list
>> >>> pgpool-general at pgpool.net
>> >>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>> >>>
>> >>>
>> >>
>>
>
>