[pgpool-general: 4948] Re: Load balancing issue

Tatsuo Ishii ishii at sraoss.co.jp
Wed Aug 24 14:16:04 JST 2016


Ok.

> Start time   : 2016-08-23 21:23:54

This is the time when the pgpool child process started. pgpool uses a
"pre-fork" model, in which all pgpool child processes are forked when pgpool
itself starts. In short, this is normal.
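
For reference, the number of pre-forked children is set by num_init_children
in pgpool.conf, and each child can cache up to max_pool backend connections.
A minimal sketch (the values are only illustrative, not your settings):

num_init_children = 32   # pgpool child processes forked at startup
max_pool = 4             # backend connection slots cached per child

In the worst case you can see up to num_init_children * max_pool idle
backend connections per node, which is why many idle postgres processes by
themselves are not a problem.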

> Creation time: 2016-08-23 23:01:21

This is the time when pgpool received a query from a frontend and
forwarded it to PostgreSQL (which I did not expect to happen). Can you
identify the query that was executed at that time on pgpool2?
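
One way to catch it, sketched under the assumption that per-node statement
logging is acceptable in your setup: turn on log_per_node_statement in
pgpool.conf, then search the pgpool log around the creation time (the log
path below is just an example; use whatever your installation logs to).

log_per_node_statement = on

grep '2016-08-23 23:01' /var/log/pgpool.log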

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> They are relevant:
> 
> [vlad@pgpool2 ~]$ ps ax | grep postgres
> 
> ...
> 
> 62230 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(39856) idle
> 
> ...
> 
> [vlad@pgpool2 ~]$ pcp_proc_info -U postgres -w -h pgpool1 -v | grep -B7 -A1 62230
> 
> Database     : mydb
> Username     : myuser
> Start time   : 2016-08-23 21:23:54
> Creation time: 2016-08-23 23:01:21
> Major        : 3
> Minor        : 0
> Counter      : 2
> Backend PID  : 62230
> Connected    : 1
> 
> From what I understand, connected stands for the backend id (which is 1 in
> this case).
> 
> On Tue, Aug 23, 2016 at 9:35 PM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
> 
>> You can use the pcp_proc_info command to check whether the PostgreSQL
>> processes are relevant to Pgpool-II or not.
>>
>> http://www.pgpool.net/docs/latest/pgpool-en.html#pcp_proc_info
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese:http://www.sraoss.co.jp
>>
>> > Interesting thing, though, is that I see postgres processes being forked on
>> > pgpool2 even though in the logs I see only the queries listed above.
>> > ...
>> >
>> > 65032 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(40278) idle
>> > 65033 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(40280) idle
>> > 65034 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(40282) idle
>> > 65035 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(40288) idle
>> > 65036 ?        Ss     0:00 postgres: myuser mydb 192.168.241.101(40290) idle
>> >
>> > [vlad@pgpool2 ~]$ ps ax | grep postgres | grep myuser | wc -l
>> >
>> > 317
>> >
>> > Any ideas why those processes might be forked? It feels like pgpool creates
>> > slots just in case it needs to fail over, but it's not documented anywhere,
>> > which is why it looks weird.
>> >
>> > On Tue, Aug 23, 2016 at 9:30 AM, Vlad Novikov <xou.slackware at gmail.com>
>> > wrote:
>> >
>> >> Ok, looks like the connections I see on pgpool2 are just health checks and
>> >> standard db SELECT calls:
>> >>
>> >> SELECT 1;
>> >>
>> >> SELECT class.relname, blocking_activity.client_addr, blocking_activity.query
>> >> FROM pg_catalog.pg_locks blocked
>> >> JOIN pg_catalog.pg_stat_activity a ON a.pid = blocked.pid
>> >> JOIN pg_catalog.pg_locks locks
>> >>   ON locks.locktype = blocked.locktype
>> >>   AND locks.DATABASE IS NOT DISTINCT FROM blocked.DATABASE
>> >>   AND locks.relation IS NOT DISTINCT FROM blocked.relation
>> >>   AND locks.page IS NOT DISTINCT FROM blocked.page
>> >>   AND locks.tuple IS NOT DISTINCT FROM blocked.tuple
>> >>   AND locks.virtualxid IS NOT DISTINCT FROM blocked.virtualxid
>> >>   AND locks.transactionid IS NOT DISTINCT FROM blocked.transactionid
>> >>   AND locks.classid IS NOT DISTINCT FROM blocked.classid
>> >>   AND locks.objid IS NOT DISTINCT FROM blocked.objid
>> >>   AND locks.objsubid IS NOT DISTINCT FROM blocked.objsubid
>> >>   AND locks.pid != blocked.pid
>> >> JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = locks.pid
>> >> JOIN pg_catalog.pg_class class ON class.oid = locks.relation
>> >> WHERE NOT blocked.GRANTED;
>> >>
>> >> I think we can consider this solved. Thank you, guys, for helping with
>> >> this.
>> >>
>> >> Regards,
>> >>
>> >> Vlad
>> >>
>> >> On Mon, Aug 22, 2016 at 3:41 PM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
>> >>
>> >>> > Hello,
>> >>> >
>> >>> > I attach a node only when I start the failover node (PostgreSQL) after the
>> >>> > primary is up and running. E.g. I start PostgreSQL on pgpool1, then I start
>> >>> > pgpool-II and it detects the backend.
>> >>>
>> >>> Probably at this point a failover happens because PostgreSQL on pgpool2
>> >>> is not running. So far so good.
>> >>>
>> >>> > Then, when I start pgpool2, I need to manually attach it, so pgpool-II
>> >>> > would know that the backend is online, right?
>> >>>
>> >>> Correct.
>> >>>
>> >>> > Now nodes status. At this moment I see following nodes status:
>> >>> > node_id  hostname   port   status   lb_weight  role       select_cnt
>> >>> > 0        pgpool1    5432      2     0.500000   primary    66133198
>> >>> > 1        pgpool2    5432      2     0.500000   standby    0
>> >>>
>> >>> I don't know on which node you executed "show pool_nodes", but as long as
>> >>> the results of the command are the same, it is ok.
>> >>>
>> >>> So you sometimes see write queries being sent to PostgreSQL on pgpool2?
>> >>>
>> >>> > I have a sneaking feeling that lb_weight has something to do with what is
>> >>> > going on. However, if you take a look at the pgpool.conf attached to the
>> >>> > initial message, you'll find the backends configured like this:
>> >>> > backend_hostname0 = 'pgpool1'
>> >>> > backend_port0 = 5432
>> >>> > backend_weight0 = 1
>> >>> > backend_data_directory0 = '/var/lib/pgsql/9.4/data'
>> >>> > backend_flag0 = 'ALLOW_TO_FAILOVER'
>> >>> >
>> >>> > backend_hostname1 = 'pgpool2'
>> >>> > backend_port1 = 5432
>> >>> > backend_weight1 = 1
>> >>> > backend_data_directory1 = '/var/lib/pgsql/9.4/data'
>> >>> > backend_flag1 = 'ALLOW_TO_FAILOVER'
>> >>> >
>> >>> > In particular, backend_weight is set to 1 on both nodes.
>> >>>
>> >>> Don't worry. The load balance weight is always "normalized" so that
>> >>> the total weight is 1.0.
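>> >>>
>> >>> For example, with backend_weight0 = 1 and backend_weight1 = 1, each node
>> >>> gets 1/(1+1) = 0.5, which matches the lb_weight of 0.500000 shown above.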
>> >>>
>> >>> > Also, as far as I understand, this setting should not matter when
>> >>> > load_balance_mode = off.
>> >>>
>> >>> Correct.
>> >>>
>> >>> > Regards,
>> >>> > Vlad
>> >>> >
>> >>> >
>> >>> > On Sat, Aug 20, 2016 at 4:37 PM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
>> >>> >
>> >>> >> > Hi Lucas,
>> >>> >> >
>> >>> >> > I checked the log and found no failover entries. Here's how that happens:
>> >>> >> > I start two postgres backends (master-slave streaming replication) and a
>> >>> >> > pgpool-II instance. Then I attach both of the nodes, and initially all the
>> >>> >> > clients get connected to the master only.
>> >>> >>
>> >>> >> Why do you need to attach the backends? Pgpool-II automatically attaches
>> >>> >> all backends defined in pgpool.conf.
>> >>> >>
>> >>> >> > I see that with ps ax | grep postgres. After some time, new clients
>> >>> >> > start getting connected to the hot standby node while older clients are
>> >>> >> > still connected to the master. Again, I see that with ps ax | grep
>> >>> >> > postgres. In that case both the master and the hot standby have pgpool-II
>> >>> >> > connected. That's what concerns me the most. If there had been a failover
>> >>> >> > event, the master would've been detached and there would be no pgpool-II
>> >>> >> > connections there.
>> >>> >>
>> >>> >> Can you connect to pgpool using psql and issue "show pool_nodes" when
>> >>> >> pgpool starts to behave like this? This should show which node is the
>> >>> >> primary (role) and which node queries are routed to (load_balance_node).
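>> >>> >>
>> >>> >> For example, assuming pgpool listens on its default port 9999 (adjust
>> >>> >> host, port and user to your setup):
>> >>> >>
>> >>> >> $ psql -h pgpool1 -p 9999 -U postgres -c 'show pool_nodes'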
>> >>> >>
>> >>> >> Best regards,
>> >>> >> --
>> >>> >> Tatsuo Ishii
>> >>> >> SRA OSS, Inc. Japan
>> >>> >> English: http://www.sraoss.co.jp/index_en.php
>> >>> >> Japanese:http://www.sraoss.co.jp
>> >>> >>
>> >>> >> > Vlad
>> >>> >> >
>> >>> >> > On Sat, Aug 20, 2016 at 1:25 AM, Lucas Luengas <lucasluengas at gmail.com>
>> >>> >> > wrote:
>> >>> >> >
>> >>> >> >> Hello.
>> >>> >> >>
>> >>> >> >> Have you checked the pgpool log file? Maybe a failover happened?
>> >>> >> >>
>> >>> >> >> On Fri, Aug 19, 2016 at 10:48 PM, Vlad Novikov <xou.slackware at gmail.com>
>> >>> >> >> wrote:
>> >>> >> >>
>> >>> >> >>> Hello,
>> >>> >> >>>
>> >>> >> >>> I have set up pgpool-II with 2 backends in streaming mode (see the
>> >>> >> >>> configuration file attached). In particular, I have load_balance_mode = off
>> >>> >> >>> to make sure that in this pool all connections will be established to the
>> >>> >> >>> streaming master only. However, after some time I see pgpool-II establishing
>> >>> >> >>> connections to the hot standby server. As a result, client applications
>> >>> >> >>> start failing as they cannot write to the database they're connected to. So
>> >>> >> >>> far the only solution for me is to keep the hot standby detached (which is
>> >>> >> >>> not a good idea in terms of automated failover).
>> >>> >> >>> Pgpool-II starts behaving like this with about 100 clients connected.
>> >>> >> >>> PostgreSQL max_connections is set to 900, and with the hot standby detached
>> >>> >> >>> there are no connection issues reported (all clients can connect to the
>> >>> >> >>> backend with no issues).
>> >>> >> >>> I use pgpool-II and PostgreSQL provided by the official PostgreSQL
>> >>> >> >>> repository.
>> >>> >> >>> PostgreSQL 9.4.9
>> >>> >> >>> pgpool-II 3.5.3
>> >>> >> >>> OS: CentOS 7.2
>> >>> >> >>>
>> >>> >> >>> Is there anything I need to change in the configuration file to make all
>> >>> >> >>> clients connect to the master only when both backends are attached? From
>> >>> >> >>> what I understand, that is expected with load_balance_mode = off.
>> >>> >> >>>
>> >>> >> >>> Regards,
>> >>> >> >>> Vlad
>> >>> >> >>>
>> >>> >> >>> _______________________________________________
>> >>> >> >>> pgpool-general mailing list
>> >>> >> >>> pgpool-general at pgpool.net
>> >>> >> >>> http://www.pgpool.net/mailman/listinfo/pgpool-general
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>
>> >>> >>
>> >>>
>> >>
>> >>
>>

