[pgpool-general: 5152] Re: My 3rd backend stay in unused state

Mathieu Poussin mathieu at lodgify.com
Wed Dec 7 17:50:08 JST 2016


Hello Tatsuo,

After reviewing one more time, I've seen that my third node was missing
parallel_mode=1.

Now I can see it up and acting as a standby, but not all of my nodes have
load_balance_node set to true. I don't know if this is normal?

# show pool_nodes;
 node_id | hostname  | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+-----------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 10.0.0.58 | 5433 | up     | 0.333333  | primary | 0          | false             | 0
 1       | 10.0.0.59 | 5433 | up     | 0.333333  | standby | 0          | false             | 0
 2       | 10.0.0.36 | 5433 | up     | 0.333333  | standby | 0          | true              | 0
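
For reference, the backend section for the new node in my pgpool.conf looks
roughly like this (a reconstructed sketch, not a paste of the actual file;
the data directory path is a guess):

    backend_hostname2 = '10.0.0.36'
    backend_port2 = 5433
    backend_weight2 = 0.333333
    backend_data_directory2 = '/var/lib/postgresql/9.6/main'   # path is a guess
    backend_flag2 = 'ALLOW_TO_FAILOVER'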

Thank you.
Best regards,
Mathieu

On Wed, Dec 7, 2016 at 6:43 AM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:

> It seems there's something wrong in your pgpool.conf (especially
> in the backend.. section). Can you show us your pgpool.conf?
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
> > Hello,
> >
> > I'm trying to deploy a high availability setup with PostgreSQL 9.6 and
> > PgPool 3.6.0.
> > Everything was working fine up to the second host: I could get queries
> > load balanced (I use streaming replication).
> >
> > But then I wanted to add a 3rd backend so I can have more read load
> > balancing (1 master streaming to 2 replicas), so I just took a snapshot
> > of the first replica and created another instance of it as a second
> > replica. It can connect to the master without problem and starts
> > replicating.
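> >
> > For context, recovery.conf on the new replica is just the standard
> > streaming replication setup for 9.6, along these lines (a sketch; the
> > replication user name is a guess, not my exact file):
> >
> >     standby_mode = 'on'
> >     primary_conninfo = 'host=10.0.0.58 port=5432 user=replicator'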
> >
> > I duplicated the pgpool configuration I have for the first replica to
> > the second one, and also added it to the list of backends in
> > trusted_servers.
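> >
> > Concretely, the trusted_servers line now looks something like this (a
> > sketch, not a paste of the actual file):
> >
> >     trusted_servers = '10.0.0.58,10.0.0.59,10.0.0.36'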
> >
> > But since then, I can't get it to leave the "unused" state:
> >
> > # show pool_nodes;
> >  node_id | hostname  | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
> > ---------+-----------+------+--------+-----------+---------+------------+-------------------+-------------------
> >  0       | 10.0.0.58 | 5432 | up     | 0.333333  | primary | 145        | false             | 0
> >  1       | 10.0.0.59 | 5432 | up     | 0.333333  | standby | 11         | true              | 0
> >  2       | 10.0.0.36 | 5432 | unused | 0.333333  | standby | 0          | false             | 0
> >
> > Here you can see my issue with node 2.
> > When running tcpdump or netstat, I can also see that there are no
> > connections from pgpool to PostgreSQL on port 5432, only a ping coming
> > from pgpool to this server, while the other servers have active
> > connections:
> >
> > root@gc-lab-pgsql-1:~# netstat -anp | grep 10.0.0.57
> > tcp        0      0 10.0.0.58:5432          10.0.0.57:41323         ESTABLISHED 878/main: mathieu
> > tcp        0      0 10.0.0.58:5432          10.0.0.57:39822         ESTABLISHED 31799/main: mathieu
> > tcp        0      0 10.0.0.58:5432          10.0.0.57:39864         ESTABLISHED 31841/main: mathieu
> >
> > root@gc-lab-pgsql-2:~# netstat -anp | grep 10.0.0.57
> > tcp        0      0 10.0.0.59:5432          10.0.0.57:42897         ESTABLISHED 23707/main: mathieu
> > tcp        0      0 10.0.0.59:5432          10.0.0.57:44398         ESTABLISHED 25526/main: mathieu
> > tcp        0      0 10.0.0.59:5432          10.0.0.57:42939         ESTABLISHED 23748/main: mathieu
> >
> > root@gc-lab-pgsql-3:~# netstat -anp | grep 10.0.0.57
> > <Nothing>
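> >
> > (For reference, the tcpdump check on the third backend was along these
> > lines; the interface name is a guess:
> >
> >     tcpdump -ni eth0 host 10.0.0.57 and tcp port 5432
> >
> > and it shows nothing, matching the empty netstat output above.)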
> >
> > The weird thing is that I can see node 2 in pool_pools:
> >
> >
> > # show pool_pools;
> >  pool_pid |     start_time      | pool_id | backend_id | database | username |     create_time     | majorversion | minorversion | pool_counter | pool_backendpid | pool_connected
> > ----------+---------------------+---------+------------+----------+----------+---------------------+--------------+--------------+--------------+-----------------+----------------
> >  5307     | 2016-11-25 10:22:28 | 0       | 0          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 0       | 1          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 0       | 2          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 1       | 0          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 1       | 1          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 1       | 2          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >  5307     | 2016-11-25 10:22:28 | 2       | 0          |          |          |                     | 0            | 0            | 0            | 0               | 0
> >
> > I can't figure out why: nodes 1 and 2 have exactly the same
> > configuration, and everything about those two nodes is identical in my
> > pgpool configuration.
> >
> > I ran pgbench in both write and read mode against pgpool, and
> > connections to node 2 are never used.
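> >
> > The pgbench runs were along these lines (a sketch, not the exact
> > commands; 10.0.0.57 is the pgpool host, 9999 assumes pgpool's default
> > listen port, and the database name is a placeholder):
> >
> >     # read/write run through pgpool
> >     pgbench -h 10.0.0.57 -p 9999 -c 10 -T 60 mydb
> >     # read-only (SELECT-only) run, which should get load balanced
> >     pgbench -h 10.0.0.57 -p 9999 -c 10 -T 60 -S mydb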
> >
> > How can I troubleshoot that? I could not find anything interesting in
> > the logs.
> >
> > Thank you.
> > Mathieu
>