[pgpool-general: 7775] Re: Error: sorry, too many clients

michail alexakis drmalex07 at gmail.com
Tue Oct 12 19:10:44 JST 2021


Hello Bo,
Thanks for your quick response.

I will try the `client_idle_limit` parameter.
I was reluctant to use it because some clients probably won't like it (but
it may be easier to fix it client-side).
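
Something like the following is what I have in mind (just a tentative change,
assuming a 10-minute idle cutoff would be tolerable for our clients; the value
is in seconds):

  # pgpool.conf -- tentative change
  client_idle_limit = 600   # disconnect a client that stays idle for more than 600s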

I have a basic question: how can I detect leaked (occupied but not reused)
connection slots?
Can `pcp_proc_info --all` provide meaningful information here?

Let M be the maximum (worst-case) number of connections that clients may
request.
Let N be the number of per-backend connection slots available in Pgpool
(= num_init_children * max_pool).
Of course, M < N.
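
With our concrete numbers (num_init_children = 90, max_pool = 2, quoted
below) this works out to:

  N = num_init_children * max_pool = 90 * 2 = 180 per-backend slots

which stays below the backends' max_connections of 190.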

I can measure the number m of occupied slots (say for backend #0) as:
m=$(pcp_proc_info ${pcp_opts} --all | grep '0$' | grep -c -v '^[[:space:]]')

If m stays greater than M for a sustained period of time (say, over 10
minutes), can I interpret this as a sign of leaked (non-reused) connection
slots?
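
To make this concrete, I was thinking of polling along these lines (a rough
sketch only; it assumes the grep filter above really isolates backend #0's
occupied slots, and the value of M here is hypothetical):

  #!/bin/sh
  # Sketch: sample the number of occupied slots for backend #0 once a minute
  # and warn when it stays above M for 10 consecutive samples (~10 minutes).
  # pcp_opts is assumed to hold the usual pcp connection options (-h, -p, -U, ...).
  M=140            # hypothetical worst-case number of client connections
  high=0
  while sleep 60; do
      m=$(pcp_proc_info ${pcp_opts} --all | grep '0$' | grep -c -v '^[[:space:]]')
      if [ "$m" -gt "$M" ]; then
          high=$((high + 1))
      else
          high=0
      fi
      if [ "$high" -ge 10 ]; then
          echo "possible leaked connection slots: m=$m stayed above M=$M for ~10 minutes"
      fi
  done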

Kind regards, Michail Alexakis





On Tue, Oct 12, 2021 at 7:10 AM Bo Peng <pengbo at sraoss.co.jp> wrote:

> Hello,
>
> > Hello PgPool
> >
> > We are using PgPool 4.2.4 (from Alpine 3.13) as the frontend of a
> > master/standby PostgreSQL 10.14 cluster (2 synchronous standbys in
> > streaming replication mode). PgPool is operating as a load balancer and a
> > connection pool.
> >
> > PgPool forks 90 pool workers (num_init_children=90, max_pool=2). All
> > PostgreSQL backend servers are configured properly to accept a maximum of
> > 190 connections (so, num_init_children * max_pool is below this maximum
> > with a safety margin of 10).
> >
> > The relevant (pool-specific) part of the configuration is:
> > num_init_children = 90
> > max_pool = 2
> > child_life_time = 5min
> > child_max_connections = 0
> > connection_life_time = 0
> > client_idle_limit = 0
> >
> > After a while (5-10 days of operation), we face several errors of the
> > type "Sorry, too many clients". These errors are not originating from a
> > PostgreSQL backend (error logs are completely clear and pg_stat_activity
> > shows no more than 140 (from 190) active connections).
> >
> > This problem has been around for some time now, and we have to resort to
> > restarting the entire PgPool service. Of course, this is not a perfect
> > solution because it disrupts some database clients (so, we also have to
> > cascade those restarts to some other services, and so on).
> >
> > The majority of database clients are connecting from a Hikari pool (Java
> > Spring/Boot) which creates long-standing connections (i.e. not one
> > per-transaction).
> >
> > Any light on this issue is highly appreciated!
>
> The connections haven't been reused?
> Because you are using connection pooling between client and pgpool,
> I think the cached connections should be reused.
>
> You may need to configure the "client_idle_limit" parameter
> to prevent the Pgpool-II child processes from being occupied for a long time
> by broken connections.
>
>
> https://www.pgpool.net/docs/latest/en/html/runtime-config-connection-pooling.html#GUC-CLIENT-IDLE-LIMIT
>
> --
> Bo Peng <pengbo at sraoss.co.jp>
> SRA OSS, Inc. Japan
> http://www.sraoss.co.jp/
>

