[pgpool-general: 298] Re: lots of errors: kind does not match between master

Lonni J Friedman netllama at gmail.com
Tue Mar 27 00:17:42 JST 2012


On Mon, Mar 26, 2012 at 8:11 AM, Tatsuo Ishii <ishii at postgresql.org> wrote:
>>>> I'm running pgpool-II-3.1.2 on a Linux-x86_64 server, doing load
>>>> balancing for a postgresql-9.0.4 cluster that has one master and two
>>>> streaming replication slaves.  Over the past few weeks, I've noticed
>>>> the following errors appearing in the pgpool log with increasing
>>>> frequency.  At this point, they're appearing at a rate of nearly 1
>>>> every other second:
>>>> 2012-03-25 14:33:31 ERROR: pid 24101: pool_read_kind: kind does not
>>>> match between master(45) slot[1] (52)
>>>> 2012-03-25 14:33:31 LOG:   pid 24101: pool_read_kind: error message
>>>> from master backend:sorry, too many clients already
>>>
>>> Do you see the same error in PostgreSQL master's log?
>>
>> Yes, I do see it there as well, although all of the client connections
>> are going through pgpool, nothing connects directly to the master.
>
> You set max_connections=200 in your postgresql.conf. Unless you
> customize superuser_reserved_connections, the actual number of
> connections pgpool can use is 200 - 3 = 197, because the default of
> superuser_reserved_connections is 3. You have num_init_children = 195
> in pgpool.conf, so the margin is 197 - 195 = 2. If your master
> PostgreSQL is busy, disconnecting from pgpool to PostgreSQL may take
> some time, and the actual number of connections from pgpool to
> PostgreSQL might temporarily exceed 197. So I'd suggest increasing
> max_connections on the master a little if you want to avoid the error
> message.
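
For concreteness, the arithmetic above translates into something like the
following; the new max_connections value is only an illustrative example,
not a figure from the thread:

# postgresql.conf on the master (sketch only; 210 is an arbitrary example)
max_connections = 210                 # was 200; 210 - 3 reserved = 207 usable
superuser_reserved_connections = 3    # default, left unchanged

# pgpool.conf, unchanged
num_init_children = 195               # headroom becomes 207 - 195 = 12 instead of 2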

Thanks for your reply.  Is there some way that I can reconfigure
pgpool to "pool" a number of extra connections for its clients that
are not actually available on the backend server, where those extra
connections would simply queue until the backend had an open slot to
accept them?  I realize this means the extra pgpool connections would
see potentially higher latency whenever the backend had no spares, but
I'd consider that a significant improvement over the current behavior,
where pgpool has no real spare capacity.
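
To make the question concrete, the kind of setup being asked about would
look roughly like the sketch below; whether pgpool-II 3.1.2 can actually
queue clients this way is exactly what is being asked, and the numbers are
made up for illustration:

# pgpool.conf -- hypothetical illustration of the question, not a working recipe
num_init_children = 250    # offer more client-facing slots than the backend can serve
# postgresql.conf on the master still caps usable backends at 197 (200 - 3 reserved),
# so the desired behavior is that the extra clients wait for a free backend slot
# instead of triggering "sorry, too many clients already".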

