[pgpool-general: 2307] Re: PGPool and backend CPU usage

Stelios Limnaios stelios.limnaios at mekon.com
Thu Nov 21 01:06:11 JST 2013


Hi Tatsuo,

We are not using master/slave, but replication + load balancing
Please find attached our pgpool.conf.
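For context, the relevant mode settings in our pgpool.conf look roughly like this (a sketch only; see the attached file for the actual values):

    # native pgpool-II replication with load balancing
    replication_mode = on      # pgpool replicates writes to all backends itself
    load_balance_mode = on     # distribute read-only SELECTs across backends
    master_slave_mode = off    # not using streaming-replication (master/slave) mode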

Thanks again,
Stelios

-----Original Message-----
From: Tatsuo Ishii [mailto:ishii at postgresql.org] 
Sent: 20 November 2013 14:49
To: Stelios Limnaios
Cc: pgpool-general at pgpool.net
Subject: Re: [pgpool-general: 2279] Re: PGPool and backend CPU usage

> Hi Tatsuo,
> 
> Thank you very much for your response.
> 
> We have configured both the white and the black list:
> White: get*,has*
> Black: nextval,setval,update*
> We recently found that there are more custom functions that we should add to the lists.
> 
> There is a point of confusion around the lists.
> We came across this page:
> http://comments.gmane.org/gmane.comp.db.postgresql.pgpool.general/3136
> and "Only one of these two lists can be filled in a configuration."
> Is that correct? It is not very obvious in the documentation
> http://www.pgpool.net/docs/latest/pgpool-en.html

Yes, correct.
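To illustrate, a valid configuration fills in only one of the two lists, for example (a sketch using the function names mentioned earlier in this thread):

    # declare the writing functions...
    black_function_list = 'nextval,setval,update*'
    # ...and leave the other list empty; filling in both is not supported:
    # white_function_list = 'get*,has*'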

> Also, does all this explain the I/O exception to the backend?
> Is this exception a result of functions not being in the black list?

I'm not sure any more. After rethinking, I realized that the error cannot happen in master/slave mode. Can you please show your pgpool.conf?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Thank you again,
> Regards,
> Stelios
> 
> -----Original Message-----
> From: Tatsuo Ishii [mailto:ishii at postgresql.org]
> Sent: 14 November 2013 23:00
> To: Stelios Limnaios
> Cc: pgpool-general at pgpool.net
> Subject: Re: [pgpool-general: 2279] Re: PGPool and backend CPU usage
> 
> If you are using PostgreSQL's streaming replication, your problem shouldn't happen, because read-only SELECTs are sent to only one node and the kind mismatch error can never occur. I think pgpool regards lower() as a writing function, so the SELECT had to be sent to all nodes. This could be solved by teaching pgpool which functions are not read-only, using black_function_list in pgpool.conf.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
> 
>> Hi,
>> 
>> You are correct, we're using PostgreSQL replication.
>> 
>> We upgraded PGPool to version 3.3.1, but we are still getting 'kind mismatch among backends' errors.
>> I think that CPU overload is not the cause of the problem anymore.
>> 
>> Do you have any idea how we can track what is causing the backends to lose synchronisation?
>> PGPool logs only the action of degeneration:
>> read_kind_from_backend: 0 th kind C does not match with master or majority connection kind D
>> kind mismatch among backends. Possible last query was: "SELECT name,id,type,creator FROM notus_aqa.tree_node LEFT JOIN notus_aqa.branch ON id=child_id WHERE parent_id=$1 AND lower(name)=$2" kind details are: 0[C] 1[D] 2[D]
>> degenerate_backend_set: 0 fail over request from pid 22833 starting degeneration. shutdown host 158.255.228.43(5433)
>> Restart all children
>> execute command: /usr/local/etc/failover_r.sh 0 "158.255.228.43" 5433 /var/lib/pgsql/9.1/data_5433 1 0 "158.255.228.54" 0
>> 
>> On the PostgreSQL side, we get this log message:
>> LOG:  unexpected EOF on client connection
>> 
>> Finally Java reports:
>> org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
>> Caused by: java.io.EOFException
>> 
>> So, maybe we need to understand what is causing this EOF on client connection.
>> Does PGPool drop the connection to PostgreSQL for some reason?
>> Is it a timeout that we need to configure? Is network latency a factor in this problem?
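If it is a pgpool-side timeout, the usual suspects would be settings like these (a sketch; whether any of them is actually set in our configuration is an assumption):

    # pgpool.conf timeout settings that can silently close backend connections
    connection_life_time = 0   # seconds a cached backend connection may live; 0 = forever
    client_idle_limit = 0      # seconds a client may stay idle before disconnect; 0 = disabled
    child_life_time = 300      # seconds an idle pgpool child process lives before restart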
>> 
>> Many thanks for your interest and your time.
>> 
>> Regards,
>> Stelios
>> 
>> -----Original Message-----
>> From: Tatsuo Ishii [mailto:ishii at postgresql.org]
>> Sent: 09 November 2013 08:57
>> To: Stelios Limnaios
>> Cc: pgpool-general at pgpool.net
>> Subject: Re: [pgpool-general: 2263] PGPool and backend CPU usage
>> 
>> How do you set up pgpool replication? Are you using streaming replication of PostgreSQL?
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>> 
>>> Hi all,
>>> 
>>> We're facing a situation that requires some expertise to solve.
>>> 
>>> First of all, our setup consists of three PostgreSQL 9.1 databases, PGPool 3.2 STABLE checked out from the repository (rev 6dde744), and PGPoolAdmin 3.2.0.
>>> We have setup replication and load balancing, along with scripts for online recovery.
>>> The cluster works fine in terms of replicating and load balancing data.
>>> 
>>> But the servers host other applications besides the databases.
>>> Some applications require a lot of CPU when running.
>>> We recently found that when CPU load is high on a backend, PGPool fails to update the data on that server, which causes a 'kind mismatch among backends' situation.
>>> 
>>> I know that there is a replication_stop_on_mismatch parameter in pgpool.conf, but that will disconnect the database from the cluster and we will have to re-attach it (and this could happen several times).
>>> 
>>> So I was wondering, if there is a more permanent workaround for this.
>>> Is it something we can configure in PGPool?
>>> Are there any minimum memory requirements that we could set to avoid this problem?
>>> Would that be in Postgres?
>>> Maybe the best option is to dedicate the servers to running the databases only?
>>> 
>>> Thanks in advance for your help and your time.
>>> 
>>> Regards,
>>> Stelios
>> _______________________________________________
>> pgpool-general mailing list
>> pgpool-general at pgpool.net
>> http://www.pgpool.net/mailman/listinfo/pgpool-general
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pgpool.zip
Type: application/x-zip-compressed
Size: 5423 bytes
Desc: pgpool.zip
URL: <http://www.sraoss.jp/pipermail/pgpool-general/attachments/20131120/abd64c41/attachment-0001.bin>
