[pgpool-general: 2308] Re: PGPool and backend CPU usage
ishii at postgresql.org
Thu Nov 21 09:00:42 JST 2013
You could choose either:
1) use white_function_list only
white_function_list = 'get*,has*'
Add all read-only functions to the list. You need to add, for example,
lower() to prevent pgpool from sending SELECTs to all nodes (that is the
cause of your problem).
2) use black_function_list only
black_function_list = 'nextval,setval,update*'
Add all write functions to the list. Other functions are regarded as
read-only. You need to make sure that you add all write functions to
the list. Otherwise, data inconsistency might happen.
Probably #2 is your choice because it requires less setup.
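A minimal pgpool.conf sketch of option #2 (the function names here are just the ones from this thread; any other custom write functions in your schema would have to be added as well):

```
# Option 2: leave white_function_list empty and list every function
# that writes to the database in black_function_list.
# Any function NOT matched by black_function_list is treated as
# read-only, so a SELECT calling it can be load balanced to one node.
white_function_list = ''
black_function_list = 'nextval,setval,update*'
```

With this setup a query such as SELECT lower(name) FROM t is load balanced, because lower does not match the black list; missing even one write function from the list can lead to data inconsistency, as noted above.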
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
> Hi Tatsuo,
> We are not using master/slave, but replication + load balancing
> Please find attached our pgpool.conf.
> Thanks again,
> -----Original Message-----
> From: Tatsuo Ishii [mailto:ishii at postgresql.org]
> Sent: 20 November 2013 14:49
> To: Stelios Limnaios
> Cc: pgpool-general at pgpool.net
> Subject: Re: [pgpool-general: 2279] Re: PGPool and backend CPU usage
>> Hi Tatsuo,
>> Thank you very much for your response.
>> We do have configured the white and the black list:
>> White: get*,has*
>> Black: nextval,setval,update*
>> We recently found that there are more custom functions that we should add in the lists.
>> There is a point of confusion around the lists.
>> We came across this page:
>> and "Only one of these two lists can be filled in a configuration."
>> Is that correct? It is not very obvious in the documentation.
> Yes, correct.
>> Also, does all this explain the I/O exception to the backend?
>> Is this exception a result of functions not being in the black list?
> I'm not sure any more. After rethinking, I realized that the error cannot happen in master/slave mode. Can you please show your pgpool.conf?
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
>> Thank you again,
>> -----Original Message-----
>> From: Tatsuo Ishii [mailto:ishii at postgresql.org]
>> Sent: 14 November 2013 23:00
>> To: Stelios Limnaios
>> Cc: pgpool-general at pgpool.net
>> Subject: Re: [pgpool-general: 2279] Re: PGPool and backend CPU usage
>> If you are using PostgreSQL's streaming replication, your problem shouldn't happen, because read-only SELECTs should be sent to only one node and the kind mismatch error will never happen. I think pgpool regards lower() as a writing function, so the SELECT needed to be sent to all nodes. This could be solved by teaching pgpool which functions are not read-only by using black_function_list in pgpool.conf.
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>>> You are correct, we're using PostgreSQL replication.
>>> We upgraded PGPool to version 3.3.1, but we still are getting 'backend mismatch' errors.
>>> I think that CPU overload is not the cause of the problem anymore.
>>> Do you have any idea how we can track what is causing backends to lose synchronisation?
>>> PGPool logs only the action of degeneration:
>>> read_kind_from_backend: 0 th kind C does not match with master or majority connection kind D
>>> kind mismatch among backends. Possible last query was: "SELECT name,id,type,creator FROM notus_aqa.tree_node LEFT JOIN notus_aqa.branch ON id=child_id WHERE parent_id=$1 AND lower(name)=$2" kind details are: 0[C] 1[D] 2[D]
>>> degenerate_backend_set: 0 fail over request from pid 22833 starting degeneration. shutdown host 126.96.36.199(5433)
>>> Restart all children
>>> execute command: /usr/local/etc/failover_r.sh 0 "188.8.131.52" 5433 /var/lib/pgsql/9.1/data_5433 1 0 "184.108.40.206" 0
>>> On the Postgres side, we get this log message:
>>> LOG: unexpected EOF on client connection
>>> Finally Java reports:
>>> org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
>>> Caused by: java.io.EOFException
>>> So, maybe we need to understand what is causing this EOF on client connection.
>>> Does PGPool drop the connection to Postgres for some reason?
>>> Is it a timeout that we need to configure? Is network latency a factor to this problem?
>>> Many thanks for your interest and you time.
>>> -----Original Message-----
>>> From: Tatsuo Ishii [mailto:ishii at postgresql.org]
>>> Sent: 09 November 2013 08:57
>>> To: Stelios Limnaios
>>> Cc: pgpool-general at pgpool.net
>>> Subject: Re: [pgpool-general: 2263] PGPool and backend CPU usage
>>> How do you set up pgpool replication? Are you using streaming replication of PostgreSQL?
>>> Tatsuo Ishii
>>> SRA OSS, Inc. Japan
>>> English: http://www.sraoss.co.jp/index_en.php
>>> Japanese: http://www.sraoss.co.jp
>>>> Hi all,
>>>> We're facing a situation that would like some expertise to be solved.
>>>> First of all, our setup consists of 3 databases Postgres 9.1, PGPool 3.2 STABLE checked out from the repository (rev 6dde744) and PGPoolAdmin 3.2.0.
>>>> We have setup replication and load balancing, along with scripts for online recovery.
>>>> The cluster works fine in terms of replicating and load balancing data.
>>>> But the servers host other applications besides the databases.
>>>> Some applications require a lot of CPU when running.
>>>> We found recently that when CPU load is high on a backend, PGPool fails to update the data on that server, which causes a 'kind mismatch among backends' situation.
>>>> I know that there is a replication_stop_on_mismatch parameter in pgpool.conf, but that will disconnect the database from the cluster and we will have to re-attach it (this could happen several times).
>>>> So I was wondering, if there is a more permanent workaround for this.
>>>> Is it something we can configure in PGPool?
>>>> Are there any minimum memory requirements that we could set to avoid this problem?
>>>> Would that be in Postgres?
>>>> Maybe the best option is to dedicate the server to run the database only (?).
>>>> Thanks in advance for your help and your time.