[pgpool-general: 6927] Re: Pgpool postgresql - Idle connections close problem
radoslaw.szczygiel at interia.pl
Tue Mar 10 17:04:55 JST 2020
Continued problem: I have 2 clusters, each 2x pgpool + 2x postgres (streaming replication): one production and one test (the test machine has much weaker hardware). Their configurations differ only in the connection limits:

Production:
num_init_children = 300
max_pool = 5
child_life_time = 600
child_max_connections = 1510
connection_life_time = 500
client_idle_limit = 300

Test:
num_init_children = 50
max_pool = 4
child_life_time = 600
child_max_connections = 205
connection_life_time = 500
client_idle_limit = 300

I ran more tests with the Python script from my previous message; here are the results:

1. I increased the number of inserts to 40. The script, run against the test cluster, created about 8-10 connections on the PG master server (pg_stat_activity). I noticed it created the same number of connections on the PG slave server (inserts??). Running the script again did not create more connections; it only reused the existing ones (I understand that this is how the connection cache works).

2. I increased the number of inserts to 40 and ran the script multiple times against the production cluster. It kept creating new connections on the PG master server, reaching 100 or more (I was afraid I would exceed some limit and trigger a failover, so I stopped running the script). The same thing happened on the PG slave server. In this case the connection cache was not used, or I misunderstand something about how it works (as a reminder, the production cluster has higher num_init_children and max_connections).

Conclusion: I don't know why connection_cache works well on the test cluster but I can't see it working in production. I suspect it may be related to num_init_children. The production server has on average about 700-800 connections to the PG master and a similar number of connections to the PG slave (load balancing works well). I still don't understand how best to tune the num_init_children and max_pool parameters.
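[Editorial note] For context, pgpool's documented sizing rule is that each of the num_init_children child processes may cache up to max_pool backend connections, keyed by the (database, user) pair, so each PostgreSQL backend must be able to accept their product. A sketch of what the production values quoted in this thread imply (the settings are the poster's; the arithmetic is an editorial addition):

```
# pgpool.conf (production values quoted in this thread)
num_init_children = 300   # concurrent client sessions / pgpool child processes
max_pool = 5              # cached backend connections per child, keyed by (database, user)

# Worst case on each PostgreSQL backend:
#   num_init_children * max_pool = 300 * 5 = 1500 connections
# so postgresql.conf should allow at least:
#   max_connections >= 1500 + superuser_reserved_connections
```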
The production cluster serves a lot of connections (not necessarily simultaneous).

I also noticed, as I mentioned in previous messages, that on the PG servers there are IDLE connections much older than the timeouts configured in pgpool to close them (some even 1 or 2 days old).

Regards
Radoslaw Szczygiel

Subject: [pgpool-general: 6915] Re: Pgpool postgresql - Idle connections close problem
Date: 2020-03-05 12:37
From: "Radosław Szczygieł" <radoslaw.szczygiel at interia.pl>
To: "Bo Peng" <pengbo at sraoss.co.jp>
Cc: "pgpool-general at pgpool.net" <pgpool-general at pgpool.net>

This is a simple Python script that performs an INSERT 10 times, opening a new connection each time, so every run of the script makes 10 connections to postgres through pgpool. I thought that with pgpool connection pooling, these new client connections would reuse cached backend connections rather than open fresh ones.

script:

#!/usr/bin/python
import psycopg2

for x in range(10):
    # connection settings and establishing the connection
    conn = psycopg2.connect("host=pgpool port=9999 dbname=test user=test password=test")
    cur = conn.cursor()
    sql = "INSERT INTO test1 values (1,1)"
    try:
        cur.execute(sql)
        conn.commit()
    except psycopg2.Error:
        print("ERROR - failed to insert: %s" % sql)
    else:
        print("Inserted record: %s" % sql)
    conn.close()

--
Regards
Radosław Szczygieł

Subject: Re: [pgpool-general: 6901] Pgpool postgresql - Idle connections close problem
Date: 2020-03-05 11:37
From: "Bo Peng"
To: "Radosław Szczygieł"
Cc: pgpool-general at pgpool.net

> Hello,
> I want to reproduce this issue.
> Could you share the test program?
> On Wed, 04 Mar 2020 10:43:56 +0100
> Radosław Szczygieł wrote:
> > Hi Pgpool team,
> >
> > I have a strange problem with hanging connections in postgresql, or there is something I don't know. On postgres, in pg_stat_activity, I see many idle connections with "query_start" and "state_change" times from the previous day, but they should be closed after 500 seconds (pgpool.conf setting).
> >
> > I made a test: a simple Python script with a loop that connects (using psycopg2), inserts, and closes the connection. Each run of the script creates a lot of postgres connections and does not close them. I understand that the "connection cache" is working, but according to the pgpool documentation, a new connection to the same database with the same username and password should reuse the existing connection??
> >
> > Did I misunderstand something about connection pooling? Why are these old connections in postgresql, idle longer than the times specified in pgpool.conf, not closed??
> >
> > My cluster is: 2x pgpool 4.1 (watchdog) + 2x postgresql 12 (master-slave replication)
> >
> > pgpool.conf:
> >
> > # - Concurrent session and pool size -
> > num_init_children = 300        # Number of concurrent sessions allowed
> >                                # (change requires restart)
> > max_pool = 5                   # Number of connection pool caches per connection
> >                                # (change requires restart)
> >
> > # - Life time -
> > child_life_time = 600          # Pool exits after being idle for this many seconds
> > child_max_connections = 1510   # Pool exits after receiving that many connections
> >                                # 0 means no exit
> > connection_life_time = 500     # Connection to backend closes after being idle for this many seconds
> >                                # 0 means no close
> > client_idle_limit = 300        # Client is disconnected after being idle for that many seconds
> >                                # (even inside an explicit transactions!)
> >                                # 0 means no disconnection
> >
> > Regards
> > Radoslaw Szczygiel
> >
> > --
> > Regards
> > Radosław Szczygieł
> Bo Peng
> SRA OSS, Inc. Japan
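[Editorial note] The connection-cache behavior discussed in this thread can be pictured as a small per-child lookup table keyed on (database, user). The sketch below is a toy model, not pgpool code; the class and all names in it are invented for illustration:

```python
# Toy model of pgpool's per-child connection cache: each pgpool child
# process keeps up to max_pool backend connections, keyed by (database, user).
class ChildConnectionCache:
    def __init__(self, max_pool):
        self.max_pool = max_pool
        self.cache = {}      # (database, user) -> backend connection id
        self.opened = 0      # how many real backend connections were opened

    def connect(self, database, user):
        key = (database, user)
        if key in self.cache:
            return self.cache[key]                   # cache hit: reuse the backend connection
        if len(self.cache) >= self.max_pool:
            self.cache.pop(next(iter(self.cache)))   # evict the oldest slot
        self.opened += 1
        self.cache[key] = "backend-conn-%d" % self.opened
        return self.cache[key]

child = ChildConnectionCache(max_pool=4)
for _ in range(10):              # 10 client sessions with the same db/user
    child.connect("test", "test")
print(child.opened)              # only 1 backend connection was opened
```

Because every pgpool child keeps its own cache, reuse only happens when the same child accepts the new client session. With num_init_children = 300, successive sessions are typically accepted by different children, each opening its own backend connection, which would be consistent with seeing far more backend connections on the production cluster than on the 50-child test cluster.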