View Issue Details

ID: 0000438
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2018-11-20 11:39
Reporter: rafaelthca
Assigned To: pengbo
Status: assigned
Resolution: open
Product Version:
Target Version:
Fixed in Version:
Summary: 0000438: Memory consumption

I'm using 3.7.5. My scenario is that I have applications using pgpool and they never disconnect. Pgpool worker processes with high CPU time are consuming a lot of memory. Here is one example:

postgres 15329 4.9 13.9 1885164 1730852 ? S Oct11 419:56 pgpool: XXX XXX X.X.X.X(54197) idle

That machine has 12GB of RAM and the process's memory keeps increasing. Is this known behavior in pgpool, meaning we MUST disconnect from time to time to let pgpool recycle its workers, or could this be a memory leak?


Tags: No tags attached.



2018-10-19 10:38

reporter   ~0002217

It seems we have hit the same problem (Bug 436).
After investigation, we found that when too many stmt objects are created (with PHP and PDO, each query statement creates a stmt) in a long-lived session (that is, a connection that never disconnects, as you describe), the pgpool child process handling that connection consumes excessive memory and CPU.


2018-10-19 12:15

reporter   ~0002218


Thanks for that. Did you take any action to prevent it? The only thing I can think of is changing the applications to disconnect from time to time and configuring pgpool to recycle child processes after N connections, which can be done with the child_max_connections parameter.
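For reference, a minimal pgpool.conf fragment for that workaround (the value 1000 is only an illustrative number, not a recommendation; tune it to your workload):

```
# Terminate and respawn a pgpool child process after it has served
# this many client connections (0 means never recycle).
child_max_connections = 1000
```

Note this only helps if clients actually reconnect; a single session that never disconnects stays pinned to the same child process.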


2018-10-19 17:38

reporter   ~0002221

In our environment, a daemon process fetches data from a message queue and imports it into PostgreSQL.
The dataset is huge, so a connection that disconnects from time to time is not acceptable. We now use pgpool only to find the master PostgreSQL instance, and then connect to the master PostgreSQL server directly.


2018-10-22 09:54

developer   ~0002225

Could you provide a test program to reproduce this problem?


2018-10-24 19:30

reporter   ~0002228

Hello, you can use this test tool to reproduce the problem.
Usage of this tool:
  -d string
        database name (default "test")
  -h string
        host (default "localhost")
  -n int
        insert number (default 10000)
  -p string
        port (default "9999")
  -t string
        test type (default "good")
  -u string
        user (default "postgres")
  -w string
        passwd (default "postgres")

In particular, the -t parameter accepts "good" and "bad"; you can switch it to compare the two situations.
By the way, I used a flame graph tool to inspect the bad situation and found that three functions consume most of the CPU time: "can_query_context_destroy", "pool_remove_sent_message", and "pool_get_sent_message".

Thanks very much!

pgpool_in_bad_situation_flame.svg (133,529 bytes)
pgpool.go (3,497 bytes)
pgpool_windows.exe (4,944,384 bytes)
pgpool_linux (5,004,897 bytes)
pgpool_darwin (5,092,720 bytes)

Issue History

Date Modified Username Field Change
2018-10-19 05:18 rafaelthca New Issue
2018-10-19 10:38 hanyugang01 Note Added: 0002217
2018-10-19 12:15 rafaelthca Note Added: 0002218
2018-10-19 17:38 hanyugang01 Note Added: 0002221
2018-10-22 09:54 pengbo Note Added: 0002225
2018-10-24 19:30 hanyugang01 File Added: pgpool_darwin
2018-10-24 19:30 hanyugang01 File Added: pgpool_linux
2018-10-24 19:30 hanyugang01 File Added: pgpool_windows.exe
2018-10-24 19:30 hanyugang01 File Added: pgpool.go
2018-10-24 19:30 hanyugang01 File Added: pgpool_in_bad_situation_flame.svg
2018-10-24 19:30 hanyugang01 Note Added: 0002228
2018-11-20 11:39 pengbo Assigned To => pengbo
2018-11-20 11:39 pengbo Status new => assigned