[pgpool-general: 3329] Re: Master/Slave (stream) mode - Memory depletion

Tatsuo Ishii ishii at postgresql.org
Wed Dec 3 08:53:17 JST 2014


I was able to reproduce the problem with 3.4.0:

1) run pgbench -i
2) run pgbench -T 600 -S -c 1 -M extended test
3) run ps x as the pgpool user and find the pgpool child process bound
   to the pgbench session from step 2
4) watch that process's size, e.g. 'while true; do ps l 22942; sleep 1; done'
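
Put together, the test looks roughly like this (a sketch: it assumes
PGHOST/PGPORT point at pgpool rather than at PostgreSQL directly, and
22942 is the child PID found in step 3, which will differ elsewhere):

  # initialise the test database and drive it with read-only traffic
  pgbench -i test
  pgbench -T 600 -S -c 1 -M extended test &

  # watch the VSZ/RSS of the pgpool child serving that session
  while true; do ps l 22942; sleep 1; done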

In step 4, I see the process size increasing rapidly:
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
1  1000 22942 22432  20   0 5145776 5109900 -   S    pts/25     1:14 pgpool: t-i
1  1000 22942 22432  20   0 5170364 5134368 -   R    pts/25     1:15 pgpool: t-i
1  1000 22942 22432  20   0 5194952 5159100 -   S    pts/25     1:15 pgpool: t-i
1  1000 22942 22432  20   0 5227736 5187716 -   R    pts/25     1:16 pgpool: t-i
1  1000 22942 22432  20   0 5252324 5212448 -   S    pts/25     1:16 pgpool: t-i

Note that even if I remove the '-M extended' part (which selects the
extended query protocol, i.e. prepared statements), I still see the
memory usage growing. So it seems this has nothing to do with whether
prepared statements are used or not.
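
In other words, both of the following show the growth (again assuming
the connection goes through pgpool):

  pgbench -T 600 -S -c 1 test              # simple query protocol (pgbench default)
  pgbench -T 600 -S -c 1 -M extended test  # extended protocol (Parse/Bind/Execute)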

We will look into this.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Dear pgpool users
> 
> I'm running two pgpool-II 3.4.0 instances in master/slave streaming
> replication mode, with watchdog and virtual IP control enabled. The
> backend consists of two PostgreSQL 9.3.5 servers (one master and one
> slave). The frontend consists of two Wildfly 8.1.0 application servers
> with an xa-data-source configured, which connects to the VIP of the
> pgpool instances.
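> 
> For illustration, the key pgpool.conf parameters for this kind of setup
> look roughly as follows (placeholder values; the actual anonymised
> configuration is attached):
> 
>   backend_hostname0 = 'pg-master.example'   # placeholder backend hosts
>   backend_port0 = 5432
>   backend_hostname1 = 'pg-slave.example'
>   backend_port1 = 5432
>   master_slave_mode = on                    # master/slave mode ...
>   master_slave_sub_mode = 'stream'          # ... with streaming replication
>   use_watchdog = on                         # watchdog enabled
>   delegate_IP = '192.0.2.100'               # placeholder virtual IP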
> 
> After around two days, the memory of the active pgpool-II instance (the
> one holding the VIP) is depleted completely: all the pgpool-II
> processes together use around 6 GiB of memory, until the kernel's
> out-of-memory killer kicks in or the instance is stopped manually.
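> 
> For reference, the combined usage can be checked with something like
> this (a rough sketch; it assumes every pgpool process reports the
> command name 'pgpool', and summing RSS double-counts shared pages):
> 
>   ps -C pgpool -o rss= | awk '{ s += $1 } END { printf "%.1f MiB\n", s/1024 }'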
> 
> The applications running within the Wildfly application servers are
> proprietary, so I don't have access to the source code. What I can see,
> after enabling statement logging on the PostgreSQL server, is that the
> following query hits the master every two seconds from both servers:
> 
> postgres[20069]: [86-1] LOG:  execute <unnamed>: select user0_.id as
> id1_20_, user0_.company as company2_20_, user0_.created as created3_20_,
> user0_.credentials_id as credent10_20_, user0_.email as email4_20_,
> user0_.firstName as firstNam5_20_, user0_.lastModified as lastModi6_20_,
> user0_.mergedTo as mergedTo7_20_, user0_.name as name8_20_,
> user0_.organisation as organis11_20_, user0_.phone as phone9_20_ from
> bcUser user0_ where user0_.email=$1
> postgres[20069]: [86-2] DETAIL:  parameters: $1 = 'system at example.com'
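> 
> By statement logging I mean roughly the following on the master (a
> sketch; the exact settings and paths may differ):
> 
>   # append to postgresql.conf and reload: logs every statement,
>   # including the parameterised 'execute' messages shown above
>   echo "log_statement = 'all'" >> "$PGDATA/postgresql.conf"
>   pg_ctl -D "$PGDATA" reload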
> 
> The query is triggered by the HTTP load-balancer's alive-check, which
> is executed every two seconds against one of the applications running
> within the Wildfly servers.
> 
> Does anyone have an idea why pgpool is allocating all the memory, or how
> to further debug this?
> 
> It would also be very helpful if anyone with a similar working setup
> (Wildfly or JBoss) could share their data-source settings.
> 
> There was a similar thread (3162 - Memory leaks) on the mailing list
> around September [1].
> 
> 
> Attached you will find an anonymised pgpool-II and xa-data-source
> configuration; please let me know if you need more.
> 
> 
> Many thanks in advance
> Christian
> 
> 
> 
> [1]
> http://www.pgpool.net/pipermail/pgpool-general/2014-September/003204.html

