[pgpool-general: 3330] Re: Master/Slave (stream) mode - Memory depletion

Joe Schaefer joesuf4 at gmail.com
Wed Dec 3 10:55:42 JST 2014


I can confirm this experience with long-lived connections.
My workaround was to ensure I had no long-lived clients
and to configure the pools to take a max of 10K connections.
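
In pgpool.conf terms, that kind of cap would look roughly like the sketch
below (an assumption on my part that child_max_connections is the knob meant
here; client_idle_limit is shown only as one way of keeping clients from
staying connected forever):

    # recycle each pgpool child process after it has served this many
    # connections, so whatever memory it accumulated is returned to the OS
    child_max_connections = 10000
    # optionally disconnect clients that stay idle this many seconds,
    # to avoid long-lived sessions (0 disables the check)
    client_idle_limit = 300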

On Tue, Dec 2, 2014 at 6:53 PM, Tatsuo Ishii <ishii at postgresql.org> wrote:

> I was able to reproduce the problem with 3.4.0:
>
> 1) run pgbench -i
> 2) run pgbench -T 600  -S -c 1 -M extended test
> 3) run ps x as the pgpool user and find the pgpool process which is bound
> to the pgbench session from #2
> 4) run ps and watch the process size like 'while true; do ps l 22942;
> sleep 1; done'
>
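
Put together, those four steps can be scripted roughly as follows (a sketch:
it assumes pgbench is pointed at pgpool, e.g. via -h/-p, that the database is
'test' as in step 2, and that 22942 stands in for whatever child PID step 3
turns up):

    # 1) initialise the pgbench tables
    pgbench -i test
    # 2) one client, SELECT-only, extended protocol, 10 minutes (backgrounded)
    pgbench -T 600 -S -c 1 -M extended test &
    # 3) as the pgpool user, find the pgpool child serving that session
    ps x | grep 'pgpool:'
    # 4) watch that child's size once a second
    while true; do ps l 22942; sleep 1; done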
> In #4, I see the process size increasing rapidly:
> F   UID   PID  PPID PRI  NI     VSZ     RSS WCHAN STAT TTY      TIME COMMAND
> 1  1000 22942 22432  20   0 5145776 5109900 -     S    pts/25   1:14 pgpool: t-i
> 1  1000 22942 22432  20   0 5170364 5134368 -     R    pts/25   1:15 pgpool: t-i
> 1  1000 22942 22432  20   0 5194952 5159100 -     S    pts/25   1:15 pgpool: t-i
> 1  1000 22942 22432  20   0 5227736 5187716 -     R    pts/25   1:16 pgpool: t-i
> 1  1000 22942 22432  20   0 5252324 5212448 -     S    pts/25   1:16 pgpool: t-i
>
> Note that even if I remove the '-M extended' part (which means using the
> extended protocol, i.e. prepared statements), I still see the memory usage
> growing. So it seems this has nothing to do with whether prepared
> statements are used or not.
>
> We will look into this.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
> > Dear pgpool users
> >
> > I'm running two pgpool-II 3.4.0 instances in master/slave streaming
> > replication mode, with the watchdog and virtual IP control enabled. The
> > backend consists of two PostgreSQL 9.3.5 servers (one master and one
> > slave). On the frontend, two Wildfly 8.1.0 application servers each have
> > an xa-data-source configured, which connects to the VIP of the pgpool
> > instances.
> >
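
For such a setup, the relevant pgpool.conf pieces would look roughly like the
sketch below (pgpool-II 3.4 parameter names; hosts and addresses are
placeholders, not the poster's actual values):

    # master/slave mode on top of streaming replication
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
    # the two PostgreSQL 9.3 backends
    backend_hostname0 = 'pg-master.example.com'
    backend_port0 = 5432
    backend_hostname1 = 'pg-slave.example.com'
    backend_port1 = 5432
    # watchdog with the virtual IP shared by the two pgpool nodes
    use_watchdog = on
    delegate_IP = '192.0.2.10'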
> > After around two days, the memory on the active pgpool-II instance (the
> > one holding the VIP) is depleted completely: all the pgpool-II processes
> > together use around 6 GiB of memory, until the kernel's out-of-memory
> > killer kicks in or one stops the instance manually.
> >
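
That aggregate figure can be tracked with a loop along these lines (assuming
a Linux host with procps); it sums the resident set size, in kB, of all
pgpool processes once a minute:

    # timestamp plus the summed RSS (kB) of every process named "pgpool"
    while true; do
        echo -n "$(date +%T) "
        ps -C pgpool -o rss= | awk '{s+=$1} END {print s " kB"}'
        sleep 60
    done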
> > The applications running within the Wildfly application servers are
> > proprietary, so I don't have access to the source code. What I see, after
> > turning on statement logging on the PostgreSQL server, is that the
> > following queries hit the master every two seconds from both servers:
> >
> > postgres[20069]: [86-1] LOG:  execute <unnamed>: select user0_.id as
> > id1_20_, user0_.company as company2_20_, user0_.created as created3_20_,
> > user0_.credentials_id as credent10_20_, user0_.email as email4_20_,
> > user0_.firstName as firstNam5_20_, user0_.lastModified as lastModi6_20_,
> > user0_.mergedTo as mergedTo7_20_, user0_.name as name8_20_,
> > user0_.organisation as organis11_20_, user0_.phone as phone9_20_ from
> > bcUser user0_ where user0_.email=$1
> > postgres[20069]: [86-2] DETAIL:  parameters: $1 = 'system at example.com'
> >
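
Statement logging of that sort can be switched on in postgresql.conf with
either of these standard settings (which one the poster used isn't stated):

    log_statement = 'all'              # log every statement
    log_min_duration_statement = 0     # or: log every statement with its duration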
> > The queries are triggered by the HTTP load-balancer's alive-check, which
> > is executed every two seconds against one of the applications running
> > within the Wildfly servers.
> >
> > Does anyone have an idea why pgpool is allocating all the memory, or how
> > to further debug this?
> >
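
One fairly low-impact way to gather more data is to watch a single growing
child's memory map over time, roughly like this (a sketch; <PID> stands for
one of the pgpool child processes, found as in Tatsuo's step 3 above):

    # append a timestamped memory-map summary ("total kB ...") once a minute
    while true; do
        date +%T
        pmap -x <PID> | tail -n 1
        sleep 60
    done >> pgpool-child-mem.log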
> > It would also be very helpful if anyone having a similar working setup
> > (Wildfly or JBoss) could share the data-source settings.
> >
> > There was a similar thread (3162 - Memory leaks) on the mailing list
> > around September [1].
> >
> >
> > Attached you will find an anonymised pgpool-II and xa-data-source
> > configuration; please let me know if you need more.
> >
> >
> > Many thanks in advance
> > Christian
> >
> >
> >
> > [1] http://www.pgpool.net/pipermail/pgpool-general/2014-September/003204.html
> _______________________________________________
> pgpool-general mailing list
> pgpool-general at pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-general
>