[pgpool-general: 3348] Re: Master/Slave (stream) mode - Memory depletion

Christian Affolter c.affolter at stepping-stone.ch
Fri Dec 5 01:04:24 JST 2014


Hi

Thank you very much!

I've installed the current V3_4_STABLE Git branch on both nodes and
restarted the pgpool-II daemons. I will check the memory usage by
tomorrow and provide feedback.
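
In case it is useful to others, the build from that branch looks roughly
like this; the clone URL is assumed from the gitweb link quoted below and
the installation prefix is only an example:

    git clone http://git.postgresql.org/git/pgpool2.git
    cd pgpool2
    git checkout V3_4_STABLE
    # verify the memory-leak fix is present on the branch
    git log --oneline | grep 2636236
    # if the checkout does not ship a generated configure script,
    # run autoreconf -i first
    ./configure --prefix=/usr/local/pgpool2
    make && make install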

By the way, if any Gentoo users are on this list, I've created some new
(live) dev-db/pgpool2 ebuilds, which can be found on GitHub [1].

Thanks again and best regards
Christian


[1]
https://github.com/stepping-stone/sst-gentoo-overlay/tree/master/dev-db/pgpool2

On 04.12.2014 13:35, Muhammad Usama wrote:
> Hi
> 
> I have found the problem and pushed a fix for this memory leak to the
> master and 3.4 branches.
> 
> http://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=2636236af59b90f7e61054518607ca506bb50135
> 
> Thanks
> Kind regards,
> Muhammad Usama
> 
> On Wed, Dec 3, 2014 at 1:03 PM, Christian Affolter
> <c.affolter at stepping-stone.ch> wrote:
> 
>     Great, thanks a lot for looking into it.
> 
>     Regards
>     Christian
> 
>     On 03.12.2014 00:53, Tatsuo Ishii wrote:
>     > I was able to reproduce the problem with 3.4.0
>     >
>     > 1) run pgbench -i
>     > 2) run pgbench -T 600  -S -c 1 -M extended test
>     > 3) run ps x as the pgpool user and find the pgpool process which
>     > is bound to the pgbench session from step 2
>     > 4) run ps and watch the process size, e.g. 'while true; do ps l
>     > 22942; sleep 1; done'
>     >
>     > In #4, I see the process size increasing rapidly:
>     > F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
>     > 1  1000 22942 22432  20   0 5145776 5109900 -   S    pts/25     1:14 pgpool: t-i
>     > 1  1000 22942 22432  20   0 5170364 5134368 -   R    pts/25     1:15 pgpool: t-i
>     > 1  1000 22942 22432  20   0 5194952 5159100 -   S    pts/25     1:15 pgpool: t-i
>     > 1  1000 22942 22432  20   0 5227736 5187716 -   R    pts/25     1:16 pgpool: t-i
>     > 1  1000 22942 22432  20   0 5252324 5212448 -   S    pts/25     1:16 pgpool: t-i
>     >
>     > Note that even if I remove the '-M extended' part (which means using
>     > the extended protocol, i.e. prepared statements), I still see the
>     > memory usage growing. So it seems this has nothing to do with whether
>     > prepared statements are used or not.
>     >
>     > We will look into this.
>     >
>     > Best regards,
>     > --
>     > Tatsuo Ishii
>     > SRA OSS, Inc. Japan
>     > English: http://www.sraoss.co.jp/index_en.php
>     > Japanese: http://www.sraoss.co.jp
>     >
>     >> Dear pgpool users
>     >>
>     >> I'm running two pgpool-II 3.4.0 instances in master/slave streaming
>     >> replication mode, with the watchdog and virtual IP control enabled.
>     >> The backend consists of two PostgreSQL 9.3.5 servers (one master and
>     >> one slave). The frontend consists of two Wildfly 8.1.0 application
>     >> servers, each with an xa-data-source configured which connects to the
>     >> VIP of the pgpool instances.
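>     >>
>     >> For clarity, this corresponds to master_slave_mode = on together with
>     >> master_slave_sub_mode = 'stream', plus use_watchdog = on and a
>     >> delegate_IP holding the VIP. A quick way to double-check the values
>     >> (the configuration path is just an example):
>     >>
>     >>     grep -E 'master_slave_mode|master_slave_sub_mode|use_watchdog|delegate_IP' \
>     >>         /etc/pgpool2/pgpool.conf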
>     >>
>     >> After around two days, the memory of the active pgpool-II instance
>     >> (the one holding the VIP) gets depleted completely and all the
>     >> pgpool-II processes together use around 6 GiB of memory, until the
>     >> kernel's out-of-memory killer kicks in or one stops the instance
>     >> manually.
>     >>
>     >> The applications running within the Wildfly application servers are
>     >> proprietary, so I don't have access to the source code. What I can
>     >> see, after enabling statement logging on the PostgreSQL server, is
>     >> that the following queries hit the master every two seconds from
>     >> both servers:
>     >>
>     >> postgres[20069]: [86-1] LOG:  execute <unnamed>: select user0_.id as
>     >> id1_20_, user0_.company as company2_20_, user0_.created as created3_20_,
>     >> user0_.credentials_id as credent10_20_, user0_.email as email4_20_,
>     >> user0_.firstName as firstNam5_20_, user0_.lastModified as lastModi6_20_,
>     >> user0_.mergedTo as mergedTo7_20_, user0_.name as name8_20_,
>     >> user0_.organisation as organis11_20_, user0_.phone as phone9_20_ from
>     >> bcUser user0_ where user0_.email=$1
>     >> postgres[20069]: [86-2] DETAIL:  parameters: $1 = 'system at example.com'
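>     >>
>     >> For reference, enabling statement logging amounts to something like
>     >> the following sketch; the data directory path is only an example and
>     >> depends on the installation:
>     >>
>     >>     # log every statement, then reload the running server
>     >>     echo "log_statement = 'all'" >> /var/lib/postgresql/9.3/data/postgresql.conf
>     >>     pg_ctl -D /var/lib/postgresql/9.3/data reload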
>     >>
>     >> The queries are triggered by the HTTP load-balancer's alive-check,
>     >> which is executed every two seconds on one of the applications
>     >> running within the Wildfly servers.
>     >>
>     >> Does anyone have an idea why pgpool is allocating all the memory, or
>     >> how to debug this further?
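>     >>
>     >> For simply watching the growth, a ps-based sketch like this works
>     >> (the options assume Linux procps):
>     >>
>     >>     # per-process view of the pgpool children, largest RSS first
>     >>     ps -C pgpool -o pid,rss,vsz,cmd --sort=-rss
>     >>     # total resident memory of all pgpool processes, in MiB
>     >>     ps -C pgpool -o rss= | awk '{sum += $1} END {print sum/1024 " MiB"}'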
>     >>
>     >> It would also be very helpful if anyone with a similar working setup
>     >> (Wildfly or JBoss) could share their data-source settings.
>     >>
>     >> There was a similar thread (3162 - Memory leaks) on the mailing list
>     >> around September [1].
>     >>
>     >>
>     >> Attached you will find an anonymised pgpool-II and xa-data-source
>     >> configuration; please let me know if you need more.
>     >>
>     >>
>     >> Many thanks in advance
>     >> Christian
>     >>
>     >>
>     >>
>     >> [1]
>     >> http://www.pgpool.net/pipermail/pgpool-general/2014-September/003204.html



