[pgpool-general: 5652] Re: oom when memory_cache_enabled is on
Tatsuo Ishii
ishii at sraoss.co.jp
Fri Jul 28 17:40:03 JST 2017
> Hello, Tatsuo.
>
> Thank you very much!
> Does it mean that 3.6.4 is not affected by this bug?
Not sure. I just confirmed that 3.6.1 has a small leak (much smaller than the one in 3.6.5).
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
> You wrote on 28 July 2017 at 11:17:36:
>
>> I was able to confirm the memory leak with following conditions met:
>
>> 1) Pgpool-II 3.6.5 or newer
>> 2) streaming replication mode
>> 3) extended queries are used
>
>> I will look into this issue.
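[Editor's note: the three conditions above can be exercised together with pgbench, which can force the extended query protocol. A minimal sketch, assuming pgpool listens on port 9999 and a database named `bench` exists (both are assumptions, not from the thread):

```shell
# One-time initialization of the pgbench tables through pgpool.
pgbench -i -h localhost -p 9999 bench

# -M extended forces PARSE/BIND/EXECUTE instead of simple queries,
# matching condition 3); streaming replication mode is assumed to be
# configured in pgpool.conf (condition 2).
pgbench -h localhost -p 9999 -c 10 -T 300 -M extended bench
```

Watching the resident memory of the pgpool child processes while this runs should show steady growth if the leak is present.]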
>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>
>>> Hello.
>>>
>>> We started testing our project under heavy load and encountered an
>>> out-of-memory condition when pgpool runs with memory_cache_enabled = on,
>>> with both the shmem and memcached backends.
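[Editor's note: for reference, the in-memory query cache is controlled by the memqcache_* parameters in pgpool.conf; a sketch with illustrative values (the sizes are assumptions, not the reporter's settings):

```
# pgpool.conf — in-memory query cache
memory_cache_enabled = on
memqcache_method = 'shmem'          # or 'memcached'
memqcache_total_size = 67108864     # cache size in bytes (64MB here)
memqcache_max_num_cache = 1000000   # max number of cached entries
memqcache_expire = 0                # lifetime in seconds; 0 = no expiration
```

Note that memqcache_total_size bounds only the cache itself, so memory growth beyond it points at a leak rather than at cache sizing.]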
>>>
>>> Under simulated heavy load pgpool consumes all available memory (16Gb)
>>> in just a few minutes and then kernel kills it.
>>>
>>> I've found this old similar bug thread: http://www.pgpool.net/mantisbt/view.php?id=52
>>> and tried running pgpool under valgrind, but (perhaps due to the high
>>> number of pgpool child processes?) the system became almost unresponsive
>>> and I had to restart pgpool without valgrind.
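[Editor's note: one way to keep valgrind's overhead tractable is to temporarily shrink the process pool before tracing. A sketch, assuming pgpool can be stopped and that the config path matches your installation (the path is an assumption):

```shell
# In pgpool.conf, set num_init_children = 1 (or 2) so only a few
# children run under memcheck, and lower child_max_connections so
# children exit and report their leak summaries quickly.

# Run pgpool in the foreground (-n) under valgrind, following forks:
valgrind --leak-check=full --show-leak-kinds=all --trace-children=yes \
    pgpool -n -f /etc/pgpool2/pgpool.conf 2> /tmp/pgpool-valgrind.log
```

Leak summaries like the one quoted below are emitted when each traced process exits, so recycling children quickly matters more than total runtime.]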
>>>
>>> I'm attaching the pgpool log that was written during the short period of
>>> valgrind activity.
>>> It contains some records, but I have no experience in this area and
>>> cannot tell whether they indicate a problem. One of the
>>> records:
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== by 0x448063: save_ps_display_args (ps_status.c:173)
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== by 0x407F41: main (main.c:192)
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612==
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== LEAK SUMMARY:
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== definitely lost: 96 bytes in 1 blocks
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== indirectly lost: 343 bytes in 11 blocks
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== possibly lost: 0 bytes in 0 blocks
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== still reachable: 254,475 bytes in 3,102 blocks
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== suppressed: 0 bytes in 0 blocks
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== Reachable blocks (those to which a pointer was found) are not shown.
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== To see them, rerun with: --leak-check=full --show-leak-kinds=all
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612==
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== For counts of detected and suppressed errors, rerun with: -v
>>> Jul 26 09:27:59 ip-172-31-26-132 pgpool2[3707]: ==4612== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
>>>
>>>
>>>
>>> "ulimit -a" output for the postgres user (pgpool runs under this
>>> account):
>>> postgres at ip-172-31-26-132:~$ ulimit -a
>>> core file size (blocks, -c) 0
>>> data seg size (kbytes, -d) unlimited
>>> scheduling priority (-e) 0
>>> file size (blocks, -f) unlimited
>>> pending signals (-i) 64124
>>> max locked memory (kbytes, -l) 64
>>> max memory size (kbytes, -m) unlimited
>>> open files (-n) 10000
>>> pipe size (512 bytes, -p) 8
>>> POSIX message queues (bytes, -q) 819200
>>> real-time priority (-r) 0
>>> stack size (kbytes, -s) 8192
>>> cpu time (seconds, -t) unlimited
>>> max user processes (-u) 64124
>>> virtual memory (kbytes, -v) unlimited
>>> file locks (-x) unlimited
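[Editor's note: since the limits above are effectively unbounded for memory, they won't stop the growth; the kernel OOM killer intervenes first. A quick way to watch per-process growth while reproducing, using only standard tools (the one-second interval is arbitrary):

```shell
# Print total and per-process resident set size (KB) of all pgpool
# processes once per second; steadily climbing numbers under constant
# load indicate the leak.
while sleep 1; do
    ps -C pgpool -o pid=,rss=,args= \
        | awk '{ total += $2; print } END { print "total RSS KB:", total }'
done
```

Logging this alongside the test run makes it easy to correlate memory growth with query volume.]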
>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Pavel mailto:balroga3 at yandex.ru
>
>
>
> --
> Best regards,
> Pavel mailto:balroga3 at yandex.ru
>