View Issue Details

ID: 0000024
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2012-10-05 18:05
Reporter: singh.gurjeet
Assigned To: t-ishii
Priority: high
Severity: major
Reproducibility: always
Status: resolved
Resolution: fixed
OS: RHEL
OS Version: 5.6
Product Version:
Target Version:
Fixed in Version:
Summary: 0000024: Severe memory leak in an OLTP environment

Description:

pgpool-II 3.1

One of our customers observed suspicious memory leaks in a scenario like the following: memory usage keeps rising once they start an application and perform some report/OLTP transactions, and the memory is not released even after the transactions end.

They analyzed the process with valgrind and, among the many Leak Summary and Heap Summary entries in the log, found four cases of memory leak that look suspicious, shown below. Could you have a look at these cases?

A. extract_string calls strdup

==27927== 3 bytes in 1 blocks are possibly lost in loss record 3 of 100
==27927== at 0x4A05E1C: malloc (vg_replace_malloc.c:195)
==27927== by 0x3C51E798C1: strdup (in /lib64/libc-2.5.so)
==27927== by 0x40C9C4: extract_string (pool_config.l:2065)
==27927== by 0x410D52: pool_get_config (pool_config.l:1756)
==27927== by 0x4066B5: main (main.c:320)
==27927==

The call stack from the valgrind log above is:

main-> pool_get_config -> extract_string -> strdup -> malloc

strdup() is a basic C function that copies a string; it calls malloc() internally, and the caller is responsible for eventually releasing the copy, but here it apparently never is. This strdup() is called from pool_get_config() in pool_config.c, the function that reads the configuration file. The listen_addresses setting, for example, is kept in a strdup()'d string, and according to the customer that string may still be referenced right up until the pgpool-II service stops. This means the memory allocated via strdup()'s malloc() is never released.


B. read_startup_packet calls calloc

==27927== 594 (528 direct, 66 indirect) bytes in 11 blocks are definitely lost in loss record 85 of 100
==27927== at 0x4A05140: calloc (vg_replace_malloc.c:418)
==27927== by 0x40895B: read_startup_packet (child.c:803)
==27927== by 0x409663: do_child (child.c:210)
==27927== by 0x403F04: fork_a_child (main.c:1073)
==27927== by 0x406C00: main (main.c:550)
==27927==

The call stack from the valgrind log above is:

main-> fork_a_child -> do_child -> read_startup_packet -> calloc

From the stack above, read_startup_packet() calls calloc() to allocate memory. The main loop of do_child() calls read_startup_packet() on every iteration, but it appears that, because of the loop, the memory allocated by read_startup_packet() is never released and so accumulates.


C. save_ps_display_args() calls malloc()

==27927== 1,602 (256 direct, 1,346 indirect) bytes in 1 blocks are definitely lost in loss record 92 of 100
==27927== at 0x4A05E1C: malloc (vg_replace_malloc.c:195)
==27927== by 0x434605: save_ps_display_args (ps_status.c:173)
==27927== by 0x403ECA: fork_a_child (main.c:1066)
==27927== by 0x406C00: main (main.c:550)
==27927==

The call stack from the valgrind log above is:

main-> fork_a_child -> save_ps_display_args -> malloc

If num_init_children is set to 128 in the pgpool configuration file, save_ps_display_args() (code borrowed from PostgreSQL) will be called 128 times accordingly. This consumes some amount of memory, but by itself it is usual/normal behavior and not a problem.

But when a failover occurs, all processes are disconnected and reconnected; in this scenario 128 processes are created again, each allocating memory again. The memory from the previous generation should be treated as wasted.

It's not unusual to run pgpool-II uninterrupted for a long time, and in production there may be several failovers, each leading to further fork_a_child() calls. So this could be one source of the memory leak.


D. extract_string_tokens() calls malloc()

==27927== 24,630 (24,576 direct, 54 indirect) bytes in 3 blocks are definitely lost in loss record 100 of 100
==27927== at 0x4A05E1C: malloc (vg_replace_malloc.c:195)
==27927== by 0x40C8FC: extract_string_tokens (pool_config.l:2191)
==27927== by 0x41177B: pool_get_config (pool_config.l:1657)
==27927== by 0x4066B5: main (main.c:320)
==27927==

The call stack from the valgrind log above is:

main-> pool_get_config -> extract_string_tokens -> malloc

This is similar to case A: malloc() is called by extract_string_tokens(), which is in turn called by pool_get_config() in pool_config.c while reading the configuration file settings. The information read is stored in the memory allocated by extract_string_tokens(), and that memory cannot simply be released, because the configuration values may be read at any point before pgpool-II stops.
Steps To Reproduce:

We are able to reproduce the issue at our end. The test case is:

-bash-4.1$ valgrind --tool=memcheck --leak-check=full --track-origins=yes --leak-resolution=high --undef-value-errors=yes bin/pgpool -f etc/pgpool.conf -n -d

==5655==
==5655== HEAP SUMMARY:
==5655== in use at exit: 65,519 bytes in 146 blocks
==5655== total heap usage: 428 allocs, 282 frees, 88,373 bytes allocated
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 3 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410D52: pool_get_config (pool_config.l:1756)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 4 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x41032C: pool_get_config (pool_config.l:1180)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 5 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x4103B6: pool_get_config (pool_config.l:1200)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 6 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410549: pool_get_config (pool_config.l:1254)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 7 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x4105D3: pool_get_config (pool_config.l:1274)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are possibly lost in loss record 8 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x41065D: pool_get_config (pool_config.l:1294)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 3 bytes in 1 blocks are definitely lost in loss record 9 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40F990: pool_get_config (pool_config.l:864)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 4 bytes in 1 blocks are possibly lost in loss record 10 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40E492: pool_get_config (pool_config.l:411)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 6 bytes in 1 blocks are possibly lost in loss record 11 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x411100: pool_get_config (pool_config.l:1887)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 7 bytes in 1 blocks are possibly lost in loss record 12 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40E468: pool_get_config (pool_config.l:454)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 7 bytes in 1 blocks are possibly lost in loss record 13 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40E864: pool_get_config (pool_config.l:472)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 7 bytes in 1 blocks are possibly lost in loss record 14 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40FE05: pool_get_config (pool_config.l:998)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 7 bytes in 1 blocks are possibly lost in loss record 15 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40EBCA: pool_get_config (pool_config.l:592)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 17 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40EC82: pool_get_config (pool_config.l:611)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 18 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40EDDA: pool_get_config (pool_config.l:649)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 19 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40FD0F: pool_get_config (pool_config.l:958)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 20 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410A05: pool_get_config (pool_config.l:1447)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 21 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410B42: pool_get_config (pool_config.l:1485)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are possibly lost in loss record 22 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x41048B: pool_get_config (pool_config.l:1234)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 9 bytes in 1 blocks are definitely lost in loss record 23 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40ED7E: pool_get_config (pool_config.l:630)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 11 bytes in 1 blocks are possibly lost in loss record 24 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410217: pool_get_config (pool_config.l:1140)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 11 bytes in 1 blocks are possibly lost in loss record 25 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410BCC: pool_get_config (pool_config.l:1504)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 11 bytes in 1 blocks are possibly lost in loss record 26 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410074: pool_get_config (pool_config.l:1086)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 12 bytes in 1 blocks are possibly lost in loss record 27 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x4107A7: pool_get_config (pool_config.l:1395)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 12 bytes in 1 blocks are possibly lost in loss record 28 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410912: pool_get_config (pool_config.l:1415)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 15 bytes in 1 blocks are possibly lost in loss record 30 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410167: pool_get_config (pool_config.l:1120)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 15 bytes in 1 blocks are possibly lost in loss record 31 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40FFEA: pool_get_config (pool_config.l:1066)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 17 bytes in 1 blocks are possibly lost in loss record 44 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40FAF2: pool_get_config (pool_config.l:895)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 17 bytes in 1 blocks are possibly lost in loss record 45 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x410AB7: pool_get_config (pool_config.l:1466)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 18 bytes in 1 blocks are possibly lost in loss record 46 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40F0EE: pool_get_config (pool_config.l:668)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 21 bytes in 1 blocks are possibly lost in loss record 49 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x4102A2: pool_get_config (pool_config.l:1160)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 21 bytes in 1 blocks are definitely lost in loss record 50 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x40F75C: pool_get_config (pool_config.l:839)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 24 bytes in 2 blocks are definitely lost in loss record 51 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x411262: pool_get_config (pool_config.l:1528)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 40 bytes in 2 blocks are definitely lost in loss record 61 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x41175D: pool_get_config (pool_config.l:1649)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 65 bytes in 2 blocks are definitely lost in loss record 63 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x30A2E7FB41: strdup (in /lib64/libc-2.12.so)
==5655== by 0x40C9C4: extract_string (pool_config.l:2065)
==5655== by 0x4116A7: pool_get_config (pool_config.l:1625)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== 16,420 (16,384 direct, 36 indirect) bytes in 2 blocks are definitely lost in loss record 83 of 83
==5655== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==5655== by 0x40C8FC: extract_string_tokens (pool_config.l:2191)
==5655== by 0x41177B: pool_get_config (pool_config.l:1657)
==5655== by 0x4066B5: main (main.c:320)
==5655==
==5655== LEAK SUMMARY:
==5655== definitely lost: 16,546 bytes in 11 blocks
==5655== indirectly lost: 36 bytes in 2 blocks
==5655== possibly lost: 270 bytes in 29 blocks
==5655== still reachable: 48,667 bytes in 104 blocks
==5655== suppressed: 0 bytes in 0 blocks
==5655== Reachable blocks (those to which a pointer was found) are not shown.
==5655== To see them, rerun with: --leak-check=full --show-reachable=yes
==5655==
==5655== For counts of detected and suppressed errors, rerun with: -v
==5655== ERROR SUMMARY: 36 errors from 36 contexts (suppressed: 6 from 6)
Additional Information:

We initially treated this as a low-priority bug, citing the following reason:

<Robert Haas>
"valgrind doesn't produce entirely accurate results, especially on programs like pgpool which have their own memory management routines (see parser/memnodes.h). However, some of these may be real leaks - for example, it does look like there are some real leaks involving extract_string. However, since extract_string is only used when reading the configuration file, these leaks will not consume any significant amount of memory unless the configuration file is reloaded a huge number of times, making it not really a problem in practice.

Still, we have raised a low-priority request with our development team to work with the pgpool community to improve these leaks, and the FogBugz number is XXXX. We would consider this a high priority only if enough memory is leaked in a real-world scenario to materially impact the functioning of the system"
</Robert Haas>

We later received an update from the customer saying that they noticed leaks of 10 GB in an 8-hour period. So we now consider this a high-priority bug, and wish to work with the community to resolve it.
Tags: No tags attached.

Activities

singh.gurjeet

2012-08-28 03:45

reporter  

valgrind.log.gz (40,684 bytes)

t-ishii

2012-08-28 07:50

developer   ~0000060

The comment from Robert Haas is correct. Your analysis does not seem to show any evidence of a real memory leak.

About your customer's report: I need more details, namely the configuration (pgpool.conf) and the valgrind results after an 8-hour run (or at least a run long enough for the memory leak to be observed).

arshu arora

2012-08-29 16:58

reporter  

pgpool.conf.tar.gz (4,072 bytes)

arshu arora

2012-08-29 17:00

reporter  

valgrind.log.tar.gz (31,212 bytes)

arshu arora

2012-08-29 17:08

reporter   ~0000069

Below is the information requested:

- val.log.27681 (valgrind log captured while the memory leak was occurring) (attached)
- pgpool_ps_20120828_01.log (taken while valgrind was being executed). As the file is larger than 2097K, we have uploaded it at the link below:

http://uploads.enterprisedb.com/download.php?file=bd3d686fc3395bec9992d246018f41a9

- pgpool.conf (customer's configuration)(attached)

t-ishii

2012-08-29 19:09

developer   ~0000070

Thanks. I would like to see the source code around pool_process_query.c:2566, where malloc is called. I'm asking because your pgpool seems to differ from the community version of pgpool-II 3.1.

==27681== at 0x4A05E1C: malloc (vg_replace_malloc.c:195)
==27681== by 0x413D5F: do_query (pool_process_query.c:2566)
==27681== by 0x4352A3: pool_search_relcache (pool_relcache.c:170)
==27681== by 0x43C9F0: rewrite_timestamp (pool_timestamp.c:407)
==27681== by 0x44244F: Parse (pool_proto_modules.c:738)
==27681== by 0x443559: ProcessFrontendResponse (pool_proto_modules.c:2081)
==27681== by 0x417ACC: pool_process_query (pool_process_query.c:345)
==27681== by 0x409871: do_child (child.c:354)
==27681== by 0x403F04: fork_a_child (main.c:1073)
==27681== by 0x406C00: main (main.c:550)

singh.gurjeet

2012-09-05 03:42

reporter  

singh.gurjeet

2012-09-05 03:49

reporter   ~0000074

Hi,

    My preliminary investigation shows that line 2566 in pool_process_query.c is this line in the do_query() function, under the switch case 'D' (data row processing):

    res->data[num_data] = malloc(len + 1);

I have attached a screenshot of the relevant section of this file for your reference.

Best regards,

singh.gurjeet

2012-09-05 05:37

reporter   ~0000075

Last edited: 2012-09-05 06:05

View 2 revisions

The problem seems to be that the callers of do_query() do not call 'free_select_result(res)' if do_query() call fails (that is, if do_query() does not return POOL_CONTINUE (0)).

And, the return statements in do_query() that exit the function with error conditions do not care to free the locally allocated 'POOL_SELECT_RESULT *res' structure and its members.

So, if a do_query() call fails after allocating the res struct and some of its members, then all the memory allocated for the result descriptor POOL_SELECT_RESULT is leaked.

I believe the fix is to replace every return statement in do_query() with a macro that calls free_select_result(res) before exiting the function.

#define RETURN(EXIT_CODE) \
    do { \
        if (EXIT_CODE != POOL_CONTINUE) \
            free_select_result(res); \
        return EXIT_CODE; \
    } while(0)

And then replace every occurrence of 'return POOL_XYZ;' in do_query() with 'RETURN(POOL_XYZ);'.

This fix also requires that the code that allocates the attribute info under "case 'T'" be modified such that the free()'ing code can cope with the struct if only some of the fields were successfully allocated. That is, this line:

rowdesc->num_attrs = num_fields;

should be converted to

rowdesc->num_attrs += 1;

after this line:

attrinfo->attrname = malloc(len);


PS: I don't have the exact version of the code that was compiled and shipped with pgpool-II 3.1 along with Postgres Plus Advanced Server 9.1. I will review the code again when I get access to that version of pgpool source code.

t-ishii

2012-09-05 08:57

developer  

free_select_result.patch (696 bytes)
diff --git a/pool_process_query.c b/pool_process_query.c
index 1afc52a..ecaf6e9 100644
--- a/pool_process_query.c
+++ b/pool_process_query.c
@@ -2372,17 +2372,23 @@ POOL_STATUS OneNode_do_command(POOL_CONNECTION *frontend, POOL_CONNECTION *backe
  */
 void free_select_result(POOL_SELECT_RESULT *result)
 {
-	int i;
+	int i, j;
+	int index;
 
 	if (result->nullflags)
 		free(result->nullflags);
 
 	if (result->data)
 	{
+		index = 0;
 		for(i=0;i<result->numrows;i++)
 		{
-			if (result->data[i])
-				free(result->data[i]);
+			for (j=0;j<result->rowdesc->num_attrs;j++)
+			{
+				if (result->data[index])
+					free(result->data[index]);
+				index++;
+			}
 		}
 		free(result->data);
 	}

t-ishii

2012-09-05 08:58

developer   ~0000076

Thanks for the info. With the valgrind source location info you provided, I have identified a memory leak in free_select_result(). It misunderstands the POOL_SELECT_RESULT structure, which keeps row data in its "data" field as a one-dimensional array: if do_query() gets 2 rows with 3 attributes each, the array is [row 0 attribute 0][row 0 attribute 1][row 0 attribute 2][row 1 attribute 0][row 1 attribute 1][row 1 attribute 2]. However, free_select_result() treats it as [whole row 0 data][whole row 1 data]. As a result it frees only 2 entries, while it should free 6. The attached patch should fix the problem. Please try it.

BTW, your analysis of the do_query() error case might be worth considering, but I don't think that's your customer's case, since your test did not result in the do_query() error path (correct me if I am wrong). So I don't want to risk making a patch for it for now.

singh.gurjeet

2012-09-05 11:48

reporter   ~0000077

Aha! I didn't catch that glaring bug. I was treating 'res->numrows++' and 'num_data++' as equivalent and was going to complain about why we have two variables recording the same info. Now I see that they actually track different things: res->numrows is the number of rows processed, and num_data is the number of cells (the cross-section of a row and a column) allocated. I blame deeply nested, multi-page code blocks for misleading me :) It was a cursory look at the code for leaks before I got hold of the actual version of the code being used at the customer's installation.

I agree this is quite a serious bug, and there's a high possibility that our customer is encountering this bug. But I'll let the tests with a patched pgpool prove that.

An OLTP-like environment combined with a high setting of 'child_max_connections = 1000' may be one reason why we could see this leak balloon so much that it became obvious to the observer. In other setups, a lower value of child_max_connections would lead child processes to exit early, and so the leaked memory would soon vanish with the process's death.

IMHO, the leaks in do_query() on error conditions also need to be addressed nonetheless.

t-ishii

2012-09-05 13:43

developer   ~0000078

I think the most significant factor in the memory leak problem is long-running client connections. If clients stay connected to pgpool for hours, the leak accumulates quickly. I believe that's the case for your customer. As far as I know, most pgpool users use the child_life_time or child_max_connections parameter to allow pgpool child processes to restart, which avoids memory leak problems.

Also please note that, for an unintended reason, the leak does not appear in modes other than replication mode: most calls to do_query() expect one row with one attribute returned, so the leak does not actually show up. In replication mode, however, the timestamp rewriting routine calls do_query() expecting several fields to be returned.

In summary, long-running client connections and replication mode are the key factors that trigger the memory leak. I think that's the reason we didn't hear complaints from the field (the bug has been there since 2009, when pgpool-II 2.3 was released).

BTW, I think the do_query() error conditions you are talking about do not cause a memory leak, since execution ends up here anyway (in child.c):

        /* error occured. discard backend connection pool
           and disconnect connection to the frontend */
        case POOL_ERROR:
            pool_log("do_child: exits with status 1 due to error");
            child_exit(1);

child_exit() calls exit() and the process terminates, so its memory is freed by the kernel.

arshu arora

2012-09-10 17:04

reporter   ~0000082

I had a look at the pgpool.conf file (attached) which was sent by the user: child_life_time is set to 300, which is its default value, and child_max_connections, whose default is 0, is set to 1000.

Could you please advise whether we should lower these values.

arshu arora

2012-09-10 17:05

reporter  

pgpool.conf (19,801 bytes)

t-ishii

2012-09-10 19:37

developer   ~0000083

These parameters have no effect if client connections last for hours, as in your customer's case. So I think you don't need to touch them.

singh.gurjeet

2012-09-27 16:03

reporter   ~0000093

I have found a memory leak that our customer is facing. When I run the testDsgwInsert.pgc ECPG program against pgpool, I can see these two strings leaking:

"tbl_com" (quotes included)
pooldb

These strings leak as often as 2000 to 3000 times per minute. I was able to ascertain this by attaching to the pgpool backend with gdb, generating a core file with the generate-core-file command, running the `strings` utility on the resulting core file, and then processing the strings output with this command chain:

cat strings.out | sort | uniq -c | sort -g | tail

I was able to track down the cause of the first leak ("tbl_com"), and am attaching a proposed fix (fix_memory_leak.patch). The trouble seems to be that the get_insert_command_table_name() function uses session_context->memory_context to perform nodeToString(), which allocates a new string "tbl_com" for the table name; and since session_context->memory_context is not released until the frontend disconnects, this allocation shows up as a leak on a long-running frontend connection.

The fix I have devised is to create a free*() function to free the string generated by get_insert_command_table_name(), and call this new function from the callers of get_insert_command_table_name() before the caller returns. This makes sure that the memory gets freed immediately.

The patch also moves a small piece of code in insert_lock() function to avoid a potential crash if table == NULL. The code with the comment /* trim quotes */ tries to use the return value of get_insert_command_table_name() without checking if the returned pointer is NULL.

I haven't been able to find the source of 'pooldb' leak yet. But I have a strong suspicion that this string is part of the string (sql_stmt_cm_insspooldb) the program uses for the name of the prepared statement. I haven't yet been lucky to find this leak's location.

PS: Please note that the .pgc file uses PPAS (EDB)-specific syntax, so it very likely won't run as-is on Postgres. But I am assuming the program can be made to work with Postgres, if really needed.

PPS: For some reason valgrind's memcheck and massif tools did not help in catching the leaks even when I ran them for 20+ minutes with all the child processes being traced simultaneously. Perhaps there's something about pgpool's memory allocator that confuses the valgrind tools, or maybe I haven't yet grasped how to use these tools correctly.

singh.gurjeet

2012-09-27 16:08

reporter  

testDsgwInsert.pgc (2,255 bytes)

singh.gurjeet

2012-09-27 16:09

reporter  

fix_memory_leak.patch (4,236 bytes)
diff --git a/pool_process_query.c b/pool_process_query.c
index a163089..0307dbe 100644
--- a/pool_process_query.c
+++ b/pool_process_query.c
@@ -66,6 +66,7 @@
 
 static int reset_backend(POOL_CONNECTION_POOL *backend, int qcnt);
 static char *get_insert_command_table_name(InsertStmt *node);
+static void free_insert_command_table_name(char *ptr);
 static int send_deallocate(POOL_CONNECTION_POOL *backend, POOL_SENT_MESSAGE_LIST msglist, int n);
 static int is_cache_empty(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend);
 static POOL_STATUS ParallelForwardToFrontend(char kind, POOL_CONNECTION *frontend, POOL_CONNECTION *backend, char *database, bool send_to_frontend);
@@ -2962,6 +2963,7 @@ int need_insert_lock(POOL_CONNECTION_POOL *backend, char *query, Node *node)
 		if (relcache == NULL)
 		{
 			pool_error("need_insert_lock: pool_create_relcache error");
+			free_insert_command_table_name(str);
 			return false;
 		}
 	}
@@ -2976,6 +2978,7 @@ int need_insert_lock(POOL_CONNECTION_POOL *backend, char *query, Node *node)
 #else
 	result = pool_search_relcache(relcache, backend, str)==0?0:3;
 #endif
+	free_insert_command_table_name(str);
 	return result;
 }
 
@@ -3023,8 +3026,15 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		return POOL_CONTINUE;
 
 	/* get table name */
+	pool_log("insert_lock() calling get_insert_command_table_name()");
 	table = get_insert_command_table_name(node);
 
+	/* could not get table name. probably wrong SQL command */
+	if (table == NULL)
+	{
+		return POOL_CONTINUE;
+	}
+
 	/* trim quotes */
 	p = table;
 	for (i=0; *p; p++)
@@ -3034,12 +3044,6 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 	}
 	rel_name[i] = '\0';
 
-	/* could not get table name. probably wrong SQL command */
-	if (table == NULL)
-	{
-		return POOL_CONTINUE;
-	}
-
 	/* table lock for insert target table? */
 	if (lock_kind == 1)
 	{
@@ -3071,6 +3075,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 			if (relcache == NULL)
 			{
 				pool_error("insert_lock: pool_create_relcache error");
+				free_insert_command_table_name(table);
 				return false;
 			}
 		}
@@ -3082,6 +3087,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (adsrc == NULL)
 		{
 			/* could not get adsrc */
+			free_insert_command_table_name(table);
 			return POOL_CONTINUE;
 		}
 		pool_debug("adsrc: %s", adsrc);
@@ -3089,6 +3095,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (regcomp(&preg, "nextval\\(+'(.+)'", REG_EXTENDED|REG_NEWLINE) != 0)
 		{
 			pool_error("insert_lock: regex compile failed");
+			free_insert_command_table_name(table);
 			return POOL_END;
 		}
 
@@ -3107,6 +3114,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (len == 0)
 		{
 			pool_error("insert_lock: regex no match: pg_attrdef is %s", adsrc);
+			free_insert_command_table_name(table);
 			return POOL_END;
 		}
 
@@ -3191,6 +3199,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 
 	if (status == POOL_END)
 	{
+		free_insert_command_table_name(table);
 		return POOL_END;
 	}
 	else if (status == POOL_DEADLOCK)
@@ -3223,11 +3232,13 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 
 			if (status != POOL_CONTINUE)
 			{
+				free_insert_command_table_name(table);
 				return POOL_END;
 			}
 		}
 	}
 
+	free_insert_command_table_name(table);
 	return POOL_CONTINUE;
 }
 
@@ -3451,6 +3462,26 @@ static char *get_insert_command_table_name(InsertStmt *node)
 	return table;
 }
 
+static void free_insert_command_table_name(char *ptr)
+{
+	POOL_SESSION_CONTEXT *session_context;
+	POOL_MEMORY_POOL *old_context = pool_memory;
+
+	session_context = pool_get_session_context();
+	if (!session_context)
+		return;
+
+	if (session_context->query_context)
+		pool_memory = session_context->query_context->memory_context;
+	else
+		pool_memory = session_context->memory_context;
+
+	pfree(ptr);
+
+	pool_memory = old_context;
+}
+
+
 /* judge if this is a DROP DATABASE command */
 int is_drop_database(Node *node)
 {

singh.gurjeet

2012-09-27 16:11

reporter  

fix_memory_leak_v2.patch (4,140 bytes)
diff --git a/pool_process_query.c b/pool_process_query.c
index a163089..fa8c810 100644
--- a/pool_process_query.c
+++ b/pool_process_query.c
@@ -66,6 +66,7 @@
 
 static int reset_backend(POOL_CONNECTION_POOL *backend, int qcnt);
 static char *get_insert_command_table_name(InsertStmt *node);
+static void free_insert_command_table_name(char *ptr);
 static int send_deallocate(POOL_CONNECTION_POOL *backend, POOL_SENT_MESSAGE_LIST msglist, int n);
 static int is_cache_empty(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend);
 static POOL_STATUS ParallelForwardToFrontend(char kind, POOL_CONNECTION *frontend, POOL_CONNECTION *backend, char *database, bool send_to_frontend);
@@ -2962,6 +2963,7 @@ int need_insert_lock(POOL_CONNECTION_POOL *backend, char *query, Node *node)
 		if (relcache == NULL)
 		{
 			pool_error("need_insert_lock: pool_create_relcache error");
+			free_insert_command_table_name(str);
 			return false;
 		}
 	}
@@ -2976,6 +2978,7 @@ int need_insert_lock(POOL_CONNECTION_POOL *backend, char *query, Node *node)
 #else
 	result = pool_search_relcache(relcache, backend, str)==0?0:3;
 #endif
+	free_insert_command_table_name(str);
 	return result;
 }
 
@@ -3025,6 +3028,12 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 	/* get table name */
 	table = get_insert_command_table_name(node);
 
+	/* could not get table name. probably wrong SQL command */
+	if (table == NULL)
+	{
+		return POOL_CONTINUE;
+	}
+
 	/* trim quotes */
 	p = table;
 	for (i=0; *p; p++)
@@ -3034,12 +3043,6 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 	}
 	rel_name[i] = '\0';
 
-	/* could not get table name. probably wrong SQL command */
-	if (table == NULL)
-	{
-		return POOL_CONTINUE;
-	}
-
 	/* table lock for insert target table? */
 	if (lock_kind == 1)
 	{
@@ -3071,6 +3074,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 			if (relcache == NULL)
 			{
 				pool_error("insert_lock: pool_create_relcache error");
+				free_insert_command_table_name(table);
 				return false;
 			}
 		}
@@ -3082,6 +3086,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (adsrc == NULL)
 		{
 			/* could not get adsrc */
+			free_insert_command_table_name(table);
 			return POOL_CONTINUE;
 		}
 		pool_debug("adsrc: %s", adsrc);
@@ -3089,6 +3094,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (regcomp(&preg, "nextval\\(+'(.+)'", REG_EXTENDED|REG_NEWLINE) != 0)
 		{
 			pool_error("insert_lock: regex compile failed");
+			free_insert_command_table_name(table);
 			return POOL_END;
 		}
 
@@ -3107,6 +3113,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 		if (len == 0)
 		{
 			pool_error("insert_lock: regex no match: pg_attrdef is %s", adsrc);
+			free_insert_command_table_name(table);
 			return POOL_END;
 		}
 
@@ -3191,6 +3198,7 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 
 	if (status == POOL_END)
 	{
+		free_insert_command_table_name(table);
 		return POOL_END;
 	}
 	else if (status == POOL_DEADLOCK)
@@ -3223,11 +3231,13 @@ POOL_STATUS insert_lock(POOL_CONNECTION *frontend, POOL_CONNECTION_POOL *backend
 
 			if (status != POOL_CONTINUE)
 			{
+				free_insert_command_table_name(table);
 				return POOL_END;
 			}
 		}
 	}
 
+	free_insert_command_table_name(table);
 	return POOL_CONTINUE;
 }
 
@@ -3451,6 +3461,26 @@ static char *get_insert_command_table_name(InsertStmt *node)
 	return table;
 }
 
+static void free_insert_command_table_name(char *ptr)
+{
+	POOL_SESSION_CONTEXT *session_context;
+	POOL_MEMORY_POOL *old_context = pool_memory;
+
+	session_context = pool_get_session_context();
+	if (!session_context)
+		return;
+
+	if (session_context->query_context)
+		pool_memory = session_context->query_context->memory_context;
+	else
+		pool_memory = session_context->memory_context;
+
+	pfree(ptr);
+
+	pool_memory = old_context;
+}
+
+
 /* judge if this is a DROP DATABASE command */
 int is_drop_database(Node *node)
 {

singh.gurjeet

2012-09-27 16:16

reporter   ~0000094

I have attached an updated version of the patch: fix_memory_leak_v2.patch. The previous patch had a log line left over from my debugging sessions.

t-ishii

2012-09-27 20:37

developer   ~0000095

I wonder why the query_context was not created by the time get_insert_command_table_name() is called. In any case, if the query_context was not set, the memory allocated by get_insert_command_table_name() will indeed not be freed until the session ends. However, the fix you propose looks like overkill to me. The function is called in only two places: one caller copies the allocated string immediately, and the other only needs to keep the string alive across a single function call. So I would like to fix the problem in the following way: add a static buffer to get_insert_command_table_name(), copy the result of nodeToString() into that buffer, and then free the string returned by nodeToString() immediately. I'm going to create a patch and attach it soon.

Regarding your concern about insert_lock(), you are correct. Thanks for the patch.

t-ishii

2012-09-27 21:48

developer   ~0000096

I have committed and pushed to branches master through V3_0_STABLE. For the V3_1_STABLE diff, please look at:
http://git.postgresql.org/gitweb/?p=pgpool2.git;a=commitdiff;h=2922349e07f3c585814d7d1f7557101544219e56

singh.gurjeet

2012-09-27 22:38

reporter   ~0000097

I believe the static buffer in get_insert_command_table_name() should be NAMEDATALEN+1, or at least take NAMEDATALEN into account somehow, because if someone compiles pgpool with NAMEDATALEN > 128, bad things can happen.

I haven't looked at 3.2 code yet, but I believe this patch should be applied to all supported branches.

PS: I would have liked to see some attribution, like we do in Postgres. Also, a mention about the fact that we moved some code to avoid a crash.

t-ishii

2012-09-28 06:40

developer   ~0000098

I have committed to all supported branches. Regarding NAMEDATALEN, the problem is that there's no reliable way to know how long it is in pgpool's client and/or PostgreSQL. You would have a similar problem even without pgpool: it is even possible for NAMEDATALEN to differ between a PostgreSQL client and the PostgreSQL server if they were compiled differently. So I think we effectively cannot change NAMEDATALEN to anything other than 63.

Regarding the string "pooldb", I suspect the leak is caused by pool_create_sent_message() in pool_session_context.c, because it's related to prepared statement names and the memory context used is the session context.

singh.gurjeet

2012-09-28 10:10

reporter   ~0000099

Last edited: 2012-09-28 10:17

View 3 revisions

Hi Tatsuo,

    I have tracked down the second leak too. Attaching the patch (fix_second_memory_leak.patch) for the fix.

    Apparently running massif for 20 minutes wasn't enough for this leak to surface anywhere in the report. I ran valgrind's massif for 3 hours this time, and it pointed to the stack below. This stack, which didn't even show up in the 20-minute runs, was the only one in the 3-hour run's report; the other stacks' allocations paled in comparison.

 90 52,383,985,818 15,116,288 14,888,280 228,008 0
98.49% (14,888,280B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->97.68% (14,764,992B) 0x46717F: pool_memory_strdup (pool_memory.c:307)
|-->97.68% (14,764,992B) 0x449304: pool_create_sent_message (pool_session_context.c:515)
|---->97.68% (14,764,992B) 0x447966: Parse (pool_proto_modules.c:741)
|------>97.68% (14,764,992B) 0x4482F2: ProcessFrontendResponse (pool_proto_modules.c:2112)
|-------->97.68% (14,764,992B) 0x4180AC: pool_process_query (pool_process_query.c:372)
|---------->97.68% (14,764,992B) 0x40B340: do_child (child.c:358)
|------------>97.68% (14,764,992B) 0x406563: fork_a_child (main.c:1106)
|-------------->97.68% (14,764,992B) 0x404A5A: main (main.c:550)
|
->00.82% (123,288B) in 1+ places, all below ms_print's threshold (01.00%)

    Although this stack points to pool_create_sent_message() as the culprit, just as you suspected, debugging the allocations and frees showed that these objects were being promptly returned to the freelist on session_context->memory_context. Tracking the allocations further showed that the same block of memory was handed out from the freelist when get_insert_command_table_name() -> nodeToString() -> init_string() was called for the next query.

    And init_string() makes two allocations: the String object itself, and String->data. Yesterday's fix freed String->data, but the String object itself is left behind, and that is what causes the leak.

    The "pooldb" string was in fact part of the name of the cursor (sql_stmt_cm_insspooldb), and hence part of the contents of POOL_SENT_MESSAGE->name, but as explained above, that buffer was promptly freed. A subsequent allocation of the same buffer as a String object overwrites the first 16 bytes of the block, which is why only the trailing string "pooldb" shows up in the leak.

For the archives, this is how I ran the valgrind-massif tool:

valgrind --tool=massif --alloc-fn=pool_memory_alloc --ignore-fn=pool_create_sent_message --ignore-fn=pool_create_relcache inst/bin/pgpool -f ../pgpool_memory_leak/pgpool.conf -F ../pgpool_memory_leak/pcp.conf -n

I have also attached massif's output to this bug report (massif.out.17603). One needs to use the ms_print utility to view this file's contents.

PS: Attribution in commit message please (for 2 leaks and crash-fix, if project policy allows).

singh.gurjeet

2012-09-28 10:13

reporter  

fix_second_memory_leak.patch (336 bytes)
diff --git a/parser/outfuncs.c b/parser/outfuncs.c
index 117ab9e..1c4361f 100644
--- a/parser/outfuncs.c
+++ b/parser/outfuncs.c
@@ -5611,8 +5611,12 @@ char *
 nodeToString(void *obj)
 {
 	String *str;
+	char *ret;
 
 	str = init_string("");
 	_outNode(str, obj);
-	return str->data;
+
+	ret = str->data;
+	pfree(str);
+	return ret;
 }

singh.gurjeet

2012-09-28 10:14

reporter  

massif.out.17603 (55,019 bytes)

t-ishii

2012-09-28 11:00

developer   ~0000100

Good catch! I will test your patch and commit it as soon as possible.

Issue History

Date Modified Username Field Change
2012-08-28 03:43 singh.gurjeet New Issue
2012-08-28 03:45 singh.gurjeet File Added: valgrind.log.gz
2012-08-28 07:50 t-ishii Note Added: 0000060
2012-08-28 07:50 t-ishii Assigned To => t-ishii
2012-08-28 07:50 t-ishii Status new => assigned
2012-08-29 16:58 arshu arora File Added: pgpool.conf.tar.gz
2012-08-29 17:00 arshu arora File Added: valgrind.log.tar.gz
2012-08-29 17:08 arshu arora Note Added: 0000069
2012-08-29 19:09 t-ishii Note Added: 0000070
2012-09-05 03:42 singh.gurjeet File Added: pool_process_query_line2566.png
2012-09-05 03:49 singh.gurjeet Note Added: 0000074
2012-09-05 05:37 singh.gurjeet Note Added: 0000075
2012-09-05 06:05 singh.gurjeet Note Edited: 0000075 View Revisions
2012-09-05 08:57 t-ishii File Added: free_select_result.patch
2012-09-05 08:58 t-ishii Note Added: 0000076
2012-09-05 11:48 singh.gurjeet Note Added: 0000077
2012-09-05 13:43 t-ishii Note Added: 0000078
2012-09-10 17:04 arshu arora Note Added: 0000082
2012-09-10 17:05 arshu arora File Added: pgpool.conf
2012-09-10 19:37 t-ishii Note Added: 0000083
2012-09-27 16:03 singh.gurjeet Note Added: 0000093
2012-09-27 16:08 singh.gurjeet File Added: testDsgwInsert.pgc
2012-09-27 16:09 singh.gurjeet File Added: fix_memory_leak.patch
2012-09-27 16:11 singh.gurjeet File Added: fix_memory_leak_v2.patch
2012-09-27 16:16 singh.gurjeet Note Added: 0000094
2012-09-27 20:37 t-ishii Note Added: 0000095
2012-09-27 21:48 t-ishii Note Added: 0000096
2012-09-27 22:38 singh.gurjeet Note Added: 0000097
2012-09-28 06:40 t-ishii Note Added: 0000098
2012-09-28 10:10 singh.gurjeet Note Added: 0000099
2012-09-28 10:12 singh.gurjeet Note Edited: 0000099 View Revisions
2012-09-28 10:13 singh.gurjeet File Added: fix_second_memory_leak.patch
2012-09-28 10:14 singh.gurjeet File Added: massif.out.17603
2012-09-28 10:17 singh.gurjeet Note Edited: 0000099 View Revisions
2012-09-28 11:00 t-ishii Note Added: 0000100
2012-10-05 18:05 t-ishii Status assigned => resolved
2012-10-05 18:05 t-ishii Resolution open => fixed