[pgpool-committers: 5441] pgpool: Reduce memory usage when large data set is returned from backend

Tatsuo Ishii ishii at sraoss.co.jp
Tue Feb 5 21:18:03 JST 2019


Reduce memory usage when large data set is returned from backend.

In commit 8640abfc41ff06b1e6d31315239292f4d3d4191d,
pool_wait_till_ready_for_query() was introduced to read all messages
from the backend into a buffer until a "ready for query" message was
found, when the extended query protocol is used in streaming
replication mode. This could hit the memory allocation limit of
palloc(), which is 1GB.
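
For illustration, here is a minimal standalone sketch of that
pattern. It is not pgpool's actual code: the Message struct, the
read_backend_message() helper, the 128-byte row payload and the
20-million-row result size are all assumptions invented for this
example. It shows why appending every backend message to one growing
buffer until "ready for query" can push a single allocation past a
1GB cap like palloc()'s.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PALLOC_LIMIT ((size_t)1 << 30)  /* 1GB, palloc()'s per-allocation cap */

    /* Hypothetical stand-in for one protocol message from the backend. */
    typedef struct { char kind; size_t len; const char *data; } Message;

    /* Fake backend: emits DataRow ('D') messages, then "ready for query" ('Z'). */
    static void read_backend_message(Message *msg, long *rows_left)
    {
        static const char row[128] = "stand-in DataRow payload";

        if (*rows_left > 0)
        {
            (*rows_left)--;
            msg->kind = 'D'; msg->len = sizeof(row); msg->data = row;
        }
        else
        {
            msg->kind = 'Z'; msg->len = 5; msg->data = "ZIdle";
        }
    }

    int main(void)
    {
        long    rows_left = 20 * 1000 * 1000;   /* assumed large result set */
        size_t  used = 0, cap = 8192;
        char   *buf = malloc(cap);
        Message msg;

        for (;;)
        {
            read_backend_message(&msg, &rows_left);

            if (used + msg.len > cap)
            {
                while (used + msg.len > cap)
                    cap *= 2;
                if (cap > PALLOC_LIMIT)     /* where palloc() would raise an error */
                {
                    fprintf(stderr, "buffer would exceed the 1GB allocation limit\n");
                    free(buf);
                    return 1;
                }
                buf = realloc(buf, cap);
            }

            memcpy(buf + used, msg.data, msg.len);  /* whole result kept in memory */
            used += msg.len;

            if (msg.kind == 'Z')            /* stop at "ready for query" */
                break;
        }
        free(buf);
        return 0;
    }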

This can easily be reproduced using pgbench and pgproto, for
example:

    pgbench -i -s 100

    pgproto data:
    'P'     ""      "SELECT * FROM pgbench_accounts"        0
    'B'     ""      ""      0       0       0
    'E'     ""      0
    'S'
    'Y'
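
In the pgproto script above, 'P', 'B', 'E' and 'S' send the Parse,
Bind, Execute and Sync messages of the extended query protocol, and
'Y' makes pgproto read backend responses until a "ready for query"
message arrives, so the entire pgbench_accounts table (10 million
rows at scale factor 100) is returned in one portal execution.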

To reduce the memory usage, introduce a "suspend_reading_from_frontend"
flag in the session context so that Pgpool-II does not read any more
messages from the frontend after a Sync message is received. The flag
is turned off when a "ready for query" message is received from the
backend. In the meantime, Pgpool-II reads messages from the backend
and forwards them to the frontend as usual. This eliminates the need
to store messages from the backend in a buffer, thus reducing the
memory footprint.
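
The flow roughly looks like the sketch below. All identifiers here
(session_t, on_frontend_message(), on_backend_message()) are
hypothetical stand-ins invented for this illustration, not pgpool's
real API; only the suspend_reading_from_frontend flag corresponds to
the commit.

    /*
     * Sketch only: session_t and the two handlers are hypothetical
     * stand-ins, not pgpool's real identifiers.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        /* set after Sync is read from the frontend,
           cleared on "ready for query" from the backend */
        bool suspend_reading_from_frontend;
    } session_t;

    /* A message has arrived from the frontend. */
    static void on_frontend_message(session_t *s, char kind)
    {
        if (s->suspend_reading_from_frontend)
            return;                 /* do not consume further frontend messages */

        printf("forward '%c' to backend\n", kind);

        if (kind == 'S')            /* Sync: stop reading from the frontend */
            s->suspend_reading_from_frontend = true;
    }

    /* A message has arrived from the backend. */
    static void on_backend_message(session_t *s, char kind)
    {
        /* Forward immediately instead of buffering the whole result set,
         * so memory use stays bounded. */
        printf("forward '%c' to frontend\n", kind);

        if (kind == 'Z')            /* "ready for query": resume the frontend */
            s->suspend_reading_from_frontend = false;
    }

    int main(void)
    {
        session_t s = { false };

        /* Simulated exchange matching the pgproto script above. */
        on_frontend_message(&s, 'P');   /* Parse   */
        on_frontend_message(&s, 'B');   /* Bind    */
        on_frontend_message(&s, 'E');   /* Execute */
        on_frontend_message(&s, 'S');   /* Sync    */
        on_backend_message(&s, '1');    /* Parse complete  */
        on_backend_message(&s, '2');    /* Bind complete   */
        on_backend_message(&s, 'D');    /* DataRow(s), forwarded one by one */
        on_backend_message(&s, 'Z');    /* Ready for query */
        return 0;
    }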

Per bug 462.

Branch
------
V4_0_STABLE

Details
-------
https://git.postgresql.org/gitweb?p=pgpool2.git;a=commitdiff;h=b99e62d7988edd26d96eaad25e66873103285d4b

Modified Files
--------------
src/context/pool_session_context.c         | 32 +++++++++++++++++++++++++++++-
src/include/context/pool_session_context.h | 15 +++++++++++++-
src/protocol/pool_proto_modules.c          | 18 +++++++++++++----
3 files changed, 59 insertions(+), 6 deletions(-)


