[pgpool-general: 5539] Re: Issue with parameterized query

Ankur Jain ankur.jain at obeohealth.com
Wed May 31 08:40:12 JST 2017


Hi Tatsuo,
I just tried the patch on 3.6.4 and it works. Thanks!
When do you think this will become available in the yum repositories?
-AJ

On Fri, May 26, 2017 at 1:32 PM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:

> I think I found the cause of the problem.
> It seems node.js sends a query in the following pattern:
>
> Parse
> Bind
> Describe
> Execute
> Flush
> Sync
>
> The "Flush" message is actually unnecessary: it just requests backend
> (in this case Pgpool-II) to send any result pending in the
> backend. However, which will be accomplished by the subsequent "Sync"
> message. So the Flush message is completely redundant here.
>
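> For illustration, a client-side call along these lines (a minimal sketch
> using the node.js 'pg' module; the table and value are only placeholders,
> and the exact messages emitted depend on the driver version) is what makes
> the driver use the extended query protocol instead of a single simple Query:
>
> // Minimal sketch: passing parameter values makes the 'pg' driver use the
> // extended query protocol (Parse/Bind/Describe/Execute/.../Sync) instead
> // of one simple Query message.
> var pg = require('pg');
> var client = new pg.Client();   // connection settings come from the PG* environment variables
>
> client.connect(function (err) {
>   if (err) throw err;
>   client.query('SELECT * FROM public.test WHERE id = $1', [1], function (err, result) {
>     if (err) throw err;
>     console.log(result.rows);
>     client.end();
>   });
> });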
> Anyway, this pattern was not anticipated by Pgpool-II.
> Can you please try the attached fix?
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
> > Hi Tatsuo,
> > Thanks for looking into it. I am just copying you and my team-mates
> > instead of spamming the whole group.
> >
> > I have attached more details; hopefully this helps you. I have attached a
> > node.js script (*pgpool_test.js.txt*; I had to add a .txt extension
> > because gmail would not allow me to attach a file with a .js extension)
> > which creates a table *public.test* with just one column (id) and one
> > row (id=1). It then runs "select * from public.test where id=1" twice
> > using a non-parameterized query and then again twice using a
> > parameterized query.
> >
> > As you can see in *output.log*, the table creation and the first two
> > non-parameterized queries run fine. The first parameterized query also
> > runs fine (because it wasn't cached). The second parameterized query
> > never returns and the node script just hangs.
> >
> > I am also attaching the pgpool output log (*pgpool.log*) and the TCP
> > packet capture (*dump.out*, viewable in Wireshark) that captured all the
> > TCP traffic between the client and the machine running pgpool. Let me
> > know if you need anything else.
> >
> > -AJ
> >
> > If you want to run the node.js script yourself, here are the instructions
> > to do that on a Linux machine (a minimal sketch of such a script follows
> > the list):
> >
> >    - Install node (the instructions would be machine specific)
> >    - npm install pg
> >    - npm install seq
> >    - PGPORT=<PGPOOL_PORT> PGDATABASE=<DATABASE_NAME> PGHOST=<PG_HOST_NAME>
> >      PGUSER=<PG_USERNAME> node pgpool_test.js
> >
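> > For reference, a stripped-down sketch of such a script (not the attached
> > pgpool_test.js; it uses only the 'pg' module and runs the statements one
> > after another) could look like this:
> >
> > // Minimal sketch: create public.test with one row, run the SELECT twice
> > // as a plain query, then twice as a parameterized query. With query
> > // caching enabled, the second parameterized run is the one that never
> > // returns.
> > var pg = require('pg');
> > var client = new pg.Client();   // connection settings come from the PG* environment variables
> >
> > var steps = [
> >   { text: 'CREATE TABLE IF NOT EXISTS public.test (id int)' },
> >   { text: 'INSERT INTO public.test VALUES (1)' },
> >   { text: 'SELECT * FROM public.test WHERE id = 1' },
> >   { text: 'SELECT * FROM public.test WHERE id = 1' },
> >   { text: 'SELECT * FROM public.test WHERE id = $1', values: [1] },
> >   { text: 'SELECT * FROM public.test WHERE id = $1', values: [1] }   // hangs here on a cache hit
> > ];
> >
> > function run(i) {
> >   if (i >= steps.length) { client.end(); return; }
> >   var done = function (err, result) {
> >     if (err) throw err;
> >     console.log('step ' + i + ':', result.rows);
> >     run(i + 1);
> >   };
> >   if (steps[i].values) {
> >     client.query(steps[i].text, steps[i].values, done);   // parameterized: extended query protocol
> >   } else {
> >     client.query(steps[i].text, done);                    // plain text: simple Query message
> >   }
> > }
> >
> > client.connect(function (err) {
> >   if (err) throw err;
> >   run(0);
> > });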
> >
> > On Fri, May 19, 2017 at 12:30 AM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
> >
> >> > Hi,
> >> > I am having an issue with using parameterized queries with pgpool. My
> >> > setup involves using a node.js client with pgpool (3.6.4) and
> >> > PostgreSQL 9.6. I am using pgpool just for caching, using shmem.
> >> >
> >> > I have a simple example with the following query "SELECT * FROM
> >> > master.employer where employer_id=$1". The query returns fine the
> >> > first time and then it gets cached. When I run it again it hits the
> >> > cache, but returns an empty resultset.
> >> >
> >> > Let me know if I can provide any other details. I have attached the
> >> > entire pgpool log.
> >>
> >> In the log I see:
> >>
> >> 2017-05-18 21:13:30: pid 22781: DEBUG:  memcache: sending cached messages: 'D' len: 4917
> >> 2017-05-18 21:13:30: pid 22781: DEBUG:  memcache: sending cached messages: 'C' len: 13
> >>
> >> which means Pgpool-II first sends 'D' (actual row data) in 4917 bytes
> >> and then 'C' (indicating the SELECT command completed). This is
> >> perfectly sane from the protocol's point of view. So as far as the log
> >> says, Pgpool-II successfully returns cached data to the client. The
> >> remaining possibility is that the returned data is broken. I don't know
> >> what kind of debug methods node.js has, but is it possible to
> >> capture/dump the packet data using Wireshark?
> >>
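> >> For what it's worth, once you have the raw bytes of the backend's
> >> responses (for example, exported from a Wireshark capture), a few lines
> >> of node.js can walk the protocol-v3 message stream and print each
> >> message type and length, which is the same information the DEBUG lines
> >> above show (a sketch; it assumes the buffer starts on a message boundary):
> >>
> >> // Sketch: each backend message is a 1-byte type followed by a 4-byte
> >> // big-endian length that includes itself but not the type byte.
> >> function dumpMessages(buf) {
> >>   var off = 0;
> >>   while (off + 5 <= buf.length) {
> >>     var kind = String.fromCharCode(buf[off]);
> >>     var len = buf.readInt32BE(off + 1);
> >>     console.log("kind '" + kind + "' len: " + len);
> >>     off += 1 + len;
> >>   }
> >> }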
> >> Best regards,
> >> --
> >> Tatsuo Ishii
> >> SRA OSS, Inc. Japan
> >> English: http://www.sraoss.co.jp/index_en.php
> >> Japanese:http://www.sraoss.co.jp
> >>
>
> diff --git a/src/query_cache/pool_memqcache.c b/src/query_cache/pool_memqcache.c
> index fdef8c1..b8a5a5a 100644
> --- a/src/query_cache/pool_memqcache.c
> +++ b/src/query_cache/pool_memqcache.c
> @@ -645,43 +645,9 @@ POOL_STATUS pool_fetch_from_memory_cache(POOL_CONNECTION *frontend,
>         pfree(qcache);
>
>         /*
> -        * If we are doing extended query, forward sync message from frontend to
> -        * backend. This is necessary to prevent receiving Sync message after
> -        * Sending Ready for query.
> +        * Send a "READY FOR QUERY" if not in extended query.
>          */
> -       if (pool_is_doing_extended_query_message())
> -       {
> -               char kind;
> -               int32 len;
> -               POOL_SESSION_CONTEXT *session_context;
> -               POOL_CONNECTION *target_backend;
> -               char buf[5];
> -
> -               if (pool_flush(frontend))
> -                       return POOL_END;
> -               if (pool_read(frontend, &kind, 1))
> -                       return POOL_END;
> -
> -               ereport(DEBUG2,
> -                               (errmsg("memcache: fetching from memory cache: expecting sync: kind '%c'", kind)));
> -               if (pool_read(frontend, &len, sizeof(len)))
> -                       return POOL_END;
> -
> -               /* Forward "Sync" message to backend */
> -               session_context = pool_get_session_context(true);
> -               target_backend = CONNECTION(backend, session_context->load_balance_node_id);
> -               pool_write(target_backend, &kind, 1);
> -               pool_write_and_flush(target_backend, &len, sizeof(len));
> -
> -               /* Read and discard "Ready for query" message from backend */
> -               pool_read(target_backend, &kind, 1);
> -               pool_read(target_backend, buf, sizeof(buf));
> -       }
> -
> -       /*
> -        * send a "READY FOR QUERY"
> -        */
> -       if (MAJOR(backend) == PROTO_MAJOR_V3)
> +       if (!pool_is_doing_extended_query_message() && MAJOR(backend) == PROTO_MAJOR_V3)
>         {
>                 signed char state;
>
> @@ -691,10 +657,7 @@ POOL_STATUS pool_fetch_from_memory_cache(POOL_CONNECTION *frontend,
>                 state = MASTER(backend)->tstate;
>                 send_message(frontend, 'Z', 5, (char *)&state);
>         }
> -       else
> -       {
> -               pool_write(frontend, "Z", 1);
> -       }
> +
>         if (pool_flush(frontend))
>         {
>                 return POOL_END;
>
>

