[Pgpool-general] Connection issues with pgpool-II

Daniel.Crespo at l-3com.com
Tue Jan 20 22:33:13 UTC 2009


I have some good news and some bad news from my testing of the 2.1 release
(the latest CVS version does not work for me, so maybe this helps a bit):
 
The good news:
---------------
The connection blocking I saw when a failover happened was caused by the
failover_command not returning (in pgpool.conf: failover_command =
'. failover_cmd $h %d &'). I replaced it with a wrapper script that calls the
intended command without the trailing &. That way, existing connections keep
working, although there is a short pause while the failover occurs, which is
not that bad.
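 
For reference, here is a minimal sketch of the kind of wrapper I mean (the
script name and paths are made up, and I'm assuming the usual %h/%d
placeholders; the real logic still lives in the original failover_cmd):
 
In pgpool.conf:
 
  failover_command = '/usr/local/bin/failover_wrapper.sh %h %d'
 
failover_wrapper.sh:
 
  #!/bin/sh
  # Arguments substituted by pgpool: %h = failed host name, %d = failed node id
  FAILED_HOST="$1"
  FAILED_NODE_ID="$2"
  # Run the real failover command in the foreground (no trailing '&') so this
  # wrapper only returns once the command has actually finished.
  /usr/local/bin/failover_cmd "$FAILED_HOST" "$FAILED_NODE_ID"
  exit 0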
 
The bad news:
--------------
When a failback happens, clients that are already connected block forever,
whether or not you have a failback_command. The ideal behavior would be for
existing connections to keep working without interruption. I found the code
below in pool_stream.c and added a call to child_exit(1) to see if I could at
least force the clients to exit and have them try reconnecting.
 
Inside both 'char *pool_read2(POOL_CONNECTION *cp, int len)' and
'int pool_read(POOL_CONNECTION *cp, void *buf, int len)':
 
[...]
 
else if (readlen == 0)
{
    if (cp->isbackend)
    {
        pool_error("pool_read2: EOF encountered with backend");
        child_exit(1); // *** Added this to force clients to exit ***
        return -1;
 
[...]
 
------------------------
This change works for me. It's not ideal, but at least connected clients no
longer block forever. Might there be a way to unblock the clients instead of
exiting them, so they can continue with their queries?
 
Thanks,
Daniel