[pgpool-general: 2166] pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)

Athanasios Kostopoulos athanasios.kostopoulos at classmarkets.com
Sat Oct 5 00:29:04 JST 2013


Dear list,
I have pgpool in front of two PostgreSQL 8.4.11 databases. When I try to add
a third node running 8.4.17 (all of them replicated using Slony), I get the
following in my logs:

Oct  4 17:07:05 www1e pgpool[16122]: send_failback_request: fail back 0 th node request from pid 16122
Oct  4 17:07:05 www1e pgpool[16122]: send_failback_request: fail back 1 th node request from pid 16122
Oct  4 17:07:05 www1e pgpool[16122]: send_failback_request: fail back 2 th node request from pid 16122
Oct  4 17:07:05 www1e pgpool[3095]: starting fail back. reconnect host 5.9.49.11(5432)
Oct  4 17:07:05 www1e pgpool[3095]: Restart all children
Oct  4 17:07:05 www1e pgpool[3095]: failover: set new primary node: -1
Oct  4 17:07:05 www1e pgpool[3095]: failover: set new master node: 0
Oct  4 17:07:05 www1e pgpool[3095]: failback done. reconnect host 5.9.49.11(5432)
Oct  4 17:07:05 www1e pgpool[16121]: worker process received restart request
Oct  4 17:07:05 www1e pgpool[26252]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:05 www1e pgpool[26249]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:06 www1e pgpool[16122]: pcp child process received restart request
Oct  4 17:07:06 www1e pgpool[3095]: worker child 16121 exits with status 256
Oct  4 17:07:06 www1e pgpool[3095]: fork a new worker child pid 26269
Oct  4 17:07:06 www1e pgpool[3095]: PCP child 16122 exits with status 256
Oct  4 17:07:06 www1e pgpool[3095]: fork a new PCP child pid 26270
Oct  4 17:07:06 www1e pgpool[26251]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:06 www1e pgpool[26257]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26256]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26255]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26241]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26245]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26250]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26237]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:07 www1e pgpool[26243]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:08 www1e pgpool[26242]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:08 www1e pgpool[26233]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:08 www1e pgpool[26253]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:08 www1e pgpool[26239]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:09 www1e pgpool[26246]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:10 www1e pgpool[26236]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:10 www1e pgpool[26231]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:11 www1e pgpool[26238]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:13 www1e pgpool[26234]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:15 www1e pgpool[26226]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:15 www1e pgpool[26244]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:17 www1e pgpool[26230]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:18 www1e pgpool[26229]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:18 www1e pgpool[26227]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:19 www1e pgpool[26235]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:26 www1e pgpool[26254]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:27 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:27 www1e pgpool[26232]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:27 www1e pgpool[26228]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:27 www1e pgpool[26240]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:28 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:28 www1e pgpool[26247]: pool_read_message_length2: message length (17) in slot 2 does not match with slot 0(27)
Oct  4 17:07:29 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:30 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:31 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:32 www1e pgpool[3095]: health check retry sleep time: 1 second(s)
Oct  4 17:07:33 www1e pgpool[3095]: set 2 th backend down status
Oct  4 17:07:33 www1e pgpool[3095]: starting degeneration. shutdown host 5.9.49.11(5432)
Oct  4 17:07:33 www1e pgpool[3095]: Restart all children
Oct  4 17:07:33 www1e pgpool[3095]: failover: set new primary node: -1
Oct  4 17:07:33 www1e pgpool[3095]: failover: set new master node: 0
Oct  4 17:07:33 www1e pgpool[3095]: failover done. shutdown host 5.9.49.11(5432)
Oct  4 17:07:33 www1e pgpool[26269]: worker process received restart request
Oct  4 17:07:34 www1e pgpool[26270]: pcp child process received restart request
Oct  4 17:07:34 www1e pgpool[3095]: worker child 26269 exits with status 256
Oct  4 17:07:34 www1e pgpool[3095]: fork a new worker child pid 26765
Oct  4 17:07:34 www1e pgpool[3095]: PCP child 26270 exits with status 256
Oct  4 17:07:34 www1e pgpool[3095]: fork a new PCP child pid 26766

... and of course the node is reported as down.
Any idea how to debug or fix this? I am new to pgpool, and the setup was
done by a different person who is no longer with us, so I do not have all
the details.
Is it possible to use pgpool with backends running different minor versions
of PostgreSQL?
Feel free to ask me for any further information.
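For what it's worth, the two lengths in the error are consistent with a
ParameterStatus ('S') message that differs between backends: in the
PostgreSQL v3 protocol its length field counts itself (4 bytes) plus the
NUL-terminated parameter name and value, so any setting whose value string
differs per backend changes the length. A small sketch (the TimeZone values
below are only a hypothetical illustration of how 17 vs 27 could arise, not
something confirmed from these logs):

```python
def parameter_status_length(name: str, value: str) -> int:
    """Length field of a PostgreSQL v3 ParameterStatus ('S') message:
    4 bytes for the length field itself plus two NUL-terminated strings."""
    return 4 + len(name.encode()) + 1 + len(value.encode()) + 1

# Hypothetical example: backends configured with different time zones
# would report ParameterStatus messages of exactly these lengths.
print(parameter_status_length("TimeZone", "UTC"))            # 17
print(parameter_status_length("TimeZone", "Europe/Berlin"))  # 27
```

So it may be worth comparing `SHOW timezone`, `SHOW server_version`, and the
other settings reported at connection start on all three backends before
blaming the minor-version difference itself.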

-- 
 

classmarkets GmbH | Schumannstraße 6 | 10117 Berlin | Deutschland
Tel: +49 (0)30 56 59 001-0 | Fax: +49 (0)30 56 59 001-99 | 
www.classmarkets.com

Amtsgericht Charlottenburg HRB 111815 B | USt-IdNr. DE 260731582
Geschäftsführer: Veit Mürz, Fabian Ströhle

This message (including all attachments) is confidential. It may be read,
copied, or used only by the intended recipient and addressee. If you have
received this message in error, please notify the sender immediately (by
reply e-mail) and delete the message. Any unauthorized use or disclosure of
the contents of this message, in whole or in part, is prohibited. Please
note that e-mail messages to the sender are not suitable for time-critical
communications; such communications must be sent exclusively by post or fax.

