View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000250 | Pgpool-II | Bug | public | 2016-09-23 22:59 | 2017-08-29 09:37 |
| Reporter | stivi21 | Assigned To | Muhammad Usama | ||
| Priority | high | Severity | major | Reproducibility | always |
| Status | closed | Resolution | open | ||
| OS | Ubuntu | OS Version | 14.04 | ||
| Product Version | 3.5.4 | ||||
| Summary | 0000250: Improper behaviour of Pgpool-II when one cluster node is disconnected by a downed interface | ||||
| Description | I am running Pgpool-II 3.5.4 plus all changes up to today from git: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b306e04dd5c4a6c95cc5d97e0203e280d3983cb4 with PostgreSQL 9.4. My configuration (in this case): Server1 (usc-sprod-pg-00, 10.1.0.19) runs Pgpool and PostgreSQL (slave); Server2 (usc-sprod-pg-01, 10.1.0.23) runs Pgpool and PostgreSQL (master); Server3 (usc-sprod-pgq-00, 10.1.0.20) runs Pgpool only. Replication: streaming. All servers are in a Google Cloud environment. | ||||
| Steps To Reproduce | When a server goes down, or the server keeps running but the PostgreSQL instance is killed, everything works as expected. The problem appears when the server stays up but I run "ifdown eth0" or "ip link set dev eth0 down" on it, or drop all of its traffic on the firewall. In this example I ran "ip link set dev eth0 down" on Server2 (the PostgreSQL master), so the old slave was promoted to new master; but when the old master came back, Pgpool saw two masters. After detaching the old master (on Server2) and running the script to fail over from the old master to the new master, the first Pgpool (on Server1) still redirected queries to the second PostgreSQL (on Server2), so read/write queries failed. Restarting Pgpool fixed the problem. Timeline of events: 12:03:00 - I ran "ip link set dev eth0 down" on Server2; 12:06:34 - I force-restarted the Server2 instance. | ||||
| Additional Information | There are also other problems related to the interface-down scenario, when a slave goes down. I wrote about them here: http://www.sraoss.jp/pipermail/pgpool-general/2016-September/005071.html The problem only occurs when the interface goes down or the firewall blocks traffic; previously I had tested only by stopping the service or the whole server. The 'socket read failed with an error "Connection reset by peer"' errors come from haproxy, so they are not a problem. | ||||
| Tags | No tags attached. | ||||
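For context, the three-node layout described in the report is typically wired together with pgpool.conf entries like the following. This is a minimal sketch for Server1 using only the hostnames and IPs quoted above; all parameter values (ports, watchdog settings) are assumptions, not taken from the attached pgpool2.conf:

```
# Sketch of pgpool.conf on Server1 (usc-sprod-pg-00, 10.1.0.19).
# Backends: Server2 is the PostgreSQL master, Server1 the slave.
backend_hostname0 = '10.1.0.23'        # Server2, PostgreSQL master
backend_port0 = 5432
backend_hostname1 = '10.1.0.19'        # Server1, PostgreSQL slave
backend_port1 = 5432

# Watchdog cluster spanning the three Pgpool nodes.
use_watchdog = on
wd_hostname = '10.1.0.19'
wd_port = 9000
other_pgpool_hostname0 = '10.1.0.23'   # Pgpool on Server2
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = '10.1.0.20'   # Pgpool on Server3
other_pgpool_port1 = 9999
other_wd_port1 = 9000
```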
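The reproduction steps above boil down to cutting the NIC on the master and bringing it back a few minutes later. The helper below sketches that sequence; the script itself, its DRY_RUN guard, and the variable names are my additions, not part of the report. By default it only prints what it would do; run with DRY_RUN=0 as root on the master node to actually cut the interface:

```shell
#!/bin/sh
# Hypothetical repro helper for the scenario in the report above.
# DRY_RUN=1 (the default) only echoes the commands instead of running them.
IFACE="${IFACE:-eth0}"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 12:03:00 in the report: cut the interface on the master (Server2).
run ip link set dev "$IFACE" down
# ...observe: the standby is promoted, then the old master reappears
# and Pgpool sees two masters.
# 12:06:34 in the report: the instance was force-restarted to recover;
# restoring the link models the old master coming back.
run ip link set dev "$IFACE" up
```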
There was a similar issue I was working on, and I have pushed a fix for it: http://git.postgresql.org/gitweb?p=pgpool2.git;a=commitdiff;h=2997b3a40877aa0f04f4acd080126aad8255ed1c With this commit, Pgpool-II keeps the backend states in sync among all Pgpool-II nodes that are part of the same watchdog cluster, and it should also solve the problem you were facing. Thanks
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2016-09-23 22:59 | stivi21 | New Issue | |
| 2016-09-23 22:59 | stivi21 | File Added: pgpool2.conf | |
| 2016-09-23 23:00 | stivi21 | File Added: Server1.log | |
| 2016-09-23 23:00 | stivi21 | File Added: Server3.log | |
| 2016-09-28 09:52 | t-ishii | Assigned To | => Muhammad Usama |
| 2016-09-28 09:52 | t-ishii | Status | new => assigned |
| 2017-04-18 00:29 | Muhammad Usama | Note Added: 0001428 | |
| 2017-08-29 09:37 | pengbo | Status | assigned => closed |