View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000746 | Pgpool-II | General | public | 2022-03-17 15:57 | 2022-06-28 22:44 |
| Reporter | garg1982@gmail.com | Assigned To | kawamoto | ||
| Priority | normal | Severity | major | Reproducibility | random |
| Status | closed | Resolution | open | ||
| Product Version | 4.1.2 | ||||
| Summary | 0000746: Getting the error below in the pgpool log on the slave server. Need your help finding the reason for it. | ||||
| Description | Below are error messages from the pgpool log (reformatted for readability): | ||||

```
LOG: trying connecting to PostgreSQL server on "10.144.1.29:5432" by INET socket
DETAIL: timed out. retrying...
LOG: trying connecting to PostgreSQL server on "10.144.1.29:5432" by INET socket
DETAIL: timed out. retrying...
LOG: trying connecting to Postgres
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183760
2022-03-14 11:26:06: pid 48605: LOG: child process with pid: 110355 exits with status 256
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183761
2022-03-14 11:26:06: pid 48605: LOG: child process with pid: 81995 exits with status 256
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183762
2022-03-14 11:26:06: pid 48605: LOG: child process with pid: 84194 exits with status 256
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183763
2022-03-14 11:26:06: pid 48605: LOG: child process with pid: 176810 exits with status 256
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183764
2022-03-14 11:26:06: pid 48605: LOG: child process with pid: 108921 exits with status 256
2022-03-14 11:26:06: pid 48605: LOG: fork a new child process with pid: 183765
2022-03-14 11:26:57: pid 184451: HINT: repair the backend nodes and restart pgpool
2022-03-14 11:26:57: pid 184452: FATAL: pgpool is not accepting any new connections
2022-03-14 11:26:57: pid 184452: DETAIL: all backend nodes are down, pgpool requires at least one valid node
2022-03-14 11:26:57: pid 184452: HINT: repair the backend nodes and restart pgpool
2022-03-14 11:26:57: pid 184453: FATAL: pgpool is not accepting any new connections
2022-03-14 11:26:57: pid 184453: DETAIL: all backend nodes are down, pgpool requires at least one valid node
2022-03-14 11:26:57: pid 184453: HINT: repair the backend nodes and restart pgpool
2022-03-14 11:26:57: pid 184454: FATAL: pgpool is not accepting any new connections
2022-03-14 11:26:57: pid 184454: DETAIL: all backend nodes are down, pgpool requires at least one valid node
```

Now I am getting an error while connecting to the server:

```
-bash-4.2$ psql -p 9999
psql: error: could not connect to server: FATAL: Sorry, too many clients already
```
| Steps To Reproduce | This issue happened twice in the last week. | ||||
| Tags | No tags attached. | ||||
|
|
Hi garg1982,

```
2022-03-14 11:26:57: pid 184454: FATAL: pgpool is not accepting any new connections
2022-03-14 11:26:57: pid 184454: DETAIL: all backend nodes are down, pgpool requires at least one valid node
```

These messages say that the status of all backend PostgreSQL nodes was DOWN. Did you check whether PostgreSQL was running and the network was healthy when the error occurred? In what situation does this error occur? Does it happen when you start Pgpool up, or suddenly during operation?
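To help answer the question above, one quick way to rule out network problems between the pgpool host and a backend is a plain TCP probe. A minimal sketch, assuming a bash shell with coreutils `timeout`; the `check_backend` helper is ours (not part of pgpool), and the host/port are taken from the log above:

```shell
# Probe whether a PostgreSQL backend's port is reachable from this host.
# check_backend is a hypothetical helper, not a pgpool or PostgreSQL tool.
check_backend() {
  local host="$1" port="$2"
  # bash's /dev/tcp pseudo-device opens a raw TCP connection; timeout caps the wait.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
  fi
}

# Host and port from the log lines in this report:
check_backend 10.144.1.29 5432
```

This only tests TCP reachability, not that PostgreSQL is accepting logins; `pg_isready -h 10.144.1.29 -p 5432` (shipped with PostgreSQL) is the fuller check when the client tools are installed.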
|
|
Hi Kawamoto,

Postgres services are running fine on both nodes (master and slave). We have a master/slave setup, and each node runs pgpool as well. The application uses the JDBC driver, and both nodes' IPs are listed in the connection string used to connect to the database.

The error "trying connecting to PostgreSQL server on "10.144.1.29:5432" by INET socket" appeared in the log for only a few minutes. After that, we could no longer connect through the pgpool running on the slave node, even though the Postgres services were running fine. However, we can connect directly without pgpool, and also through the other pgpool running on the master node.

We are restarting pgpool on the slave node tonight to fix it, but the same issue has now occurred twice in one week. I just want to understand why pgpool marked the backend node as unhealthy even though it is running fine, or whether this is pgpool's usual behavior, i.e. whenever sessions are terminated, pgpool needs a restart.

Thanks,
Gaurav
|
|
Hi Gaurav,

Pgpool has two backends, and the health check failed for one node, "10.144.1.29". Then pgpool reported that it has no active backend nodes. This is strange behavior: pgpool should still have the other node, whose health check has not failed.

To confirm the status of the backends as pgpool currently sees them, please run the "show pool_nodes" command against both pgpool instances and upload the outputs:

$ psql -p 9999 -x -c "show pool_nodes"

> I just want to check why pgpool automatically recognized the health of backend node because it is running
> fine or is it usual behavior of pgpool means whenever there are session termination, pgpool need to restart

If you set 'auto_failback = on' in pgpool.conf, pgpool can automatically detect that a failed backend has become healthy again and re-attach it to the cluster.

https://www.pgpool.net/docs/41/en/html/runtime-config-failover.html#GUC-AUTO-FAILBACK
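For readers following along, a sketch of the pgpool.conf settings involved in the auto_failback suggestion above. The parameter names are real Pgpool-II health-check/failover parameters, but the values here are illustrative assumptions, not taken from this report:

```
# Re-attach a detached backend automatically once it passes health checks again
# (see the auto_failback documentation linked above).
auto_failback = on

# Health-check tuning (illustrative values; tune for your network):
health_check_period = 10        # seconds between health checks
health_check_timeout = 20       # give up a single check after this many seconds
health_check_max_retries = 3    # retries before marking a node down
health_check_retry_delay = 1    # seconds between retries
health_check_user = 'pgpool'    # assumption: use a role valid in your cluster
```

Short timeouts with few retries make transient network hiccups (like the "timed out. retrying..." lines in this report) more likely to mark a healthy node as down.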
|
|
Hi Kawamoto,

We have restarted pgpool, so the issue is fixed now. I will come back to you if a similar issue occurs again in the future. Thanks for your time!

Just want to check one thing: is there any benefit to separating pgpool from the DB nodes? Please note we are using it only for load balancing and connection pooling.

Regards,
Gaurav
|
|
Sorry for the late reply.

> Just want to check one thing, do we have any benefit if we separate PGPOOL from db nodes. Please note we are using it only for Load balancing and connection pooling.

Consider what happens when the server running pgpool goes down. If pgpool and PostgreSQL run on the same server, database availability is lost because PostgreSQL goes down with it; if they are separated, availability and load-balancing performance are maintained because the two PostgreSQL instances are still running.
|
|
Thank you for the update and support!
|
|
May I close this issue?
|
|
Yes, please. Thanks!
|
|
Close issue.
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2022-03-17 15:57 | garg1982@gmail.com | New Issue | |
| 2022-03-17 16:33 | kawamoto | Assigned To | => kawamoto |
| 2022-03-17 16:33 | kawamoto | Status | new => assigned |
| 2022-03-17 17:44 | kawamoto | Note Added: 0003999 | |
| 2022-03-17 20:49 | garg1982@gmail.com | Note Added: 0004000 | |
| 2022-03-18 14:08 | kawamoto | Note Added: 0004001 | |
| 2022-03-21 13:38 | garg1982@gmail.com | Note Added: 0004003 | |
| 2022-03-25 13:29 | kawamoto | Note Added: 0004009 | |
| 2022-05-10 14:42 | garg1982@gmail.com | Note Added: 0004030 | |
| 2022-06-28 11:57 | administrator | Note Added: 0004079 | |
| 2022-06-28 22:40 | garg1982@gmail.com | Note Added: 0004085 | |
| 2022-06-28 22:44 | administrator | Status | assigned => closed |
| 2022-06-28 22:44 | administrator | Note Added: 0004086 |