View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000713 | Pgpool-II | General | public | 2021-05-24 11:38 | 2021-06-22 11:41 |
| Reporter | manikan | Assigned To | pengbo | ||
| Priority | high | Severity | major | Reproducibility | N/A |
| Status | closed | Resolution | open | ||
| Product Version | 4.2.3 | ||||
| Summary | 0000713: PgPool HA in Kubernetes | ||||
| Description | I want to set up PgPool for high availability in my Kubernetes cluster. As part of it, I am planning to have 2 replicas of PgPool and applying an affinity rule to schedule the pods in different zones, so that if one zone is down, the other zone's pod will be able to accept the incoming requests. In this case, would watchdog be a better option? If so, can you please recommend how to set up watchdog in Kubernetes? Also, at runtime there is a chance that one PgPool pod might be rescheduled due to a Kubernetes node issue etc., and it will then have a different Pod IP address. How will the configuration be synced to all pods in that case? | ||||
| Tags | No tags attached. | ||||
|
|
We are using Cloud SQL for PostgreSQL, where standby and read replica instances are defined. |
|
|
> As part of it, I am planning to have 2 replicas of PgPool and applying an affinity rule to schedule the pods in different zones, so that if one zone is down, the other zone's pod will be able to accept the incoming requests.
>
> In this case, would watchdog be a better option? If so, can you please recommend how to set up watchdog in Kubernetes?

In Kubernetes, Kubernetes itself is responsible for monitoring the current state of Pgpool, so you can disable Watchdog in Pgpool-II. In Kubernetes, you should disable automatic failover, health check and Watchdog of Pgpool.

> Also, at runtime there is a chance that one PgPool pod might be rescheduled due to a Kubernetes node issue etc., and it will then have a different Pod IP address.
> How will the configuration be synced to all pods in that case?

You need to create a Service or LoadBalancer for all of the pgpool pods. Your application accesses pgpool via the Service name. The Service name doesn't change, even if the Pod IP address changes after rescheduling. |
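To make the advice above concrete, here is a minimal sketch of such a Service. The `app: pgpool` selector label and the Service name are assumptions; pgpool's default listen port 9999 is used. Adapt the names to your Deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pgpool          # stable DNS name clients use; survives pod rescheduling
spec:
  selector:
    app: pgpool         # assumed label on the pgpool Deployment's pods
  ports:
    - name: pgpool
      port: 9999        # pgpool's default listen port
      targetPort: 9999
```

Applications then connect to `pgpool.<namespace>.svc.cluster.local:9999` regardless of which pods back the Service at any moment.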
|
|
There will be a Service created for PgPool. In the case of multiple replicas of PgPool, all pods behave like master-master PgPool instances. In that case, I need to modify the max_pool and num_init_children settings to divide by 2 for 2 replicas, divide by 3 for 3 replicas, and so on. Is this recommended? For example, if Cloud SQL is set up with a maximum of 200 connections, I need to set num_init_children and max_pool so that each pod uses 100 connections in the case of 2 replicas. This makes our configuration complex if we want to increase the number of replicas at runtime. Also, shall I use a single child with a max_pool of 200 connections? (This is another question, not related to this ticket.) |
|
|
> There will be a Service created for PgPool. In the case of multiple replicas of PgPool, all pods behave like master-master PgPool instances. In that case, I need to modify the max_pool and num_init_children settings to divide by 2 for 2 replicas, divide by 3 for 3 replicas, and so on. Is this recommended? For example, if Cloud SQL is set up with a maximum of 200 connections, I need to set num_init_children and max_pool so that each pod uses 100 connections in the case of 2 replicas. This makes our configuration complex if we want to increase the number of replicas at runtime.

Yes. You need to reconfigure max_pool or num_init_children after scaling up the pgpool replicas.

> Also, shall I use a single child with a max_pool of 200 connections? (This is another question, not related to this ticket.)

No. If you specify num_init_children=1, max_pool=200, then pgpool can only accept one client connection. See more information about num_init_children: https://www.pgpool.net/docs/latest/en/html/runtime-config-connection.html#GUC-NUM-INIT-CHILDREN |
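As a sketch of the per-replica split discussed above: with 2 pgpool replicas against a backend allowing roughly 200 connections, each pod's pgpool.conf could budget 100 backend connections. The exact values are illustrative, not taken from the ticket.

```
# pgpool.conf fragment per replica (illustrative values).
# With 2 replicas, keep num_init_children * max_pool = 100 per pod
# so the fleet stays within the backend's 200-connection limit.
num_init_children = 100
max_pool = 1
```

Scaling to 3 replicas would then mean lowering the per-pod budget to about 66 and restarting each pgpool pod, which is the reconfiguration overhead the reporter is describing.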
|
|
But in the docs it is specified that "num_init_children * max_pool" determines the maximum number of connections. How does the case of num_init_children=1 and max_pool=200 accept only one client connection? Can you please explain? |
|
|
"num_init_children * max_pool" is the max connections to PostgreSQL. You shold make sure the connection parameters of pgpool and postgresql satisfy: number of Pgpool-II replicas × max_pool × num_init_children <= (max_connections - superuser_reserved_connections) |
|
|
Thanks for the information. Now I get it: num_init_children is also the concurrent connection limit to Pgpool-II from clients. That's why you mentioned it would accept only one client connection. Can I create a feature request to separate the concurrent connection limit logic from num_init_children, or are they coupled together by design? If they were decoupled, it would look similar to a PgBouncer setup. |
|
|
> Can I create a feature request to separate the concurrent connection limit logic from num_init_children, or are they coupled together by design? If they were decoupled, it would look similar to a PgBouncer setup.

Pgpool-II is a pre-fork type server and each child process handles a single connection at a time. I think it is difficult to change the current design of the connection limit logic. |
|
|
In our case, the maximum is 200 connections on the server and all 200 connections can be used at the same time. So for our case, we should set num_init_children to 200 and max_pool to 1, right? Can a single Kubernetes PgPool pod handle that many child processes? What should be the minimum and maximum CPU and memory required for the PgPool Kubernetes pod? |
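The settings the reporter proposes can be sketched as the following pgpool.conf fragment (values follow the discussion above; this is one illustrative way to size a single pod, not a recommendation from the maintainers):

```
# pgpool.conf sketch for 200 fully concurrent client sessions:
# one pre-forked child per session, no per-child connection pooling.
num_init_children = 200
max_pool = 1
```

With a single replica this saturates the backend's 200-connection limit exactly, so adding a second replica would require halving num_init_children per pod.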
|
|
About the required resources, please have a look at the documentation: [Resource Requirement] https://www.pgpool.net/docs/latest/en/html/resource-requiremente.html |
|
|
Thanks for the info. It helps. |
|
|
Have you resolved this issue? May I close this issue? |
|
|
Close issue. |
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2021-05-24 11:38 | manikan | New Issue | |
| 2021-05-24 11:49 | manikan | Note Added: 0003839 | |
| 2021-05-24 12:26 | pengbo | Assigned To | => pengbo |
| 2021-05-24 12:26 | pengbo | Status | new => assigned |
| 2021-05-24 13:13 | pengbo | Note Added: 0003840 | |
| 2021-05-24 13:13 | pengbo | Status | assigned => feedback |
| 2021-05-24 13:18 | manikan | Note Added: 0003841 | |
| 2021-05-24 13:18 | manikan | Status | feedback => assigned |
| 2021-05-24 14:19 | administrator | Note Added: 0003842 | |
| 2021-05-24 14:19 | administrator | Status | assigned => feedback |
| 2021-05-24 14:23 | manikan | Note Added: 0003843 | |
| 2021-05-24 14:23 | manikan | Status | feedback => assigned |
| 2021-05-24 14:28 | pengbo | Note Added: 0003844 | |
| 2021-05-24 14:28 | pengbo | Status | assigned => feedback |
| 2021-05-24 14:36 | manikan | Note Added: 0003845 | |
| 2021-05-24 14:36 | manikan | Status | feedback => assigned |
| 2021-05-25 01:01 | pengbo | Note Added: 0003850 | |
| 2021-05-25 01:23 | manikan | Note Added: 0003852 | |
| 2021-05-25 10:50 | pengbo | Note Added: 0003853 | |
| 2021-05-25 22:12 | manikan | Note Added: 0003857 | |
| 2021-06-01 13:02 | pengbo | Status | assigned => feedback |
| 2021-06-08 13:56 | pengbo | Note Added: 0003871 | |
| 2021-06-22 11:40 | pengbo | Note Added: 0003880 | |
| 2021-06-22 11:41 | pengbo | Status | feedback => closed |