View Issue Details

ID: 0000243
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2016-09-09 14:45
Reporter: supp_k
Assigned To: t-ishii
Priority: high
Severity: major
Reproducibility: random
Status: closed
Resolution: open
Platform: x86_64
OS: CentOS
OS Version: 6.x & 7.x
Product Version: 3.5.3
Summary: 0000243: pgpool ignores backend weight in balancing
Description: We have discovered that the weights specified for the backends do not affect balancing as expected. Each time, the backend is chosen at random, and the proportion specified by a backend's weight parameter is not strictly observed.

Would it be possible to implement a round-robin scheme?
Tags: No tags attached.

Activities

t-ishii

2016-09-06 09:42

developer   ~0001050

That's expected behavior. When the weights are, say, node 0: 1 and node 1: 2, node 0 is chosen as the load balance node with probability 1/(1+2) in the long run.
That is, if a client connects to Pgpool-II 300 times, it will likely be routed to node 0 about 100 times.
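The behavior described above can be sketched as a weighted random draw. This is a hypothetical illustration of the scheme, not Pgpool-II's actual source; the function name and structure are invented for the example.

```python
import random

# Weighted random selection: pick a backend with probability
# proportional to its weight (illustrative sketch, not Pgpool-II code).
def choose_backend(weights):
    """Return a backend index with probability proportional to its weight."""
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1  # guard against floating-point edge cases

# Simulate 300 sessions with weights node 0: 1, node 1: 2.
counts = [0, 0]
for _ in range(300):
    counts[choose_backend([1, 2])] += 1
# counts[0] lands near 100 and counts[1] near 200, but only on average;
# individual picks are independent, so short runs can be uneven.
```

This is exactly why the proportion holds only "in the long run": each pick is independent, so nothing prevents several consecutive sessions from landing on the same node.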

supp_k

2016-09-06 15:52

reporter   ~0001052

It seems that a random() function is used in the described scenario. If so, there is a possibility that two consecutive requests received by pgpool will be distributed to the same node.

Would it be possible to implement a "weighted round robin" scheme?
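The "weighted round robin" scheme the reporter asks about could be sketched as follows (an illustrative example, not an existing Pgpool-II feature): each backend appears in a fixed cycle as many times as its weight, so the proportions are exact in every window, not just on average.

```python
from itertools import cycle

# Weighted round robin: deterministic schedule where each backend
# appears once per unit of weight (hypothetical sketch).
def weighted_round_robin(weights):
    """Yield backend indices deterministically, proportional to weight."""
    schedule = [i for i, w in enumerate(weights) for _ in range(w)]
    return cycle(schedule)

rr = weighted_round_robin([1, 2])  # node 0 once, node 1 twice per cycle
picks = [next(rr) for _ in range(6)]
# → [0, 1, 1, 0, 1, 1]: exact 1:2 proportion in every window of 3 requests
```

Unlike the random scheme, this guarantees two consecutive requests go to the same node only where the weights dictate it.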

t-ishii

2016-09-06 17:21

developer   ~0001055

> It seems that random() function is used in the described scenario.

Yes.

> If yes then there is the possibility that 2 requests received by pgpool will be distributed to the same node.

What's wrong with that? After all, in the long run each node will be assigned requests in the proportions specified in the configuration file.

> Would it be possible to implement the "Weighted round robin" schema?

I don't see a benefit to this method. Can you elaborate?

supp_k

2016-09-07 00:11

reporter   ~0001058

In that case there is a guarantee that every node will receive an equivalent (according to its weight) number of requests.
A random function gives no such guarantee, and this is what we see in our performance tests: two nodes with equal weights receive differing numbers of requests.

t-ishii

2016-09-07 08:32

developer   ~0001059

Really?

I created a 2-node streaming replication cluster using pgpool_setup and got the following result (11000 is the pgpool port number):

$ pgbench -i -p 11000 test
$ pgbench -p 11000 -n -S -c 16 -j 8 -C -T 300 test
transaction type: SELECT only
scaling factor: 1
query mode: simple
number of clients: 16
number of threads: 8
duration: 300 s
number of transactions actually processed: 692237
latency average: 6.934 ms
tps = 2307.419971 (including connections establishing)
tps = 3626.962895 (excluding connections establishing)


$ grep "SELECT abalance" log/pgpool.log |grep "id: 0"|wc
 346089 7267869 48067931

$ grep "SELECT abalance" log/pgpool.log |grep "id: 1"|wc
 346148 7269108 48076296

So the SELECTs sent to node 0 and node 1 number 346089 and 346148 respectively (692237 in total, as pgbench reported).
The difference between these counts is only 59.
So the load balance ratio is:

Node 0: 346089/692237 = 0.49995738
Node 1: 346148/692237 = 0.50004261

These numbers are pretty close to 0.5 as expected. I see no problem here.
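The closeness observed above is what binomial statistics predicts. A quick back-of-the-envelope check of the reported numbers (the statistical framing is my addition, not part of the original note):

```python
import math

# Sanity-check the reported split with simple binomial statistics.
n = 692237              # total SELECTs reported by pgbench
node0, node1 = 346089, 346148

assert node0 + node1 == n
ratio0 = node0 / n      # ≈ 0.49995738, as reported
ratio1 = node1 / n      # ≈ 0.50004262, as reported

# For a fair 50/50 split over n trials, the standard deviation of one
# node's count is sqrt(n * 0.5 * 0.5) ≈ 416, so an observed gap of 59
# between the two nodes is well within normal random fluctuation.
sigma = math.sqrt(n * 0.25)
```

In other words, with equal weights the random scheme converges to the configured ratio as the request count grows, which is consistent with the pgbench result.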

supp_k

2016-09-07 19:32

reporter   ~0001060

Ok, I agree; it obviously works out as the number of requests grows.

P.S.: We were just observing the current CPU utilization of both DB nodes.

t-ishii

2016-09-09 14:45

developer   ~0001062

Issue closed.

Issue History

Date Modified Username Field Change
2016-09-06 06:13 supp_k New Issue
2016-09-06 09:37 t-ishii Assigned To => t-ishii
2016-09-06 09:37 t-ishii Status new => assigned
2016-09-06 09:42 t-ishii Note Added: 0001050
2016-09-06 09:49 t-ishii Status assigned => feedback
2016-09-06 15:52 supp_k Note Added: 0001052
2016-09-06 15:52 supp_k Status feedback => assigned
2016-09-06 17:21 t-ishii Note Added: 0001055
2016-09-07 00:11 supp_k Note Added: 0001058
2016-09-07 08:32 t-ishii Note Added: 0001059
2016-09-07 08:33 t-ishii Status assigned => feedback
2016-09-07 19:32 supp_k Note Added: 0001060
2016-09-07 19:32 supp_k Status feedback => assigned
2016-09-09 14:45 t-ishii Note Added: 0001062
2016-09-09 14:45 t-ishii Status assigned => closed