[pgpool-general: 3677] pgPool2, 1/10 the speed of direct to PostgreSQL in one scenario?

Mike ipso at snappymail.ca
Mon May 4 02:34:33 JST 2015


Something strange is definitely going on here, and perhaps it's something 
simple that I have overlooked...

I'm testing pgPool2 v3.3.4 in a virtual machine setup, where I have 
PostgreSQL (v9.4), pgPool2 and Apache as three separate guest VMs on the 
same bare-metal host. Performance seemed as expected in that setup; 
however, when I moved pgPool2 to the same guest VM as PostgreSQL 
(explained below), performance dropped to an unusable 45 TPS, 1/10 the 
speed of connecting directly to PostgreSQL.

*The test below is just a baseline, direct to PostgreSQL, so you can see 
what the servers are capable of handling... I realize this is not a use 
case pgPool2 is designed to compete with.
1. Web (Guest VM) to PostgreSQL (Guest VM)
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.215 -p 5432 -P1 -r -Sn -U postgres -c 45 -T 20 pgbench
progress: 1.0 s, 10120.0 tps, lat 4.135 ms stddev 3.755
progress: 2.0 s, 10044.7 tps, lat 4.519 ms stddev 5.284
<...boring similar data...>
progress: 19.0 s, 8660.8 tps, lat 5.208 ms stddev 5.931
progress: 20.0 s, 9553.2 tps, lat 4.701 ms stddev 4.894
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 45
number of threads: 1
duration: 20 s
number of transactions actually processed: 197891
latency average: 4.539 ms
latency stddev: 4.904 ms
tps = 9829.334821 (including connections establishing)
tps = 9855.501027 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------


*The same as above, only starting a new connection for every transaction 
(-C). Significantly slower, as expected. Hopefully pgPool2 can improve 
on that...
2. Web (Guest VM) to PostgreSQL (Guest VM)
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.215 -p 5432 -P1 -r -Sn -U postgres -c 45 -T 20 -C pgbench
progress: 1.0 s, 687.5 tps, lat 60.312 ms stddev 10.930
progress: 2.0 s, 729.4 tps, lat 60.387 ms stddev 11.562
<...boring similar data...>
progress: 19.0 s, 715.2 tps, lat 61.527 ms stddev 10.846
progress: 20.0 s, 752.8 tps, lat 60.002 ms stddev 12.153
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 45
number of threads: 1
duration: 20 s
number of transactions actually processed: 14261
latency average: 61.643 ms
latency stddev: 11.880 ms
tps = 712.458054 (including connections establishing)
tps = 36935.739945 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------


*Now let's see how it performs going through pgPool2 on a third guest VM. 
pgPool2 is configured for connection pooling and load balancing (though 
only a single backend server is configured). It is definitely faster, as 
expected...
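
*For reference, the relevant pooling/load-balancing parameters in 
pgpool.conf are along these lines (illustrative values only, not 
necessarily what I'm running; my actual config is in the pastebin link 
at the end of this mail):
-----------------------------------------------------------------------------------------------------
# pgpool.conf excerpt (pgpool-II 3.3.x parameter names; values illustrative)
listen_addresses = '*'
port = 5432
backend_hostname0 = '10.114.168.215'  # the single PostgreSQL backend
backend_port0 = 5432
backend_weight0 = 1
connection_cache = on         # cache/reuse backend connections across client sessions
num_init_children = 32        # pre-forked pgpool children; clients beyond this must wait
max_pool = 4                  # cached backend connections per child
load_balance_mode = on        # effectively a no-op with only one backend
-----------------------------------------------------------------------------------------------------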
3. Web (Guest VM) to pgPool2 (Guest VM) to PostgreSQL (Guest VM)
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.214 -p 5432 -P1 -r -Sn -U postgres -c 45 -T 20 -C pgbench
progress: 1.0 s, 1017.7 tps, lat 40.191 ms stddev 18.570
progress: 2.0 s, 946.7 tps, lat 45.979 ms stddev 16.952
<...boring similar data...>
progress: 19.0 s, 1104.9 tps, lat 39.139 ms stddev 16.875
progress: 20.0 s, 1032.7 tps, lat 43.329 ms stddev 14.065
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 45
number of threads: 1
duration: 20 s
number of transactions actually processed: 19911
latency average: 44.091 ms
latency stddev: 16.574 ms
tps = 995.249484 (including connections establishing)
tps = 36580.526395 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------


*Now let's try installing pgPool2 on the PostgreSQL guest VM itself 
(listening on port 5431). The idea here is to eliminate the extra "hop" 
the packets need to traverse; in theory it should perform about the same 
or slightly better, assuming the PostgreSQL guest VM has resources to 
spare, which it should, given that the persistent-connection pgbench run 
was 10x faster... It was just the opposite, though: notice the strange 
stddev in the latency and the horribly slow TPS.
4. Web (Guest VM) to PostgreSQL (Guest VM) running pgPool.
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.215 -p 5431 -P1 -r -Sn -U postgres -c 45 -T 20 -C pgbench
progress: 5.7 s, 38.5 tps, lat 407.917 ms stddev 1184.321
progress: 5.8 s, 496.1 tps, lat 3620.260 ms stddev 2285.943
progress: 5.9 s, 436.2 tps, lat 86.326 ms stddev 16.226
progress: 6.0 s, 402.5 tps, lat 110.582 ms stddev 16.638
progress: 6.1 s, 411.6 tps, lat 111.462 ms stddev 16.107
progress: 6.2 s, 453.4 tps, lat 96.972 ms stddev 22.470
progress: 11.3 s, 8.6 tps, lat 2938.272 ms stddev 2481.237
progress: 11.3 s, 501.5 tps, lat 2254.063 ms stddev 2482.148
progress: 11.4 s, 515.3 tps, lat 85.895 ms stddev 12.755
progress: 11.5 s, 515.2 tps, lat 86.436 ms stddev 17.697
progress: 11.6 s, 495.4 tps, lat 83.459 ms stddev 13.695
progress: 16.8 s, 17.0 tps, lat 2190.696 ms stddev 2469.002
progress: 16.9 s, 515.1 tps, lat 880.116 ms stddev 1835.654
progress: 17.0 s, 482.3 tps, lat 88.700 ms stddev 13.244
progress: 17.0 s, 469.2 tps, lat 91.753 ms stddev 14.201
progress: 17.1 s, 513.8 tps, lat 89.938 ms stddev 14.362
progress: 22.2 s, 8.7 tps, lat 881.010 ms stddev 1837.567
progress: 22.2 s, 68901.3 tps, lat 5053.151 ms stddev 20.122
progress: 22.2 s, 689.2 tps, lat 2.073 ms stddev 0.000
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 45
number of threads: 1
duration: 20 s
number of transactions actually processed: 1000
latency average: 974.529 ms
latency stddev: 1901.815 ms
tps = 45.010280 (including connections establishing)
tps = 29321.213898 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------

*Let's try with the concurrency level set to just 1 and see if that 
makes any difference:
5. Web (Guest VM) to PostgreSQL (Guest VM) running pgPool.
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.215 -p 5431 -P1 -r -Sn -U postgres -c 1 -T 20 -C pgbench
progress: 5.8 s, 42.8 tps, lat 1.617 ms stddev 0.510
progress: 5.8 s, 264.6 tps, lat 1.993 ms stddev 0.000
progress: 5.8 s, 313.3 tps, lat 1.493 ms stddev 0.000
progress: 5.8 s, 302.3 tps, lat 1.385 ms stddev 0.000
progress: 5.8 s, 171.4 tps, lat 4.036 ms stddev -nan
progress: 6.0 s, 313.0 tps, lat 1.613 ms stddev 0.603
progress: 11.6 s, 34.5 tps, lat 1.527 ms stddev 0.421
progress: 11.6 s, 261.0 tps, lat 1.805 ms stddev 0.000
progress: 11.6 s, 283.0 tps, lat 1.744 ms stddev 0.000
progress: 11.6 s, 345.7 tps, lat 1.199 ms stddev 0.000
progress: 11.6 s, 328.2 tps, lat 1.401 ms stddev 0.000
progress: 12.0 s, 321.3 tps, lat 1.538 ms stddev 0.254
progress: 17.4 s, 22.7 tps, lat 1.495 ms stddev 0.210
progress: 17.4 s, 286.8 tps, lat 1.686 ms stddev 0.000
progress: 17.4 s, 360.2 tps, lat 1.116 ms stddev 0.000
progress: 17.4 s, 305.5 tps, lat 1.637 ms stddev 0.000
progress: 17.4 s, 308.7 tps, lat 1.633 ms stddev 0.000
progress: 18.0 s, 310.4 tps, lat 1.602 ms stddev 0.515
progress: 23.2 s, 10.8 tps, lat 1.616 ms stddev 0.511
progress: 23.2 s, 471.0 tps, lat 2.124 ms stddev 0.000
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 1
number of threads: 1
duration: 20 s
number of transactions actually processed: 1000
latency average: 1.574 ms
latency stddev: 0.454 ms
tps = 43.124062 (including connections establishing)
tps = 630.338971 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------

As a final test, to rule out resource starvation on the PostgreSQL guest 
VM, I run the above, only *without* -C... It's back up to expected 
performance: slower than connecting directly to PostgreSQL, as would be 
expected, but still within the margin allotted to pgPool2 overhead.
6. Web (Guest VM) to PostgreSQL (Guest VM) running pgPool.
-----------------------------------------------------------------------------------------------------
pgbench -h 10.114.168.215 -p 5431 -P1 -r -Sn -U postgres -c 45 -T 20 pgbench
progress: 1.0 s, 6297.9 tps, lat 6.562 ms stddev 7.596
progress: 2.0 s, 7842.1 tps, lat 5.701 ms stddev 5.846
<...boring similar data...>
progress: 19.0 s, 6451.1 tps, lat 6.818 ms stddev 5.823
progress: 20.0 s, 7059.5 tps, lat 6.248 ms stddev 5.622
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 45
number of threads: 1
duration: 20 s
number of transactions actually processed: 129383
latency average: 6.832 ms
latency stddev: 5.847 ms
tps = 6467.068251 (including connections establishing)
tps = 6491.437702 (excluding connections establishing)
-----------------------------------------------------------------------------------------------------

So the question is: what's going on in tests #4 and #5? Why would making 
new connections to pgPool2 on the PostgreSQL guest VM be so terribly 
slow vs. going through a third guest VM, when there are resources to spare?

*Below is the "vmstat 1" output from test #4 on the PostgreSQL+pgPool2 
guest VM; it's mostly sitting idle, with a few small bursts of activity:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 406412   1268 2449960    0    0    15     4   13   22  0  0 100  0  0
 0  0      0 406412   1268 2449960    0    0     0     8   23   31  0  0 100  0  0
 0  0      0 406320   1268 2450220    0    0     0     0   14   26  0  0 100  0  0
 0  0      0 406320   1268 2450220    0    0     0     0   11   18  0  0 100  0  0
 0  0      0 406320   1268 2450220    0    0     0     0   13   21  0  0 100  0  0
 0  0      0 406336   1276 2450212    0    0     0    12   28   49  0  0 100  0  0
 0  0      0 406368   1276 2450212    0    0     0     0   13   18  0  0 100  0  0
 0  0      0 406368   1276 2450212    0    0     0     0   10   14  0  0 100  0  0
 0  0      0 406368   1276 2450212    0    0     0     0   12   20  0  0 100  0  0
 0  0      0 406352   1276 2450220    0    0     0     0   35   88  0  0 100  0  0
 0  0      0 406144   1276 2450220    0    0     0     0   24   47  0  0 100  0  0
 0  0      0 406160   1276 2450220    0    0     0     0   10   15  0  0 100  0  0
 0  0      0 112036   1276 2455536    0    0  4700   496 7799 19675  6 10 79  2  4
 0  0      0 112036   1276 2455536    0    0     0     0   24   36  0  0 100  0  0
 0  0      0 111820   1276 2455688    0    0     0     0   15   24  0  0 100  0  0
 0  0      0 111816   1284 2455592    0    0     0    20   24   37  0  0 100  0  0
 0  0      0 111740   1284 2455588    0    0     0   548   29   29  0  0 100  0  0
 1  0      0 109400   1284 2458128    0    0  1624   160 4335 9290  2  5 89  1  2
 0  0      0 109308   1284 2458420    0    0  1232   176 3605 7182  2  4 91  1  2
 0  0      0 109044   1284 2458820    0    0     0     0   13   22  0  0 100  0  0
 0  0      0 108748   1284 2458932    0    0     0     0   42   75  0  0 100  0  0
 0  0      0 108936   1292 2459096    0    0     0    40   29   44  0  0 100  0  0
 0  0      0 108828   1292 2459076    0    0     0     0   17   24  0  0 100  0  0
 0  0      0 103656   1292 2462380    0    0  3024   360 7959 16507  4 11 78  2  6
 0  0      0 103656   1292 2462380    0    0     0     0   65   23  0  0 100  0  0
 0  0      0 103972   1292 2462436    0    0     0     0   19   32  0  0 100  0  0
 0  0      0 103976   1292 2462472    0    0     0     0   13   20  0  0 100  0  0
 0  0      0 104104   1292 2462524    0    0     0   696   14   17  0  0 100  0  0
 3  0      0 102952   1300 2463296    0    0   792   128 1772 3955  1  2 96  0  1
 0  0      0 100636   1300 2465648    0    0  2264   224 6252 12492  3  7 86  1  3
 0  0      0 100816   1300 2466132    0    0     0     0   37   64  0  0 100  0  0
 0  0      0 100816   1300 2466132    0    0     0     0   13   22  0  0 100  0  0
 0  0      0 100884   1300 2466208    0    0     0    64   27   48  0  0 100  0  0
 0  0      0 100808   1300 2466212    0    0     0     0   27   42  0  0 100  0  0
 0  0      0 155184   1308 2466212    0    0     8    20  176  255  0  0 99  0  0
 0  0      0 155268   1308 2466212    0    0     0     0   14   22  0  0 100  0  0
 0  0      0 155436   1308 2466220    0    0     0     0   25   39  0  0 100  0  0
 0  0      0 155852   1308 2466220    0    0     0     0   21   36  0  0 100  0  0

Here is a link to the pgpool.conf file: http://pastebin.com/hBqt3Jx2

Any ideas as to what I might be missing in the PostgreSQL+pgPool2 
same-guest-VM scenario that would cause such poor performance? The guest 
VM itself seems able to handle the load, given that the direct-to-PostgreSQL 
run with -C performs quite well...

Thanks.

-- 
Mike


