[pgpool-general: 4081] pgpool failing standby

Ioana Danes ioanadanes at gmail.com
Sat Oct 3 01:54:30 JST 2015


Hello,


I have set up pgpool (version 3.4) in master slave mode with 2 postgres
databases (no load balancing) using streaming replication:

backend_hostname0 = 'voldb1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/data01/postgres'
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = 'voldb2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/data01/postgres'
backend_flag1 = 'ALLOW_TO_FAILOVER'

connection_cache = on
load_balance_mode = off

master_slave_mode = on
master_slave_sub_mode = 'stream'

sr_check_period = 10

health_check_period = 40
health_check_timeout = 10
health_check_max_retries = 3
health_check_retry_delay = 1
connect_timeout = 10000

failover_command = '/usr/local/bin/pgpool_failover.sh %P %m %H'
failback_command = '/usr/local/bin/pgpool_failback.sh %d %P %m %H'
fail_over_on_backend_error = on
search_primary_node_timeout = 10
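For context, the failover_command above is expanded by pgpool with %P (old
primary node id), %m (new master node id) and %H (new master hostname). A
minimal sketch of what /usr/local/bin/pgpool_failover.sh could look like is
below; the trigger-file path and the ssh-based promotion are illustrative
assumptions only, not my actual script:

```shell
#!/bin/sh
# pgpool_failover.sh OLD_PRIMARY_ID NEW_MASTER_ID NEW_MASTER_HOST
# (pgpool passes these via %P %m %H)

failover() {
    old_primary_id="$1"
    new_master_id="$2"
    new_master_host="$3"

    if [ "$old_primary_id" = "$new_master_id" ]; then
        # A standby failed: the primary is unchanged, so no promotion
        # is needed; pgpool just detaches the failed standby.
        echo "standby failed; primary node $old_primary_id unchanged, skipping promotion"
        return 0
    fi

    # The old primary failed: promote the new master.
    echo "old primary node $old_primary_id is down; promoting node $new_master_id on $new_master_host"
    # Promotion via a trigger file over ssh is an assumption for illustration:
    # ssh postgres@"$new_master_host" "touch /data01/postgres/failover_trigger"
}

failover "$@"
```

When the standby restarts, pgpool invokes this with %P equal to %m, so the
script takes the first branch and nothing is promoted.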

When the primary db (voldb1) fails, I expect the system to have a hiccup as
the old connections fail and new ones are created, but the same thing seems
to happen when I restart the standby server. On a standby restart I would
expect pgpool to drop the standby from the cluster with no impact on the
open transactions to the primary server, but that is not what's happening.

Am I missing something, or is this the expected behavior?

Thanks a lot,
Ioana Danes

