View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000465 | Pgpool-II | Bug | public | 2019-02-15 02:46 | 2019-03-05 04:11 |
| Reporter | caudatus | Assigned To | pengbo | | |
| Priority | normal | Severity | major | Reproducibility | always |
| Status | assigned | Resolution | open | | |
| Platform | Linux | OS | CentOS | OS Version | 7 |
| Product Version | 4.0.2 | | | | |
| Summary | 0000465: pgpool cannot accept new connections while attached backend stream-wal replicated dbslave:tcp:5432 unreachable | | | | |
**Description**

- os: CentOS 7 x86_64
- pgpool: 3.7.7, 4.0.2
- structure: pgpool + dbmaster (pg 9.5) + streaming WAL replication + dbslave (pg 9.5)
- node states: dbmaster up/connected, dbslave up/connected
- config:
  - health_check_period = 0
  - sr_check_period = 0
  - load_balance_mode = off
  - master_slave_mode = on
  - master_slave_sub_mode = 'stream'
  - replication_mode = off

Event: the network of the attached node dbslave goes down immediately (`iptables ... -j DROP`).

Incident: pgpool then cannot accept new psql connections; it seems to hang while accepting new connections (BAD BEHAVIOUR!), while it continuously serves all queries on all existing connections (GOOD).

Use case: master-slave PostgreSQL backends with direct streaming WAL replication between the backends, with any of: load_balance on or off, master_slave_sub_mode = 'stream', health_check enabled or disabled, sr_check enabled or disabled. We need pgpool to:

- serve read-write queries (via the primary master), and
- continuously accept new connections and serve their queries (via the primary master),

independently of:

- the slave's state (up, down, waiting, etc.), and
- whether slave:tcp:5432 is reachable or unreachable.

Note: if the slave PostgreSQL is shut down cleanly (`pg_ctl stop`), the behaviour is correct: pgpool serves existing connections, accepts new connections, and serves them continuously. The bad behaviour happens only when the attached slave node's network goes down, i.e. pgpool cannot reach slave:tcp:5432 over the network.
**Steps To Reproduce**

- Start pgpool, dbmaster, dbslave; the backend nodes are up.
- Drop the slave's traffic and poll through pgpool (the original report's `while` loop was missing the `do` keyword, restored here):

```shell
# iptables -A net-net -p tcp -m tcp -s $PGPOOLIP -d $DBSLAVEIP --dport 5432 -j DROP
# while true; do timeout 3 psql -U $PGUSER -h $PGPOOLHOST postgres -c 'select pg_is_in_recovery()' || echo TIMEDOUT; sleep 1; done
```

This prints TIMEDOUT until the unreachable dbslave node is detached (BAD). Existing psql connections are served continuously (GOOD).
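The symptom persists until the unreachable standby is detached. As an interim operational idea (not part of the original report), a sidecar script could probe slave:tcp:5432 with a hard deadline and, on repeated failure, detach the node via `pcp_detach_node`. A minimal sketch of the probe; the function name and the host/port values are illustrative, not pgpool code:

```python
import socket

def is_reachable(host: str, port: int, timeout_s: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout_s.

    With an `iptables -j DROP` rule in the path, the SYN is silently discarded,
    so the attempt fails only once the timeout expires -- unlike a clean
    shutdown, where the kernel answers with RST and the attempt fails at once.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Example: a closed local port fails fast (connection refused); a blackholed
# address would instead consume the full timeout before returning False.
print(is_reachable("127.0.0.1", 1))
```

On `False` results over several consecutive probes, the sidecar would invoke `pcp_detach_node` for the standby's node id.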
**Tags:** No tags attached.
**Note 0002386 (caudatus, 2019-02-15):**

Maybe this issue describes the same bug as 0000177, but with extended information.
**Note 0002387 (caudatus, 2019-02-15):**

One more note for this report: if I switch off load_balance_mode, health_check and sr_check (which, I think, should mean that pgpool has nothing to communicate to the dbslave node for client queries), there is still an empty (authorized) connection request, immediately followed by a connection close, from pgpool to dbslave.

psql client query:

```shell
root@pgproxy4-dbtest-plus-dp1:/etc/pgpool-II# export PGPASSWORD=xxx; while true; do { echo -en "$(date):\t" ; timeout 5 psql -A -t -P pager=off -h -U pgpoolcheck postgres -c '\l' || echo TIMEDOUT; } | head -1; sleep 1; done
Fri Feb 15 12:52:09 UTC 2019: postgres|postgres|SQL_ASCII|C|C|=Tc/postgres
...
```

CSV log on the dbslave node:

```
2019-02-15 12:52:09.782 UTC,,,7599,"pgproxy-dbtest.docker1:48218",5c66b5f9.1daf,1,"",2019-02-15 12:52:09 UTC,,0,LOG,00000,"connection received: host=pgproxy-dbtest.docker1 port=48218",,,,,,,,,""
2019-02-15 12:52:09.805 UTC,"pgpoolcheck","postgres",7599,"pgproxy-dbtest.docker1:48218",5c66b5f9.1daf,2,"authentication",2019-02-15 12:52:09 UTC,2/47747,0,LOG,00000,"connection authorized: user=pgpoolcheck database=postgres",,,,,,,,,""
2019-02-15 12:52:09.810 UTC,"pgpoolcheck","postgres",7599,"pgproxy-dbtest.docker1:48218",5c66b5f9.1daf,3,"idle",2019-02-15 12:52:09 UTC,,0,LOG,00000,"disconnection: session time: 0:00:00.029 user=pgpoolcheck database=postgres host=pgproxy-dbtest.docker1 port=48218",,,,,,,,,"psql"
```

I don't know why this connection attempt happens. I think this connect-disconnect-to-dbslave operation may be the real reason for the hangup.

However, if the psql test makes one persistent connection and then issues sequential queries:

```shell
root@pgproxy4-dbtest-plus-dp1:/etc/pgpool-II# export PGPASSWORD=xxx; while true; do psql -A -t -h localhost -U pgpoolcheck postgres < <(while true; do echo 'select now(), pg_is_in_recovery();'; sleep 1; done;); sleep 1; done
2019-02-15 13:02:55.659116+00|f
2019-02-15 13:02:56.632916+00|f
...
```

there is still a connection attempt on dbslave, but (as expected) no queries at all:

csv.log:

```
2019-02-15 13:02:55.640 UTC,,,7614,"pgproxy-dbtest.docker1:48520",5c66b87f.1dbe,1,"",2019-02-15 13:02:55 UTC,,0,LOG,00000,"connection received: host=pgproxy-dbtest.docker1 port=48520",,,,,,,,,""
2019-02-15 13:02:55.656 UTC,"pgpoolcheck","postgres",7614,"pgproxy-dbtest.docker1:48520",5c66b87f.1dbe,2,"authentication",2019-02-15 13:02:55 UTC,2/47756,0,LOG,00000,"connection authorized: user=pgpoolcheck database=postgres",,,,,,,,,""
```

and if pgpool then loses dbslave:tcp:5432, pgpool can keep serving the queries, since it issues no forced operation to dbslave.
**Note 0002388 (caudatus, 2019-02-15):**

Even if I reduce connect_timeout from 10000 to 10, the bad behaviour is the same :(
**Note 0002393 (pengbo, 2019-02-19):**

Could you show your pgpool.conf? I would like to confirm the setting of "health_check_timeout".
**Note 0002394 (caudatus, 2019-02-19):**

Yes, it is in the attached bugpack.tgz. pgpool.conf (stock comments trimmed):

```conf
#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------
listen_addresses = '*'
port = 5432
socket_dir = '/tmp'
listen_backlog_multiplier = 2
serialize_accept = off
pcp_listen_addresses = '*'
pcp_port = 9898
pcp_socket_dir = '/tmp'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
allow_clear_text_frontend_auth = off
ssl = on

#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
num_init_children = 1
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0

#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
log_destination = 'syslog'
log_line_prefix = '%t: pid %p: '
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = on
log_standby_delay = 'always'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
log_error_verbosity = verbose
client_min_messages = warning
log_min_messages = info

#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
pid_file_name = '/var/run/pgpool/pgpool.pid'
logdir = '/var/log/pgpool'

#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------
connection_cache = off
reset_query_list = 'ABORT; DISCARD ALL'

#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
replication_mode = off
replicate_select = off
insert_lock = off
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off

#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
load_balance_mode = off
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval,nextval,setval,insert,update,alter'
black_query_pattern_list = ''
database_redirect_preference_list = ''
app_name_redirect_preference_list = ''
allow_sql_comments = off
disable_load_balance_on_write = 'transaction'

#------------------------------------------------------------------------------
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'pgpcheck'
sr_check_password = 'xxx'
sr_check_database = 'postgres'
delay_threshold = 0
follow_master_command = '/usr/bin/logger -t pgpoolalert "follow_master id:%d.%h.%M, new id:%m.%M"'

#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------
health_check_period = 0
health_check_timeout = 3
health_check_user = 'pgpcheck'
health_check_password = 'xxx'
health_check_database = 'postgres'
health_check_max_retries = 3
health_check_retry_delay = 1
connect_timeout = 1000

#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
failover_command = '/usr/bin/logger -t pgpoolalert "failover id:%d.%h.%M, new id:%m.%M"'
failback_command = '/usr/bin/logger -t pgpoolalert "failback id:%d.%h.%M, new id:%m.%M"'
failover_on_backend_error = off
detach_false_primary = off
search_primary_node_timeout = 0

#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0

#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
use_watchdog = off
trusted_servers = ''
ping_path = '/bin'
wd_hostname = ''
wd_port = 9000
wd_priority = 1
wd_authkey = ''
wd_ipc_socket_dir = '/tmp'
delegate_IP = ''
if_cmd_path = '/sbin'
if_up_cmd = 'ip addr add $_IP_$/24 dev eth0 label eth0:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev eth0'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U $_IP_$ -w 1'
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_de_escalation_command = ''
failover_when_quorum_exists = off
failover_require_consensus = on
allow_multiple_failover_requests_from_node = off
wd_monitoring_interfaces_list = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'host0_ip1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''

#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
relcache_expire = 0
relcache_size = 256
check_temp_table = on
check_unlogged_table = on

#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
memory_cache_enabled = on
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 10485760
memqcache_oiddir = '/var/log/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''

ssl_key = '/etc/pgpool-II/server-ssl.key'
ssl_cert = '/etc/pgpool-II/server-ssl.crt'
backend_hostname1 = '172.18.0.12'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/srv/postgres-data/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname0 = '172.18.0.11'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/srv/postgres-data/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
parallel_mode = false
```
**Note 0002395 (caudatus, 2019-02-19):**

It doesn't matter whether health_check is on or off: on every new connection request, pgpool tries to connect (with authentication) to the slave backend and then disconnects. I also tried several variations of the configuration options, and pgpool showed this behaviour in all cases.
**Note 0002406 (pengbo, 2019-02-26):**

This is expected behaviour of Pgpool-II, because a non-blocking connect() system call is used. If the network is cut using "iptables", the "connect_timeout" setting does not take effect, and the connection waits until the TCP/IP timeout.
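For readers unfamiliar with the mechanism described above: a non-blocking connect() returns immediately, and the caller then waits (e.g. with select()) for the socket to become writable, reading SO_ERROR for the result. Against an `iptables -j DROP` rule the SYN gets no answer at all, so the socket becomes writable only when the kernel gives up retransmitting. A self-contained sketch of the pattern; the helper name and the addresses are illustrative, not pgpool source code:

```python
import errno
import select
import socket

def nonblocking_connect(host: str, port: int, timeout_s: float):
    """Start a non-blocking connect and wait up to timeout_s for the result.

    Returns 0 on success, an errno value on failure, or None on timeout --
    the case a silently dropped SYN produces: the socket never becomes
    writable, and select() simply expires.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    try:
        err = s.connect_ex((host, port))  # returns immediately (EINPROGRESS)
        if err not in (0, errno.EINPROGRESS):
            return err
        _, writable, _ = select.select([], [s], [], timeout_s)
        if not writable:
            return None  # timed out: neither SYN-ACK nor RST arrived
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    finally:
        s.close()

# A closed local port answers with RST, so the result is ECONNREFUSED almost
# immediately; a blackholed peer would instead yield None after timeout_s.
print(nonblocking_connect("127.0.0.1", 1, 2.0))
```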
**Note 0002418 (caudatus, 2019-03-05):**

Thank you for your response, but it brings sad news for us.

1. Is there any configuration variation that allows pgpool to keep accepting new client connections while the network between pgpool and the slave backend is down for any reason? We need no load balancing, health check, or replication check at all; the replication process runs directly between the backends via streaming replication.
2. Why does pgpool forward a client's new connect and authentication request to the slave backend when load_balance = off? It is just an empty `connect with auth + disconnect` sequence without any other operation...
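Not a confirmed fix, but one hedged experiment regarding question 1, assuming automatic detachment of the unreachable standby is acceptable: re-enable periodic health checks so that a standby whose tcp:5432 stops answering is degenerated after a few failed probes instead of staying attached. The values below are illustrative starting points, not recommendations from the thread:

```conf
# Hypothetical experiment, not a verified fix: let pgpool's health check
# detach a standby whose tcp:5432 stops answering.
health_check_period = 10       # probe backends every 10 seconds
health_check_timeout = 3       # give up a single probe after 3 seconds
health_check_max_retries = 2   # detach the node after the retries fail
health_check_retry_delay = 1
```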
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2019-02-15 02:46 | caudatus | New Issue | |
| 2019-02-15 02:46 | caudatus | File Added: bugpack.tgz | |
| 2019-02-15 03:10 | caudatus | Note Added: 0002386 | |
| 2019-02-15 10:20 | administrator | Assigned To | => pengbo |
| 2019-02-15 10:20 | administrator | Status | new => assigned |
| 2019-02-15 22:07 | caudatus | Note Added: 0002387 | |
| 2019-02-15 23:13 | caudatus | Note Added: 0002388 | |
| 2019-02-19 15:06 | pengbo | Note Added: 0002393 | |
| 2019-02-19 20:52 | caudatus | Note Added: 0002394 | |
| 2019-02-19 20:56 | caudatus | Note Added: 0002395 | |
| 2019-02-26 11:29 | pengbo | Note Added: 0002406 | |
| 2019-03-05 04:11 | caudatus | Note Added: 0002418 |