[pgpool-general: 8960] Java application occasionally gets "The connection attempt failed" errors

Ron ronljohnsonjr at gmail.com
Sat Nov 11 05:03:56 JST 2023


PgPool 4.3.7 and PG 14.7

By "occasionally", I mean 1 or 2 times for every 500 "batches" of data we 
process.

Attached is the error message text, the pgpool.conf file and postgresql.conf 
file.

"ps | aux | grep <db application user name>" shows maybe a dozen connections 
to PgPool.
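
For a cross-check from the database side, something like the following could 
count the backends held by that user through pgpool (a rough sketch: the 
host/port are just the delegate_IP and pgpool port from the attached conf, 
and the monitoring credentials and "appuser" name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CountAppConnections {
    public static void main(String[] args) throws Exception {
        // delegate_IP and pgpool port taken from the attached pgpool.conf
        String url = "jdbc:postgresql://10.109.165.13:9999/postgres";
        try (Connection c = DriverManager.getConnection(url, "monitor", "secret");
             PreparedStatement ps = c.prepareStatement(
                     "SELECT count(*) FROM pg_stat_activity WHERE usename = ?")) {
            ps.setString(1, "appuser");  // placeholder for the real application user
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                System.out.println("backends for appuser: " + rs.getLong(1));
            }
        }
    }
}

Running the same query directly against node "a" on 5432 would show whether 
the counts seen through pgpool and on the primary agree.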

We do not get these errors when connecting directly to the database on node "a".

It's a three-node cluster (named "a", "b" and "c").  The VIP is attached to 
node "c", and the PostgreSQL primary database is on node "a".  The PG 
streaming replica is on node "b".

Putting the VIP on node "a" does _not_ solve the problem (but it does make 
things run slower).

"num_init_children = 350" on all three nodes, and "max_connections = 400" on 
the two DB servers.
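
For reference, the Pgpool-II manual's sizing guidance is that each pgpool 
child can cache up to max_pool backend connections, so num_init_children * 
max_pool should stay within max_connections minus superuser_reserved_connections 
on the backends.  A back-of-envelope check of the attached figures (just 
arithmetic, not a diagnosis; superuser_reserved_connections is assumed to be 
the PostgreSQL default of 3, since it isn't set in the attached conf):

public class CapacityCheck {
    public static void main(String[] args) {
        int numInitChildren = 350;  // pgpool.conf
        int maxPool = 2;            // pgpool.conf
        int maxConnections = 400;   // postgresql.conf
        int superuserReserved = 3;  // assumed PostgreSQL default

        int worstCaseBackends = numInitChildren * maxPool;        // 700
        int availableSlots = maxConnections - superuserReserved;  // 397
        System.out.println("worst case " + worstCaseBackends
                + " vs available " + availableSlots);
    }
}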

There's nothing in the error logs, even with PgPool log_min_messages = 
notice, and PostgreSQL log_min_error_statement and log_min_messages both 
set to info and client_min_messages = notice.

Where else should we look?
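
In case it helps narrow things down, here is a minimal JDBC probe (a sketch, 
not the application's code: the URL uses the delegate_IP and pgpool port from 
the attached conf, while the database name and credentials are placeholders) 
that repeatedly opens and closes a connection and logs the timestamp and 
SQLState whenever an attempt fails, so failures can be lined up against the 
pgpool and PostgreSQL logs:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.time.Instant;
import java.util.Properties;

public class PgpoolConnectProbe {
    public static void main(String[] args) throws InterruptedException {
        String url = "jdbc:postgresql://10.109.165.13:9999/cds";  // db name is a guess
        Properties props = new Properties();
        props.setProperty("user", "appuser");      // placeholder
        props.setProperty("password", "secret");   // placeholder
        props.setProperty("loginTimeout", "10");   // seconds to wait for connect

        for (int attempt = 1; attempt <= 500; attempt++) {
            try (Connection c = DriverManager.getConnection(url, props)) {
                // connected fine; close immediately
            } catch (SQLException e) {
                // 08001 is the SQLState shown in the attached stack trace
                System.err.println(Instant.now() + " attempt " + attempt
                        + " failed, SQLState=" + e.getSQLState()
                        + ": " + e.getMessage());
            }
            Thread.sleep(1000);
        }
    }
}

Running it once against port 9999 on the VIP and once directly against node 
"a" on 5432 should show whether the intermittent failures only appear when 
going through pgpool.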

-- 
Born in Arizona, moved to Babylonia.
-------------- next part --------------
Message:
An external error has occurred
com.powerdox.jcore2.dbproxy.DBPAbortedSQLException: No database connection available for the datasource 'cds' (e0) (s08001) due to The connection attempt failed.
	at com.powerdox.jcore2.dbproxy.DBPPostgreSQLDataSource$DBPPostgreSQLDataSourceInner.verifySQLException(DBPPostgreSQLDataSource.java:112)
	at com.powerdox.jcore2.dbproxy.DBPDataSource$DBPDataSourceInner.createConnection(DBPDataSource.java:384)
	at com.powerdox.jcore2.dbproxy.DBPDataSource.createConnection(DBPDataSource.java:196)
	at com.powerdox.jcore2.dbproxy.DBPConnectionPool.createConnection(DBPConnectionPool.java:163)
	at com.powerdox.jcore2.dbproxy.DBPConnectionPool.getConnection(DBPConnectionPool.java:101)
	at com.powerdox.jcore2.dbproxy.DBPPooledDataSource$DBPPooledDataSourceInner.getConnection(DBPPooledDataSource.java:55)
	at com.powerdox.jcore2.dbproxy.DBPDataSource$DBPDataSourceInner.getConnection(DBPDataSource.java:362)
	at com.powerdox.jcore2.dbproxy.DBPDataSource$DBPDataSourceInner.access$1300(DBPDataSource.java:231)
	at com.powerdox.jcore2.dbproxy.DBPDataSource.getConnection(DBPDataSource.java:188)
	at com.powerdox.jservice2.pos.db.DBManager$DBManagerInner.getConnection(DBManager.java:633)
	at com.powerdox.jservice2.pos.db.DBManager.getConnection(DBManager.java:241)
	at com.powerdox.jservice2.pos.db.DBTransactionProcess$DBTransactionProcessInner.getUpdateConnection(DBTransactionProcess.java:263)
	at com.powerdox.jservice2.pos.db.DBTransactionProcess$DBTransactionProcessInner.getQueryConnection(DBTransactionProcess.java:252)
	at com.powerdox.jservice2.pos.db.DBTransactionProcess.getQueryConnection(DBTransactionProcess.java:73)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler$DBTransactionHandlerInner.buildPrepareStatement(DBTransactionHandler.java:1806)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler$DBTransactionHandlerInner.prepareSearchStatement(DBTransactionHandler.java:1796)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler$DBTransactionHandlerInner.search(DBTransactionHandler.java:1315)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler$DBTransactionHandlerInner.search(DBTransactionHandler.java:1295)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler$DBTransactionHandlerInner.searchMax(DBTransactionHandler.java:1286)
	at com.powerdox.jservice2.pos.db.DBTransactionHandler.searchMax(DBTransactionHandler.java:298)
	at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.powerdox.jservice2.session.JSTransactionProcess$JSTransactionProcessInner.processTransaction(JSTransactionProcess.java:465)
	at com.powerdox.jservice2.pos.db.DBTransactionProcess$DBTransactionProcessInner.processTransaction(DBTransactionProcess.java:294)
	at com.powerdox.jservice2.pos.db.DBTransactionProcess.processTransaction(DBTransactionProcess.java:93)
	at com.powerdox.jservice2.session.JSTransactionHandler$JSTransactionHandlerInner.process(JSTransactionHandler.java:93)
	at com.powerdox.jservice2.session.JSTransactionHandler.process(JSTransactionHandler.java:55)
	at com.powerdox.jservice2.session.JSSession$JSSessionInner.sendTransaction(JSSession.java:348)
	at com.powerdox.jservice2.session.JSSession.sendTransaction(JSSession.java:133)
	at com.powerdox.lockbox.service.cds.DepositService.retrieveBatches(DepositService.java:1592)
	at com.powerdox.lockbox.service.cds.DepositService.processX937Files(DepositService.java:323)
	at com.powerdox.lockbox.service.cds.DepositX9Service.runService(DepositX9Service.java:36)
	at com.powerdox.jservice2.service.AbstractService.runImpl(AbstractService.java:150)
	at com.powerdox.jservice2.service.AbstractService.run(AbstractService.java:130)
	at java.lang.Thread.run(Thread.java:748)
-------------- next part --------------
listen_addresses = '*'
port = 9999
socket_dir = '/var/run/postgresql'
backend_clustering_mode = 'streaming_replication'

pcp_listen_addresses = '*'
pcp_port = 9898
pcp_socket_dir = '/var/run/postgresql'

listen_backlog_multiplier = 2
serialize_accept = on
num_init_children = 350
max_pool = 2
reserved_connections = 3
client_idle_limit = 0
#client_idle_limit = 1200
#client_idle_limit = 60

load_balance_mode = off

backend_hostname0 = 'MYCLUSTERa'
backend_port0 = 5432
backend_weight0 = 2
backend_data_directory0 = '/var/lib/pgsql/15/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_application_name0 = 'server0'

backend_hostname1 = 'MYCLUSTERb'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/15/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'server1'

enable_pool_hba = on
pool_passwd = '/etc/pgpool-II/pool_passwd'

log_destination = 'stderr'
log_line_prefix = '%m: %a pid %p: '
log_connections = on
log_hostname = on
#log_statement = on
log_statement = off
#log_per_node_statement = on
log_per_node_statement = off
log_client_messages = on
#log_min_messages = info
log_min_messages = notice
logging_collector = on
log_directory = '/var/log/pgpool2'
log_filename = 'pgpool-%F_%H.log'
log_truncate_on_rotation = on
log_rotation_age = 1h
log_rotation_size = 0

sr_check_user = 'pool_health_check'
sr_check_database = 'pool_health_check'

health_check_period = 5
health_check_user = 'pool_health_check'
health_check_database = 'pool_health_check'
health_check_max_retries = 10
health_check_retry_delay = 10

use_watchdog = on
wd_priority = 1
delegate_IP = '10.109.165.13'

hostname0 = 'MYCLUSTERa'
wd_port0 = 9000
pgpool_port0 = 9999

hostname1 = 'MYCLUSTERb'
wd_port1 = 9000
pgpool_port1 = 9999

hostname2 = 'MYCLUSTERc'
wd_port2 = 9000
pgpool_port2 = 9999

wd_ipc_socket_dir = '/var/run/postgresql'
if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/26 dev ens192 label ens192:0'
if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/26 dev ens192'
arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I ens192'

wd_escalation_command = '/etc/pgpool-II/escalation.sh'
wd_lifecheck_method = 'heartbeat'

heartbeat_hostname0 = 'MYCLUSTERa'
heartbeat_port0 = 9694
heartbeat_device0 = 'ens192'

heartbeat_hostname1 = 'MYCLUSTERb'
heartbeat_port1 = 9694
heartbeat_device1 = 'ens192'

heartbeat_hostname2 = 'MYCLUSTERc'
heartbeat_port2 = 9694
heartbeat_device2 = 'ens192'

wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30

memory_cache_enabled = off
#memory_cache_enabled = on
memqcache_oiddir = '/var/log/pgpool2/oiddir'
memqcache_total_size = 1024MB
memqcache_max_num_cache = 1048576
-------------- next part --------------
listen_addresses = '*'
max_connections = 400

shared_buffers = 32GB
#temp_buffers = 16MB
work_mem = 300MB
maintenance_work_mem = 12GB
dynamic_shared_memory_type = posix
effective_cache_size = 108GB
max_wal_size = 2GB

fsync = on
synchronous_commit = on
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9

wal_level = hot_standby
wal_log_hints = on
#archive_mode = on
archive_mode = on
archive_command = 'pgbackrest --stanza=localhost archive-push %p'
max_wal_senders = 12
max_wal_size = 12GB

password_encryption = scram-sha-256
default_statistics_target = 5000

log_destination = 'syslog, stderr'
logging_collector = on
#log_directory = '/var/lib/pgsql/pg_log'
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%F_%H.log'
log_truncate_on_rotation = off
log_rotation_age = 1h
log_rotation_size = 0
#client_min_messages = log
log_min_messages = info
log_min_error_statement = info
client_min_messages = notice
#log_min_messages = warning
#log_min_error_statement = warning
log_checkpoints = on
log_duration = off
log_error_verbosity = verbose
log_line_prefix = '%m\t%r\t%u\t%d\t%p\t%i\t%a\t%e\t'
log_lock_waits = on
#log_statement = 'mod'
log_statement = 'ddl'
#log_statement = 'all'
log_temp_files = 1kB
log_timezone = 'US/Eastern'

#track_activity_query_size = 10240
track_activity_query_size = 48kB
#log_statement_stats = off

autovacuum = on
#autovacuum_vacuum_threshold =  250
autovacuum_vacuum_cost_delay = 10ms
#autovacuum_vacuum_cost_limit = 1000
autovacuum_max_workers = 6
#autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.03
autovacuum_analyze_scale_factor = 0.03

datestyle = 'iso, mdy'
timezone = 'US/Eastern'

lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'

default_text_search_config = 'pg_catalog.english'

max_locks_per_transaction = 15360

shared_preload_libraries = 'pgaudit'
pgaudit.role = 'auditor'
pgaudit.log_parameter = on
pgaudit.log_statement_once = on

