View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000769 | Pgpool-II | Bug | public | 2022-09-30 23:14 | 2022-10-11 17:46 |
| Reporter | dcvythoulkas | Assigned To | pengbo | ||
| Priority | normal | Severity | major | Reproducibility | always |
| Status | assigned | Resolution | open | ||
| OS | Debian | OS Version | 11.3 | ||
| Product Version | 4.3.3 | ||||
| Summary | 0000769: Pgpool will not run if_down_cmd command | ||||

**Description**

This is a continuation of issue https://www.pgpool.net/mantisbt/view.php?id=597 . The same configuration is deployed on two pairs of pgpool nodes: one pair runs Debian 11, the other CentOS 8. Both pairs point at the same Debian 11 backends running PostgreSQL 14. All four pgpool nodes have identical pgpool.conf files, apart from file paths. Pgpool was installed via apt/yum: on Debian from the PGDG apt repository, on CentOS from the pgpool yum repository. When pgpool is stopped, the VIP is released on the CentOS pair, but on the Debian pair the if_down_cmd command is never issued. Specifically, the following lines are missing from the Debian pgpool nodes:

```
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.816: watchdog_utility pid 28617: LOG: watchdog: de-escalation started
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.816: watchdog_utility pid 28617: LOCATION: wd_escalation.c:182
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: LOG: successfully released the delegate IP:"10.10.20.110"
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: DETAIL: 'if_down_cmd' returned with success
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: LOCATION: wd_if.c:226
```

**Steps To Reproduce**

Latest Debian with the attached pgpool.conf. No failback/failover/restore etc. scripts are set up.

**Additional Information**

I have also noticed that the CentOS package comes with several scripts, specifically escalation.sh.sample. However, it seems that the CentOS installation does not require it to release the VIP.

**Tags:** networking, vip, watchdog
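As a first diagnostic step (editorial suggestion, not part of the original report), it is worth confirming that the binaries referenced by `if_up_cmd`, `if_down_cmd`, and `arping_cmd` in the attached pgpool.conf actually exist at those absolute paths on each node; Debian and CentOS place `sudo`, `ip`, and `arping` differently, and a hard-coded path that resolves on one distribution but not the other is a common source of asymmetric VIP behavior. A minimal sketch:

```shell
#!/bin/sh
# Check the absolute paths used by if_up_cmd / if_down_cmd / arping_cmd
# in the attached pgpool.conf. Prints OK or MISSING for each path.
for cmd in /bin/sudo /usr/bin/sudo /sbin/ip /sbin/arping; do
    if [ -x "$cmd" ]; then
        echo "OK      $cmd"
    else
        echo "MISSING $cmd"
    fi
done
```

Note, however, that a bad path would normally still produce a logged failure from the watchdog; the report indicates the de-escalation step is never started at all on Debian, so this only rules out one variable.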
|
|
centos8_pgpool.conf (49,882 bytes)
# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
#------------------------------------------------------------------------------
# BACKEND CLUSTERING MODE
# Choose one of: 'streaming_replication', 'native_replication',
# 'logical_replication', 'slony', 'raw' or 'snapshot_isolation'
# (change requires restart)
#------------------------------------------------------------------------------
backend_clustering_mode = 'streaming_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------
# - pgpool Connection Settings -
listen_addresses = '*'
# Host name or IP address to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
port = 5432
# Port number
# (change requires restart)
#socket_dir = '/var/run/postgresql'
# Unix domain socket path
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#reserved_connections = 0
# Number of reserved connections.
# Pgpool-II does not accept connections if over
# num_init_children - reserved_connections.
# - pgpool Communication Manager Connection Settings -
pcp_listen_addresses = '*'
# Host name or IP address for pcp process to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
#pcp_port = 9898
# Port number for pcp
# (change requires restart)
#pcp_socket_dir = '/var/run/postgresql'
# Unix domain socket path for pcp
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#listen_backlog_multiplier = 2
# Set the backlog parameter of listen(2) to
# num_init_children * listen_backlog_multiplier.
# (change requires restart)
#serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
# - Backend Connection Settings -
#backend_hostname0 = 'host1'
# Host name or IP address to connect to for backend 0
#backend_port0 = 5432
# Port number for backend 0
#backend_weight0 = 1
# Weight for backend 0 (only in load balancing mode)
#backend_data_directory0 = '/data'
# Data directory for backend 0
#backend_flag0 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
# or ALWAYS_PRIMARY
#backend_application_name0 = 'server0'
# walsender's application_name, used for "show pool_nodes" command
backend_hostname0 = 'psql0-stage-cn1.psqldb-stage.example.com'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/14/main'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_application_name0 = 'psql0-stage-cn1.psqldb-stage.example.com'
backend_hostname1 = 'psql0-stage-cn2.psqldb-stage.example.com'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/14/main'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'psql0-stage-cn2.psqldb-stage.example.com'
# - Authentication -
#enable_pool_hba = off
# Use pool_hba.conf for client authentication
#pool_passwd = 'pool_passwd'
# File name of pool_passwd for md5 authentication.
# "" disables pool_passwd.
# (change requires restart)
#authentication_timeout = 1min
# Delay in seconds to complete client authentication
# 0 means no timeout.
allow_clear_text_frontend_auth = on
# Allow Pgpool-II to use clear text password authentication
# with clients, when pool_passwd does not
# contain the user password
# - SSL Connections -
ssl = on
# Enable SSL support
# (change requires restart)
ssl_key = '/etc/pgpool-II/server.key'
# SSL private key file
# (change requires restart)
ssl_cert = '/etc/pgpool-II/server.crt'
# SSL public certificate file
# (change requires restart)
#ssl_ca_cert = ''
# Single PEM format file containing
# CA root certificate(s)
# (change requires restart)
#ssl_ca_cert_dir = ''
# Directory containing CA root certificate(s)
# (change requires restart)
#ssl_crl_file = ''
# SSL certificate revocation list file
# (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
# Allowed SSL ciphers
# (change requires restart)
#ssl_prefer_server_ciphers = off
# Use server's SSL cipher preferences,
# rather than the client's
# (change requires restart)
#ssl_ecdh_curve = 'prime256v1'
# Name of the curve to use in ECDH key exchange
#ssl_dh_params_file = ''
# Name of the file containing Diffie-Hellman parameters used
# for so-called ephemeral DH family of SSL cipher.
#ssl_passphrase_command=''
# Sets an external command to be invoked when a passphrase
# for decrypting an SSL file needs to be obtained
# (change requires restart)
#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
# - Concurrent session and pool size -
#num_init_children = 32
# Number of concurrent sessions allowed
# (change requires restart)
max_pool = 6
# Number of connection pool caches per connection
# (change requires restart)
# - Life time -
#child_life_time = 5min
# Pool exits after being idle for this many seconds
#child_max_connections = 0
# Pool exits after receiving that many connections
# 0 means no exit
#connection_life_time = 0
# Connection to backend closes after being idle for this many seconds
# 0 means no close
#client_idle_limit = 0
# Client is disconnected after being idle for that many seconds
# (even inside an explicit transaction!)
# 0 means no disconnection
#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
# - Where to log -
#log_destination = 'stderr'
# Where to log
# Valid values are combinations of stderr,
# and syslog. Default to stderr.
# - What to log -
log_line_prefix = '%m: %a pid %p: ' # printf-style string to output at beginning of each log line.
#log_connections = off
# Log connections
#log_disconnections = off
# Log disconnections
#log_hostname = off
# Hostname will be shown in ps status
# and in logs if connections are logged
#log_statement = off
# Log all statements
#log_per_node_statement = off
# Log all statements
# with node and backend information
#log_client_messages = off
# Log any client messages
#log_standby_delay = 'if_over_threshold'
# Log standby delay
# Valid values are combinations of always,
# if_over_threshold, none
# - Syslog specific -
#syslog_facility = 'LOCAL0'
# Syslog local facility. Default to LOCAL0
#syslog_ident = 'pgpool'
# Syslog program identification string
# Default to 'pgpool'
# - Debug -
log_error_verbosity = verbose # terse, default, or verbose messages
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = info # values in order of decreasing detail:
log_min_messages = info
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
# This is used when logging to stderr:
#logging_collector = off
# Enable capturing of stderr
# into log files.
# (change requires restart)
# -- Only used if logging_collector is on ---
#log_directory = '/tmp/pgpool_logs'
# directory where log files are written,
# can be absolute
#log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
# log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600
# creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off
# If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d
# Automatic rotation of logfiles will
# happen after that much time (minutes).
# 0 disables time based rotation.
#log_rotation_size = 10MB
# Automatic rotation of logfiles will
# happen after that much (KB) log output.
# 0 disables size based rotation.
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
#pid_file_name = '/var/run/postgresql/pgpool.pid'
# PID file name
# Can be specified as relative to the
# location of pgpool.conf file or
# as an absolute path
# (change requires restart)
#logdir = '/var/log/postgresql'
# Directory of pgPool status file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------
#connection_cache = on
# Activate connection pools
# (change requires restart)
# Semicolon separated list of queries
# to be issued at the end of a session
# The default is for 8.3 and later
#reset_query_list = 'ABORT; DISCARD ALL'
# The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
#replicate_select = off
# Replicate SELECT statements
# when in replication mode
# replicate_select is higher priority than
# load_balance_mode.
#insert_lock = off
# Automatically locks a dummy row or a table
# with INSERT statements to keep SERIAL data
# consistency
# Without SERIAL, no lock will be issued
#lobj_lock_table = ''
# When rewriting lo_creat command in
# replication mode, specify table name to
# lock
# - Degenerate handling -
#replication_stop_on_mismatch = off
# On disagreement with the packet kind
# sent from backend, degenerate the node
# which is most likely "minority"
# If off, just forces this session to exit
#failover_if_affected_tuples_mismatch = off
# On disagreement with the number of affected
# tuples in UPDATE/DELETE queries, then
# degenerate the node which is most likely
# "minority".
# If off, just abort the transaction to
# keep the consistency
#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
#load_balance_mode = on
# Activate load balancing mode
# (change requires restart)
#ignore_leading_white_space = on
# Ignore leading white spaces of each query
#read_only_function_list = ''
# Comma separated list of function names
# that don't write to database
# Regexp are accepted
#write_function_list = ''
# Comma separated list of function names
# that write to database
# Regexp are accepted
# If both read_only_function_list and write_function_list
# are empty, the function's volatile property is checked.
# If it's volatile, the function is regarded as a
# writing function.
#primary_routing_query_pattern_list = ''
# Semicolon separated list of query patterns
# that should be sent to primary node
# Regexp are accepted
# valid for streaming replication mode only.
#database_redirect_preference_list = ''
# comma separated list of pairs of database and node id.
# example: 'postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
# valid for streaming replication mode only.
#app_name_redirect_preference_list = ''
# comma separated list of pairs of app name and node id.
# example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
# valid for streaming replication mode only.
#allow_sql_comments = off
# if on, ignore SQL comments when judging if load balance or
# query cache is possible.
# If off, SQL comments effectively prevent the judgment
# (pre 3.4 behavior).
#disable_load_balance_on_write = 'transaction'
# Load balance behavior when write query is issued
# in an explicit transaction.
#
# Valid values:
#
# 'transaction' (default):
# if a write query is issued, subsequent
# read queries will not be load balanced
# until the transaction ends.
#
# 'trans_transaction':
# if a write query is issued, subsequent
# read queries in an explicit transaction
# will not be load balanced until the session ends.
#
# 'dml_adaptive':
# Queries on the tables that have already been
# modified within the current explicit transaction will
# not be load balanced until the end of the transaction.
#
# 'always':
# if a write query is issued, read queries will
# not be load balanced until the session ends.
#
# Note that any query not in an explicit transaction
# is not affected by the parameter except 'always'.
#dml_adaptive_object_relationship_list= ''
# comma separated list of object pairs
# [object]:[dependent-object], to disable load balancing
# of dependent objects within the explicit transaction
# after WRITE statement is issued on (depending-on) object.
#
# example: 'tb_t1:tb_t2,insert_tb_f_func():tb_f,tb_v:my_view'
# Note: function name in this list must also be present in
# the write_function_list
# only valid for disable_load_balance_on_write = 'dml_adaptive'.
#statement_level_load_balance = off
# Enables statement level load balancing
#------------------------------------------------------------------------------
# STREAMING REPLICATION MODE
#------------------------------------------------------------------------------
# - Streaming -
#sr_check_period = 10
# Streaming replication check period
# Disabled (0) by default
sr_check_user = 'pgpool'
# Streaming replication check user
# This is necessary even if you disable streaming
# replication delay check by sr_check_period = 0
#sr_check_password = ''
# Password for streaming replication check user
# Leaving it empty makes Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#sr_check_database = 'postgres'
# Database name for streaming replication check
#delay_threshold = 0
# Threshold before not dispatching query to standby node
# Unit is in bytes
# Disabled (0) by default
#prefer_lower_delay_standby = off
# If delay_threshold is set larger than 0, Pgpool-II sends queries
# to the primary when the selected node is delayed over delay_threshold.
# If this is set to on, Pgpool-II instead sends queries to another
# standby with lower delay.
# - Special commands -
#follow_primary_command = ''
# Executes this command after main node failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------
health_check_period = 5
# Health check period
# Disabled (0) by default
#health_check_timeout = 20
# Health check timeout
# 0 means no timeout
health_check_user = 'pgpool'
# Health check user
#health_check_password = ''
# Password for health check user
# Leaving it empty makes Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#health_check_database = ''
# Database name for health check. If '', tries 'postgres' first.
health_check_max_retries = 3
# Maximum number of times to retry a failed health check before giving up.
#health_check_retry_delay = 1
# Amount of time to wait (in seconds) between retries.
#connect_timeout = 10000
# Timeout value in milliseconds before giving up connecting to a backend.
# Default is 10000 ms (10 seconds). Users on a flaky network may want to
# increase the value. 0 means no timeout.
# Note that this value is not only used for health checks,
# but also for ordinary connections to backends.
#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'nobody'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000
#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
#failover_command = ''
# Executes this command at failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
#failback_command = ''
# Executes this command at failback.
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
failover_on_backend_error = off
# Initiates failover when reading/writing to the
# backend communication socket fails
# If set to off, pgpool will report an
# error and disconnect the session.
#failover_on_backend_shutdown = off
# Initiates failover when backend is shutdown,
# or backend process is killed.
# If set to off, pgpool will report an
# error and disconnect the session.
#detach_false_primary = off
# Detach false primary if on. Only
# valid in streaming replication
# mode and with PostgreSQL 9.6 or
# after.
#search_primary_node_timeout = 5min
# Timeout in seconds to search for the
# primary node when a failover occurs.
# 0 means no timeout, keep searching
# for a primary node forever.
#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
#recovery_user = 'nobody'
# Online recovery user
#recovery_password = ''
# Online recovery password
# Leaving it empty makes Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#recovery_1st_stage_command = ''
# Executes a command in first stage
#recovery_2nd_stage_command = ''
# Executes a command in second stage
#recovery_timeout = 90
# Timeout in seconds to wait for the
# recovering node's postmaster to start up
# 0 means no wait
#client_idle_limit_in_recovery = 0
# Client is disconnected after being idle
# for that many seconds in the second stage
# of online recovery
# 0 means no disconnection
# -1 means immediate disconnection
#auto_failback = off
# Detached backend nodes reattach automatically
# if replication_state is 'streaming'.
#auto_failback_interval = 1min
# Min interval of executing auto_failback in
# seconds.
#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
# - Enabling -
use_watchdog = on
# Activates watchdog
# (change requires restart)
# - Connection to upstream servers -
trusted_servers = '10.10.20.1'
# trusted server list which are used
# to confirm network connection
# (hostA,hostB,hostC,...)
# (change requires restart)
ping_path = '/bin'
# ping command path
# (change requires restart)
# - Watchdog communication Settings -
#hostname0 = ''
# Host name or IP address of pgpool node
# for watchdog connection
# (change requires restart)
#wd_port0 = 9000
# Port number for watchdog service
# (change requires restart)
#pgpool_port0 = 9999
# Port number for pgpool
# (change requires restart)
#
hostname0 = 'cproxy-psql0-stage-cn1.psqldb-stage.example.com'
wd_port0 = 9000
pgpool_port0 = 9999
hostname1 = 'cproxy-psql0-stage-cn2.psqldb-stage.example.com'
wd_port1 = 9000
pgpool_port1 = 9999
#wd_priority = 1
# priority of this watchdog in leader election
# (change requires restart)
#wd_authkey = ''
# Authentication key for watchdog communication
# (change requires restart)
#wd_ipc_socket_dir = '/tmp'
# Unix domain socket path for watchdog IPC socket
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
# - Virtual IP control Setting -
delegate_IP = '10.10.20.110'
# delegate IP address
# If this is empty, the virtual IP is never brought up.
# (change requires restart)
if_cmd_path = '/bin'
# path to the directory where if_up/down_cmd exists
# If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
if_up_cmd = '/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth1 label eth1:0'
# startup delegate IP command
# (change requires restart)
if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth1'
# shutdown delegate IP command
# (change requires restart)
arping_path = '/sbin'
# arping command path
# If arping_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
arping_cmd = '/bin/sudo /sbin/arping -U $_IP_$ -w 1 -I eth1'
# arping command
# (change requires restart)
# - Behavior on escalation Setting -
#clear_memqcache_on_escalation = on
# Clear all the query cache on shared memory
# when standby pgpool escalate to active pgpool
# (= virtual IP holder).
# This should be off if clients connect to pgpool
# without using the virtual IP.
# (change requires restart)
#wd_escalation_command = ''
# Executes this command at escalation on new active pgpool.
# (change requires restart)
#wd_de_escalation_command = ''
# Executes this command when leader pgpool resigns from being leader.
# (change requires restart)
# - Watchdog consensus settings for failover -
#failover_when_quorum_exists = on
# Only perform backend node failover
# when the watchdog cluster holds the quorum
# (change requires restart)
#failover_require_consensus = on
# Perform failover when majority of Pgpool-II nodes
# agrees on the backend node status change
# (change requires restart)
#allow_multiple_failover_requests_from_node = off
# A Pgpool-II node can cast multiple votes
# for building the consensus on failover
# (change requires restart)
enable_consensus_with_half_votes = on
# apply majority rule for consensus and quorum computation
# at 50% of votes in a cluster with even number of nodes.
# when enabled the existence of quorum and consensus
# on failover is resolved after receiving half of the
# total votes in the cluster, otherwise both these
# decisions require at least one more vote than
# half of the total votes.
# (change requires restart)
# - Watchdog cluster membership settings for quorum computation -
#wd_remove_shutdown_nodes = off
# when enabled cluster membership of properly shutdown
# watchdog nodes gets revoked. After that the node does
# not count towards the quorum and consensus computations
#wd_lost_node_removal_timeout = 0s
# Timeout after which the cluster membership of LOST watchdog
# nodes gets revoked. After that the node does not
# count towards the quorum and consensus computations
# setting timeout to 0 will never revoke the membership
# of LOST nodes
#wd_no_show_node_removal_timeout = 0s
# Time to wait for Watchdog node to connect to the cluster.
# After that time the cluster membership of NO-SHOW node gets
# revoked and it does not count towards the quorum and
# consensus computations
# setting timeout to 0 will not revoke the membership
# of NO-SHOW nodes
# - Lifecheck Setting -
# -- common --
#wd_monitoring_interfaces_list = ''
# Comma separated list of interfaces names to monitor.
# if any interface from the list is active the watchdog will
# consider the network is fine
# 'any' to enable monitoring on all interfaces except loopback
# '' to disable monitoring
# (change requires restart)
#wd_lifecheck_method = 'heartbeat'
# Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
# (change requires restart)
#wd_interval = 10
# lifecheck interval (sec) > 0
# (change requires restart)
# -- heartbeat mode --
#heartbeat_hostname0 = ''
# Host name or IP address used
# for sending heartbeat signal.
# (change requires restart)
#heartbeat_port0 = 9694
# Port number used for receiving/sending heartbeat signal
# Usually this is the same as heartbeat_portX.
# (change requires restart)
#heartbeat_device0 = ''
# Name of NIC device (such as 'eth0')
# used for sending/receiving heartbeat
# signal to/from destination 0.
# This works only when this is not empty
# and pgpool has root privilege.
# (change requires restart)
heartbeat_hostname0 = 'cproxy-psql0-stage-cn1.psqldb-stage.example.com'
heartbeat_port0 = 9694
heartbeat_device0 = ''
heartbeat_hostname1 = 'cproxy-psql0-stage-cn2.psqldb-stage.example.com'
heartbeat_port1 = 9694
heartbeat_device1 = ''
#wd_heartbeat_keepalive = 2
# Interval time of sending heartbeat signal (sec)
# (change requires restart)
#wd_heartbeat_deadtime = 30
# Deadtime interval for heartbeat signal (sec)
# (change requires restart)
# -- query mode --
#wd_life_point = 3
# lifecheck retry times
# (change requires restart)
#wd_lifecheck_query = 'SELECT 1'
# lifecheck query to pgpool from watchdog
# (change requires restart)
#wd_lifecheck_dbname = 'template1'
# Database name connected for lifecheck
# (change requires restart)
#wd_lifecheck_user = 'nobody'
# watchdog user monitoring pgpools in lifecheck
# (change requires restart)
#wd_lifecheck_password = ''
# Password for watchdog user in lifecheck
# Leaving it empty makes Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
# (change requires restart)
#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
#relcache_expire = 0
# Life time of relation cache in seconds.
# 0 means no cache expiration(the default).
# The relation cache is used to cache the
# query result against PostgreSQL system
# catalog to obtain various information
# including table structures or if it's a
# temporary table or not. The cache is
# maintained in a pgpool child local memory
# and being kept as long as it survives.
# If someone modifies the table by using
# ALTER TABLE or the like, the relcache is
# not consistent anymore.
# For this purpose, relcache_expire
# controls the life time of the cache.
#relcache_size = 256
# Number of relation cache
# entry. If you see frequently:
# "pool_search_relcache: cache replacement happened"
# in the pgpool log, you might want to increase this number.
#check_temp_table = catalog
# Temporary table check method. catalog, trace or none.
# Default is catalog.
#check_unlogged_table = on
# If on, enable unlogged table check in SELECT statements.
# This initiates queries against system catalog of primary/main
# thus increases load of primary.
# If you are absolutely sure that your system never uses unlogged tables
# and you want to save access to primary/main, you could turn this off.
# Default is on.
#enable_shared_relcache = on
# If on, relation cache stored in memory cache,
# the cache is shared among child processes.
# Default is on.
# (change requires restart)
#relcache_query_target = primary
# Target node to send relcache queries. Default is primary node.
# If load_balance_node is specified, queries will be sent to load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
#memory_cache_enabled = off
# If on, use the memory cache functionality, off by default
# (change requires restart)
#memqcache_method = 'shmem'
# Cache storage method. either 'shmem'(shared memory) or
# 'memcached'. 'shmem' by default
# (change requires restart)
#memqcache_memcached_host = 'localhost'
# Memcached host name or IP address. Mandatory if
# memqcache_method = 'memcached'.
# Defaults to localhost.
# (change requires restart)
#memqcache_memcached_port = 11211
# Memcached port number. Mandatory if memqcache_method = 'memcached'.
# Defaults to 11211.
# (change requires restart)
#memqcache_total_size = 64MB
# Total memory size in bytes for storing memory cache.
# Mandatory if memqcache_method = 'shmem'.
# Defaults to 64MB.
# (change requires restart)
#memqcache_max_num_cache = 1000000
# Total number of cache entries. Mandatory
# if memqcache_method = 'shmem'.
# Each cache entry consumes 48 bytes on shared memory.
# Defaults to 1,000,000(45.8MB).
# (change requires restart)
#memqcache_expire = 0
# Memory cache entry life time specified in seconds.
# 0 means infinite life time. 0 by default.
# (change requires restart)
#memqcache_auto_cache_invalidation = on
# If on, invalidation of query cache is triggered by corresponding
# DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
# by memqcache_expire. on by default.
# (change requires restart)
#memqcache_maxcache = 400kB
# Maximum SELECT result size in bytes.
# Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
# (change requires restart)
#memqcache_cache_block_size = 1MB
# Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
# Defaults to 1MB.
# (change requires restart)
#memqcache_oiddir = '/var/log/pgpool/oiddir'
# Temporary work directory to record table oids
# (change requires restart)
#cache_safe_memqcache_table_list = ''
# Comma separated list of table names whose
# SELECT results are safe to cache
# Regexps are accepted
#cache_unsafe_memqcache_table_list = ''
# Comma separated list of table names whose
# SELECT results must not be cached
# Regexps are accepted
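Taken together, the parameters above can be combined into a minimal shared-memory query cache setup. The sketch below is illustrative only and is not taken from the attached configuration:

```
memory_cache_enabled = on
memqcache_method = 'shmem'          # cache lives in shared memory
memqcache_total_size = 64MB         # total cache size
memqcache_max_num_cache = 1000000   # 48 bytes per entry, ~45.8MB of metadata
memqcache_maxcache = 400kB          # must stay below the block size below
memqcache_cache_block_size = 1MB
memqcache_oiddir = '/var/log/pgpool/oiddir'
```

All of these take effect only after a restart.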
centos8_pgpool.log (20,464 bytes)
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOG: Backend status file /tmp/pgpool_status discarded
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOCATION: pgpool_main.c:3698
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOG: health_check_stats_shared_memory_size: requested size: 12288
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOCATION: health_check.c:541
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOG: memory cache initialized
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: DETAIL: memcache blocks :64
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOCATION: pool_memqcache.c:2061
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOG: allocating (138292600) bytes of shared memory segment
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOCATION: pgpool_main.c:3546
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOG: allocating shared memory segment of size: 138292600
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.611: main pid 28536: LOCATION: pool_shmem.c:61
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOG: health_check_stats_shared_memory_size: requested size: 12288
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOCATION: health_check.c:541
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOG: health_check_stats_shared_memory_size: requested size: 12288
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOCATION: health_check.c:541
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOG: memory cache initialized
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: DETAIL: memcache blocks :64
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.727: main pid 28536: LOCATION: pool_memqcache.c:2061
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.732: main pid 28536: LOG: pool_discard_oid_maps: discarded memqcache oid maps
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.732: main pid 28536: LOCATION: pgpool_main.c:3627
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.743: main pid 28536: LOG: waiting for watchdog to initialize
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.743: main pid 28536: LOCATION: pgpool_main.c:331
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOG: setting the local watchdog node name to "cproxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux cproxy-psql0-stage-cn1.psqldb-stage.example.com"
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOCATION: watchdog.c:771
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOG: watchdog cluster is configured with 1 remote nodes
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOCATION: watchdog.c:781
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOG: watchdog remote node:0 on cproxy-psql0-stage-cn2.psqldb-stage.example.com:9000
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOCATION: watchdog.c:798
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOG: interface monitoring is disabled in watchdog
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOCATION: watchdog.c:667
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: INFO: IPC socket path: "/tmp/.s.PGPOOLWD_CMD.9000"
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.744: watchdog pid 28538: LOCATION: watchdog.c:1350
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.745: watchdog pid 28538: LOG: watchdog node state changed from [DEAD] to [LOADING]
Sep 30 13:21:27 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:27.745: watchdog pid 28538: LOCATION: watchdog.c:7222
Sep 30 13:21:32 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:32.752: watchdog pid 28538: LOG: watchdog node state changed from [LOADING] to [JOINING]
Sep 30 13:21:32 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:32.752: watchdog pid 28538: LOCATION: watchdog.c:7222
Sep 30 13:21:36 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:36.757: watchdog pid 28538: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
Sep 30 13:21:36 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:36.757: watchdog pid 28538: LOCATION: watchdog.c:7222
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOG: I am the only alive node in the watchdog cluster
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: HINT: skipping stand for coordinator state
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOCATION: watchdog.c:5833
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOG: watchdog node state changed from [INITIALIZING] to [LEADER]
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOCATION: watchdog.c:7222
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOG: I am announcing my self as leader/coordinator watchdog node
Sep 30 13:21:37 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:37.758: watchdog pid 28538: LOCATION: watchdog.c:6025
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOG: I am the cluster leader node
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: DETAIL: our declare coordinator message is accepted by all nodes
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOCATION: watchdog.c:6062
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOG: setting the local node "cproxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux cproxy-psql0-stage-cn1.psqldb-stage.example.com" as watchdog cluster leader
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOCATION: watchdog.c:7961
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOG: signal_user1_to_parent_with_reason(1)
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOCATION: pgpool_main.c:612
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOG: I am the cluster leader node. Starting escalation process
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.763: watchdog pid 28538: LOCATION: watchdog.c:6081
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.764: watchdog pid 28538: LOG: escalation process started with PID:28539
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.764: watchdog pid 28538: LOCATION: watchdog.c:6737
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.764: main pid 28536: LOG: watchdog process is initialized
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: DETAIL: watchdog messaging data version: 1.2
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: LOCATION: pgpool_main.c:346
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: LOG: Pgpool-II parent process received SIGUSR1
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: LOCATION: pgpool_main.c:1293
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: LOG: Pgpool-II parent process received watchdog state change signal from watchdog
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: main pid 28536: LOCATION: pgpool_main.c:1337
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: watchdog pid 28538: LOG: new IPC connection received
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.765: watchdog pid 28538: LOCATION: watchdog.c:3439
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.766: main pid 28536: LOG: Setting up socket for 0.0.0.0:5432
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.766: main pid 28536: LOCATION: pgpool_main.c:813
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.766: main pid 28536: LOG: Setting up socket for :::5432
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.766: main pid 28536: LOCATION: pgpool_main.c:813
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.772: main pid 28536: LOG: find_primary_node_repeatedly: waiting for finding a primary node
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.772: main pid 28536: LOCATION: pgpool_main.c:3418
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.781: watchdog_utility pid 28539: LOG: watchdog: escalation started
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.782: watchdog_utility pid 28539: LOCATION: wd_escalation.c:94
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.795: watchdog pid 28538: LOG: new IPC connection received
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.795: watchdog pid 28538: LOCATION: watchdog.c:3439
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOG: 2 watchdog nodes are configured for lifecheck
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOCATION: wd_lifecheck.c:495
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOG: watchdog nodes ID:0 Name:"cproxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux cproxy-psql0-stage-cn1.psqldb-stage.example.com"
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: DETAIL: Host:"cproxy-psql0-stage-cn1.psqldb-stage.example.com" WD Port:9000 pgpool-II port:9999
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOCATION: wd_lifecheck.c:503
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOG: watchdog nodes ID:1 Name:"Not_Set"
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: DETAIL: Host:"cproxy-psql0-stage-cn2.psqldb-stage.example.com" WD Port:9000 pgpool-II port:9999
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.796: life_check pid 28540: LOCATION: wd_lifecheck.c:503
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.797: life_check pid 28540: LOG: watchdog lifecheck trusted server "10.10.20.1" added for the availability check
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.797: life_check pid 28540: LOCATION: wd_lifecheck.c:1100
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.890: main pid 28536: LOG: find_primary_node: standby node is 0
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.891: main pid 28536: LOCATION: pgpool_main.c:3344
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.891: main pid 28536: LOG: find_primary_node: primary node is 1
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.891: main pid 28536: LOCATION: pgpool_main.c:3338
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.903: health_check pid 28584: LOG: process started
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.903: health_check pid 28584: LOCATION: pgpool_main.c:729
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.904: pcp_main pid 28581: LOG: PCP process: 28581 started
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.904: pcp_main pid 28581: LOCATION: pcp_child.c:162
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.904: health_check pid 28583: LOG: process started
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.904: health_check pid 28583: LOCATION: pgpool_main.c:729
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.905: sr_check_worker pid 28582: LOG: process started
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.905: sr_check_worker pid 28582: LOCATION: pgpool_main.c:729
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.905: watchdog pid 28538: LOG: new IPC connection received
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.905: watchdog pid 28538: LOCATION: watchdog.c:3439
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOG: pgpool-II successfully started. version 4.3.3 (tamahomeboshi)
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOCATION: pgpool_main.c:490
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOG: node status[0]: 2
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOCATION: pgpool_main.c:501
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOG: node status[1]: 1
Sep 30 13:21:41 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:41.985: main pid 28536: LOCATION: pgpool_main.c:501
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.808: heart_beat_sender pid 28575: LOG: set SO_REUSEPORT option to the socket
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.808: heart_beat_sender pid 28575: LOCATION: wd_heartbeat.c:691
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.808: heart_beat_sender pid 28575: LOG: creating socket for sending heartbeat
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.808: heart_beat_sender pid 28575: DETAIL: set SO_REUSEPORT
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.808: heart_beat_sender pid 28575: LOCATION: wd_heartbeat.c:148
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.809: heart_beat_receiver pid 28574: LOG: set SO_REUSEPORT option to the socket
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.809: heart_beat_receiver pid 28574: LOCATION: wd_heartbeat.c:691
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.809: heart_beat_receiver pid 28574: LOG: creating watchdog heartbeat receive socket.
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.809: heart_beat_receiver pid 28574: DETAIL: set SO_REUSEPORT
Sep 30 13:21:42 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:42.809: heart_beat_receiver pid 28574: LOCATION: wd_heartbeat.c:231
Sep 30 13:21:46 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:46.135: watchdog_utility pid 28539: LOG: successfully acquired the delegate IP:"10.10.20.110"
Sep 30 13:21:46 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:46.135: watchdog_utility pid 28539: DETAIL: 'if_up_cmd' returned with success
Sep 30 13:21:46 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:46.135: watchdog_utility pid 28539: LOCATION: wd_if.c:180
Sep 30 13:21:46 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:46.137: watchdog pid 28538: LOG: watchdog escalation process with pid: 28539 exit with SUCCESS.
Sep 30 13:21:46 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:46.137: watchdog pid 28538: LOCATION: watchdog.c:3268
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.508: pcp_main pid 28581: LOG: forked new pcp worker, pid=28605 socket=8
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.508: pcp_main pid 28581: LOCATION: pcp_child.c:299
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.510: watchdog pid 28538: LOG: new IPC connection received
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.510: watchdog pid 28538: LOCATION: watchdog.c:3439
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.519: pcp_main pid 28581: LOG: PCP process with pid: 28605 exit with SUCCESS.
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.519: pcp_main pid 28581: LOCATION: pcp_child.c:355
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.519: pcp_main pid 28581: LOG: PCP process with pid: 28605 exits with status 0
Sep 30 13:21:50 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:50.519: pcp_main pid 28581: LOCATION: pcp_child.c:369
Sep 30 13:21:52 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:52.044: watchdog pid 28538: LOG: new IPC connection received
Sep 30 13:21:52 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:21:52.044: watchdog pid 28538: LOCATION: watchdog.c:3439
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28616]: 2022-09-30 13:22:00.780: main pid 28616: LOG: stop request sent to pgpool (pid: 28536). waiting for termination...
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28616]: 2022-09-30 13:22:00.780: main pid 28616: LOCATION: main.c:546
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.782: main pid 28536: LOG: shutting down by signal 2
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.782: main pid 28536: LOCATION: pgpool_main.c:1185
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.782: main pid 28536: LOG: terminating all child processes
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.782: main pid 28536: LOCATION: pgpool_main.c:1195
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.809: watchdog pid 28538: LOG: Watchdog is shutting down
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.809: watchdog pid 28538: LOCATION: watchdog.c:3293
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.816: watchdog_utility pid 28617: LOG: watchdog: de-escalation started
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.816: watchdog_utility pid 28617: LOCATION: wd_escalation.c:182
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: LOG: successfully released the delegate IP:"10.10.20.110"
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: DETAIL: 'if_down_cmd' returned with success
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.962: watchdog_utility pid 28617: LOCATION: wd_if.c:226
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.965: main pid 28536: LOG: Pgpool-II system is shutdown
Sep 30 13:22:00 cproxy-psql0-stage-cn1 pgpool[28536]: 2022-09-30 13:22:00.965: main pid 28536: LOCATION: pgpool_main.c:1223
Sep 30 13:22:01 cproxy-psql0-stage-cn1 pgpool[28616]: .done.
debian_pgpool.conf (49,903 bytes)
# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
#------------------------------------------------------------------------------
# BACKEND CLUSTERING MODE
# Choose one of: 'streaming_replication', 'native_replication',
# 'logical_replication', 'slony', 'raw' or 'snapshot_isolation'
# (change requires restart)
#------------------------------------------------------------------------------
backend_clustering_mode = 'streaming_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------
# - pgpool Connection Settings -
listen_addresses = '*'
# Host name or IP address to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
port = 5432
# Port number
# (change requires restart)
#socket_dir = '/var/run/postgresql'
# Unix domain socket path
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#reserved_connections = 0
# Number of reserved connections.
# Pgpool-II does not accept connections if over
# num_init_children - reserved_connections.
# - pgpool Communication Manager Connection Settings -
pcp_listen_addresses = '*'
# Host name or IP address for pcp process to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
#pcp_port = 9898
# Port number for pcp
# (change requires restart)
#pcp_socket_dir = '/var/run/postgresql'
# Unix domain socket path for pcp
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#listen_backlog_multiplier = 2
# Set the backlog parameter of listen(2) to
# num_init_children * listen_backlog_multiplier.
# (change requires restart)
#serialize_accept = off
# Whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
# - Backend Connection Settings -
#backend_hostname0 = 'host1'
# Host name or IP address to connect to for backend 0
#backend_port0 = 5432
# Port number for backend 0
#backend_weight0 = 1
# Weight for backend 0 (only in load balancing mode)
#backend_data_directory0 = '/data'
# Data directory for backend 0
#backend_flag0 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
# or ALWAYS_PRIMARY
#backend_application_name0 = 'server0'
# walsender's application_name, used for "show pool_nodes" command
backend_hostname0 = 'psql0-stage-cn1.psqldb-stage.example.com'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/14/main'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_application_name0 = 'psql0-stage-cn1.psqldb-stage.example.com'
backend_hostname1 = 'psql0-stage-cn2.psqldb-stage.example.com'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/14/main'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'psql0-stage-cn2.psqldb-stage.example.com'
# - Authentication -
#enable_pool_hba = off
# Use pool_hba.conf for client authentication
#pool_passwd = 'pool_passwd'
# File name of pool_passwd for md5 authentication.
# "" disables pool_passwd.
# (change requires restart)
#authentication_timeout = 1min
# Delay in seconds to complete client authentication
# 0 means no timeout.
allow_clear_text_frontend_auth = on
# Allow Pgpool-II to use clear text password authentication
# with clients, when pool_passwd does not
# contain the user password
# - SSL Connections -
ssl = on
# Enable SSL support
# (change requires restart)
ssl_key = '/etc/pgpool2/server.key'
# SSL private key file
# (change requires restart)
ssl_cert = '/etc/pgpool2/server.crt'
# SSL public certificate file
# (change requires restart)
#ssl_ca_cert = ''
# Single PEM format file containing
# CA root certificate(s)
# (change requires restart)
#ssl_ca_cert_dir = ''
# Directory containing CA root certificate(s)
# (change requires restart)
#ssl_crl_file = ''
# SSL certificate revocation list file
# (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
# Allowed SSL ciphers
# (change requires restart)
#ssl_prefer_server_ciphers = off
# Use server's SSL cipher preferences,
# rather than the client's
# (change requires restart)
#ssl_ecdh_curve = 'prime256v1'
# Name of the curve to use in ECDH key exchange
#ssl_dh_params_file = ''
# Name of the file containing Diffie-Hellman parameters used
# for so-called ephemeral DH family of SSL cipher.
#ssl_passphrase_command=''
# Sets an external command to be invoked when a passphrase
# for decrypting an SSL file needs to be obtained
# (change requires restart)
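As configured above, ssl = on with only ssl_key and ssl_cert lets clients connect over TLS without the server verifying client certificates. A stricter setup that also validates client certificates against a CA would look roughly like the sketch below; the root.crt path is hypothetical, not from the attached config:

```
ssl = on
ssl_key = '/etc/pgpool2/server.key'
ssl_cert = '/etc/pgpool2/server.crt'
ssl_ca_cert = '/etc/pgpool2/root.crt'   # hypothetical CA bundle path
```

All four parameters require a restart to take effect.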
#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
# - Concurrent session and pool size -
#num_init_children = 32
# Number of concurrent sessions allowed
# (change requires restart)
max_pool = 6
# Number of connection pool caches per connection
# (change requires restart)
# - Life time -
#child_life_time = 5min
# Pool exits after being idle for this many seconds
#child_max_connections = 0
# Pool exits after receiving that many connections
# 0 means no exit
#connection_life_time = 0
# Connection to backend closes after being idle for this many seconds
# 0 means no close
#client_idle_limit = 0
# Client is disconnected after being idle for that many seconds
# (even inside an explicit transaction!)
# 0 means no disconnection
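The pool sizing above multiplies out: each of the num_init_children child processes can keep up to max_pool cached backend connections. The sketch below, using the default num_init_children and the max_pool set in this file, shows the worst case each PostgreSQL backend must be able to absorb:

```
num_init_children = 32   # concurrent client sessions (default, commented above)
max_pool = 6             # cached backend connection slots per child
# Worst case per backend: 32 * 6 = 192 connections,
# so each backend's max_connections should be at least 192.
```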
#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
# - Where to log -
#log_destination = 'stderr'
# Where to log
# Valid values are combinations of stderr,
# and syslog. Defaults to stderr.
# - What to log -
log_line_prefix = '%m: %a pid %p: ' # printf-style string to output at beginning of each log line.
#log_connections = off
# Log connections
#log_disconnections = off
# Log disconnections
#log_hostname = off
# Hostname will be shown in ps status
# and in logs if connections are logged
#log_statement = off
# Log all statements
#log_per_node_statement = off
# Log all statements
# with node and backend information
#log_client_messages = off
# Log any client messages
#log_standby_delay = 'if_over_threshold'
# Log standby delay
# Valid values are combinations of always,
# if_over_threshold, none
# - Syslog specific -
#syslog_facility = 'LOCAL0'
# Syslog local facility. Defaults to LOCAL0
#syslog_ident = 'pgpool'
# Syslog program identification string
# Defaults to 'pgpool'
# - Debug -
log_error_verbosity = verbose # terse, default, or verbose messages
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = info # values in order of decreasing detail:
log_min_messages = info
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
# This is used when logging to stderr:
#logging_collector = off
# Enable capturing of stderr
# into log files.
# (change requires restart)
# -- Only used if logging_collector is on ---
#log_directory = '/tmp/pgpool_logs'
# directory where log files are written,
# can be absolute
#log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
# log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600
# creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off
# If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d
# Automatic rotation of logfiles will
# happen after that much time (minutes).
# 0 disables time based rotation.
#log_rotation_size = 10MB
# Automatic rotation of logfiles will
# happen after that much log output (KB).
# 0 disables size based rotation.
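Putting the collector settings together, capturing stderr into daily, size-capped log files would look roughly like this sketch (the log directory is a hypothetical path, not taken from the attached config):

```
logging_collector = on                       # requires restart
log_directory = '/var/log/pgpool'            # hypothetical directory
log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'  # strftime() escapes allowed
log_file_mode = 0600
log_rotation_age = 1d                        # rotate daily
log_rotation_size = 10MB                     # or after 10MB of output
```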
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
#pid_file_name = '/var/run/postgresql/pgpool.pid'
# PID file name
# Can be specified as relative to the
# location of pgpool.conf file or
# as an absolute path
# (change requires restart)
#logdir = '/var/log/postgresql'
# Directory of pgPool status file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------
#connection_cache = on
# Activate connection pools
# (change requires restart)
# Semicolon separated list of queries
# to be issued at the end of a session
# The default is for 8.3 and later
#reset_query_list = 'ABORT; DISCARD ALL'
# The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
#replicate_select = off
# Replicate SELECT statements
# when in replication mode
# replicate_select has higher priority than
# load_balance_mode.
#insert_lock = off
# Automatically locks a dummy row or a table
# with INSERT statements to keep SERIAL data
# consistency
# Without SERIAL, no lock will be issued
#lobj_lock_table = ''
# When rewriting lo_creat command in
# replication mode, specify table name to
# lock
# - Degenerate handling -
#replication_stop_on_mismatch = off
# On disagreement with the packet kind
# sent from backend, degenerate the node
# which is most likely "minority"
# If off, the session is simply forced to exit
#failover_if_affected_tuples_mismatch = off
# On disagreement with the number of affected
# tuples in UPDATE/DELETE queries, then
# degenerate the node which is most likely
# "minority".
# If off, just abort the transaction to
# keep the consistency
#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
#load_balance_mode = on
# Activate load balancing mode
# (change requires restart)
#ignore_leading_white_space = on
# Ignore leading white spaces of each query
#read_only_function_list = ''
# Comma separated list of function names
# that don't write to database
# Regexps are accepted
#write_function_list = ''
# Comma separated list of function names
# that write to database
# Regexps are accepted
# If both read_only_function_list and write_function_list
# are empty, the function's volatility is checked.
# If it is volatile, the function is regarded as a
# writing function.
#primary_routing_query_pattern_list = ''
# Semicolon separated list of query patterns
# that should be sent to primary node
# Regexps are accepted
# valid for streaming replication mode only.
#database_redirect_preference_list = ''
# comma separated list of pairs of database and node id.
# example: 'postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
# valid for streaming replication mode only.
#app_name_redirect_preference_list = ''
# comma separated list of pairs of app name and node id.
# example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
# valid for streaming replication mode only.
#allow_sql_comments = off
# If on, ignore SQL comments when judging whether load balancing or
# query caching is possible.
# If off, SQL comments effectively prevent the judgment
# (pre-3.4 behavior).
#disable_load_balance_on_write = 'transaction'
# Load balance behavior when write query is issued
# in an explicit transaction.
#
# Valid values:
#
# 'transaction' (default):
# if a write query is issued, subsequent
# read queries will not be load balanced
# until the transaction ends.
#
# 'trans_transaction':
# if a write query is issued, subsequent
# read queries in an explicit transaction
# will not be load balanced until the session ends.
#
# 'dml_adaptive':
# Queries on the tables that have already been
# modified within the current explicit transaction will
# not be load balanced until the end of the transaction.
#
# 'always':
# if a write query is issued, read queries will
# not be load balanced until the session ends.
#
# Note that any query not in an explicit transaction
# is not affected by the parameter except 'always'.
#dml_adaptive_object_relationship_list= ''
# Comma separated list of object pairs
# [object]:[dependent-object], used to disable load balancing
# of the dependent object within an explicit transaction
# after a write statement is issued on the object it depends on.
#
# example: 'tb_t1:tb_t2,insert_tb_f_func():tb_f,tb_v:my_view'
# Note: function name in this list must also be present in
# the write_function_list
# only valid for disable_load_balance_on_write = 'dml_adaptive'.
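# The two parameters above combine as follows; a minimal hedged sketch
# (table names are illustrative, taken from the inline example above,
# not from this deployment):
#
#   disable_load_balance_on_write = 'dml_adaptive'
#   dml_adaptive_object_relationship_list = 'tb_t1:tb_t2'
#
# With this, a write to tb_t1 inside an explicit transaction also stops
# load balancing of reads on tb_t2 until the transaction ends.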
#statement_level_load_balance = off
# Enables statement level load balancing
#------------------------------------------------------------------------------
# STREAMING REPLICATION MODE
#------------------------------------------------------------------------------
# - Streaming -
#sr_check_period = 10
# Streaming replication check period
# Disabled (0) by default
sr_check_user = 'pgpool'
# Streaming replication check user
# This is necessary even if you disable the streaming
# replication delay check with sr_check_period = 0
#sr_check_password = ''
# Password for streaming replication check user
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#sr_check_database = 'postgres'
# Database name for streaming replication check
#delay_threshold = 0
# Threshold before not dispatching query to standby node
# Unit is in bytes
# Disabled (0) by default
#prefer_lower_delay_standby = off
# If delay_threshold is larger than 0, Pgpool-II sends queries to
# the primary when the selected node is delayed beyond delay_threshold.
# If this is set to on, Pgpool-II instead sends the query to the
# standby with the lowest replication delay.
# - Special commands -
#follow_primary_command = ''
# Executes this command after main node failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
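# As a hedged illustration of how the placeholders are consumed (the
# script path is hypothetical; this deployment has no such script set
# up), follow_primary_command normally points at a site-specific script
# that receives them as positional arguments:
#
#   follow_primary_command = '/etc/pgpool2/follow_primary.sh %d %h %p %D %m %H %r %R'
#
# Pgpool-II expands each %-placeholder before running the command
# through the shell.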
#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------
health_check_period = 5
# Health check period
# Disabled (0) by default
#health_check_timeout = 20
# Health check timeout
# 0 means no timeout
health_check_user = 'pgpool'
# Health check user
#health_check_password = ''
# Password for health check user
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#health_check_database = ''
# Database name for health check. If '', tries 'postgres' first, then 'template1'.
health_check_max_retries = 3
# Maximum number of times to retry a failed health check before giving up.
#health_check_retry_delay = 1
# Amount of time to wait (in seconds) between retries.
#connect_timeout = 10000
# Timeout value in milliseconds before giving up connecting to a backend.
# Default is 10000 ms (10 seconds). Users on a flaky network may want to
# increase the value. 0 means no timeout.
# Note that this value is used not only for health checks,
# but also for ordinary connections to the backend.
#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'nobody'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000
#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
#failover_command = ''
# Executes this command at failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
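# Similarly, a hedged sketch for failover_command (the script path is
# hypothetical; no failover script is configured in this report):
#
#   failover_command = '/etc/pgpool2/failover.sh %d %h %p %D %m %H %M %P %r %R'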
#failback_command = ''
# Executes this command at failback.
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
failover_on_backend_error = off
# Initiates failover when reading/writing to the
# backend communication socket fails
# If set to off, pgpool will report an
# error and disconnect the session.
#failover_on_backend_shutdown = off
# Initiates failover when the backend is shut down
# or the backend process is killed.
# If set to off, pgpool will report an
# error and disconnect the session.
#detach_false_primary = off
# Detach false primary if on. Only
# valid in streaming replication
# mode and with PostgreSQL 9.6 or
# after.
#search_primary_node_timeout = 5min
# Timeout in seconds to search for the
# primary node when a failover occurs.
# 0 means no timeout, keep searching
# for a primary node forever.
#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
#recovery_user = 'nobody'
# Online recovery user
#recovery_password = ''
# Online recovery password
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#recovery_1st_stage_command = ''
# Executes a command in first stage
#recovery_2nd_stage_command = ''
# Executes a command in second stage
#recovery_timeout = 90
# Timeout in seconds to wait for the
# recovering node's postmaster to start up
# 0 means no wait
#client_idle_limit_in_recovery = 0
# Client is disconnected after being idle
# for that many seconds in the second stage
# of online recovery
# 0 means no disconnection
# -1 means immediate disconnection
#auto_failback = off
# Detached backend nodes are reattached automatically
# if replication_state is 'streaming'.
#auto_failback_interval = 1min
# Minimum interval in seconds between
# auto_failback executions.
#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
# - Enabling -
use_watchdog = on
# Activates watchdog
# (change requires restart)
# - Connection to upstream servers -
trusted_servers = '10.10.20.1'
# List of trusted servers used to
# confirm the network connection
# (hostA,hostB,hostC,...)
# (change requires restart)
ping_path = '/usr/bin'
# ping command path
# (change requires restart)
# - Watchdog communication Settings -
#hostname0 = ''
# Host name or IP address of pgpool node
# for watchdog connection
# (change requires restart)
#wd_port0 = 9000
# Port number for watchdog service
# (change requires restart)
#pgpool_port0 = 9999
# Port number for pgpool
# (change requires restart)
#
hostname0 = 'proxy-psql0-stage-cn1.psqldb-stage.example.com'
wd_port0 = 9000
pgpool_port0 = 9999
hostname1 = 'proxy-psql0-stage-cn2.psqldb-stage.example.com'
wd_port1 = 9000
pgpool_port1 = 9999
#wd_priority = 1
# priority of this watchdog in leader election
# (change requires restart)
#wd_authkey = ''
# Authentication key for watchdog communication
# (change requires restart)
#wd_ipc_socket_dir = '/tmp'
# Unix domain socket path for watchdog IPC socket
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
# - Virtual IP control Setting -
delegate_IP = '10.10.20.10'
# delegate IP address
# If this is empty, the virtual IP is never brought up.
# (change requires restart)
if_cmd_path = '/usr/bin'
# path to the directory where if_up/down_cmd exists
# If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
if_up_cmd = '/usr/bin/sudo /usr/bin/ip addr add $_IP_$/24 dev eth1 label eth1:0'
# startup delegate IP command
# (change requires restart)
if_down_cmd = '/usr/bin/sudo /usr/bin/ip addr del $_IP_$/24 dev eth1'
# shutdown delegate IP command
# (change requires restart)
arping_path = '/usr/sbin'
# arping command path
# If arping_cmd starts with "/", arping_path will be ignored.
# (change requires restart)
arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth1'
# arping command
# (change requires restart)
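# The three commands above all go through /usr/bin/sudo, so the OS user
# pgpool runs as (postgres for the Debian packages) must be able to run
# ip and arping without a password; if it cannot, escalation and
# de-escalation fail, as in the attached log ('if_up_cmd' failed).
# A minimal sudoers sketch, assuming that user and the paths configured
# above (the drop-in filename and exact rules are assumptions; check
# with visudo -c):
#
#   # /etc/sudoers.d/pgpool  (hypothetical drop-in)
#   postgres ALL=(root) NOPASSWD: /usr/bin/ip addr add *, /usr/bin/ip addr del *, /usr/sbin/arping *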
# - Behavior on escalation Setting -
#clear_memqcache_on_escalation = on
# Clear all the query cache on shared memory
# when a standby pgpool escalates to the active pgpool
# (= virtual IP holder).
# This should be off if clients connect to pgpool
# without using the virtual IP.
# (change requires restart)
#wd_escalation_command = ''
# Executes this command at escalation on new active pgpool.
# (change requires restart)
#wd_de_escalation_command = ''
# Executes this command when leader pgpool resigns from being leader.
# (change requires restart)
# - Watchdog consensus settings for failover -
#failover_when_quorum_exists = on
# Only perform backend node failover
# when the watchdog cluster holds the quorum
# (change requires restart)
#failover_require_consensus = on
# Perform failover when a majority of Pgpool-II nodes
# agrees on the backend node status change
# (change requires restart)
#allow_multiple_failover_requests_from_node = off
# A Pgpool-II node can cast multiple votes
# for building the consensus on failover
# (change requires restart)
enable_consensus_with_half_votes = on
# Apply the majority rule for consensus and quorum computation
# at 50% of the votes in a cluster with an even number of nodes.
# When enabled, the existence of quorum and consensus
# on failover is resolved after receiving half of the
# total votes in the cluster; otherwise both of these
# decisions require at least one vote more than
# half of the total votes.
# (For example, in a two-node cluster such as this one,
# a single surviving node then holds the quorum by itself.)
# (change requires restart)
# - Watchdog cluster membership settings for quorum computation -
#wd_remove_shutdown_nodes = off
# When enabled, the cluster membership of properly shut down
# watchdog nodes is revoked. After that the node does
# not count towards the quorum and consensus computations
#wd_lost_node_removal_timeout = 0s
# Timeout after which the cluster membership of LOST watchdog
# nodes is revoked. After that the node does not
# count towards the quorum and consensus computations.
# Setting the timeout to 0 will never revoke the membership
# of LOST nodes
#wd_no_show_node_removal_timeout = 0s
# Time to wait for a watchdog node to connect to the cluster.
# After that time the cluster membership of the NO-SHOW node is
# revoked and it does not count towards the quorum and
# consensus computations.
# Setting the timeout to 0 will not revoke the membership
# of NO-SHOW nodes
# - Lifecheck Setting -
# -- common --
#wd_monitoring_interfaces_list = ''
# Comma separated list of interface names to monitor.
# If any interface from the list is active, the watchdog
# considers the network fine.
# 'any' to enable monitoring on all interfaces except loopback
# '' to disable monitoring
# (change requires restart)
#wd_lifecheck_method = 'heartbeat'
# Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
# (change requires restart)
#wd_interval = 10
# lifecheck interval (sec) > 0
# (change requires restart)
# -- heartbeat mode --
#heartbeat_hostname0 = ''
# Host name or IP address used
# for sending heartbeat signal.
# (change requires restart)
#heartbeat_port0 = 9694
# Port number used for receiving/sending heartbeat signal
# Usually this is the same as heartbeat_portX.
# (change requires restart)
#heartbeat_device0 = ''
# Name of the NIC device (such as 'eth0')
# used for sending/receiving heartbeat
# signal to/from destination 0.
# This works only when this is not empty
# and pgpool has root privilege.
# (change requires restart)
heartbeat_hostname0 = 'proxy-psql0-stage-cn1.psqldb-stage.example.com'
heartbeat_port0 = 9694
heartbeat_device0 = ''
heartbeat_hostname1 = 'proxy-psql0-stage-cn2.psqldb-stage.example.com'
heartbeat_port1 = 9694
heartbeat_device1 = ''
#wd_heartbeat_keepalive = 2
# Interval time of sending heartbeat signal (sec)
# (change requires restart)
#wd_heartbeat_deadtime = 30
# Deadtime interval for heartbeat signal (sec)
# (change requires restart)
# -- query mode --
#wd_life_point = 3
# lifecheck retry times
# (change requires restart)
#wd_lifecheck_query = 'SELECT 1'
# lifecheck query to pgpool from watchdog
# (change requires restart)
#wd_lifecheck_dbname = 'template1'
# Database name connected for lifecheck
# (change requires restart)
#wd_lifecheck_user = 'nobody'
# watchdog user monitoring pgpools in lifecheck
# (change requires restart)
#wd_lifecheck_password = ''
# Password for watchdog user in lifecheck
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
# (change requires restart)
#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
#relcache_expire = 0
# Life time of the relation cache in seconds.
# 0 means no cache expiration (the default).
# The relation cache stores the results of
# queries against the PostgreSQL system
# catalogs, used to obtain various information
# including table structures and whether a
# table is temporary or not. The cache is
# maintained in each pgpool child's local memory
# and is kept as long as the child survives.
# If someone modifies a table by using
# ALTER TABLE or the like, the relcache is
# no longer consistent.
# For this purpose, relcache_expire
# controls the life time of the cache.
#relcache_size = 256
# Number of relation cache
# entries. If you frequently see
# "pool_search_relcache: cache replacement happend"
# in the pgpool log, you might want to increase this number.
#check_temp_table = catalog
# Temporary table check method. catalog, trace or none.
# Default is catalog.
#check_unlogged_table = on
# If on, enable unlogged table check in SELECT statements.
# This issues queries against the system catalogs of the primary/main
# node, and thus increases the load on the primary.
# If you are absolutely sure that your system never uses unlogged tables
# and you want to save access to primary/main, you could turn this off.
# Default is on.
#enable_shared_relcache = on
# If on, the relation cache is stored in the memory cache
# and shared among child processes.
# Default is on.
# (change requires restart)
#relcache_query_target = primary
# Target node to send relcache queries. Default is primary node.
# If load_balance_node is specified, queries will be sent to the load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
#memory_cache_enabled = off
# If on, use the memory cache functionality, off by default
# (change requires restart)
#memqcache_method = 'shmem'
# Cache storage method: either 'shmem' (shared memory) or
# 'memcached'. 'shmem' by default
# (change requires restart)
#memqcache_memcached_host = 'localhost'
# Memcached host name or IP address. Mandatory if
# memqcache_method = 'memcached'.
# Defaults to localhost.
# (change requires restart)
#memqcache_memcached_port = 11211
# Memcached port number. Mandatory if memqcache_method = 'memcached'.
# Defaults to 11211.
# (change requires restart)
#memqcache_total_size = 64MB
# Total memory size in bytes for storing memory cache.
# Mandatory if memqcache_method = 'shmem'.
# Defaults to 64MB.
# (change requires restart)
#memqcache_max_num_cache = 1000000
# Total number of cache entries. Mandatory
# if memqcache_method = 'shmem'.
# Each cache entry consumes 48 bytes on shared memory.
# Defaults to 1,000,000(45.8MB).
# (change requires restart)
#memqcache_expire = 0
# Memory cache entry life time specified in seconds.
# 0 means infinite life time. 0 by default.
# (change requires restart)
#memqcache_auto_cache_invalidation = on
# If on, invalidation of query cache is triggered by corresponding
# DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
# by memqcache_expire. on by default.
# (change requires restart)
#memqcache_maxcache = 400kB
# Maximum SELECT result size in bytes.
# Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
# (change requires restart)
#memqcache_cache_block_size = 1MB
# Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
# Defaults to 1MB.
# (change requires restart)
#memqcache_oiddir = '/var/log/pgpool/oiddir'
# Temporary work directory to record table oids
# (change requires restart)
#cache_safe_memqcache_table_list = ''
# Comma separated list of table names whose
# SELECT results should be cached
# Regexps are accepted
#cache_unsafe_memqcache_table_list = ''
# Comma separated list of table names whose
# SELECT results should not be cached
# Regexps are accepted
debian_pgpool.log (25,142 bytes)
Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.700: main pid 8745: LOG: Backend status file /var/log/postgresql/pgpool_status discarded Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.700: main pid 8745: LOCATION: pgpool_main.c:3697 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOG: health_check_stats_shared_memory_size: requested size: 12288 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOCATION: health_check.c:541 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOG: memory cache initialized Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: DETAIL: memcache blocks :64 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOCATION: pool_memqcache.c:2059 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOG: allocating (138292616) bytes of shared memory segment Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOCATION: pgpool_main.c:3545 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOG: allocating shared memory segment of size: 138292616 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.701: main pid 8745: LOCATION: pool_shmem.c:60 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: LOG: health_check_stats_shared_memory_size: requested size: 12288 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: LOCATION: health_check.c:541 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: LOG: health_check_stats_shared_memory_size: requested size: 12288 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: 
LOCATION: health_check.c:541 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: LOG: memory cache initialized Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: DETAIL: memcache blocks :64 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.832: main pid 8745: LOCATION: pool_memqcache.c:2059 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.834: main pid 8745: LOG: pool_discard_oid_maps: discarded memqcache oid maps Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.834: main pid 8745: LOCATION: pgpool_main.c:3626 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.844: main pid 8745: LOG: waiting for watchdog to initialize Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:01.844: main pid 8745: LOCATION: pgpool_main.c:330 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.845: watchdog pid 8748: LOG: setting the local watchdog node name to "proxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux proxy-psql0-stage-cn1" Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.845: watchdog pid 8748: LOCATION: watchdog.c:770 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.845: watchdog pid 8748: LOG: watchdog cluster is configured with 1 remote nodes Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.845: watchdog pid 8748: LOCATION: watchdog.c:780 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOG: watchdog remote node:0 on proxy-psql0-stage-cn2.psqldb-stage.example.com:9000 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOCATION: watchdog.c:797 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOG: interface monitoring is disabled in watchdog Sep 30 13:19:01 
proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOCATION: watchdog.c:666 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: INFO: IPC socket path: "/var/run/postgresql/.s.PGPOOLWD_CMD.9000" Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOCATION: watchdog.c:1349 Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOG: watchdog node state changed from [DEAD] to [LOADING] Sep 30 13:19:01 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:01.846: watchdog pid 8748: LOCATION: watchdog.c:7221 Sep 30 13:19:06 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:06.853: watchdog pid 8748: LOG: watchdog node state changed from [LOADING] to [JOINING] Sep 30 13:19:06 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:06.853: watchdog pid 8748: LOCATION: watchdog.c:7221 Sep 30 13:19:10 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:10.859: watchdog pid 8748: LOG: watchdog node state changed from [JOINING] to [INITIALIZING] Sep 30 13:19:10 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:10.859: watchdog pid 8748: LOCATION: watchdog.c:7221 Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: LOG: I am the only alive node in the watchdog cluster Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: HINT: skipping stand for coordinator state Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: LOCATION: watchdog.c:5831 Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: LOG: watchdog node state changed from [INITIALIZING] to [LEADER] Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: LOCATION: watchdog.c:7221 Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: 
watchdog pid 8748: LOG: I am announcing my self as leader/coordinator watchdog node Sep 30 13:19:11 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:11.860: watchdog pid 8748: LOCATION: watchdog.c:6024 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOG: I am the cluster leader node Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: DETAIL: our declare coordinator message is accepted by all nodes Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOCATION: watchdog.c:6060 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOG: setting the local node "proxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux proxy-psql0-stage-cn1" as watchdog cluster leader Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOCATION: watchdog.c:7958 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOG: signal_user1_to_parent_with_reason(1) Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.864: watchdog pid 8748: LOCATION: pgpool_main.c:611 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.865: watchdog pid 8748: LOG: I am the cluster leader node. 
Starting escalation process Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.865: watchdog pid 8748: LOCATION: watchdog.c:6080 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.865: watchdog pid 8748: LOG: escalation process started with PID:8751 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.865: watchdog pid 8748: LOCATION: watchdog.c:6736 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.867: main pid 8745: LOG: watchdog process is initialized Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.867: main pid 8745: DETAIL: watchdog messaging data version: 1.2 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.867: main pid 8745: LOCATION: pgpool_main.c:344 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.868: main pid 8745: LOG: Pgpool-II parent process received SIGUSR1 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.868: main pid 8745: LOCATION: pgpool_main.c:1292 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.868: main pid 8745: LOG: Pgpool-II parent process received watchdog state change signal from watchdog Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.868: main pid 8745: LOCATION: pgpool_main.c:1336 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.869: watchdog pid 8748: LOG: new IPC connection received Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.869: watchdog pid 8748: LOCATION: watchdog.c:3438 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.870: main pid 8745: LOG: Setting up socket for 0.0.0.0:5432 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.870: main pid 8745: LOCATION: pgpool_main.c:812 Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.870: main pid 8745: LOG: Setting up socket for :::5432 Sep 30 
```
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.870: main pid 8745: LOCATION: pgpool_main.c:812
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.878: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:15.878: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:15.881: watchdog_utility pid 8751: LOG: watchdog: escalation started
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:15.881: watchdog_utility pid 8751: LOCATION: wd_escalation.c:93
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOG: 2 watchdog nodes are configured for lifecheck
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOCATION: wd_lifecheck.c:494
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOG: watchdog nodes ID:0 Name:"proxy-psql0-stage-cn1.psqldb-stage.example.com:9999 Linux proxy-psql0-stage-cn1"
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: DETAIL: Host:"proxy-psql0-stage-cn1.psqldb-stage.example.com" WD Port:9000 pgpool-II port:9999
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOCATION: wd_lifecheck.c:498
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOG: watchdog nodes ID:1 Name:"Not_Set"
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: DETAIL: Host:"proxy-psql0-stage-cn2.psqldb-stage.example.com" WD Port:9000 pgpool-II port:9999
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.883: life_check pid 8752: LOCATION: wd_lifecheck.c:498
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.884: life_check pid 8752: LOG: watchdog lifecheck trusted server "10.10.20.1" added for the availability check
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8752]: 2022-09-30 13:19:15.884: life_check pid 8752: LOCATION: wd_lifecheck.c:1099
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.942: main pid 8745: LOG: find_primary_node_repeatedly: waiting for finding a primary node
Sep 30 13:19:15 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:15.942: main pid 8745: LOCATION: pgpool_main.c:3417
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.006: main pid 8745: LOG: find_primary_node: standby node is 0
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.006: main pid 8745: LOCATION: pgpool_main.c:3343
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.006: main pid 8745: LOG: find_primary_node: primary node is 1
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.006: main pid 8745: LOCATION: pgpool_main.c:3337
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8792]: 2022-09-30 13:19:16.011: sr_check_worker pid 8792: LOG: process started
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8792]: 2022-09-30 13:19:16.011: sr_check_worker pid 8792: LOCATION: pgpool_main.c:728
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8793]: 2022-09-30 13:19:16.011: health_check pid 8793: LOG: process started
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8793]: 2022-09-30 13:19:16.012: health_check pid 8793: LOCATION: pgpool_main.c:728
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:16.013: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:16.013: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:16.014: pcp_main pid 8791: LOG: PCP process: 8791 started
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:16.014: pcp_main pid 8791: LOCATION: pcp_child.c:161
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8794]: 2022-09-30 13:19:16.016: health_check pid 8794: LOG: process started
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8794]: 2022-09-30 13:19:16.016: health_check pid 8794: LOCATION: pgpool_main.c:728
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOG: pgpool-II successfully started. version 4.3.3 (tamahomeboshi)
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOCATION: pgpool_main.c:489
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOG: node status[0]: 2
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOCATION: pgpool_main.c:500
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOG: node status[1]: 1
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:19:16.245: main pid 8745: LOCATION: pgpool_main.c:500
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8759]: 2022-09-30 13:19:16.893: heart_beat_receiver pid 8759: LOG: set SO_REUSEPORT option to the socket
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8759]: 2022-09-30 13:19:16.893: heart_beat_receiver pid 8759: LOCATION: wd_heartbeat.c:690
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8759]: 2022-09-30 13:19:16.893: heart_beat_receiver pid 8759: LOG: creating watchdog heartbeat receive socket.
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8759]: 2022-09-30 13:19:16.893: heart_beat_receiver pid 8759: DETAIL: set SO_REUSEPORT
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8759]: 2022-09-30 13:19:16.893: heart_beat_receiver pid 8759: LOCATION: wd_heartbeat.c:229
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8760]: 2022-09-30 13:19:16.894: heart_beat_sender pid 8760: LOG: set SO_REUSEPORT option to the socket
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8760]: 2022-09-30 13:19:16.894: heart_beat_sender pid 8760: LOCATION: wd_heartbeat.c:690
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8760]: 2022-09-30 13:19:16.894: heart_beat_sender pid 8760: LOG: creating socket for sending heartbeat
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8760]: 2022-09-30 13:19:16.894: heart_beat_sender pid 8760: DETAIL: set SO_REUSEPORT
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8760]: 2022-09-30 13:19:16.894: heart_beat_sender pid 8760: LOCATION: wd_heartbeat.c:146
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:16.969: watchdog_utility pid 8751: LOG: failed to acquire the delegate IP address
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:16.969: watchdog_utility pid 8751: DETAIL: 'if_up_cmd' failed
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:16.969: watchdog_utility pid 8751: LOCATION: wd_if.c:182
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:16.969: watchdog_utility pid 8751: WARNING: watchdog escalation failed to acquire delegate IP
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8751]: 2022-09-30 13:19:16.969: watchdog_utility pid 8751: LOCATION: wd_escalation.c:140
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:16.971: watchdog pid 8748: LOG: watchdog escalation process with pid: 8751 exit with SUCCESS.
Sep 30 13:19:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:16.971: watchdog pid 8748: LOCATION: watchdog.c:3267
Sep 30 13:19:26 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:26.087: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:26 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:26.087: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:19:36 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:36.166: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:36 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:36.166: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.818: pcp_main pid 8791: LOG: forked new pcp worker, pid=8796 socket=7
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.818: pcp_main pid 8791: LOCATION: pcp_child.c:297
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8796]: 2022-09-30 13:19:37.819: pcp_child pid 8796: FATAL: authentication failed for user "pcpadm"
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8796]: 2022-09-30 13:19:37.819: pcp_child pid 8796: DETAIL: username and/or password does not match
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8796]: 2022-09-30 13:19:37.819: pcp_child pid 8796: LOCATION: pcp_worker.c:517
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.821: pcp_main pid 8791: LOG: PCP process with pid: 8796 exit with SUCCESS.
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.821: pcp_main pid 8791: LOCATION: pcp_child.c:354
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.821: pcp_main pid 8791: LOG: PCP process with pid: 8796 exits with status 256
Sep 30 13:19:37 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:37.821: pcp_main pid 8791: LOCATION: pcp_child.c:368
Sep 30 13:19:46 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:46.233: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:46 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:46.233: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.856: pcp_main pid 8791: LOG: forked new pcp worker, pid=8798 socket=7
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.856: pcp_main pid 8791: LOCATION: pcp_child.c:297
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8798]: 2022-09-30 13:19:50.859: pcp_child pid 8798: FATAL: authentication failed for user "pcpadm"
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8798]: 2022-09-30 13:19:50.859: pcp_child pid 8798: DETAIL: username and/or password does not match
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8798]: 2022-09-30 13:19:50.859: pcp_child pid 8798: LOCATION: pcp_worker.c:517
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.864: pcp_main pid 8791: LOG: PCP process with pid: 8798 exit with SUCCESS.
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.864: pcp_main pid 8791: LOCATION: pcp_child.c:354
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.864: pcp_main pid 8791: LOG: PCP process with pid: 8798 exits with status 256
Sep 30 13:19:50 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:19:50.864: pcp_main pid 8791: LOCATION: pcp_child.c:368
Sep 30 13:19:56 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:56.306: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:19:56 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:19:56.306: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:06 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:06.413: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:06 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:06.413: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:16.523: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:16 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:16.523: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.415: pcp_main pid 8791: LOG: forked new pcp worker, pid=8813 socket=7
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.415: pcp_main pid 8791: LOCATION: pcp_child.c:297
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8813]: 2022-09-30 13:20:25.416: pcp_child pid 8813: FATAL: authentication failed for user "pcpadm"
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8813]: 2022-09-30 13:20:25.416: pcp_child pid 8813: DETAIL: username and/or password does not match
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8813]: 2022-09-30 13:20:25.416: pcp_child pid 8813: LOCATION: pcp_worker.c:517
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.418: pcp_main pid 8791: LOG: PCP process with pid: 8813 exit with SUCCESS.
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.418: pcp_main pid 8791: LOCATION: pcp_child.c:354
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.418: pcp_main pid 8791: LOG: PCP process with pid: 8813 exits with status 256
Sep 30 13:20:25 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:25.418: pcp_main pid 8791: LOCATION: pcp_child.c:368
Sep 30 13:20:26 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:26.619: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:26 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:26.619: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:36 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:36.694: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:36 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:36.694: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:46 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:46.804: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:46 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:46.804: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.813: pcp_main pid 8791: LOG: forked new pcp worker, pid=8817 socket=7
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.813: pcp_main pid 8791: LOCATION: pcp_child.c:297
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:47.815: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:47.815: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.819: pcp_main pid 8791: LOG: PCP process with pid: 8817 exit with SUCCESS.
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.819: pcp_main pid 8791: LOCATION: pcp_child.c:354
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.819: pcp_main pid 8791: LOG: PCP process with pid: 8817 exits with status 0
Sep 30 13:20:47 proxy-psql0-stage-cn1 pgpool[8791]: 2022-09-30 13:20:47.819: pcp_main pid 8791: LOCATION: pcp_child.c:368
Sep 30 13:20:56 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:56.901: watchdog pid 8748: LOG: new IPC connection received
Sep 30 13:20:56 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:56.901: watchdog pid 8748: LOCATION: watchdog.c:3438
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.098: main pid 8745: LOG: shutting down by signal 2
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.098: main pid 8745: LOCATION: pgpool_main.c:1184
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.098: main pid 8745: LOG: terminating all child processes
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.098: main pid 8745: LOCATION: pgpool_main.c:1194
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:57.134: watchdog pid 8748: LOG: Watchdog is shutting down
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8748]: 2022-09-30 13:20:57.134: watchdog pid 8748: LOCATION: watchdog.c:3292
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.144: main pid 8745: LOG: Pgpool-II system is shutdown
Sep 30 13:20:57 proxy-psql0-stage-cn1 pgpool[8745]: 2022-09-30 13:20:57.144: main pid 8745: LOCATION: pgpool_main.c:1222
```
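The failure above is the escalation child reporting a non-zero exit from `if_up_cmd` at `wd_if.c:182`. For orientation, the watchdog VIP parameters involved typically look like the following in `pgpool.conf`. This is a sketch inferred from the delegate IP, device, and commands appearing elsewhere in this report, not the attached configuration files:

```
# Watchdog VIP handling (sketch; values inferred from this report)
delegate_ip = '10.10.20.10'
if_cmd_path = '/sbin'
if_up_cmd   = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth1 label eth1:0'
if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth1'
arping_path = '/usr/sbin'
arping_cmd  = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth1'
```

Pgpool-II treats a non-zero exit status from these commands as failure of the whole VIP operation, so a failing `arping` step can produce the "'if_up_cmd' failed" log entries even when the address itself was added to the interface.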
**pengbo** (2022-10-11 01:46, note 0004126):
Sorry for the late response. Does this issue always occur when stopping pgpool on Debian? How did you stop pgpool?
**dcvythoulkas** (2022-10-11 17:46, note 0004127):
Hello pengbo, thank you for your response. Yes, this issue occurs every time on Debian. I stop pgpool via systemd with:

```
systemctl stop pgpool2.service
```

I also noticed an issue with `arping` returning 1, which I fixed by installing `iputils-arping`. As a workaround for now, I have created an `escalation.sh` script to force the previous leader node to release the VIP before it is acquired by the new leader. And now I also notice this issue:

```
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:01.781: watchdog_utility pid 393: LOG: watchdog: escalation started
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:01.781: watchdog_utility pid 393: LOCATION: wd_escalation.c:93
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.784: main pid 250: LOG: find_primary_node_repeatedly: waiting for finding a primary node
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.784: main pid 250: LOCATION: pgpool_main.c:3417
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + PGPOOL_STARTUP_USER=postgres
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + SSH_KEY_FILE=id_ecdsa
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + SSH_OPTIONS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/id_ecdsa'
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + PGPOOLS=('proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com' 'proxy-psql0-stage-cn2.psqldb-stage-mgmt.example.com')
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + VIP=10.10.20.10
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + DEVICE=eth1
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + LOCALHOSTNAME=proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + for pgpool in "${PGPOOLS[@]}"
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + '[' proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com = proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com ']'
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + continue
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + for pgpool in "${PGPOOLS[@]}"
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + '[' proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com = proxy-psql0-stage-cn2.psqldb-stage-mgmt.example.com ']'
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i '~/.ssh/id_ecdsa' postgres@proxy-psql0-stage-cn2.psqldb-stage-mgmt.example.com '
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: /usr/bin/sudo /sbin/ip addr del 10.10.20.10/24 dev eth1
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: '
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.812: main pid 250: LOG: find_primary_node: standby node is 0
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.812: main pid 250: LOCATION: pgpool_main.c:3343
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.813: main pid 250: LOG: find_primary_node: primary node is 1
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.813: main pid 250: LOCATION: pgpool_main.c:3337
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[431]: Warning: Permanently added 'proxy-psql0-stage-cn2.psqldb-stage-mgmt.example.com,10.10.30.22' (ECDSA) to the list of known hosts.
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[435]: 2022-10-11 08:10:01.816: health_check pid 435: LOG: process started
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[435]: 2022-10-11 08:10:01.816: health_check pid 435: LOCATION: pgpool_main.c:728
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[434]: 2022-10-11 08:10:01.817: health_check pid 434: LOG: process started
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[434]: 2022-10-11 08:10:01.817: health_check pid 434: LOCATION: pgpool_main.c:728
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[433]: 2022-10-11 08:10:01.818: sr_check_worker pid 433: LOG: process started
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[433]: 2022-10-11 08:10:01.818: sr_check_worker pid 433: LOCATION: pgpool_main.c:728
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[319]: 2022-10-11 08:10:01.818: watchdog pid 319: LOG: new IPC connection received
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[319]: 2022-10-11 08:10:01.818: watchdog pid 319: LOCATION: watchdog.c:3438
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[432]: 2022-10-11 08:10:01.819: pcp_main pid 432: LOG: PCP process: 432 started
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[432]: 2022-10-11 08:10:01.819: pcp_main pid 432: LOCATION: pcp_child.c:161
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOG: pgpool-II successfully started. version 4.3.3 (tamahomeboshi)
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOCATION: pgpool_main.c:489
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOG: node status[0]: 2
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOCATION: pgpool_main.c:500
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOG: node status[1]: 1
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[250]: 2022-10-11 08:10:01.821: main pid 250: LOCATION: pgpool_main.c:500
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[431]: RTNETLINK answers: Cannot assign requested address
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[430]: + exit 0
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:01.970: watchdog_utility pid 393: LOG: watchdog escalation successful
Oct 11 08:10:01 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:01.970: watchdog_utility pid 393: LOCATION: wd_escalation.c:119
Oct 11 08:10:01 proxy-psql0-stage-cn1 sudo[436]: postgres : PWD=/ ; USER=root ; COMMAND=/usr/bin/ip addr add 10.10.20.10/24 dev eth1 label eth1:0
Oct 11 08:10:01 proxy-psql0-stage-cn1 sudo[436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=109)
Oct 11 08:10:01 proxy-psql0-stage-cn1 sudo[436]: pam_unix(sudo:session): session closed for user root
Oct 11 08:10:01 proxy-psql0-stage-cn1 sudo[438]: postgres : PWD=/ ; USER=root ; COMMAND=/usr/sbin/arping -U 10.10.20.10 -w 1 -I eth1
Oct 11 08:10:01 proxy-psql0-stage-cn1 sudo[438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=109)
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[416]: 2022-10-11 08:10:02.785: heart_beat_receiver pid 416: LOG: set SO_REUSEPORT option to the socket
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[416]: 2022-10-11 08:10:02.785: heart_beat_receiver pid 416: LOCATION: wd_heartbeat.c:690
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[416]: 2022-10-11 08:10:02.785: heart_beat_receiver pid 416: LOG: creating watchdog heartbeat receive socket.
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[416]: 2022-10-11 08:10:02.785: heart_beat_receiver pid 416: DETAIL: set SO_REUSEPORT
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[416]: 2022-10-11 08:10:02.785: heart_beat_receiver pid 416: LOCATION: wd_heartbeat.c:229
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[417]: 2022-10-11 08:10:02.785: heart_beat_sender pid 417: LOG: set SO_REUSEPORT option to the socket
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[417]: 2022-10-11 08:10:02.785: heart_beat_sender pid 417: LOCATION: wd_heartbeat.c:690
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[417]: 2022-10-11 08:10:02.785: heart_beat_sender pid 417: LOG: creating socket for sending heartbeat
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[417]: 2022-10-11 08:10:02.785: heart_beat_sender pid 417: DETAIL: set SO_REUSEPORT
Oct 11 08:10:02 proxy-psql0-stage-cn1 pgpool[417]: 2022-10-11 08:10:02.785: heart_beat_sender pid 417: LOCATION: wd_heartbeat.c:146
Oct 11 08:10:03 proxy-psql0-stage-cn1 sudo[438]: pam_unix(sudo:session): session closed for user root
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:03.002: watchdog_utility pid 393: LOG: failed to acquire the delegate IP address
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:03.002: watchdog_utility pid 393: DETAIL: 'if_up_cmd' failed
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:03.002: watchdog_utility pid 393: LOCATION: wd_if.c:182
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:03.002: watchdog_utility pid 393: WARNING: watchdog escalation failed to acquire delegate IP
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[393]: 2022-10-11 08:10:03.002: watchdog_utility pid 393: LOCATION: wd_escalation.c:140
Oct 11 08:10:03 proxy-psql0-stage-cn1 pgpool[319]: 2022-10-11 08:10:03.006: watchdog pid 319: LOG: watchdog escalation process with pid: 393 exit with SUCCESS.
```

It claims `if_up_cmd` failed and that escalation could not acquire the delegate IP, although I can see from `ip addr` that the delegate IP has in fact been acquired.
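For reference, the `set -x` trace above implies roughly the following shape for the workaround `escalation.sh`. This is a hypothetical reconstruction, not the attached script: the hostnames, VIP, device, and key path are copied from the trace, while the `release_peer_vip` function name and the `DRY_RUN` guard are additions so the host-selection logic can be exercised without SSH access:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of escalation.sh from the set -x trace above.
# DRY_RUN and the function wrapper are additions; the rest mirrors the trace.

PGPOOL_STARTUP_USER=postgres
SSH_OPTIONS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/id_ecdsa'
PGPOOLS=('proxy-psql0-stage-cn1.psqldb-stage-mgmt.example.com'
         'proxy-psql0-stage-cn2.psqldb-stage-mgmt.example.com')
VIP=10.10.20.10
DEVICE=eth1
LOCALHOSTNAME=${LOCALHOSTNAME:-$(hostname -f)}
DRY_RUN=${DRY_RUN:-1}   # default to dry run in this sketch; set DRY_RUN=0 on a real node

# Ask every *other* pgpool node to drop the VIP before this node acquires it.
release_peer_vip() {
    local pgpool cmd
    for pgpool in "${PGPOOLS[@]}"; do
        # Skip the node the escalation is running on.
        [ "$pgpool" = "$LOCALHOSTNAME" ] && continue
        cmd="/usr/bin/sudo /sbin/ip addr del ${VIP}/24 dev ${DEVICE}"
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run on ${pgpool}: ${cmd}"
        else
            ssh -T ${SSH_OPTIONS} "${PGPOOL_STARTUP_USER}@${pgpool}" "${cmd}"
        fi
    done
    return 0
}

release_peer_vip
```

Compared with relying on `if_down_cmd` de-escalation on the old leader, this pushes the VIP release from the new leader's side, which sidesteps the missing de-escalation observed on the Debian nodes.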
| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2022-09-30 23:14 | dcvythoulkas | New Issue | |
| 2022-09-30 23:14 | dcvythoulkas | File Added: centos8_pgpool.conf | |
| 2022-09-30 23:14 | dcvythoulkas | File Added: centos8_pgpool.log | |
| 2022-09-30 23:14 | dcvythoulkas | File Added: debian_pgpool.conf | |
| 2022-09-30 23:14 | dcvythoulkas | File Added: debian_pgpool.log | |
| 2022-09-30 23:17 | dcvythoulkas | Tag Attached: vip | |
| 2022-09-30 23:17 | dcvythoulkas | Tag Attached: watchdog | |
| 2022-09-30 23:17 | dcvythoulkas | Tag Attached: networking | |
| 2022-10-03 09:05 | pengbo | Assigned To | => pengbo |
| 2022-10-03 09:05 | pengbo | Status | new => assigned |
| 2022-10-11 01:46 | pengbo | Note Added: 0004126 | |
| 2022-10-11 17:46 | dcvythoulkas | Note Added: 0004127 | |