[Pgpool-general] Problem with replication mode!!!

Tatsuo Ishii ishii at sraoss.co.jp
Tue Nov 9 02:32:45 UTC 2010


I couldn't reproduce your problem using pgpool-II 3.1 (CVS HEAD).
Here is the test case:

create table t1(i int);	-- via pgpool
insert into t1 values(1); -- via pgpool

delete from t1; -- executed directly on DB node 1 (port 5433), not via pgpool

update t1 set i = 10;	-- via pgpool
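
For reference, the same sequence can be run from the shell with psql. The
port numbers below come from the attached pgpool.conf (pgpool on 9999, the
second backend, DB node 1, on 5433); the database name "test" is only a
placeholder:

psql -p 9999 -d test -c "create table t1(i int);"    # via pgpool
psql -p 9999 -d test -c "insert into t1 values(1);"  # via pgpool
psql -p 5433 -d test -c "delete from t1;"            # directly on DB node 1
psql -p 9999 -d test -c "update t1 set i = 10;"      # via pgpool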

pgpool detected the mismatch and performed a failover:

2010-11-09 11:21:58 LOG:   pid 29737: DB node id: 0 backend pid: 29741 statement: update t1 set i = 10;
2010-11-09 11:21:58 LOG:   pid 29737: DB node id: 1 backend pid: 29742 statement: update t1 set i = 10;
2010-11-09 11:21:58 ERROR: pid 29737: pgpool detected difference of the number of inserted, updated or deleted tuples. Possible last query was: "update t1 set i = 10;"
2010-11-09 11:21:58 LOG:   pid 29737: CommandComplete: Number of affected tuples are: 1 0
2010-11-09 11:21:58 LOG:   pid 29737: ReadyForQuery: Degenerate backends: 1
2010-11-09 11:21:58 LOG:   pid 29737: ReadyForQuery: Number of affected tuples are: 1 0
2010-11-09 11:21:58 LOG:   pid 29737: notice_backend_error: 1 fail over request from pid 29737
2010-11-09 11:21:58 LOG:   pid 29705: starting degeneration. shutdown host (5433)
2010-11-09 11:21:58 LOG:   pid 29705: failover_handler: set new master node: 0
2010-11-09 11:21:58 LOG:   pid 29705: failover done. shutdown host (5433)

pgpool.conf attached...
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Thanks. I will look into this.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
> 
>> Thank you very much for your answer. The database has only one table with one column of type integer. 
>> On the first node the table has only one value (1); on the other node the table is empty, but failover never takes place when I execute the statement "UPDATE tabla1 set id = 2 where id = 1;". 
>> Can you help me to resolve this problem? 
>> Regards. 
>> Thank you very much for your time. 
>> 
>> ----- "Tatsuo Ishii" <ishii at sraoss.co.jp> escribió: 
>>> > Hello everyone. 
>>> > I have this problem. 
>>> > I have one database on two nodes, and I use Pgpool-II version 2.3.3. 
>>> > This database has one table; on the first node (master) the table has one record and on the other node it has no records, but when I execute a SELECT or UPDATE on this table, the secondary node is never degenerated (failover is not performed). 
>>> 
>>> Can you please provide a self-contained test case, i.e. the SQL and 
>>> table data? 
>>> -- 
>>> Tatsuo Ishii 
>>> SRA OSS, Inc. Japan 
>>> English: http://www.sraoss.co.jp/index_en.php 
>>> Japanese: http://www.sraoss.co.jp 
>>> 
>>> > I tested this problem with Pgpool-II version 3.0.1 and the situation is the same. 
>>> > 
>>> > pgpool.conf of Pgpool-II version 2.3.3. 
>>> > replication_stop_on_mismatch = true 
>>> > 
>>> > pgpool.conf of Pgpool-II version 3.0.1. 
>>> > replication_stop_on_mismatch = true 
>>> > failover_if_affected_tuples_mismatch = true 
>>> > 
>>> > The documentation of pgpool says: 
>>> > 
>>> > failover_if_affected_tuples_mismatch 
>>> > 
>>> > When set to true, if a backend returns a number of affected tuples for INSERT/UPDATE/DELETE that differs between the backends, the backends that differ from the most frequent result are degenerated. If set to false, the session is terminated and the backends are not degenerated. Default is false. 
>>> > 
>>> > replication_stop_on_mismatch 
>>> > 
>>> > When set to true, if a backend returns a packet kind that differs between the backends, the backends that differ from the most frequent result set are degenerated. A typical use case is when a SELECT statement is part of a transaction, replicate_select is set to true, and the SELECT returns a different number of rows among the backends. Statements other than SELECT might trigger this as well, for example when a backend succeeded in an UPDATE while others failed. Also please note that pgpool does NOT examine the content of records returned from SELECT. If set to false, the session is terminated and the backends are not degenerated. Default is false. 
>>> > Does anyone know why this happens? 
>>> > 
>>> > Regards. 
>>> > Thank you very much for your time. 
>>> > 
>>>
> _______________________________________________
> Pgpool-general mailing list
> Pgpool-general at pgfoundry.org
> http://pgfoundry.org/mailman/listinfo/pgpool-general
-------------- next part --------------
#
# pgpool-II configuration file sample for replication mode
# $Header: /cvsroot/pgpool/pgpool-II/pgpool.conf.sample-replication,v 1.12 2010/10/30 11:12:43 t-ishii Exp $

# Host name or IP address to listen on: '*' for all, '' for no TCP/IP
# connections
listen_addresses = 'localhost'

# Port number for pgpool
port = 9999

# Port number for pgpool communication manager
pcp_port = 9898

# Unix domain socket path.  (The Debian package defaults to
# /var/run/postgresql.)
socket_dir = '/tmp'

# Unix domain socket path for pgpool communication manager.
# (Debian package defaults to /var/run/postgresql)
pcp_socket_dir = '/tmp'

# Unix domain socket path for the backend. Debian package defaults to /var/run/postgresql!
backend_socket_dir = '/tmp'

# pgpool communication manager timeout. 0 means no timeout. This parameter is ignored now.
pcp_timeout = 10

# Number of pre-forked child processes
num_init_children = 32

# Number of connection pools allowed for a child process
max_pool = 4

# If idle for this many seconds, child exits.  0 means no timeout.
child_life_time = 300

# If idle for this many seconds, connection to PostgreSQL closes.
# 0 means no timeout.
connection_life_time = 0

# If child_max_connections connections have been received, the child exits.
# 0 means no exit.
child_max_connections = 0

# If client_idle_limit is n (n > 0), the client is forcibly
# disconnected after being idle for n seconds (even inside an explicit
# transaction!)
# 0 means no disconnect.
client_idle_limit = 0

# Maximum time in seconds to complete client authentication.
# 0 means no timeout.
authentication_timeout = 60

# Logging directory
logdir = '/home/t-ishii/pgpool-II/3.1/var/log'

# pid file name
pid_file_name = '/home/t-ishii/pgpool-II/3.1/var/run/pgpool/pgpool.pid'

# Replication mode
replication_mode = true

# Load balancing mode, i.e., all SELECTs are load balanced.
load_balance_mode = true

# If there's a disagreement with the packet kind sent from backend,
# then degenerate the node which is most likely "minority".  If false,
# just force to exit this session.
replication_stop_on_mismatch = false

# If there's a disagreement with the number of affected tuples in
# UPDATE/DELETE, then degenerate the node which is most likely
# "minority".
# If false, just abort the transaction to keep the consistency.
failover_if_affected_tuples_mismatch = true

# If true, replicate SELECT statements when replication_mode or parallel_mode is enabled.
# replicate_select takes priority over load_balance_mode.
replicate_select = false

# Semicolon separated list of queries to be issued at the end of a
# session
reset_query_list = 'ABORT; DISCARD ALL'
# for 8.2 or older this should be as follows. 
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'

# white_function_list is a comma-separated list of function names
# that do not write to the database. Any function not listed here
# is regarded as writing to the database, and SELECTs including such
# writer functions will be executed on the master (primary) in master/slave
# mode, or executed on all DB nodes in replication mode.
#
# black_function_list is a comma-separated list of function names
# that write to the database. Any function not listed here
# is regarded as not writing to the database, and SELECTs including such
# read-only functions can be executed on any DB node.
#
# You cannot fill in both white_function_list and
# black_function_list at the same time. If you specify something in
# one of them, you should leave the other empty.
#
# pgpool-II before 3.0 recognized nextval and setval in a hard-coded
# way. The following setting will behave the same as those versions.
# white_function_list = ''
# black_function_list = 'nextval,setval'
white_function_list = ''
black_function_list = 'nextval,setval'

# If true print timestamp on each log line.
print_timestamp = true

# If true, operate in master/slave mode.
master_slave_mode = false

# Master/slave sub mode. either 'slony' or 'stream'. Default is 'slony'.
master_slave_sub_mode = 'slony'

# If the standby server delays more than delay_threshold,
# any query goes to the primary only. The unit is in bytes.
# 0 disables the check. Default is 0.
# Note that health_check_period is required to be greater than 0
# to enable this functionality.
delay_threshold = 0

# 'always' logs the standby delay whenever health check runs.
# 'if_over_threshold' logs only if the delay exceeds delay_threshold.
# 'none' disables the delay log.
log_standby_delay = 'none'

# If true, cache connection pool.
connection_cache = true

# Health check timeout.  0 means no timeout.
health_check_timeout = 20

# Health check period.  0 means no health check.
health_check_period = 0

# Health check user
health_check_user = 'nobody'

# Execute command by failover.
# special values:  %d = node id
#                  %h = host name
#                  %p = port number
#                  %D = database cluster path
#                  %m = new master node id
#                  %H = hostname of the new master node
#                  %M = old master node id
#                  %P = old primary node id
#                  %% = '%' character
#
failover_command = ''
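# (Note added for illustration, not part of the original attachment: the
#  placeholders above would typically be passed to a site-specific script,
#  e.g. a hypothetical
#  failover_command = '/path/to/failover.sh %d %h %p %m %H')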

# Execute command by failback.
# special values:  %d = node id
#                  %h = host name
#                  %p = port number
#                  %D = database cluster path
#                  %m = new master node id
#                  %H = hostname of the new master node
#                  %M = old master node id
#                  %P = old primary node id
#                  %% = '%' character
#
failback_command = ''

# If true, trigger fail over when writing to the backend communication
# socket fails. This is the same behavior as pgpool-II 2.2.x or
# earlier. If set to false, pgpool will report an error and disconnect
# the session.
fail_over_on_backend_error = true

# If true, automatically locks the table on INSERT statements to keep
# SERIAL data consistent.  If the table does not have a SERIAL data
# type, no lock will be issued. An /*INSERT LOCK*/ comment has the
# same effect.  A /*NO INSERT LOCK*/ comment disables the effect.
insert_lock = true

# If true, ignore leading white spaces of each query while pgpool judges
# whether the query is a SELECT so that it can be load balanced.  This
# is useful for certain APIs such as DBI/DBD, which are known to add an
# extra leading white space.
ignore_leading_white_space = true

# If true, print all statements to the log.  Like the log_statement option
# to PostgreSQL, this allows for observing queries without engaging in full
# debugging.
log_statement = false

# If true, print all statements to the log. Similar to log_statement except
# that it also prints DB node id and backend process id info.
log_per_node_statement = true

# If true, incoming connections will be printed to the log.
log_connections = false

# If true, hostname will be shown in ps status. Also shown in
# connection log if log_connections = true.
# Be warned that this feature adds the overhead of a hostname lookup.
log_hostname = false

# If true, run in parallel query mode
parallel_mode = false

# If true, use the query cache
enable_query_cache = false

# Set the pgpool2 hostname
pgpool2_hostname = ''

# system DB info
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''

# backend_hostname, backend_port, backend_weight
# here are examples
backend_hostname0 = ''
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/data'
backend_hostname1 = ''
backend_port1 = 5433
backend_weight1 = 1
backend_data_directory1 = '/data1'

# - HBA -

# If true, use pool_hba.conf for client authentication.
enable_pool_hba = false

# - online recovery -
# online recovery user
recovery_user = 'nobody'

# online recovery password
recovery_password = ''

# execute a command in first stage.
recovery_1st_stage_command = ''

# execute a command in second stage.
recovery_2nd_stage_command = ''

# Maximum time in seconds to wait for the recovering node's postmaster
# to start up. 0 means no wait.
# This is also used as a timer to wait for clients to disconnect before
# starting the 2nd stage.
recovery_timeout = 90

# If client_idle_limit_in_recovery is n (n > 0), the client is forcibly
# disconnected after being idle for n seconds (even inside an
# explicit transaction!) in the second stage of online recovery.
# n = -1 forces clients to be disconnected immediately.
# 0 disables this functionality (wait forever).
# This parameter only takes effect in the recovery 2nd stage.
client_idle_limit_in_recovery = 0

# Specify the table name to lock. This is used when rewriting the lo_creat
# command in replication mode. The table must exist and be writable
# by public. If the table name is '', no rewriting occurs.
lobj_lock_table = ''

# If true, enable SSL support for both frontend and backend connections.
# note that you must also set ssl_key and ssl_cert for SSL to work in
# the frontend connections.
ssl = false
# path to the SSL private key file
#ssl_key = './server.key'
# path to the SSL public certificate file
#ssl_cert = './server.cert'

# If either ssl_ca_cert or ssl_ca_cert_dir is set, then certificate
# verification will be performed to establish the authenticity of the
# certificate.  If neither is set to a nonempty string then no such
# verification takes place.  ssl_ca_cert should be a path to a single
# PEM format file containing CA root certificate(s), whereas ssl_ca_cert_dir
# should be a directory containing such files.  These are analogous to the
# -CAfile and -CApath options to openssl verify(1), respectively.
#ssl_ca_cert = ''
#ssl_ca_cert_dir = ''

# Debug message verbosity level. 0: no message, 1 <= : more verbose
debug_level = 0

