<html><body><div style="color:#000; background-color:#fff; font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:12pt">Hi,<br>I'll try this and keep you updated.<br>Anyway, with this config I have the following problem:<br><div><b>SQL error:</b><br></div><pre class="error">ERROR: kind mismatch among backends. Possible last query was: "commit" kind details are: 0[C] 1[N: there is no transaction in progress] 2[N: there is no transaction in progress]
HINT: check data consistency among db nodes
</pre>
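For context (my interpretation, not pgpool's actual source): the bracketed letters in the error above are the PostgreSQL frontend/backend protocol message types each backend returned for the COMMIT, where 'C' is CommandComplete and 'N' is NoticeResponse. So backend 0 committed a real transaction while backends 1 and 2 only raised the notice "there is no transaction in progress", and pgpool aborts the session as soon as the reply kinds differ. A minimal sketch of that comparison:

```shell
#!/bin/sh
# Illustrative sketch only (not pgpool's code): complain when the reply
# kinds collected from the backends are not all identical.
detect_mismatch() {
    first=$1
    for kind in "$@"; do
        if [ "$kind" != "$first" ]; then
            echo "ERROR: kind mismatch among backends"
            return 1
        fi
    done
    return 0
}

# Reply kinds from the error above: backend 0 sent C (CommandComplete),
# backends 1 and 2 sent N (NoticeResponse: no transaction in progress).
detect_mismatch C N N || true   # prints "ERROR: kind mismatch among backends"
```

In other words, the transaction wrapping the owner-change statement apparently ran only on backend 0, so the COMMIT had nothing to commit on the other two nodes.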
<div><b>In statement:</b><br></div><pre class="data"> SELECT pdb.datname AS datname, pr.rolname AS datowner, pg_encoding_to_char(encoding) AS datencoding,
(SELECT description FROM pg_catalog.pg_shdescription pd WHERE pdb.oid=pd.objoid AND pd.classoid='pg_database'::regclass) AS datcomment,
(SELECT spcname FROM pg_catalog.pg_tablespace pt WHERE pt.oid=pdb.dattablespace) AS tablespace,
CASE WHEN pg_catalog.has_database_privilege(current_user, pdb.oid, 'CONNECT')
THEN pg_catalog.pg_database_size(pdb.oid)
ELSE -1 -- set this magic value, which we will convert to no access later
END as dbsize, pdb.datcollate, pdb.datctype
FROM pg_catalog.pg_database pdb
LEFT JOIN pg_catalog.pg_roles pr ON (pdb.datdba = pr.oid)
WHERE true
AND NOT pdb.datistemplate
ORDER BY pdb.datname</pre><br><br>Steps to reproduce the issue:<br>1. Connect to the cluster with some user<br>2. Create a DB on the cluster with that specific user (the DB is created on all nodes, as it should be)<br>3. Try to change the owner of that database with some other user (this is where the problem appears)<br><br>I have tried with pgpool 3.3.1 and 3.3.2, with the same behavior.<br>Also, it seems that the same problem was reported here:<br>http://www.sraoss.jp/pipermail/pgpool-general/2012-November/001210.html<br><br><br>Regards,<br><div>Adrian Videanu</div><div><br></div> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 12pt;"> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 12pt;"> <div dir="ltr"> <hr size="1"> <font face="Arial" size="2"> <b><span style="font-weight:bold;">From:</span></b> Yugo Nagata
<nagata@sraoss.co.jp><br> <b><span style="font-weight: bold;">To:</span></b> Videanu Adrian <videanuadrian@yahoo.com> <br><b><span style="font-weight: bold;">Cc:</span></b> "pgpool-general@pgpool.net" <pgpool-general@pgpool.net> <br> <b><span style="font-weight: bold;">Sent:</span></b> Friday, November 29, 2013 12:27 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> Re: [pgpool-general: 2274] Re: pgpool 3.3.1 replicate setup with 3 nodes<br> </font> </div> <div class="y_msg_container"><br>I'm sorry for the late response.<br><br>I have reproduced and understood the problem.<br>In the pgpool_recovery_pitr script, nextval/setval should be executed for all backend nodes. In your case, nextval/setval was executed only for Node1. Node3 is OK since it is recovered from Node1. However, Node2's sequence fell behind because nextval/setval was not executed there.<br><br>Fix your pgpool_recovery_pitr as below.<br>(Hostname and port should be rewritten
properly)<br><br> if [ "$i" != "" ];then<br> psql -h Node1 -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br> psql -h Node2 -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br> psql -h Node3 -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br> fi<br><br><br><br>On Sun, 24 Nov 2013 23:23:46 -0800 (PST)<br>Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br><br>> anybody ..... ?<br>> <br>> <br>> <br>> Regards,<br>> Adrian Videanu<br>> <br>> <br>> ________________________________<br>> From: Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>><br>> To: Yugo Nagata <<a
ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>> <br>> Cc: "<a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>" <<a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>> <br>> Sent: Tuesday, November 12, 2013 4:46 PM<br>> Subject: [pgpool-general: 2274] Re: pgpool 3.3.1 replicate setup with 3 nodes<br>> <br>> <br>> <br>> Hi,<br>> <br>> I have updated the scripts with the ones that you provided, but I have the same problem.<br>> My pgpool_recovery_pitr.sh script is the same as before, so I suspect that the problem arises from this script, because after I perform online recovery on one of the nodes, the node that is not involved in the online recovery has a sequence value one unit lower than the other servers.<br>> So when I perform an
insert I get: kind mismatch...<br>> <br>> <br>> Example, db = replication_bench:<br>> <br>> Initial setup<br>> <br>> Node1 - sequence1 = 1184<br>> Node2 - sequence1 = 1184<br>> Node3 - sequence1 = 1184<br>> ------------------------------------------------------<br>> <br>> Node 3 has failed.<br>> <br>> ------------------------------------------------------<br>> Start online recovery:<br>> <br>> pgpool_recovery_pitr.sh<br>> <br>> + read i<br>> + '[' replication_bench '!=' '' ']'<br>> + psql -p 5432 -c 'SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = '\''S'\''' replication_bench<br>> setval<br>> --------<br>> 11084<br>> 11084<br>> (2 rows)<br>> <br>> -------------------------------------------------------<br>> Node 3 is back online BUT:<br>> <br>> Node1 - sequence1 = 1185<br>> Node2 - sequence1 =
1184 <------- Wrong sequence<br>> Node3 - sequence1 = 1185<br>> ----------------------------------------------------------<br>> insert into ........ => kind mismatch among backends.<br>> <br>> <br>> I hope this is clearer than the previous examples.<br>> <br>> <br>> <br>> Regards,<br>> Adrian Videanu<br>> <br>> <br>> ________________________________<br>> From: Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> To: Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> <br>> Cc: "<a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>" <<a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>> <br>>
Sent: Monday, November 11, 2013 8:07 AM<br>> Subject: Re: [pgpool-general: 2237] pgpool 3.3.1 replicate setup with 3 nodes<br>> <br>> <br>> Hi,<br>> <br>> On Thu, 7 Nov 2013 06:23:56 -0800 (PST)<br>> Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br>> <br>> > Hi <br>> > <br>> > <br>> > I have just followed the tutorial, and in the tutorial, the pgpool_remote_start looks like:<br>> <br>> I'm very sorry for confusing you. The manual's sample script is wrong. I'll fix this.<br>> <br>> The right pgpool_remote_start is here, and I attached the same script.<br>> <a href="http://www.pgpool.net/docs/latest/pgpool_remote_start" target="_blank">http://www.pgpool.net/docs/latest/pgpool_remote_start</a><br>> <br>> In addition, I am sending the right basebackup.sh too.<br>> In the previous wrong script,
recovery.conf wasn't sent to the recovery target.<br>> <br>> Could you try the new scripts attached?<br>> (You might have to edit the scripts for your environment: ssh port, pg_ctl path, etc.)<br>> <br>> If those still don't work well, please send pgpool and postgresql logs for analysis.<br>> <br>> > <br>> > #!/bin/sh<br>> > DEST=$1<br>> > DESTDIR=$2<br>> > PGCTL=/usr/local/pgsql/bin/pg_ctl<br>> > # Deploy a base backup<br>> > ssh -T $DEST 'cd /data/; tar zxf pgsql.tar.gz' 2>/dev/null 1>/dev/null < /dev/null<br>> > # Startup PostgreSQL server<br>> > ssh -T $DEST $PGCTL -w -D $DESTDIR start 2>/dev/null 1>/dev/null < /dev/null &<br>> > So, the first line decompresses pgsql.tar.gz and then starts the postgres server.<br>> > <br>> > <br>> > <br>> > I have updated my scripts:<br>> > <br>> > basebackup.sh<br>> >
<br>> > #!/bin/bash -x<br>> > <br>> > DATA=$1<br>> > RECOVERY_TARGET=$2<br>> > RECOVERY_DATA=$3<br>> > <br>> > psql -c "select pg_start_backup('pgpool-recovery')" postgres<br>> > echo "restore_command = 'scp -P 2022 $HOSTNAME:/var/lib/postgresql/9.2/archive/%f %p'" > /var/lib/postgresql/9.2/data/recovery.conf<br>> > tar -C /var/lib/postgresql/9.2/data -zcf pgsql.tar.gz . --exclude='postmaster.log' <br>> --exclude='postmaster.pid' --exclude='postmaster.opts'<br>> > psql -c 'select pg_stop_backup()' postgres<br>> > scp -P 2022 pgsql.tar.gz $RECOVERY_TARGET:$RECOVERY_DATA<br>> > ssh -T -p 2022 root@$RECOVERY_TARGET 'cd /var/lib/postgresql/9.2/data; tar zxf pgsql.tar.gz'<br>> > <br>> > <br>> > #same pgpool_recovery_pitr script<br>> > <br>> > <br>> > pgpool_remote_start<br>> > <br>> > #!/bin/bash -x<br>> > #<br>>
> # Start PostgreSQL on the recovery target node<br>> > #<br>> > DEST=$1<br>> > DESTDIR=$2<br>> > # Deploy a base backup<br>> > ssh -T -p 2022 root@$DEST '/etc/init.d/postgresql-9.2 start'<br>> > rm -rf pgsql.tar.gz<br>> > rm -rf recovery.conf<br>> > <br>> > <br>> > but the problem persists.<br>> > Is this what you are trying to say or I have misunderstood ? <br>> > <br>> > <br>> > <br>> > Regards,<br>> > Adrian Videanu<br>> > <br>> > <br>> > ________________________________<br>> > From: Yugo Nagata<br>> <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> > To: Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> <br>> > Cc: "<a
ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>" <<a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a>> <br>> > Sent: Thursday, November 7, 2013 1:58 PM<br>> > Subject: Re: [pgpool-general: 2237] pgpool 3.3.1 replicate setup with 3 nodes<br>> > <br>> > <br>> > Hi,<br>> > <br>> > In your basebackup.sh, the backed-up data (pgsql.tar.gz) is sent to the recovery target<br>> > host, but isn't extracted. It should be extracted at the 1st stage in basebackup.sh<br>> > and updated to the latest state at the 2nd stage in pgpool_recovery_pitr.sh.<br>> > So, you should not overwrite the data in pgpool_remote_start. It only has<br>> > to restart postgresql.<br>> > <br>> > <br>> > <br>> > On Thu, 7 Nov 2013 03:15:44 -0800 (PST)<br>>
> Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br>> > <br>> > > Hi,<br>> > > <br>> > > Here are my scripts: <br>> > > (<br>> > > There must be something related to this line <br>> > > <br>> > > psql -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br>> > > )<br>> > > <br>> > > <br>> > > <br>> > > basebackup.sh<br>> > > <br>> > > #!/bin/bash -x<br>> > > <br>> > > DATA=$1<br>> > > RECOVERY_TARGET=$2<br>> > > RECOVERY_DATA=$3<br>> > > <br>> > > psql -c "select pg_start_backup('pgpool-recovery')" postgres<br>> > ><br>> echo "restore_command = 'scp -P 2022
$HOSTNAME:/var/lib/postgresql/9.2/archive/%f %p'" > /var/lib/postgresql/9.2/data/recovery.conf<br>> > > tar -C /var/lib/postgresql/9.2/data -zcf pgsql.tar.gz . --exclude='postmaster.log' --exclude='postmaster.pid' --exclude='postmaster.opts'<br>> > > psql -c 'select pg_stop_backup()' postgres<br>> > > scp -P 2022 pgsql.tar.gz $RECOVERY_TARGET:$RECOVERY_DATA<br>> > > <br>> > > <br>> > > pgpool_recovery_pitr.sh<br>> > > <br>> > > #!/bin/bash<br>> > > <br>> > > # Online recovery 2nd stage script<br>> > > #<br>> > > datadir=$1 # master database cluster<br>> > > DEST=$2 # hostname of the DB node to be recovered<br>> > > DESTDIR=$3 # database cluster of the DB node to be recovered<br>> >
><br>> port=5432 # PostgreSQL port number<br>> > > archdir=/var/lib/postgresql/9.2/archive # archive log directory<br>> > > <br>> > > # Force to flush current value of sequences to xlog<br>> > > psql -p $port -t -c 'SELECT datname FROM pg_database WHERE NOT datistemplate AND datallowconn' template1|<br>> > > while read i<br>> > > do<br>> > > if [ "$i" != "" ];then<br>> > > psql -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br>> > > fi<br>> > > done<br>> > > <br>> > > psql -p $port -c "SELECT pgpool_switch_xlog('$archdir')" template1<br>> > > <br>> > > <br>> > > pgpool_remote_start<br>> > > <br>> > > #!/bin/bash -x<br>> > > #<br>> > > # Start
PostgreSQL on the recovery target node<br>> > > #<br>> > > DEST=$1<br>> > > DESTDIR=$2<br>> > > # Deploy a<br>> base backup<br>> > > ssh -T -p 2022 root@$DEST 'cd /var/lib/postgresql/9.2/data; tar zxf pgsql.tar.gz'<br>> > > ssh -T -p 2022 root@$DEST '/etc/init.d/postgresql-9.2 start'<br>> > > rm -rf pgsql.tar.gz<br>> > > rm -rf recovery.conf<br>> > > <br>> > > <br>> > > <br>> > > These are my scripts used in recovery process. I will put here the config file also:<br>> > > <br>> > > # ----------------------------<br>> > > # pgPool-II configuration file<br>> > > # ----------------------------<br>> > > #<br>> > > # This file consists of lines of the form:<br>> > > #<br>> > > # name = value<br>> > > #<br>> > > # Whitespace may be used.
Comments are introduced with "#" anywhere on a line.<br>> > > # The complete list of parameter names and allowed values can be found in the<br>> > > # pgPool-II documentation.<br>> > > #<br>> > > # This file is read on server<br>> startup and when the server receives a SIGHUP<br>> > > # signal. If you edit the file on a running system, you have to SIGHUP the<br>> > > # server for the changes to take effect, or use "pgpool reload". Some<br>> > > # parameters, which are marked below, require a server shutdown and restart to<br>> > > # take effect.<br>> > > #<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # CONNECTIONS<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > >
# - pgpool Connection Settings -<br>> > > <br>> > > listen_addresses = '*'<br>> > > # Host name or IP address to<br>> listen on:<br>> > > # '*' for all, '' for no TCP/IP connections<br>> > > # (change requires restart)<br>> > > port = 9999<br>> > >
# Port number<br>> > ><br>> # (change requires restart)<br>> > > socket_dir = '/tmp'<br>> > > # Unix domain socket path<br>> > >
# The Debian package defaults to<br>> > ><br>> # /var/run/postgresql<br>> > > # (change requires restart)<br>> > > <br>> > > <br>> > > # - pgpool Communication Manager Connection Settings -<br>> > > <br>> > > pcp_port = 9898<br>> > >
# Port number for pcp<br>> > ><br>> # (change requires restart)<br>> > > pcp_socket_dir = '/tmp'<br>> > > # Unix domain socket path for pcp<br>> > >
# The Debian package defaults to<br>> > ><br>> # /var/run/postgresql<br>> > > # (change requires restart)<br>> > > <br>> > > # - Backend Connection Settings -<br>> > > <br>> > > backend_hostname0 = '192.168.91.33'<br>> > >
# Host name or IP address to connect to for backend 0<br>> > > backend_port0 = 5432<br>> > ><br>> # Port number for backend 0<br>> > > backend_weight0 = 1<br>> > > # Weight for backend 0 (only in load balancing mode)<br>> > > backend_data_directory0 = '/var/lib/postgresql/9.2/data'<br>> >
> # Data directory for backend 0<br>> > > backend_flag0 = 'ALLOW_TO_FAILOVER'<br>> > ><br>> # Controls various backend behavior<br>> > > # ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER<br>> > > <br>> > > backend_hostname1 = '192.168.91.34'<br>> > > backend_port1 = 5432<br>> > >
backend_weight1 = 1<br>> > > backend_data_directory1 = '/var/lib/postgresql/9.2/data'<br>> > > backend_flag1 = 'ALLOW_TO_FAILOVER'<br>> > > <br>> > > <br>> > > backend_hostname2 = '192.168.91.35'<br>> > > backend_port2 = 5432<br>> > > backend_weight2 = 1<br>> > > backend_data_directory2 =<br>> '/var/lib/postgresql/9.2/data'<br>> > > backend_flag2 = 'ALLOW_TO_FAILOVER'<br>> > > <br>> > > <br>> > > # - Authentication -<br>> > > <br>> > > enable_pool_hba = on<br>> > > # Use pool_hba.conf for client authentication<br>> > > pool_passwd = ''<br>> > >
# File name of pool_passwd for md5 authentication.<br>> > ><br>> # "" disables pool_passwd.<br>> > > # (change requires restart)<br>> > > authentication_timeout = 60<br>> > >
# Delay in seconds to complete client authentication<br>> > ><br>> # 0 means no timeout.<br>> > > <br>> > > # - SSL Connections -<br>> > > <br>> > > ssl = off<br>> > > # Enable SSL support<br>> > >
# (change requires restart)<br>> > > #ssl_key = './server.key'<br>> > ><br>> # Path to the SSL private key file<br>> > > # (change requires restart)<br>> > > #ssl_cert = './server.cert'<br>> > >
# Path to the SSL public certificate file<br>> > ><br>> # (change requires restart)<br>> > > #ssl_ca_cert = ''<br>> > > # Path to a single PEM format file<br>> > >
# containing CA root certificate(s)<br>> > ><br>> # (change requires restart)<br>> > > #ssl_ca_cert_dir = ''<br>> > > # Directory containing CA root certificate(s)<br>> > >
# (change requires restart)<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # POOLS<br>> > ><br>> #------------------------------------------------------------------------------<br>> > > <br>> > > # - Pool size -<br>> > > <br>> > > num_init_children = 100<br>> > > # Number of pools<br>> > >
# (change requires restart)<br>> > > max_pool = 10<br>> > > # Number of connections per pool<br>> > ><br>> # (change requires restart)<br>> > > <br>> > > # - Life time -<br>> > > <br>> > > child_life_time = 300<br>> > >
# Pool exits after being idle for this many seconds<br>> > > child_max_connections = 0<br>> > > # Pool exits after receiving that many connections<br>> > ><br>> # 0 means no exit<br>> > > connection_life_time = 0<br>> > >
# Connection to backend closes after being idle for this many seconds<br>> > > # 0 means no close<br>> > > client_idle_limit = 0<br>> > ><br>> # Client is disconnected after being idle for that many seconds<br>> > >
# (even inside an explicit transaction!)<br>> > > # 0 means no disconnection<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # LOGS<br>> > ><br>> #------------------------------------------------------------------------------<br>> > > <br>> > > # - Where to log -<br>> > > <br>> > > log_destination = 'syslog'<br>> > >
# Where to log<br>> > > # Valid values are combinations of stderr,<br>> > > # and syslog. Default to stderr.<br>> > > <br>> > > # - What to log<br>> -<br>> > > <br>> > > print_timestamp = on<br>> > >
# Print timestamp on each line<br>> > > # (change requires restart)<br>> > > <br>> > > log_connections = on<br>> > > # Log connections<br>> > > log_hostname = on<br>> > ><br>>
# Hostname will be shown in ps status<br>> > > # and in logs if connections are logged<br>> > > log_statement = off<br>> > > # Log all statements<br>> > > log_per_node_statement = on<br>> > ><br>>
# Log all statements<br>> > > # with node and backend information<br>> > > log_standby_delay = 'none'<br>> > > # Log standby delay<br>> > ><br>>
# Valid values are combinations of always,<br>> > > # if_over_threshold, none<br>> > > <br>> > > # - Syslog specific -<br>> > > <br>> > > syslog_facility = 'LOCAL0'<br>> > > # Syslog local facility. Default to LOCAL0<br>> > > syslog_ident = 'pgpool'<br>> > ><br>>
# Syslog program identification string<br>> > > # Default to 'pgpool'<br>> > > <br>> > > # - Debug -<br>> > > <br>> > > debug_level = 0<br>> > > # Debug message verbosity level<br>> > ><br>>
# 0 means no message, 1 or more mean verbose<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # FILE LOCATIONS<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > > pid_file_name = '/var/run/pgpool.pid'<br>> > > # PID file name<br>> > ><br>>
# (change requires restart)<br>> > > logdir = '/tmp'<br>> > > # Directory of pgPool status file<br>> > > # (change requires restart)<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # CONNECTION POOLING<br>> >
><br>> #------------------------------------------------------------------------------<br>> > > <br>> > > connection_cache = on<br>> > > # Activate connection pools<br>> > > # (change requires restart)<br>> > > <br>> > > # Semicolon separated list of queries<br>> >
><br>> # to be issued at the end of a session<br>> > > # The default is for 8.3 and later<br>> > > reset_query_list = 'ABORT; DISCARD ALL'<br>> > > # The following one is for 8.2 and before<br>> > > #reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'<br>> > > <br>> >
> <br>> > ><br>> #------------------------------------------------------------------------------<br>> > > # REPLICATION MODE<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > > replication_mode = on<br>> > > # Activate replication mode<br>> > > # (change requires restart)<br>> > > replicate_select = off<br>> > ><br>>
# Replicate SELECT statements<br>> > > # when in replication or parallel mode<br>> > > # replicate_select is higher priority than<br>> > ><br>>
# load_balance_mode.<br>> > > <br>> > > insert_lock = on<br>> > > # Automatically locks a dummy row or a table<br>> > > # with INSERT statements to keep SERIAL data<br>> > ><br>>
# consistency<br>> > > # Without SERIAL, no lock will be issued<br>> > > lobj_lock_table = ''<br>> > > # When rewriting lo_creat command in<br>> > ><br>>
# replication mode, specify table name to<br>> > > # lock<br>> > > <br>> > > # - Degenerate handling -<br>> > > <br>> > > replication_stop_on_mismatch = off<br>> > > # On disagreement with the packet kind<br>> > ><br>>
# sent from backend, degenerate the node<br>> > > # which is most likely "minority"<br>> > > # If off, just force to exit this session<br>> > > <br>> > > failover_if_affected_tuples_mismatch = off<br>> > ><br>>
# On disagreement with the number of affected<br>> > > # tuples in UPDATE/DELETE queries, then<br>> > > # degenerate the node which is most likely<br>> > ><br>>
# "minority".<br>> > > # If off, just abort the transaction to<br>> > > # keep the consistency<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # LOAD BALANCING MODE<br>> > ><br>>
#------------------------------------------------------------------------------<br>> > > <br>> > > load_balance_mode = on<br>> > > # Activate load balancing mode<br>> > > # (change requires restart)<br>> > > ignore_leading_white_space = on<br>> > > # Ignore leading white spaces of each
query<br>> > > white_function_list =<br>> ''<br>> > > # Comma separated list of function names<br>> > > # that don't write to database<br>> > > # Regexp are accepted<br>> > > black_function_list = 'nextval,setval'<br>> > ><br>>
# Comma separated list of function names<br>> > > # that write to database<br>> > > # Regexp are accepted<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # MASTER/SLAVE MODE<br>> > ><br>>
#------------------------------------------------------------------------------<br>> > > <br>> > > master_slave_mode = off<br>> > > # Activate master/slave mode<br>> > > # (change requires restart)<br>> > > master_slave_sub_mode = 'slony'<br>> > > # Master/slave sub mode<br>> >
><br>> # Valid values are combinations slony or<br>> > > # stream. Default is slony.<br>> > > # (change requires restart)<br>> > > <br>> > > # - Streaming -<br>> > > <br>> > > sr_check_period = 0<br>> > ><br>>
# Streaming replication check period<br>> > > # Disabled (0) by default<br>> > > sr_check_user = 'postgres'<br>> > > # Streaming replication check user<br>> > ><br>>
# This is necessary even if you disable<br>> > > # streaming replication delay check with<br>> > > # sr_check_period = 0<br>> > > sr_check_password = ''<br>> > ><br>>
# Password for streaming replication check user<br>> > > delay_threshold = 0<br>> > > # Threshold before not dispatching query to standby node<br>> > > # Unit is in bytes<br>> > ><br>>
# Disabled (0) by default<br>> > > <br>> > > # - Special commands -<br>> > > <br>> > > follow_master_command = ''<br>> > > # Executes this command after master failover<br>> > > # Special values:<br>> > ><br>>
# %d = node id<br>> > > # %h = host name<br>> > > # %p = port number<br>> > >
# %D =<br>> database cluster path<br>> > > # %m = new master node id<br>> > > # %H = hostname of the new master node<br>> > > # %M = old master node id<br>> > ><br>>
# %P = old primary node id<br>> > > # %r = new master port number<br>> > > # %R = new master database cluster path<br>> > ><br>>
# %% = '%' character<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # PARALLEL MODE<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > > parallel_mode = off<br>> > > # Activates parallel query mode<br>> > ><br>>
# (change requires restart)<br>> > > pgpool2_hostname = ''<br>> > > # Set pgpool2 hostname<br>> > > # (change requires restart)<br>> > > <br>> > > # - System DB info -<br>> > > <br>> > > system_db_hostname = 'localhost'<br>> > ><br>>
# (change requires restart)<br>> > > system_db_port = 5432<br>> > > # (change requires restart)<br>> > > system_db_dbname = 'pgpool'<br>> > > # (change requires restart)<br>> > > system_db_schema = 'pgpool_catalog'<br>> > ><br>>
# (change requires restart)<br>> > > system_db_user = 'pgpool'<br>> > > # (change requires restart)<br>> > > system_db_password = ''<br>> > > # (change requires restart)<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>>
> > # HEALTH CHECK<br>> > ><br>> #------------------------------------------------------------------------------<br>> > > <br>> > > health_check_period = 2<br>> > > # Health check period<br>> > > # Disabled (0) by default<br>> > > health_check_timeout = 20<br>> > > # Health
check timeout<br>> > ><br>> # 0 means no timeout<br>> > > health_check_user = 'postgres'<br>> > > # Health check user<br>> > > health_check_password = ''<br>> > > # Password for health check user<br>> > > health_check_max_retries = 0<br>> > ><br>>
# Maximum number of times to retry a failed health check before giving up.<br>> > > health_check_retry_delay = 1<br>> > > # Amount of time to wait (in seconds) between retries.<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # FAILOVER AND FAILBACK<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > > failover_command = ''<br>> >
><br>> # Executes this command at failover<br>> > > # Special values:<br>> > > # %d = node id<br>> > >
# %h = host<br>> name<br>> > > # %p = port number<br>> > > # %D = database cluster path<br>> > >
# %m = new master node id<br>> > ><br>> # %H = hostname of the new master node<br>> > > # %M = old master node id<br>> > >
# %P = old primary node id<br>> > ><br>> # %r = new master port number<br>> > > # %R = new master database cluster path<br>> > >
# %% = '%' character<br>> > > failback_command = ''<br>> > ><br>> # Executes this command at failback.<br>> > > # Special values:<br>> > >
# %d = node id<br>> > > # %h = host<br>> name<br>> > > # %p = port number<br>> > >
# %D = database cluster path<br>> > > # %m = new master node id<br>> > ><br>> # %H = hostname of the new master node<br>> > >
# %M = old master node id<br>> > > # %P = old primary node id<br>> > ><br>> # %r = new master port number<br>> > >
# %R = new master database cluster path<br>> > > # %% = '%' character<br>> > > <br>> > > fail_over_on_backend_error = on<br>> > ><br>> # Initiates failover when reading/writing to the<br>> > >
# backend communication socket fails<br>> > > # If set to off, pgpool will report an<br>> > ><br>> # error and disconnect the session.<br>> > > <br>> > > search_primary_node_timeout = 10<br>> > >
# Timeout in seconds to search for the<br>> > > # primary node when a failover occurs.<br>> > ><br>> # 0 means no timeout, keep searching<br>> > >
# for a primary node forever.<br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # ONLINE RECOVERY<br>> > > #------------------------------------------------------------------------------<br>> > > <br>> > > recovery_user = 'postgres'<br>> > ><br>> # Online recovery user<br>> > > recovery_password = ''<br>> > >
# Online recovery password<br>> > > recovery_1st_stage_command = 'basebackup.sh'<br>> > > # Executes a command in first stage<br>> > > recovery_2nd_stage_command = 'pgpool_recovery_pitr.sh'<br>> > ><br>> # Executes a command in second stage<br>> > > recovery_timeout = 900<br>> > >
# Timeout in seconds to wait for the<br>> > > # recovering node's postmaster to start up<br>> > ><br>> # 0 means no wait<br>> > > client_idle_limit_in_recovery = 0<br>> > >
# Client is disconnected after being idle<br>> > > # for that many seconds in the second stage<br>> > ><br>> # of online recovery<br>> > >
# 0 means no disconnection<br>> > > # -1 means immediate disconnection<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # WATCHDOG<br>> > ><br>> #------------------------------------------------------------------------------<br>> > > <br>> > > # - Enabling -<br>> > > <br>> > > use_watchdog = on<br>> > >
# Activates watchdog<br>> > > # (change requires restart)<br>> > > <br>> > > # -Connection to up stream servers -<br>> > > <br>> > > trusted_servers = ''<br>> > ><br>> # trusted server list which are used<br>> > >
# to confirm network connection<br>> > > # (hostA,hostB,hostC,...)<br>> > ><br>> # (change requires restart)<br>> > > ping_path = '/bin'<br>> > >
# ping command path<br>> > > # (change requires restart)<br>> > > <br>> > > # - Watchdog communication Settings -<br>> > > <br>> > > wd_hostname = '192.168.91.31'<br>> > ><br>> # Host name or IP address of this watchdog<br>> > >
# (change requires restart)<br>> > > wd_port = 9000<br>> > > # port number for watchdog service<br>> > ><br>> # (change requires restart)<br>> > > wd_authkey = ''<br>> > >
# Authentication key for watchdog communication<br>> > > # (change requires restart)<br>> > > <br>> > > # - Virtual IP control Setting -<br>> > > <br>> > > delegate_IP = '192.168.91.30'<br>> > ><br>> # delegate IP address<br>> > >
# If this is empty, the virtual IP is never brought up.<br>> > > # (change requires restart)<br>> > > ifconfig_path = '/bin'<br>> > ><br>> # ifconfig command path<br>> > >
# (change requires restart)<br>> > > if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'<br>> > > # startup delegate IP command<br>> > ><br>> # (change requires restart)<br>> > > if_down_cmd = 'ifconfig eth0:0 down'<br>> > >
# shutdown delegate IP command<br>> > > # (change requires restart)<br>> > > <br>> > > arping_path = '/sbin' # arping command path<br>> > ><br>> # (change requires restart)<br>> > > <br>> > > arping_cmd =
'arping -U $_IP_$ -w 1'<br>> > > # arping command<br>> > > # (change requires restart)<br>> > > <br>> > > # - Behavior on escalation Setting -<br>> > > <br>> > > clear_memqcache_on_escalation = on<br>> > ><br>> # Clear all the query cache on shared
memory<br>> > > # when standby pgpool escalates to active pgpool<br>> > > # (= virtual IP holder).<br>> > ><br>> # This should be off if clients connect to pgpool<br>> > >
# not using virtual IP.<br>> > > # (change requires restart)<br>> > > wd_escalation_command = ''<br>> > ><br>> # Executes this command at escalation on new active pgpool.<br>> > >
# (change requires restart)<br>> > > <br>> > > # - Lifecheck Setting - <br>> > > <br>> > > # -- common --<br>> > > <br>> > > #wd_lifecheck_method = 'heartbeat'<br>> > > # Method of watchdog lifecheck ('heartbeat' or 'query')<br>> ><br>> > # (change
requires restart)<br>> > > wd_lifecheck_method = 'heartbeat'<br>> > > wd_interval = 2<br>> > > # lifecheck interval (sec) > 0<br>> > > # (change requires restart)<br>> > > <br>> > > # -- heartbeat mode --<br>> > > <br>> > > wd_heartbeat_port = 9694<br>> > ><br>>
# Port number for receiving heartbeat signal<br>> > > # (change requires restart)<br>> > > wd_heartbeat_keepalive = 2<br>> > > # Interval time of sending heartbeat signal (sec)<br>> > ><br>>
# (change requires restart)<br>> > > wd_heartbeat_deadtime = 30<br>> > > # Deadtime interval for heartbeat signal (sec)<br>> > > # (change requires restart)<br>> > > <br>> > > heartbeat_destination0 = '192.168.91.32'<br>> > ><br>>
# Host name or IP address of destination 0<br>> > > # for sending heartbeat signal.<br>> > > # (change requires restart)<br>> > > heartbeat_destination_port0 = 9694 <br>> > ><br>>
# Port number of destination 0 for sending<br>> > > # heartbeat signal. Usually this is the<br>> > > # same as wd_heartbeat_port.<br>> > ><br>>
# (change requires restart)<br>> > > heartbeat_device0 = 'eth0'<br>> > > # Name of NIC device (such as 'eth0')<br>> > > # used for sending/receiving heartbeat<br>> > ><br>>
# signal to/from destination 0.<br>> > > # This works only when this is not empty<br>> > > # and pgpool has root privilege.<br>> > ><br>>
# (change requires restart)<br>> > > <br>> > > #heartbeat_destination1 = 'host0_ip2'<br>> > > #heartbeat_destination_port1 = 9694<br>> > > #heartbeat_device1 = ''<br>> > > <br>> > > # -- query mode --<br>> > > <br>> > > wd_life_point = 3<br>> > > # lifecheck retry times<br>> > >
# (change requires restart)<br>> > > wd_lifecheck_query = 'SELECT 1'<br>> > > # lifecheck query to pgpool from watchdog<br>> > > # (change requires restart)<br>> > > wd_lifecheck_dbname = 'template1'<br>> > >
# Database name connected for lifecheck<br>> > ><br>> # (change requires restart)<br>> > > wd_lifecheck_user = 'postgres'<br>> > > # watchdog user monitoring pgpools in lifecheck<br>> > >
# (change requires restart)<br>> > > wd_lifecheck_password = ''<br>> > ><br>> # Password for watchdog user in lifecheck<br>> > > # (change requires restart)<br>> > > <br>> > > # - Other pgpool Connection Settings -<br>> > > <br>> > > other_pgpool_hostname0 =
'192.168.91.32'<br>> > > # Host name or IP address to connect to for other pgpool 0<br>> > ><br>> # (change requires restart)<br>> > > other_pgpool_port0 = 9999<br>> > > # Port number for other pgpool 0<br>> > >
# (change requires restart)<br>> > > other_wd_port0 = 9000<br>> > ><br>> # Port number for other watchdog 0<br>> > > # (change requires restart)<br>> > > #other_pgpool_hostname1 = 'host1'<br>> > > #other_pgpool_port1 = 5432<br>> > > #other_wd_port1 = 9000<br>> >
> <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # OTHERS<br>> > > #------------------------------------------------------------------------------<br>> > > relcache_expire = 0<br>> > ><br>> # Life time of relation cache in seconds.<br>> > > # 0 means no cache expiration (the default).<br>> > >
# The relation cache is used for caching the<br>> > ><br>> # query result against PostgreSQL system<br>> > > # catalog to obtain various information<br>> > >
# including table structures or if it's a<br>> > ><br>> # temporary table or not. The cache is<br>> > > # maintained in a pgpool child local memory<br>> > >
# and being kept as long as it survives.<br>> > ><br>> # If someone modifies the table by using<br>> > > # ALTER TABLE or some such, the relcache is<br>> > >
# not consistent anymore.<br>> > > <br>> # For this purpose, relcache_expire<br>> > > # controls the life time of the cache.<br>> > > <br>> > > relcache_size = 256<br>> > >
# Number of relation cache<br>> > > # entry. If you see frequently:<br>> > ><br>> # "pool_search_relcache: cache replacement happend"<br>> > >
# in the pgpool log, you might want to increase this number.<br>> > > <br>> > > check_temp_table = on<br>> > > # If on, enable temporary table check in SELECT statements.<br>> > ><br>> # This initiates queries against system catalog of primary/master<br>> > >
# thus increases the load on the master.<br>> > > # If you are absolutely sure that your system never uses temporary tables<br>> > ><br>> # and you want to save access to primary/master, you could turn this off.<br>> > >
# Default is on.<br>> > > <br>> > > <br>> > > #------------------------------------------------------------------------------<br>> > > # ON MEMORY QUERY MEMORY CACHE<br>> > > #------------------------------------------------------------------------------<br>> > > memory_cache_enabled = off<br>> > > <br>> # If on, use the memory cache functionality, off by default<br>> > > memqcache_method = 'shmem'<br>> > >
# Cache storage method. either 'shmem'(shared memory) or<br>> > > # 'memcached'. 'shmem' by default<br>> > > # (change requires restart)<br>> > > memqcache_memcached_host = 'localhost'<br>> > > <br>> # Memcached host name or IP address. Mandatory if<br>> > >
# memqcache_method = 'memcached'.<br>> > > # Defaults to localhost.<br>> > > # (change requires restart)<br>> > > memqcache_memcached_port = 11211<br>> > > <br>> # Memcached port number. Mandatory if memqcache_method = 'memcached'.<br>> > >
# Defaults to 11211.<br>> > > # (change requires restart)<br>> > > memqcache_total_size = 67108864<br>> > > # Total memory size in bytes for storing memory cache.<br>> > > <br>> # Mandatory if memqcache_method =
'shmem'.<br>> > > # Defaults to 64MB.<br>> > > # (change requires restart)<br>> > > memqcache_max_num_cache = 1000000<br>> > > # Total number of cache entries. Mandatory<br>> > > <br>> # if memqcache_method =
'shmem'.<br>> > > # Each cache entry consumes 48 bytes on shared memory.<br>> > > # Defaults to 1,000,000(45.8MB).<br>> > > # (change requires restart)<br>> > > memqcache_expire = 0<br>> > > <br>> # Memory cache
entry life time specified in seconds.<br>> > > # 0 means infinite life time. 0 by default.<br>> > > # (change requires restart)<br>> > > memqcache_auto_cache_invalidation = on<br>> > > # If on, invalidation of query cache is triggered by corresponding<br>> > > <br>>
# DDL/DML/DCL(and memqcache_expire). If off, it is only triggered<br>> > > # by memqcache_expire. on by default.<br>> > > # (change requires restart)<br>> > > memqcache_maxcache = 409600<br>> > > # Maximum SELECT result size in bytes.<br>> > >
<br>> # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.<br>> > > # (change requires restart)<br>> > > memqcache_cache_block_size = 1048576<br>> > > # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.<br>> > > # Defaults to
1MB.<br>> > ><br>> # (change requires restart)<br>> > > memqcache_oiddir = '/var/log/pgpool/oiddir'<br>> > > # Temporary work directory to record table oids<br>> > > # (change requires restart)<br>> > > white_memqcache_table_list = ''<br>> > ><br>>
# Comma separated list of table names to memcache<br>> > > # that don't write to database<br>> > > # Regexp are accepted<br>> > > black_memqcache_table_list = ''<br>> > ><br>>
# Comma separated list of table names not to memcache<br>> > > # that don't write to database<br>> > > # Regexp are accepted<br>> > > <br>> > > <br>> > > <br>> > > <br>> > > Also, when I use load balance mode, load balancing is done between connections and not between queries, right?<br>> > > I mean that if I have one script that opens one connection and performs 1000 SELECTs, these SELECTs will all be performed on one node.<br>> > > <br>> > > <br>> > > <br>> > > <br>> > > Regards,<br>> > > Adrian Videanu<br>> > > <br>> > > <br>> > > ________________________________<br>> > > From: Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> > > To: Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> <br>> > > Cc: <a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a> <br>> > > Sent: Thursday, November 7, 2013 12:44 PM<br>> > > Subject: Re: [pgpool-general: 2237] pgpool 3.3.1 replicate setup
with 3 nodes<br>> > > <br>> > > <br>> > > Hi,<br>> > > <br>> > > Are you setting recovery_2nd_stage_command properly?<br>> > > (<a href="http://www.pgpool.net/docs/latest/pgpool-en.html#online-recovery" target="_blank">http://www.pgpool.net/docs/latest/pgpool-en.html#online-recovery</a><br>> > > <a href="http://www.pgpool.net/docs/latest/pgpool-en.html#RECOVERY_2ND_STAGE_COMMAND" target="_blank">http://www.pgpool.net/docs/latest/pgpool-en.html#RECOVERY_2ND_STAGE_COMMAND</a>)<br>> > > <br>> > > If this is wrong, recovery might fail and backends might get inconsistent.<br>> > > For analysis, could you please provide your pgpool.conf and all scripts? <br>> > > <br>> > > <br>> > > On Wed, 6 Nov 2013 06:08:05 -0800 (PST)<br>> > > Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com"
href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br>> > > <br>> > > > Hi all,<br>> > > > <br>> > > > I have a pgpool 3.3.1 with 3 postgresql 9.2 backends.<br>> > > > The pgpool is configured as : [ mode ]<br>> > > > Replication Mode<br>> > > > / Load Balance Mode / Watchdog On<br>> > > > <br>> > > > I have a problem when I use online recovery for one of the nodes.<br>> > > > <br>> > > > Scenario: <br>> > > > 1. The 3rd node failed.<br>> > > > 2. Pgpool detects the failure and presents the node as down. (everything is ok)<br>> > > > 3. I press recovery from pgpadmin and the recovery script starts.<br>> > > > 4. The 3rd node is back online, but the sequences on this node and the 1st node are incremented by 1 relative to the 2nd node.<br>> > > > Because of this nothing works, since I get a kind mismatch between nodes.<br>> > > > <br>> > > > Does the replication setup work for more than 2 nodes, and if yes, how should I perform online recovery?<br>> > > > <br>> > > > PS. I use PITR.<br>> > > > <br>> > > > <br>> > > > <br>> > > > Regards,<br>> > > > Adrian Videanu<br>> > > <br>> > > <br>> > > -- <br>> > > Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> > <br>> > <br>> > -- <br>> > Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> <br>> <br>> -- <br>> Yugo Nagata <<a
ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> <br>> <br>> <br>> _______________________________________________<br>> pgpool-general mailing list<br>> <a ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a><br>> <a href="http://www.pgpool.net/mailman/listinfo/pgpool-general" target="_blank">http://www.pgpool.net/mailman/listinfo/pgpool-general</a><br><br><br>-- <br>Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br><br></div> </div> </div> </div></body></html>