<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://pgpool.net/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ishii</id>
	<title>pgpool Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://pgpool.net/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ishii"/>
	<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Special:Contributions/Ishii"/>
	<updated>2026-05-04T00:33:57Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.3</generator>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3987</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3987"/>
		<updated>2025-03-03T04:49:13Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II Frequently Asked Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does configure fail with &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is shipped in the libpq-dev package. Install it (for example with &amp;quot;sudo apt-get install libpq-dev&amp;quot;) before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do records inserted on the primary node not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? If so, this is a known limitation of streaming replication. The inserted record is there, but if you SELECT it using the hash index it will not appear. Hash index changes do not produce WAL records, so they are not replayed on the standby nodes. The solutions are: 1) use a btree index instead, or 2) use pgpool-II native replication.&lt;br /&gt;
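: A minimal illustration of solution 1), replacing the non-WAL-logged hash index with a btree one (the table and index names here are only examples, not from the FAQ):&lt;br /&gt;

```sql
-- Drop the hash index, whose changes are not WAL-logged,
-- and recreate it as a btree index so standbys see the rows.
DROP INDEX accounts_aid_hash;
CREATE INDEX accounts_aid_btree ON accounts USING btree (aid);
```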
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. You can, however, mix different minor versions, for example 9.0.3 and 9.0.4. pgpool-II assumes that the messages sent from PostgreSQL to pgpool-II are always identical; different major versions of PostgreSQL may send different messages, and this would cause trouble for pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. pgpool-II&#039;s replication mode, on the other hand, only requires the database clusters to be logically identical. Beware, however, that your online recovery script must then not use rsync or similar tools, which do physical copying between database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node used for load balancing is selected at the beginning of a session, and all SQL statements are sent to that same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement runs inside an explicit transaction. If it does, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, a SELECT will be load balanced even inside a transaction when running in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the node selection method is not LRU or anything similar. pgpool-II chooses a DB node randomly, taking the &amp;quot;weight&amp;quot; parameter in pgpool.conf into account. This means the chosen DB nodes will not look uniformly distributed in the short term. You might want to inspect the effect of load balancing only after ~100 queries have been sent.&lt;br /&gt;
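: For reference, the per-node weights are set with the backend_weight parameters in pgpool.conf. A sketch with two nodes receiving roughly a 2:1 split of sessions (host names and values are only an example):&lt;br /&gt;

```ini
# pgpool.conf (example values)
backend_hostname0 = 'node0'
backend_weight0 = 2    # node 0 gets about two thirds of the sessions
backend_hostname1 = 'node1'
backend_weight1 = 1    # node 1 gets about one third
```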
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode: DECLARE..FETCH is sent to all DB nodes, because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2 on, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure to start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they close the connection. pgpool-II is complaining that a client disconnected without sending that packet. You can reproduce the problem using psql: connect to pgpool with psql, then kill -9 the psql process, and you will see a similar message in the log. The message does not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II; check the cable and the network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems to work but I find the following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute functions such as pg_current_xlog_location(), which is used for detecting the primary server and checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later will use sr_check_user and sr_check_password for it instead.&lt;br /&gt;
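: A minimal pgpool.conf fragment for the settings discussed above (the user names and passwords are placeholders):&lt;br /&gt;

```ini
# pgpool.conf -- credentials pgpool-II uses for its own internal connections
health_check_user = 'pgpool'        # needed even if health_check_period = 0
health_check_password = 'secret'
# pgpool-II 3.1 and later use these for streaming replication checks instead:
sr_check_user = 'pgpool'
sr_check_password = 'secret'
```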
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. If the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck, waiting forever for pgpool to accept its connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; once that limit is reached, new sessions are queued). PostgreSQL, on the other hand, simply rejects sessions beyond max_connections, so against PostgreSQL you just see errors rather than blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You probably made a mistake somewhere. To help you, here is a table describing the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{| style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support in pgpool-II consists of two parts: 1) between the client and pgpool-II, and 2) between pgpool-II and PostgreSQL. The two are independent of each other: you can enable SSL for #1 only, for #2 only, or for both. Here I explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with OpenSSL. If you build from source code, use the --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, create a server certificate. The command below will ask you for a PEM pass phrase (pgpool will also ask for it at startup).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place. Suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure to use cp -p so that server.key retains its restrictive permissions.&lt;br /&gt;
Alternatively you can set the permissions later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, SSL connection between client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option: SSL only works over TCP/IP,&lt;br /&gt;
not over Unix domain sockets.&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pgpool_regclass function that comes with pgpool. Without pgpool_regclass, pgpool-II only handles table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy max_connections &amp;gt;= (num_init_children * max_pool) rather than just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: To understand this, you need to know how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for a connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;A pgpool child receives the connection request.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks in its pool (which holds up to max_pool&lt;br /&gt;
   connections) for an existing connection with the requested database/user pair.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt;If not found, open a new connection to PostgreSQL and register it in&lt;br /&gt;
   the pool. If the pool has no empty slot, close the oldest&lt;br /&gt;
   connection to PostgreSQL and reuse its slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends a session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
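: Since each of the num_init_children children may cache up to max_pool backend connections (step 5 above), PostgreSQL must allow their product. A worked example with made-up values:&lt;br /&gt;

```ini
# pgpool.conf
num_init_children = 32   # concurrent client sessions
max_pool = 4             # cached backend connections per child

# postgresql.conf: must allow at least 32 * 4 = 128 connections
max_connections = 128
```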
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache lives in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a cached connection for database A/user B, process 12346 does not, and both are idle (no client is connected at this point). If a client connects to process 12345 with database A/user B, the existing connection of 12345 is reused. If the client instead connects to process 12346, 12346 needs to create a new connection. Whether 12345 or 12346 is chosen is not under pgpool&#039;s control; however, in the long run each pgpool child process will be chosen about equally often, so each process&#039;s pool is expected to be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions that are not explicitly committed. pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You probably gave the -f option as a relative path (i.e. &amp;quot;-f pgpool.conf&amp;quot;) rather than a full path (i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;). pgpool derives the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, when pgpool starts without -n, it changes its current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers go into down status in streaming replication mode, and PostgreSQL logs &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows that SELECTs on a standby server still needed to see. The workaround is to set &amp;quot;hot_standby_feedback = on&amp;quot; in your standby servers&#039; postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes, the load of the system pgpool-II runs on climbs as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users report that this is observed only with Linux kernel 3.0; kernels 2.6 and 3.2 do not show the behavior. We suspect there is a problem with the 3.0 kernel. See the discussion in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, a VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog check fails because its &#039;SELECT 1&#039; query cannot get a connection, and the VIP is then transferred to another pgpool. Unfortunately, there is no way to discriminate normal client connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point, and a smaller wd_interval, may mitigate the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method that uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on every PostgreSQL server accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables are not a problem).&lt;br /&gt;
:A related FAQ entry is here: https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later together with pgpool-II 3.3.4 or later, or 3.4.0 or later, you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has a built-in equivalent of pgpool_regclass: the &amp;quot;to_regclass&amp;quot; function.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of various check points to set up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend raising it with Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 and later mitigate the problem by changing the connect timeout (actually the timeout of the select system call) from 1 second to 10 seconds.&lt;br /&gt;
: pgpool-II 3.4 and later also have a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. On Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command, depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, I recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with the debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. The watchdog&#039;s lifecheck starts only after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on the standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case where the JDBC driver wants to use cursors. pgpool-II takes the liberty of distributing SELECTs, including cursor statements, to the standby nodes. Unfortunately, cursor statements need to be executed inside an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema-qualified table names, pgpool-II does not invalidate the on-memory query cache and I get outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without it, pgpool-II ignores the schema name part of a schema-qualified table name, and cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error messages like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What does it mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools such as Zabbix and Nagios periodically send a packet or ping to the port pgpool is listening on. Unfortunately those packets do not have correct contents, and pgpool-II complains about them. If you are not sure who is sending such packets, you can turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they come from such tools, you can stop the monitoring to avoid the problem, or even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects idle connections, and the next time Tomcat tries to send something over such a connection it fails with this error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0; however, this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:   timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:   minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will check every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client timeout on the pgpool side, connections should be closed on the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This helps handle connections that expire while waiting, without handing a disconnected connection to the application.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: pgpool-II issues queries like the one above to the master node for internal use. When the user query runs in extended protocol mode (sent from the JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately would break the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, the log does show that the query finishes.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after a certain number of minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long PostgreSQL cancels it. Depending on the size of the database, online recovery can take a very long time. Make sure to disable statement_timeout or set it to a long enough value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fail? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II detected that node 0 returned &amp;quot;N&amp;quot; (a NOTICE message from PostgreSQL) while node 1 returned &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: pgpool-II expects node 0 and node 1 to return identical messages; since they did not, it threw an error.&lt;br /&gt;
: Probably some log/message settings differ between node 0 and node 1. Please check client_min_messages and similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If one of the DB nodes returns false, that node is the primary node and the search is done.&lt;br /&gt;
: Because a node that is still being promoted may return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and issues the SELECT to each DB node again, repeating until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
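: The retry loop described above can be modeled in miniature as follows. This is only an illustrative sketch, not pgpool-II code: probe stands in for issuing SELECT pg_is_in_recovery() to a node, and the function names are hypothetical.&lt;br /&gt;

```python
import time

def find_primary_node(nodes, probe, search_primary_node_timeout, sleep=time.sleep):
    """Toy model of pgpool-II's primary search.

    probe(node) stands in for issuing SELECT pg_is_in_recovery() to the
    node: True means 'standby', False means 'primary'.  The loop retries
    once per second until the total sleep time exceeds the timeout."""
    slept = 0
    while True:
        for node_id, node in enumerate(nodes):
            if probe(node) is False:   # not in recovery -> primary found
                return node_id
        if slept >= search_primary_node_timeout:
            return None                # no primary found within the timeout
        sleep(1)
        slept += 1
```

: With probe returning False only for the second node, the sketch returns node id 1; if every node reports it is in recovery, it gives up once the timeout is exhausted.&lt;br /&gt;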
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a failover, because it makes PostgreSQL emit the same error code as a postmaster shutdown. pgpool-II 3.6 or later mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when a failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many process where each process corresponds to client session. When failover occurs, each process may iterate on a loop for each backend without knowing a backend goes down. This may result in incorrect processing, or segfault in the worst case. For this reason, when failover occurs, pgpool-II parent process interrupt child process using signal to let them exit. Note that switch over using pcp_detach node has same effect.&lt;br /&gt;
: From Pgpool-II 3.6 or greater, however, a fail over does not cause the disconnection in certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process that lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 this restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a new PCP worker process is created or completes execution of a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid and pool_passwd, respectively. From now on I denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from #3 and #4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, whose salts are s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1) and sends it to b1; likewise it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If both b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why doesn&#039;t Pgpool-II automatically recognize that a database has come back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean it is connected to the current primary node. It may be connected to a different primary node, or it may not even be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs sent to the standby node could return different results from the primary, which would be a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does nothing to make the standby node online; it just reloads the configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically bring a standby server online if it is safe enough. See the configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
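: For example, in Pgpool-II 4.1 or later (the values are illustrative):&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
auto_failback = on&lt;br /&gt;
# minimum interval between automatic failback operations, in seconds&lt;br /&gt;
auto_failback_interval = 60&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;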
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets all DB node statuses to down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, after the timeout occurs, PostgreSQL raises a FATAL error, which triggers a failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, a failover does not happen when the FATAL error occurs, but the session is still terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to values less than idle_in_transaction_session_timeout. With this, connections are closed before the timeout expires, so the session is not terminated by the FATAL error. However, the connection pools are removed whenever one of these conditions is satisfied, which may affect performance.&lt;br /&gt;
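: As a sketch, the second workaround could look like this in pgpool.conf, assuming idle_in_transaction_session_timeout is 60 seconds on the PostgreSQL side (the values are illustrative):&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
# all shorter than idle_in_transaction_session_timeout (60s) in postgresql.conf&lt;br /&gt;
connection_life_time = 30&lt;br /&gt;
child_life_time = 30&lt;br /&gt;
client_idle_limit = 30&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;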
&lt;br /&gt;
=== &#039;&#039;&#039;How can I check the status of the PostgreSQL backends connected to Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity by matching it with its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using the dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I run Pgpool-II as a non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II from the RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit the pgpool.service file so that Pgpool-II is started as the postgres user&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy the ip/ifconfig/arping commands to a place where the user has access permission and add the setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These tools are not aware of each other. You should use Pgpool-II without repmgr, or repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails on CentOS 6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II does not support GSSAPI authentication yet, but GSSAPI is requested by clients on CentOS 6. Therefore, the connection attempt fails on CentOS 6.&lt;br /&gt;
: A workaround is to set an environment variable in the client to disable GSSAPI encryption:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;The watchdog standby does not take over when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason you need this is explained in the 4.1 release notes:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even-number (i.e. 2, 4, 6...) watchdog clusters. Odd-number clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2-node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome because no new primary PostgreSQL exists if the existing primary goes down. Probably 2-node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of even-number clusters of 4 or more watchdog nodes will benefit from leaving this parameter off, because it prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
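: For a 2-node watchdog cluster that should keep the pre-4.1 behavior, the setting in pgpool.conf is simply:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enable_consensus_with_half_votes = on&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;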
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting a &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot; error?&#039;&#039;&#039; ===&lt;br /&gt;
: This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like the following in the pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: From this you can see that you need to fix pg_hba.conf on backend 1.&lt;br /&gt;
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message reads &amp;quot;master&amp;quot; in Pgpool-II 4.1 or earlier. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or earlier.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords; neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set the clear text password in pool_passwd.&lt;br /&gt;
: Related bug tracker entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does show pool_nodes not show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons.&lt;br /&gt;
: The first is that sr_check_user does not have enough privilege to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
: Another reason can be the setting of the backend_application_name parameter for the standby node in pgpool.conf. It must match the application_name in the primary_conninfo parameter in the standby&#039;s postgresql.conf.&lt;br /&gt;
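: For example, for standby node 1 the two settings must agree like this (the host and user names are illustrative):&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
# pgpool.conf&lt;br /&gt;
backend_application_name1 = &#039;server1&#039;&lt;br /&gt;
&lt;br /&gt;
# primary_conninfo setting on the standby (server1)&lt;br /&gt;
primary_conninfo = &#039;host=primary-host user=repl application_name=server1&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;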
&lt;br /&gt;
=== &#039;&#039;&#039;For client authentication I want to avoid maintaining pool_passwd file. What&#039;s the recommended way to do that?&#039;&#039;&#039; ===&lt;br /&gt;
: See this email thread.&lt;br /&gt;
: [https://www.pgpool.net/pipermail/pgpool-general/2023-August/001572.html][pgpool-general: 8897] pgpool forwarding database users/passwords&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does Pgpool-II use a lot of CPU when I enable SSL?&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II uses OpenSSL, which has known performance issues in version 3.0.2.&lt;br /&gt;
: Some operating systems ship that version of OpenSSL, including Ubuntu 22.04 and 24.04. The issue has been fixed in OpenSSL 3.1 but was never backported to 3.0. In Ubuntu 24.10 the issue has been fixed, but 24.10 is not an LTS release.&lt;br /&gt;
: The original information is provided here:&lt;br /&gt;
: https://github.com/pgpool/pgpool2/issues/93#issuecomment-2691037744&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query, etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3940</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3940"/>
		<updated>2025-01-06T23:33:55Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* For client authentication I want to avoid maintaining pool_passwd file. What&amp;#039;s the recommended way to do that? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why configure fails by &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in libpq-dev package. You need to install it before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why records inserted on the primary node do not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? This is a known limitation of streaming replication. The inserted record is there, but if you SELECT the record using the hash index, it will not appear. Hash index changes do not produce WAL records, so they are not reflected on the standby nodes. (Note that hash indexes are WAL-logged in PostgreSQL 10 and later.) Solutions are: 1) use a btree index instead, or 2) use pgpool-II native replication.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand, you can mix different minor versions of PostgreSQL, for example 9.0.3 and 9.0.4. Pgpool-II assumes the messages from PostgreSQL to pgpool-II are always identical. Different major versions of PostgreSQL may send different messages, and this would cause trouble for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. On the other hand, pgpool-II&#039;s replication mode only requires that the database clusters be logically identical. Beware, however, that the online recovery script must not use rsync or some such tool, which does physical copying among database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node selection for load balancing is decided at the beginning of a session, so all SQL statements are sent to the same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is in an explicit transaction or not. If the statement is in a transaction, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, a SELECT will be load balanced even in a transaction if pgpool-II operates in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method for choosing the DB node is not LRU or some such. Pgpool-II chooses the DB node randomly, taking the &amp;quot;weight&amp;quot; parameter in pgpool.conf into account. This means that the chosen DB nodes are not uniformly distributed in the short term. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode, i.e. DECLARE...FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure to start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or you can use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II is complaining that a client disconnected without sending that packet. You can reproduce the problem using psql: connect to pgpool using psql, then kill -9 the psql process. You will see a similar message in the log. The message does not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and the network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server and checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later uses sr_check_user and sr_check_password for this purpose instead.&lt;br /&gt;
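: A minimal sketch of the relevant pgpool.conf settings (the user names and passwords are illustrative):&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
# used by pgpool-II 3.0 for the internal connection above&lt;br /&gt;
health_check_user = &#039;pgpool&#039;&lt;br /&gt;
health_check_password = &#039;secret&#039;&lt;br /&gt;
&lt;br /&gt;
# used instead in pgpool-II 3.1 or later&lt;br /&gt;
sr_check_user = &#039;pgpool&#039;&lt;br /&gt;
sr_check_password = &#039;secret&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;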
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. So if the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck, because it waits forever for pgpool to accept its connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; when that limit is reached, new sessions are queued). PostgreSQL, on the other hand, does not accept more concurrent sessions than max_connections, so in that case you just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help you, here is a table describing the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{|style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between the client and pgpool-II, and 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other: you can enable SSL for only #1, only #2, or both. I explain #1 here (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with openssl. If you build from source code, use --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, create a server certificate. The command below asks for a PEM pass phrase (which is also asked for when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place; suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure you use cp -p to retain the appropriate permissions on server.key.&lt;br /&gt;
Alternatively, you can set the permissions later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, the SSL connection between the client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option: SSL only works over TCP/IP,&lt;br /&gt;
not over Unix domain sockets. &lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected pgpool-II to replace the current_timestamp call with a time constant in my INSERT query, but it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pool_regclass function that comes with pgpool. Without pool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy max_connections &amp;gt;= (num_init_children * max_pool) and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you need to understand how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;pgpool child receives connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for an existing connection in the pool (which&lt;br /&gt;
   holds up to max_pool connections) with the requested database/user pair.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt;If not found, open a new connection to PostgreSQL and register it in&lt;br /&gt;
   the pool. If the pool has no empty slot, close the oldest&lt;br /&gt;
   connection to PostgreSQL and reuse the slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends a session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
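: As a concrete (illustrative) example: with the settings below, each of the 32 pgpool children can hold up to 4 pooled PostgreSQL connections, so PostgreSQL must allow at least 32 * 4 = 128 connections:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
# pgpool.conf&lt;br /&gt;
num_init_children = 32&lt;br /&gt;
max_pool = 4&lt;br /&gt;
&lt;br /&gt;
# postgresql.conf: at least num_init_children * max_pool&lt;br /&gt;
max_connections = 128&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;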
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a cached connection for database A/user B, process 12346 does not, and both 12345 and 12346 are idle (no client is connected at this point). If a client connection for database A/user B is handled by process 12345, the existing connection of 12345 is reused. If, on the other hand, it is handled by process 12346, 12346 needs to create a new connection. Which of 12345 and 12346 is chosen is not under pgpool&#039;s control. However, in the long run each pgpool child process is chosen equally often, so each process&#039;s pool is expected to be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions that are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You probably gave the -f option as a relative path, i.e. &amp;quot;-f pgpool.conf&amp;quot;, rather than a full path, i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;. Pgpool derives the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without the -n option, it changes its current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers in down status in streaming replication mode and PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows that SELECTs on the standby server still need to see. The workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
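The workaround is a single setting on each standby; a minimal postgresql.conf fragment:

```
# standby server's postgresql.conf
hot_standby_feedback = on    # standby tells the primary which rows its
                             # queries still need, so vacuum keeps them
```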
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system pgpool-II is running on spikes as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users report that this is observed only on Linux kernel 3.0; kernels 2.6 and 3.2 do not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussion in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog check fails because its &amp;quot;SELECT 1&amp;quot; fails, and the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal client connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point and a smaller wd_interval may mitigate the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on all PostgreSQL servers to be accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:A related FAQ is here: https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later and pgpool-II 3.3.4 or later, or 3.4.0 or later, you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has the built-in function &amp;quot;to_regclass&amp;quot;, which works like pgpool_regclass.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of the various checkpoints for setting up md5 authentication. Please take a look at it:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend complaining to Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 and later mitigate the problem by changing the timeout value for connect (actually the select system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also, pgpool-II 3.4 and later have a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. In Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command, depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, I recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with the debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. The watchdog&#039;s lifecheck starts only after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case where the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs to standby nodes, including cursor statements. Unfortunately, cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate on memory query cache and I got outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without it, pgpool-II ignores the schema name part of a schema qualified table name, and cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error message like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What does it mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools, including Zabbix and Nagios, periodically send a packet or ping to the port pgpool is listening on. Unfortunately, those packets do not have correct contents, and pgpool-II complains about them. If you are not sure who is sending such packets, you can turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they are from such tools, you can stop the monitoring to avoid the problem, or better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects the connection, and the next time Tomcat tries to send something to pgpool, it fails with the error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However, this will leave lots of idle connections.&lt;br /&gt;
: Another solution, provided by Lachezar Dobrev, is:&lt;br /&gt;
: You might solve this by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:    timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:    minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will check every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client time-out on the pgpool side, connections should be closed on the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set:&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This helps with connections that expire while waiting, without handing a disconnected connection to the application.&lt;br /&gt;
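Put together, the suggested settings form a Resource definition in Tomcat&#039;s context.xml roughly like the following sketch (the JNDI name, host, database, and credentials are illustrative):

```xml
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://pgpool-host:9999/mydb"
          username="appuser" password="secret"
          minIdle="0"
          timeBetweenEvictionRunsMillis="5000"
          minEvictableIdleTimeMillis="60000"
          testOnBorrow="true"
          validationQuery="SELECT version();"/>
```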
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the above to the master node for internal use. When a user query runs in extended protocol mode (sent from a JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, it logs that the query finished.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after certain minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL cancels it. Depending on the size of the database, online recovery can take a very long time. Make sure to disable statement_timeout or set it to a long enough value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fails ? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detected that node 0 returned &amp;quot;N&amp;quot; (a NOTICE message from PostgreSQL) while node 1 returned &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Although pgpool-II expects nodes 0 and 1 to return identical messages, here they do not, so pgpool-II raised an error.&lt;br /&gt;
: Probably certain log/message settings differ between node 0 and node 1. Please check client_min_messages or similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If a DB node returns false, that node is the primary node, and the search is done.&lt;br /&gt;
: Because a node that is being promoted can still return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT to each DB node again until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a fail over because it makes PostgreSQL emit an identical error code as postmaster shutdown. Pgpool-II 3.6 or greater mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many processes, where each process corresponds to a client session. When a failover occurs, each process may be iterating over the backends without knowing that a backend has gone down. This may result in incorrect processing, or a segfault in the worst case. For this reason, when a failover occurs, the pgpool-II parent process interrupts the child processes with a signal to make them exit. Note that switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: From Pgpool-II 3.6 onward, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process which lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 the single-PCP-command restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a new PCP worker process is created or completes execution of a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid or pool_passwd. From now on I denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from steps 3 and 4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, and the salts are s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1), and sends it to b1; likewise it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If both b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
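The computations in the first few steps can be sketched in shell with md5sum (a simplification: in the real protocol the salt is 4 raw random bytes rather than a hex string, and the credentials here are illustrative):

```shell
# Illustrative credentials and salt (a real salt is 4 random bytes).
user=alice
password=secret
s0=1a2b3c4d

# S = md5(password + username), as stored in pg_authid / pool_passwd.
S=$(printf '%s' "${password}${user}" | md5sum | cut -d' ' -f1)

# The frontend's reply to the salt challenge: md5(S + s0).
reply=$(printf '%s' "${S}${s0}" | md5sum | cut -d' ' -f1)

# pgpool recomputes the same value from pool_passwd and compares.
expected=$(printf '%s' "${S}${s0}" | md5sum | cut -d' ' -f1)
test "$reply" = "$expected" && echo "md5 auth step OK"
```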
&lt;br /&gt;
=== &#039;&#039;&#039;Why doesn&#039;t Pgpool-II automatically recognize that a database has come back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible, but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean it connects to the current primary node. It may connect to a different primary node, or it may not even be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs to the standby node could return different results from the primary, which would be a disaster for database applications.&lt;br /&gt;
: Also, please note that &amp;quot;pgpool reload&amp;quot; does nothing to recognize a standby node as online. It just reloads the configuration files.&lt;br /&gt;
: Note that in Pgpool-II 4.1 or later, it is possible to automatically bring a standby server back online if it is safe enough. See the configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, after the timeout occurs, PostgreSQL raises a FATAL error, which triggers a failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, a failover will not happen if the FATAL error occurs, but the session will still be terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to less than idle_in_transaction_session_timeout. With this, the session will not be terminated by the FATAL error. However, even if the FATAL error does not occur, pooled connections are removed whenever one or more of these conditions is satisfied, which may affect performance.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I check the status of the PostgreSQL backend connected by Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity by matching it with its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using the dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How do I run Pgpool-II as a non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II using RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit the pgpool.service file to start Pgpool-II as the postgres user&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy the ip/ifconfig/arping commands to a location the user can access and add the setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These two pieces of software are not aware of each other. You should use Pgpool-II without repmgr, or use repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails on CentOS 6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II doesn&#039;t support GSSAPI authentication yet, but GSSAPI is requested by clients on CentOS 6. Therefore, the connection attempt fails on CentOS 6. &lt;br /&gt;
: A workaround is to set an environment variable in the client to disable GSSAPI encryption: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;A watchdog standby does not take over as master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even numbers (i.e. 2, 4, 6...) of watchdog nodes. Odd-numbered clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2-node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome because no new primary PostgreSQL exists if the existing primary goes down. Probably 2-node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of clusters with 4 or more (even-numbered) watchdog nodes will benefit from leaving this parameter off because it prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting an error &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot;. What does it mean?&#039;&#039;&#039; ===&lt;br /&gt;
:This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: This tells you that you need to fix backend 1&#039;s pg_hba.conf.&lt;br /&gt;
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords. Neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set the clear text password in pool_passwd.&lt;br /&gt;
: Related bug tracker entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why doesn&#039;t show pool_nodes show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons.&lt;br /&gt;
: The first is that sr_check_user does not have enough privilege to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
: The other possible reason is the setting of the backend_application_name parameter for the standby node in pgpool.conf. It must match the application_name in the primary_conninfo parameter in postgresql.conf.&lt;br /&gt;
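A sketch of the matching settings (host names, node index, and replication user are illustrative):

```
# pgpool.conf (entry for the standby node)
backend_hostname1 = 'standby1'
backend_application_name1 = 'standby1'

# standby's postgresql.conf: application_name must match the value above
primary_conninfo = 'host=primary port=5432 user=repl application_name=standby1'
```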
&lt;br /&gt;
=== &#039;&#039;&#039;For client authentication I want to avoid maintaining pool_passwd file. What&#039;s the recommended way to do that?&#039;&#039;&#039; ===&lt;br /&gt;
: See this email thread.&lt;br /&gt;
: [pgpool-general: 8897] pgpool forwarding database users/passwords: https://www.pgpool.net/pipermail/pgpool-general/2023-August/001572.html&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query, etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3922</id>
		<title>Roadmap</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3922"/>
		<updated>2024-11-07T06:34:54Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Upcoming minor releases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming minor releases == &lt;br /&gt;
&lt;br /&gt;
PgPool Global Development Group will make at least one minor release quarterly according to a predefined schedule.&lt;br /&gt;
&lt;br /&gt;
If there are important bug fixes or security issues, more releases will be made between these scheduled dates.&lt;br /&gt;
&lt;br /&gt;
The current schedule for upcoming releases is: &lt;br /&gt;
&lt;br /&gt;
* February 29th, 2024&lt;br /&gt;
* May 16th, 2024&lt;br /&gt;
* August 15th, 2024&lt;br /&gt;
* November 21st, 2024&lt;br /&gt;
* February 20th, 2025&lt;br /&gt;
* May 15th, 2025&lt;br /&gt;
* August 21st, 2025&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3913</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3913"/>
		<updated>2024-08-27T02:22:54Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only memcached but also to store the oid map info in it, so that the info is shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
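: For example, in psql (the table name is illustrative):&lt;br /&gt;
 \set FETCH_COUNT 100&lt;br /&gt;
 SELECT * FROM t1;  -- psql silently runs this through the &amp;quot;_psql_cursor&amp;quot; cursor&lt;br /&gt;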
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs that touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
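: A sketch of the current behavior (the table name and setting are illustrative):&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET work_mem = &#039;64MB&#039;;   -- currently counted as a write query&lt;br /&gt;
 SELECT * FROM t1;        -- hence sent to the primary, though any node would return the latest data&lt;br /&gt;
 COMMIT;&lt;br /&gt;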
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
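: A hypothetical (not yet existing) parameter might look like:&lt;br /&gt;
 black_table_list = &#039;mydb.public.accounts,mydb.public.orders&#039;&lt;br /&gt;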
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently queries are not load balanced to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when adding them. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions in a failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions in a failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is possible to use IPv6 addresses for PostgreSQL backend servers, the listening addresses of pgpool-II itself, and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
:&lt;br /&gt;
: This has been implemented in 4.6.&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even when processing a read-only SELECT.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement but &amp;quot;Ready for query&amp;quot; only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo that if Pgpool-II cannot process a multi-statement query properly, it should have an option to prohibit multi-statement queries (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented in branches from master (to become 4.5) down to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at a lower cost than the SQL parser. Still, it is more expensive than simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and the query is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query, it is technically impossible as stated above. So we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow the use of an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use IPv6 addresses for PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
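: A sketch of the 4.4-style setting (the parameter name delay_threshold_by_time and its unit here are assumptions; check the 4.4 manual):&lt;br /&gt;
 delay_threshold_by_time = 10&lt;br /&gt;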
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which can specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this so they are resolved against DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to nodes other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016 held at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can specify down nodes using an ordinary text editor.&lt;br /&gt;
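: For example, with three backends, a pgpool_status file marking node 1 as down might look like this (one status word per backend in node id order; an illustration, not taken from the original):&lt;br /&gt;
 up&lt;br /&gt;
 down&lt;br /&gt;
 up&lt;br /&gt;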
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
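: A sketch of these parameters (the database and application names are illustrative; see the manual for the full value syntax):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary,reportdb:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;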
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the in-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool cannot recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it must include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, new switch to control the time out is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to enhance this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
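: For illustration, a per-backend configuration in the style suggested above might look like the following sketch. The parameter names here follow the &amp;quot;backend_healthcheck_username0&amp;quot; naming from the mailing list proposal and are illustrative, not necessarily the names that shipped:&lt;br /&gt;

```
# hypothetical pgpool.conf fragment: per-backend health check credentials
backend_healthcheck_username0 = 'monitor0'
backend_healthcheck_password0 = 'secret0'
backend_healthcheck_database0 = 'appdb0'
backend_healthcheck_username1 = 'monitor1'
backend_healthcheck_password1 = 'secret1'
backend_healthcheck_database1 = 'appdb1'
```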
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to create a separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all the backends. Hence, if it takes a long time to successfully check one backend and the timeout occurs while checking the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
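: For example, a configuration following the &amp;quot;postgres:primary(0.3)&amp;quot; notation above might look like this sketch (node number and weights chosen for illustration):&lt;br /&gt;

```
# pgpool.conf sketch: load balance weight ratios per redirect rule.
# Queries on database "postgres" go to the primary with weight 0.3;
# queries from application "myapp" go to standby node 1 with weight 0.7.
database_redirect_preference_list = 'postgres:primary(0.3)'
app_name_redirect_preference_list = 'myapp:1(0.7)'
```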
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3912</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3912"/>
		<updated>2024-08-27T02:22:21Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also need to store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
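: For example, pg_controldata reports the cluster state, so a check along these lines (a sketch; the exact grep pattern depends on locale and PostgreSQL version) could decide whether pg_rewind is applicable:&lt;br /&gt;

```
# run against the target node's data directory; a cleanly stopped
# cluster reports "Database cluster state: shut down"
pg_controldata "$PGDATA" | grep "Database cluster state"
```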
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding which is different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or only sent it to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications including psql could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 addresses for the PostgreSQL backend servers, the listening addresses of pgpool-II itself and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP, and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently a &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes could retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
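: For illustration, such a configuration might look like the following sketch. These parameter names are hypothetical; no such parameters exist yet:&lt;br /&gt;

```
# hypothetical pgpool.conf fragment: tables whose queries always go to the primary
black_table_list = 'myapp.public.accounts,myapp.public.orders'
# tables that are always safe to load balance to standbys
white_table_list = 'myapp.public.articles'
```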
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but between Pgpool-II and backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Now we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources, but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config variables requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also specifies which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions in a failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but it is actually unnecessary to connect to standby servers in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions in a failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 addresses for the PostgreSQL backend servers, the listening addresses of pgpool-II itself and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
:&lt;br /&gt;
: This has been implemented in 4.6.&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if it is processing a read-only SELECT.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi statement query into separate queries, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and the query is not a multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
: This has actually been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered quite recently that this item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
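: For example, the per-standby time lag can be read on the primary (PostgreSQL 10 or later) with a query along these lines:&lt;br /&gt;

```sql
-- run on the primary; replay_lag is an interval and is not available
-- on servers older than PostgreSQL 10
SELECT application_name, client_addr, replay_lag
FROM pg_stat_replication;
```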
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how much the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server or not. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: P.S. temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. there&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has been already done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost 0 users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools executing failover.sh simultaneously ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself, but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non blocking connect(2). The time out parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the time out is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time when some of the DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
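: As an illustrative sketch (check the exact format against your version&#039;s documentation), the ASCII pgpool_status file is expected to hold one status word per backend node, so a known-dead node can be marked before the very first startup:&lt;br /&gt;

```
up
down
up
```

: In this hypothetical three-node file, the second line tells pgpool-II that backend node 1 is down, so startup does not spend health-check retries on it.&lt;br /&gt;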
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0. pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with the simple query protocol. Need to enhance it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check on a per-backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to be able to specify these on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool specific SET command would be useful. For example, using &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to create a separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all the backends. Hence, if it takes a long time to successfully check one backend and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with supporting this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
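: For illustration only (the parameter names are from the item above; the database names and ratios are made-up example values), the syntax could appear in pgpool.conf like this:&lt;br /&gt;

```
# hypothetical pgpool.conf fragment; names and ratios are example values
database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;
app_name_redirect_preference_list = &#039;myapp:standby(0.7)&#039;
```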
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Main_Page&amp;diff=3911</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Main_Page&amp;diff=3911"/>
		<updated>2024-08-25T08:05:31Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Contacts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--&lt;br /&gt;
&#039;&#039;&#039;MediaWiki has been successfully installed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Consult the [https://meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;!-- MAINTENANCE MESSAGE --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;[!] Server maintenance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The pgpool.net server will be shut down for maintenance work&lt;br /&gt;
from 12:00 JST (UTC+0900) to 13:00 JST (UTC+0900), Tue, 28 May 2024.&lt;br /&gt;
All services on pgpool.net will be unavailable for the maintenance period.&lt;br /&gt;
Sorry for inconvenience.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;[!] Server maintenance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The pgpool.net server will be shut down for maintenance work&lt;br /&gt;
from 12:00 JST (UTC+0900) to 13:00 JST (UTC+0900), Tuesday, June 28, 2022.&lt;br /&gt;
All services on pgpool.net will be unavailable for the maintenance period.&lt;br /&gt;
Sorry for inconvenience.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&lt;br /&gt;
Server maintenance has finished. Most of the functionality is working, but the mailing lists do not work at this moment. Our engineers are working on it. Thank you for your patience.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Welcome to Pgpool Wiki! =&lt;br /&gt;
&lt;br /&gt;
== What is Pgpool-II? ==&lt;br /&gt;
&lt;br /&gt;
Pgpool-II is a middleware that works between PostgreSQL servers and a PostgreSQL database client. It is distributed under a license similar to BSD and MIT. It provides the following features.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Connection Pooling&#039;&#039;&#039;&lt;br /&gt;
: Pgpool-II saves connections to the PostgreSQL servers, and reuses them whenever a new connection with the same properties (i.e. username, database, protocol version) comes in. This reduces connection overhead, and improves the system&#039;s overall throughput.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Replication&#039;&#039;&#039;&lt;br /&gt;
: Pgpool-II can manage multiple PostgreSQL servers. Using the replication function enables creating a realtime backup on 2 or more physical disks, so that the service can continue without stopping servers in case of a disk failure.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Load Balancing&#039;&#039;&#039;&lt;br /&gt;
: If a database is replicated, executing a SELECT query on any server will return the same result. Pgpool-II takes advantage of the replication feature to reduce the load on each PostgreSQL server by distributing SELECT queries among multiple servers, improving the system&#039;s overall throughput. At best, performance improves proportionally to the number of PostgreSQL servers. Load balancing works best in a situation where there are a lot of users executing many queries at the same time.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Limiting Exceeding Connections&#039;&#039;&#039;&lt;br /&gt;
: There is a limit on the maximum number of concurrent connections with PostgreSQL, and new connections are rejected once this limit is reached. Raising the maximum number of connections, however, increases resource consumption and affects system performance. pgpool-II also has a limit on the maximum number of connections, but extra connections are queued instead of returning an error immediately.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Watchdog&#039;&#039;&#039;&lt;br /&gt;
: Watchdog can coordinate multiple Pgpool-II instances, creating a robust cluster system and avoiding the single point of failure or split brain. Watchdog performs lifechecks against other Pgpool-II nodes to detect a fault of Pgpool-II. If the active Pgpool-II goes down, a standby Pgpool-II can be promoted to active and take over the virtual IP. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;In Memory Query Cache&#039;&#039;&#039;&lt;br /&gt;
: The in memory query cache allows saving a pair of a SELECT statement and its result. If an identical SELECT comes in, Pgpool-II returns the value from the cache. Since no SQL parsing nor access to PostgreSQL is involved, using the in memory cache is extremely fast. On the other hand, it might be slower than the normal path in some cases, because it adds some overhead for storing cache data. &lt;br /&gt;
&lt;br /&gt;
Pgpool-II speaks PostgreSQL&#039;s backend and frontend protocol, and relays messages between a backend and a frontend. Therefore, a database application (frontend) thinks that Pgpool-II is the actual PostgreSQL server, and the server (backend) sees Pgpool-II as one of its clients. Because Pgpool-II is transparent to both the server and the client, an existing database application can be used with Pgpool-II almost without a change to its sources.&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II License ==&lt;br /&gt;
&lt;br /&gt;
See [https://pgpool.net/mediawiki/index.php/pgpool-II_License License].&lt;br /&gt;
&lt;br /&gt;
== Stable versions ==&lt;br /&gt;
&lt;br /&gt;
* Pgpool-II: 4.5, 4.4, 4.3, 4.2, 4.1&lt;br /&gt;
&lt;br /&gt;
== Contacts ==&lt;br /&gt;
&lt;br /&gt;
* If you have technical questions regarding Pgpool-II, please subscribe to the [https://pgpool.net/mediawiki/index.php/Mailing_lists Pgpool-II mailing lists] and send questions there (pgpool-general is a good starting point). Please &#039;&#039;&#039;do not&#039;&#039;&#039; send technical questions via private email. We will not respond to such messages.&lt;br /&gt;
&lt;br /&gt;
* If you believe that you have found a vulnerability/security issue in Pgpool-II, please send a &#039;&#039;&#039;private email&#039;&#039;&#039; to ishii at postgresql.org (he is the project leader).&lt;br /&gt;
&lt;br /&gt;
= What&#039;s new =&lt;br /&gt;
&lt;br /&gt;
== News ==&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5.3, 4.4.8, 4.3.11, 4.2.18 and 4.1.21 officially released (2024/08/15) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5.3, 4.4.8, 4.3.11, 4.2.18 and 4.1.21 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.5.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-5-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-5-3.html Japanese]&lt;br /&gt;
* 4.4.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-8.html Japanese]&lt;br /&gt;
* 4.3.11 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-11.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-11.html Japanese]&lt;br /&gt;
* 4.2.18 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-18.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-18.html Japanese]&lt;br /&gt;
* 4.1.21 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-21.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-21.html Japanese]&lt;br /&gt;
&lt;br /&gt;
== Events ==&lt;br /&gt;
=== PGConf.ASIA 2019 in Bali ===&lt;br /&gt;
&lt;br /&gt;
Bo Peng is going to give a presentation [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster: New Features of Pgpool-II 4.1] at [https://2019.pgconf.asia/ &amp;quot;PGConf.ASIA 2019&amp;quot;] held in Bali from September 8th to 11th.&lt;br /&gt;
&lt;br /&gt;
=== PGConf.ASIA 2018 in Tokyo ===&lt;br /&gt;
&lt;br /&gt;
Tatsuo Ishii and Bo Peng are going to give a presentation &lt;br /&gt;
[https://www.pgpool.net/download.php?f=pgpool-past-now-and-future.pdf Celebrating its 15th Anniversary Pgpool-II Past Present and Future]&lt;br /&gt;
[https://www.pgpool.net/download.php?f=pgpool-past-now-and-future-part2.pdf Celebrating its 15th Anniversary Pgpool-II Past Present and Future part2] at [https://www.pgconf.asia/EN/2018/day1/#B4 &amp;quot;PGConf.ASIA 2018&amp;quot;] held in Tokyo from December 10th to 12th.&lt;br /&gt;
&lt;br /&gt;
=== PGConf.ASIA 2017 in Tokyo ===&lt;br /&gt;
&lt;br /&gt;
Tatsuo Ishii is going to give a presentation  [https://www.pgconf.asia/EN/2017/day-2/#B5 &amp;quot;More reliability and support for PostgreSQL 10: Introducing Pgpool-II 3.7&amp;quot;] at [https://www.pgconf.asia/EN/2017/ &amp;quot;PGConf.ASIA 2017&amp;quot;], held in Tokyo from December 4th to 6th.&lt;br /&gt;
&lt;br /&gt;
=== PostgreSQL Conference in Russia 2016 ===&lt;br /&gt;
&lt;br /&gt;
Tatsuo gave a talk &amp;quot;How to manage a herd of elephants&amp;quot; at [https://pgconf.ru/en &amp;quot;PostgreSQL Conference in Russia 2016&amp;quot;], held in Moscow from February 4th to 5th.&lt;br /&gt;
The slides are available in [https://pgpool.net/mediawiki/index.php/Documentation#Developer.27s_documentation Developer&#039;s documentation].&lt;br /&gt;
&lt;br /&gt;
=== PostgreSQL Conference in China 2015 ===&lt;br /&gt;
&lt;br /&gt;
Tatsuo gave a talk &amp;quot;How to manage a herd of elephants&amp;quot; at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference in China 2015&amp;quot;], held in Beijing from November 21st to 22nd.&lt;br /&gt;
The slides are available in [https://pgpool.net/mediawiki/index.php/Documentation#Developer.27s_documentation Developer&#039;s documentation].&lt;br /&gt;
&lt;br /&gt;
=== pgCon 2015 Cluster Hacker Summit ===&lt;br /&gt;
&lt;br /&gt;
We participated in the &amp;quot;6th Postgres Cluster Hacker Summit&amp;quot;, which was held as a part of the PgCon 2015 Developer Unconference. We reported on development status updates and pgpool-II 3.5 features. The slides are available in [https://pgpool.net/mediawiki/index.php/Documentation#Developer.27s_documentation Developer&#039;s documentation].&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II Day 2015 in Tokyo ===&lt;br /&gt;
&lt;br /&gt;
We held the very first pgpool-II dedicated conference. See the [https://pgpool.net/mediawiki/index.php/pgpool-II_Day_2015 event report].&lt;br /&gt;
&lt;br /&gt;
== Stable ==&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5.3, 4.4.8, 4.3.11, 4.2.18 and 4.1.21 officially released (2024/08/15) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5.3, 4.4.8, 4.3.11, 4.2.18 and 4.1.21 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.5.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-5-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-5-3.html Japanese]&lt;br /&gt;
* 4.4.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-8.html Japanese]&lt;br /&gt;
* 4.3.11 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-11.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-11.html Japanese]&lt;br /&gt;
* 4.2.18 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-18.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-18.html Japanese]&lt;br /&gt;
* 4.1.21 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-21.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-21.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5.2, 4.4.7, 4.3.10, 4.2.17 and 4.1.20 officially released (2024/05/16) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5.2, 4.4.7, 4.3.10, 4.2.17 and 4.1.20 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.5.2 :  [https://www.pgpool.net/docs/latest/en/html/release-4-5-2.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-5-2.html Japanese]&lt;br /&gt;
* 4.4.7 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-7.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-7.html Japanese]&lt;br /&gt;
* 4.3.10 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-10.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-10.html Japanese]&lt;br /&gt;
* 4.2.17 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-17.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-17.html Japanese]&lt;br /&gt;
* 4.1.20 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-20.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-20.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5.1, 4.4.6, 4.3.9, 4.2.16 and 4.1.19 officially released (2024/02/29) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5.1, 4.4.6, 4.3.9, 4.2.16 and 4.1.19 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.5.1 :  [https://www.pgpool.net/docs/latest/en/html/release-4-5-1.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-5-1.html Japanese]&lt;br /&gt;
* 4.4.6 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-6.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-6.html Japanese]&lt;br /&gt;
* 4.3.9 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-9.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-9.html Japanese]&lt;br /&gt;
* 4.2.16 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-16.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-16.html Japanese]&lt;br /&gt;
* 4.1.19 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-19.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-19.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5.0 released (2023/12/12) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5.0 is now released. This is the first stable release of Pgpool-II 4.5.x. &lt;br /&gt;
&lt;br /&gt;
V4.5 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Allow to use more kinds of multiple statements in a query string.&lt;br /&gt;
&lt;br /&gt;
* Allow to load balance PREPARE/EXECUTE/DEALLOCATE. &lt;br /&gt;
&lt;br /&gt;
* Allow to set delay_threshold_by_time in milliseconds.&lt;br /&gt;
&lt;br /&gt;
* Avoid session disconnection in some failover/failback/backend error cases.&lt;br /&gt;
&lt;br /&gt;
* Allow to route queries to a specific backend node for a specific user connection.&lt;br /&gt;
&lt;br /&gt;
* Support multiple directories specification for pcp_socket_dir.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 16&#039;s SQL parser. &lt;br /&gt;
&lt;br /&gt;
You can download it from [https://www.pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/45/en/html/release-4-5-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.5, 4.3.8, 4.2.15, 4.1.18 and 4.0.25 officially released (2023/11/30) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.5, 4.3.8, 4.2.15, 4.1.18 and 4.0.25 officially released.&lt;br /&gt;
&lt;br /&gt;
Please note that 4.0.25 is the last release of the 4.0.x series.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.4.5 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-5.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-5.html Japanese]&lt;br /&gt;
* 4.3.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-8.html Japanese]&lt;br /&gt;
* 4.2.15 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-15.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-15.html Japanese]&lt;br /&gt;
* 4.1.18 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-18.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-18.html Japanese]&lt;br /&gt;
* 4.0.25 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-25.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-25.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.4, 4.3.7, 4.2.14, 4.1.17 and 4.0.24 officially released (2023/08/17) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.4, 4.3.7, 4.2.14, 4.1.17 and 4.0.24 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.4.4 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-4.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-4.html Japanese]&lt;br /&gt;
* 4.3.7 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-7.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-7.html Japanese]&lt;br /&gt;
* 4.2.14 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-14.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-14.html Japanese]&lt;br /&gt;
* 4.1.17 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-17.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-17.html Japanese]&lt;br /&gt;
* 4.0.24 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-24.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-24.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.3, 4.3.6, 4.2.13, 4.1.16 and 4.0.23 officially released (2023/05/18) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.3, 4.3.6, 4.2.13, 4.1.16 and 4.0.23 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.4.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-3.html Japanese]&lt;br /&gt;
* 4.3.6 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-6.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-6.html Japanese]&lt;br /&gt;
* 4.2.13 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-13.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-13.html Japanese]&lt;br /&gt;
* 4.1.16 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-16.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-16.html Japanese]&lt;br /&gt;
* 4.0.23 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-23.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-23.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== End-of-Life (EOL) Announcement for pgpoolAdmin (2023/02/17) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool Global Development Group announces the end-of-life date for pgpoolAdmin, which is the management tool for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
Pgpool Global Development Group will provide maintenance for pgpoolAdmin 4.0, 4.1 and 4.2 until December 31, 2023. &lt;br /&gt;
After this date, bug fixes and security fixes for these versions will no longer be provided.&lt;br /&gt;
&lt;br /&gt;
In addition, pgpoolAdmin for Pgpool-II 4.3 or later will not be released.&lt;br /&gt;
&lt;br /&gt;
We would like to thank you for using pgpoolAdmin over the years.&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.2, 4.3.5, 4.2.12, 4.1.15 and 4.0.22 officially released (2023/01/23) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.2, 4.3.5, 4.2.12, 4.1.15 and 4.0.22 officially released.&lt;br /&gt;
&lt;br /&gt;
This release contains a security fix.&lt;br /&gt;
&lt;br /&gt;
If the following conditions are all met, the password of &amp;quot;wd_lifecheck_user&amp;quot; is exposed by the &amp;quot;SHOW POOL STATUS&amp;quot; command. &lt;br /&gt;
The command can be executed by any user who can connect to Pgpool-II. (CVE-2023-22332)&lt;br /&gt;
* Version 3.3 or later&lt;br /&gt;
* use_watchdog = on&lt;br /&gt;
* wd_lifecheck_method = &#039;query&#039;&lt;br /&gt;
* A plain text password is set to wd_lifecheck_password&lt;br /&gt;
&lt;br /&gt;
In this case it is strongly recommended to upgrade to this version (we no longer expose wd_lifecheck_password in the show pool_status command), or to use one of the following workarounds.&lt;br /&gt;
&lt;br /&gt;
Workarounds for 4.0.x to 4.4.x users:&lt;br /&gt;
* Disable watchdog. Set use_watchdog to off.&lt;br /&gt;
* Change wd_lifecheck_method to heartbeat.&lt;br /&gt;
* Set an empty string to wd_lifecheck_password. This will use the password in the pool_passwd file.&lt;br /&gt;
* Set an AES encrypted password to wd_lifecheck_password. &lt;br /&gt;
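As a sketch, the configuration-based workarounds above map to pgpool.conf settings like the following (apply one of them; the values shown are illustrative):&lt;br /&gt;

```
# Workaround: disable watchdog entirely
use_watchdog = off

# Workaround: switch the lifecheck method away from &#039;query&#039;
wd_lifecheck_method = &#039;heartbeat&#039;

# Workaround: empty password, so the pool_passwd entry is used instead
wd_lifecheck_password = &#039;&#039;
```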
&lt;br /&gt;
In any case we recommend changing the password of &amp;quot;wd_lifecheck_user&amp;quot; in PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
Workarounds for 3.0.x to 3.7.x users:&lt;br /&gt;
* Disable watchdog. Set use_watchdog to off.&lt;br /&gt;
* Change wd_lifecheck_method to heartbeat. &lt;br /&gt;
&lt;br /&gt;
In any case we recommend changing the password of &amp;quot;wd_lifecheck_user&amp;quot; in PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
Please note that Pgpool-II 3.7.x and earlier are end of life and no minor updates are provided for those versions. &lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.4.2 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-2.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-2.html Japanese]&lt;br /&gt;
* 4.3.5 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-5.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-5.html Japanese]&lt;br /&gt;
* 4.2.12 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-12.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-12.html Japanese]&lt;br /&gt;
* 4.1.15 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-15.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-15.html Japanese]&lt;br /&gt;
* 4.0.22 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-22.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-22.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.1, 4.3.4, 4.2.11, 4.1.14, 4.0.21 and 3.7.26 officially released (2022/12/22) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.1, 4.3.4, 4.2.11, 4.1.14, 4.0.21 and 3.7.26 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.4.1 :  [https://www.pgpool.net/docs/latest/en/html/release-4-4-1.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-4-1.html Japanese]&lt;br /&gt;
* 4.3.4 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-4.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-4.html Japanese]&lt;br /&gt;
* 4.2.11 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-11.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-11.html Japanese]&lt;br /&gt;
* 4.1.14 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-14.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-14.html Japanese]&lt;br /&gt;
* 4.0.21 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-21.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-21.html Japanese]&lt;br /&gt;
* 3.7.26 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-26.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-26.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4.0 released (2022/12/06) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4.0 is now released. This is the first stable release of Pgpool-II 4.4.x.&lt;br /&gt;
&lt;br /&gt;
V4.4 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Add a new dynamic spare process management feature. This feature allows selecting between static and dynamic process management modes.&lt;br /&gt;
&lt;br /&gt;
* Allow replication delay to be specified by time in streaming replication mode. For this purpose a new parameter, delay_threshold_by_time, is introduced.&lt;br /&gt;
&lt;br /&gt;
* As PostgreSQL already does, Pgpool-II now supports unix_socket_directories, unix_socket_group and unix_socket_permissions for more flexible and precise control of UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
* Allow comma-separated multiple listen addresses in listen_addresses and pcp_listen_addresses. They control which interfaces accept connection attempts, which can help prevent repeated malicious connection requests on insecure network interfaces.&lt;br /&gt;
&lt;br /&gt;
* Allow the command used by trusted_servers for checking upstream connections to be customized. For this purpose a new parameter, trusted_server_command, is introduced.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 15&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Speed up the query cache by reducing lock contention. This allows concurrently running clients to fetch cache contents much faster.&lt;br /&gt;
&lt;br /&gt;
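For illustration, a minimal pgpool.conf fragment exercising some of these 4.4 parameters might look like the following (addresses, paths and values are hypothetical examples, not recommendations; consult the documentation for accepted values):&lt;br /&gt;
&lt;pre&gt;
# accept connections on several interfaces (comma separated)
listen_addresses = &#039;localhost,192.168.1.10&#039;
# create UNIX domain sockets in more than one directory
unix_socket_directories = &#039;/tmp,/var/run/postgresql&#039;
# consider a standby delayed when it lags by more than 10 seconds
delay_threshold_by_time = 10
&lt;/pre&gt;
&lt;br /&gt;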
You can download it from [https://www.pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/44/en/html/release-4-4-0.html English] [https://www.pgpool.net/docs/44/ja/html/release-4-4-0.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3.3, 4.2.10, 4.1.13, 4.0.20 and 3.7.25 officially released (2022/08/18) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3.3, 4.2.10, 4.1.13, 4.0.20 and 3.7.25 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.3.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-3.html Japanese]&lt;br /&gt;
* 4.2.10 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-10.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-10.html Japanese]&lt;br /&gt;
* 4.1.13 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-13.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-13.html Japanese]&lt;br /&gt;
* 4.0.20 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-20.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-20.html Japanese]&lt;br /&gt;
* 3.7.25 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-25.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-25.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3.2, 4.2.9, 4.1.12, 4.0.19 and 3.7.24 officially released (2022/05/19) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3.2, 4.2.9, 4.1.12, 4.0.19 and 3.7.24 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.3.2 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-2.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-2.html Japanese]&lt;br /&gt;
* 4.2.9 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-9.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-9.html Japanese]&lt;br /&gt;
* 4.1.12 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-12.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-12.html Japanese]&lt;br /&gt;
* 4.0.19 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-19.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-19.html Japanese]&lt;br /&gt;
* 3.7.24 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-24.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-24.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3.1, 4.2.8, 4.1.11, 4.0.18 and 3.7.23 officially released (2022/02/17) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3.1, 4.2.8, 4.1.11, 4.0.18 and 3.7.23 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.3.1 :  [https://www.pgpool.net/docs/latest/en/html/release-4-3-1.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-3-1.html Japanese]&lt;br /&gt;
* 4.2.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-8.html Japanese]&lt;br /&gt;
* 4.1.11 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-11.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-11.html Japanese]&lt;br /&gt;
* 4.0.18 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-18.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-18.html Japanese]&lt;br /&gt;
* 3.7.23 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-23.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-23.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.7, 4.1.10, 4.0.17 and 3.7.22 officially released (2021/12/23) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.7, 4.1.10, 4.0.17 and 3.7.22 officially released.&lt;br /&gt;
&lt;br /&gt;
The purpose of this release is to provide packages for PostgreSQL 14.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.7 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-7.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-7.html Japanese]&lt;br /&gt;
* 4.1.10 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-10.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-10.html Japanese]&lt;br /&gt;
* 4.0.17 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-17.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-17.html Japanese]&lt;br /&gt;
* 3.7.22 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-22.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-22.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3.0 released (2021/12/07) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3.0 is now released. Please take a look at the [https://www.pgpool.net/docs/43/en/html/release-4-3-0.html 4.3.0 release notes].&lt;br /&gt;
&lt;br /&gt;
V4.3.0 contains new features and enhancements, including:&lt;br /&gt;
* A new membership mechanism is introduced to Watchdog to allow keeping quorum/VIP when some of the watchdog nodes are removed.&lt;br /&gt;
&lt;br /&gt;
* Allow choosing the standby node with the least replication delay when selecting the load balance node.&lt;br /&gt;
&lt;br /&gt;
* Allow specifying the node id to be promoted in pcp_promote_node.&lt;br /&gt;
&lt;br /&gt;
* Allow configuring Pgpool-II not to trigger failover when PostgreSQL is shut down by the administrator or killed by pg_terminate_backend.&lt;br /&gt;
&lt;br /&gt;
* Add new fields to the pcp_proc_info, SHOW POOL_PROCESSES and SHOW POOL_POOLS commands to display more useful information to administrators.&lt;br /&gt;
&lt;br /&gt;
* Allow pcp_node_info to list information on all backend nodes.&lt;br /&gt;
&lt;br /&gt;
* Add new fields showing the actual PostgreSQL status to the SHOW POOL_NODES command and friends.&lt;br /&gt;
&lt;br /&gt;
* Add a new parameter which represents the recovery source hostname to recovery_1st_stage_command and recovery_2nd_stage_command.&lt;br /&gt;
&lt;br /&gt;
* Add support for log timestamps with millisecond precision.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 14&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Support the include directive in the pgpool.conf file. Separate sub-config files can be included from pgpool.conf.&lt;br /&gt;
&lt;br /&gt;
* pgpool.conf sample files are unified into a single sample file for easier configuration.&lt;br /&gt;
&lt;br /&gt;
* All configuration parameters in the pgpool.conf sample file are commented out to clarify which parameters need to be changed.&lt;br /&gt;
&lt;br /&gt;
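As a sketch of the include directive mentioned above (the file name and exact syntax here are illustrative; check the 4.3 documentation for the precise form), the main configuration file can pull in a sub-config file:&lt;br /&gt;
&lt;pre&gt;
# pgpool.conf
include = &#039;failover.conf&#039;    # read additional parameters from failover.conf
&lt;/pre&gt;
&lt;br /&gt;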
You can download it from [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/43/en/html/release-4-3-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.6, 4.1.9, 4.0.16, 3.7.21 and 3.6.28 officially released (2021/11/18) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.6, 4.1.9, 4.0.16, 3.7.21 and 3.6.28 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.6 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-6.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-6.html Japanese]&lt;br /&gt;
* 4.1.9 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-9.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-9.html Japanese]&lt;br /&gt;
* 4.0.16 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-16.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-16.html Japanese]&lt;br /&gt;
* 3.7.21 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-21.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-21.html Japanese]&lt;br /&gt;
* 3.6.28 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-28.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-28.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.5 released (2021/09/14) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool Global Development Group is pleased to announce the availability of Pgpool-II 4.2.5.&lt;br /&gt;
&lt;br /&gt;
You can download it [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.5 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-5.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-5.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.4, 4.1.8, 4.0.15, 3.7.20 and 3.6.27 officially released (2021/08/05) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.4, 4.1.8, 4.0.15, 3.7.20 and 3.6.27 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.4 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-4.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-4.html Japanese]&lt;br /&gt;
* 4.1.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-8.html Japanese]&lt;br /&gt;
* 4.0.15 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-15.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-15.html Japanese]&lt;br /&gt;
* 3.7.20 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-20.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-20.html Japanese]&lt;br /&gt;
* 3.6.27 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-27.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-27.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== pgpoolAdmin 4.2.0 officially released (2021/06/17) ===&lt;br /&gt;
&lt;br /&gt;
pgpoolAdmin 4.2.0 officially released.&lt;br /&gt;
&lt;br /&gt;
pgpoolAdmin 4.2.0 adds support for Pgpool-II 4.2.&lt;br /&gt;
Please take a look at [https://www.pgpool.net/docs/pgpoolAdmin/NEWS release notes].&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.3, 4.1.7, 4.0.14, 3.7.19 and 3.6.26 officially released (2021/05/20) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.3, 4.1.7, 4.0.14, 3.7.19 and 3.6.26 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-3.html Japanese]&lt;br /&gt;
* 4.1.7 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-7.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-7.html Japanese]&lt;br /&gt;
* 4.0.14 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-14.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-14.html Japanese]&lt;br /&gt;
* 3.7.19 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-19.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-19.html Japanese]&lt;br /&gt;
* 3.6.26 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-26.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-26.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.2, 4.1.6, 4.0.13, 3.7.18 and 3.6.25 officially released (2021/02/18) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.2, 4.1.6, 4.0.13, 3.7.18 and 3.6.25 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.2 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-2.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-2.html Japanese]&lt;br /&gt;
* 4.1.6 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-6.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-6.html Japanese]&lt;br /&gt;
* 4.0.13 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-13.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-13.html Japanese]&lt;br /&gt;
* 3.7.18 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-18.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-18.html Japanese]&lt;br /&gt;
* 3.6.25 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-25.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-25.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.1 released (2020/12/23) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.1 is now released. &lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.2.1 :  [https://www.pgpool.net/docs/latest/en/html/release-4-2-1.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-2-1.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2.0 released (2020/11/26) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2.0 is now released. &lt;br /&gt;
&lt;br /&gt;
V4.2 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Several items in the configuration file pgpool.conf are significantly enhanced for easier configuration and administration.&lt;br /&gt;
&lt;br /&gt;
* Implement logging_collector for easier log management.&lt;br /&gt;
&lt;br /&gt;
* Implement log_disconnections to collect disconnection logs.&lt;br /&gt;
&lt;br /&gt;
* Allow pg_enc and pg_md5 to register multiple passwords at once.&lt;br /&gt;
&lt;br /&gt;
* Show health check statistics using the new SHOW POOL_HEALTH_CHECK_STATS command, and statistics of issued SQL using the new SHOW POOL_BACKEND_STATS command.&lt;br /&gt;
&lt;br /&gt;
* Add a new PCP command, pcp_reload_config.&lt;br /&gt;
&lt;br /&gt;
* Allow write_function_list and read_only_function_list to be omitted; in that case Pgpool-II consults system catalog information instead.&lt;br /&gt;
&lt;br /&gt;
* Add a new clustering mode, snapshot_isolation_mode, which guarantees not only consistent data modifications across multiple PostgreSQL servers but also read consistency.&lt;br /&gt;
&lt;br /&gt;
* Support LDAP authentication between clients and Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
* Add ssl_crl_file and ssl_passphrase_command to SSL configuration.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 13&#039;s SQL parser. &lt;br /&gt;
&lt;br /&gt;
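A minimal sketch of the new logging parameters (the values and the log directory below are examples only, not recommendations):&lt;br /&gt;
&lt;pre&gt;
logging_collector = on                  # capture logs via a background collector process
log_directory = &#039;/var/log/pgpool_log&#039;   # directory where log files are written
log_disconnections = on                 # also log session disconnections
&lt;/pre&gt;
&lt;br /&gt;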
You can download it from [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/42/en/html/release-4-2-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.5, 4.0.12, 3.7.17, 3.6.24 and 3.5.28 officially released (2020/11/19) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.5, 4.0.12, 3.7.17, 3.6.24 and 3.5.28 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.1.5 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-5.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-5.html Japanese]&lt;br /&gt;
* 4.0.12 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-12.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-12.html Japanese]&lt;br /&gt;
* 3.7.17 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-17.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-17.html Japanese]&lt;br /&gt;
* 3.6.24 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-24.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-24.html Japanese]&lt;br /&gt;
* 3.5.28 :  [https://www.pgpool.net/docs/latest/en/html/release-3-5-28.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-5-28.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.4, 4.0.11, 3.7.16, 3.6.23 and 3.5.27 officially released (2020/09/17) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.4, 4.0.11, 3.7.16, 3.6.23 and 3.5.27 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.1.4 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-4.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-4.html Japanese]&lt;br /&gt;
* 4.0.11 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-11.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-11.html Japanese]&lt;br /&gt;
* 3.7.16 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-16.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-16.html Japanese]&lt;br /&gt;
* 3.6.23 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-23.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-23.html Japanese]&lt;br /&gt;
* 3.5.27 :  [https://www.pgpool.net/docs/latest/en/html/release-3-5-27.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-5-27.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.3, 4.0.10, 3.7.15, 3.6.22 and 3.5.26 officially released (2020/08/20) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.3, 4.0.10, 3.7.15, 3.6.22 and 3.5.26 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.1.3 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-3.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-3.html Japanese]&lt;br /&gt;
* 4.0.10 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-10.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-10.html Japanese]&lt;br /&gt;
* 3.7.15 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-15.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-15.html Japanese]&lt;br /&gt;
* 3.6.22 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-22.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-22.html Japanese]&lt;br /&gt;
* 3.5.26 :  [https://www.pgpool.net/docs/latest/en/html/release-3-5-26.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-5-26.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.2, 4.0.9, 3.7.14, 3.6.21 and 3.5.25 officially released (2020/05/21) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.2, 4.0.9, 3.7.14, 3.6.21 and 3.5.25 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.1.2 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-2.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-2.html Japanese]&lt;br /&gt;
* 4.0.9 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-9.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-9.html Japanese]&lt;br /&gt;
* 3.7.14 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-14.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-14.html Japanese]&lt;br /&gt;
* 3.6.21 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-21.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-21.html Japanese]&lt;br /&gt;
* 3.5.25 :  [https://www.pgpool.net/docs/latest/en/html/release-3-5-25.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-5-25.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.1, 4.0.8, 3.7.13, 3.6.20, 3.5.24 and pgpoolAdmin 4.1.0 officially released (2020/02/20) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.1, 4.0.8, 3.7.13, 3.6.20, 3.5.24 and pgpoolAdmin 4.1.0 officially released.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
* 4.1.1 :  [https://www.pgpool.net/docs/latest/en/html/release-4-1-1.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-1-1.html Japanese]&lt;br /&gt;
* 4.0.8 :  [https://www.pgpool.net/docs/latest/en/html/release-4-0-8.html English] [https://www.pgpool.net/docs/latest/ja/html/release-4-0-8.html Japanese]&lt;br /&gt;
* 3.7.13 :  [https://www.pgpool.net/docs/latest/en/html/release-3-7-13.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-7-13.html Japanese]&lt;br /&gt;
* 3.6.20 :  [https://www.pgpool.net/docs/latest/en/html/release-3-6-20.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-6-20.html Japanese]&lt;br /&gt;
* 3.5.24 :  [https://www.pgpool.net/docs/latest/en/html/release-3-5-24.html English] [https://www.pgpool.net/docs/latest/ja/html/release-3-5-24.html Japanese]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.0, 4.0.7, 3.7.12, 3.6.19, 3.5.23 and 3.4.26 RPMs Update 2 officially released (2019/11/19) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.0, 4.0.7, 3.7.12, 3.6.19, 3.5.23 and 3.4.26 RPMs Update 2 are officially released.&lt;br /&gt;
The updated RPM packages fix a bug where logs were not output when log_destination = &#039;syslog&#039; was configured.&lt;br /&gt;
&lt;br /&gt;
You can download them [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1.0 released (2019/10/31) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1.0 is now released.&lt;br /&gt;
&lt;br /&gt;
Major enhancements in Pgpool-II 4.1 include:&lt;br /&gt;
&lt;br /&gt;
* Statement level load balancing.&lt;br /&gt;
* Auto failback.&lt;br /&gt;
* Enhanced performance in a number of areas:&lt;br /&gt;
** A shared relation cache allows relation cache entries to be reused among sessions, reducing internal queries against PostgreSQL system catalogs.&lt;br /&gt;
** A separate SQL parser for DML statements eliminates unnecessary parsing effort.&lt;br /&gt;
** Load balancing control for specific queries.&lt;br /&gt;
* Reduce internal queries against system catalogs.&lt;br /&gt;
* Import PostgreSQL 12&#039;s SQL parser.&lt;br /&gt;
* And more.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Downloads here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/latest/en/html/ English]&lt;br /&gt;
[https://www.pgpool.net/docs/latest/ja/html/ Japanese]&lt;br /&gt;
&lt;br /&gt;
== Development ==&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5 RC1 released (2023/11/28) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5 RC1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable release; it is intended for testing purposes only.&lt;br /&gt;
&lt;br /&gt;
V4.5 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Allow multiple statements in a query string.&lt;br /&gt;
&lt;br /&gt;
* Allow delay_threshold_by_time to be set in milliseconds.&lt;br /&gt;
&lt;br /&gt;
* Avoid session disconnection on failover/failback/backend errors in some cases.&lt;br /&gt;
&lt;br /&gt;
* Allow routing queries to a specific backend node for a specific user connection.&lt;br /&gt;
&lt;br /&gt;
* Support specifying multiple directories in pcp_socket_dir.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 16&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
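For example, the new multi-directory form of pcp_socket_dir might be written as follows (the directories are hypothetical examples):&lt;br /&gt;
&lt;pre&gt;
# create PCP UNIX domain sockets in each listed directory
pcp_socket_dir = &#039;/tmp,/var/run/pgpool&#039;
&lt;/pre&gt;
&lt;br /&gt;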
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/45/en/html/release-4-5-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.5 beta1 released (2023/11/14) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.5 beta1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable release; it is intended for testing purposes only.&lt;br /&gt;
&lt;br /&gt;
V4.5 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Allow multiple statements in a query string.&lt;br /&gt;
&lt;br /&gt;
* Allow delay_threshold_by_time to be set in milliseconds.&lt;br /&gt;
&lt;br /&gt;
* Avoid session disconnection on failover/failback/backend errors in some cases.&lt;br /&gt;
&lt;br /&gt;
* Allow routing queries to a specific backend node for a specific user connection.&lt;br /&gt;
&lt;br /&gt;
* Support specifying multiple directories in pcp_socket_dir.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 16&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/45/en/html/release-4-5-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4 RC1 released (2022/11/22) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4 RC1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable release; it is intended for testing purposes only.&lt;br /&gt;
&lt;br /&gt;
V4.4 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Add a new dynamic spare process management feature. This feature allows selecting between static and dynamic process management modes.&lt;br /&gt;
&lt;br /&gt;
* Allow replication delay to be specified by time in streaming replication mode. For this purpose a new parameter, delay_threshold_by_time, is introduced.&lt;br /&gt;
&lt;br /&gt;
* As PostgreSQL already does, Pgpool-II now supports unix_socket_directories, unix_socket_group and unix_socket_permissions for more flexible and precise control of UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
* Allow comma-separated multiple listen addresses in listen_addresses and pcp_listen_addresses. They control which interfaces accept connection attempts, which can help prevent repeated malicious connection requests on insecure network interfaces.&lt;br /&gt;
&lt;br /&gt;
* Allow the command used by trusted_servers for checking upstream connections to be customized. For this purpose a new parameter, trusted_server_command, is introduced.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 15&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Speed up the query cache by reducing lock contention. This allows concurrently running clients to fetch cache contents much faster.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/44/en/html/release-4-4-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.4 beta1 released (2022/11/14) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.4 beta1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable release; it is intended for testing purposes only.&lt;br /&gt;
&lt;br /&gt;
V4.4 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Add a new dynamic spare process management feature. This feature allows selecting between static and dynamic process management modes.&lt;br /&gt;
&lt;br /&gt;
* Allow replication delay to be specified by time in streaming replication mode. For this purpose a new parameter, delay_threshold_by_time, is introduced.&lt;br /&gt;
&lt;br /&gt;
* As PostgreSQL already does, Pgpool-II now supports unix_socket_directories, unix_socket_group and unix_socket_permissions for more flexible and precise control of UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
* Allow comma-separated multiple listen addresses in listen_addresses and pcp_listen_addresses. They control which interfaces accept connection attempts, which can help prevent repeated malicious connection requests on insecure network interfaces.&lt;br /&gt;
&lt;br /&gt;
* Allow the command used by trusted_servers for checking upstream connections to be customized. For this purpose a new parameter, trusted_server_command, is introduced.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 15&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Speed up the query cache by reducing lock contention. This allows concurrently running clients to fetch cache contents much faster.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/44/en/html/release-4-4-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3 RC1 released (2021/11/25) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3 RC1 is now released. Please take a look at the [https://www.pgpool.net/docs/43/en/html/release-4-3rc1.html 4.3 RC1 release notes].&lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable release; it is intended for testing purposes only.&lt;br /&gt;
&lt;br /&gt;
V4.3 contains new features and enhancements, including:&lt;br /&gt;
* A new membership mechanism is introduced to Watchdog to allow keeping quorum/VIP when some of the watchdog nodes are removed.&lt;br /&gt;
&lt;br /&gt;
* Allow choosing the standby node with the least replication delay when selecting the load balance node.&lt;br /&gt;
&lt;br /&gt;
* Allow specifying the node id to be promoted in pcp_promote_node.&lt;br /&gt;
&lt;br /&gt;
* Allow configuring Pgpool-II not to trigger failover when PostgreSQL is shut down by the administrator or killed by pg_terminate_backend.&lt;br /&gt;
&lt;br /&gt;
* Add new fields to the pcp_proc_info, SHOW POOL_PROCESSES and SHOW POOL_POOLS commands to display more useful information to administrators.&lt;br /&gt;
&lt;br /&gt;
* Allow pcp_node_info to list information on all backend nodes.&lt;br /&gt;
&lt;br /&gt;
* Add new fields showing the actual PostgreSQL status to the SHOW POOL_NODES command and friends.&lt;br /&gt;
&lt;br /&gt;
* Add a new parameter which represents the recovery source hostname to recovery_1st_stage_command and recovery_2nd_stage_command.&lt;br /&gt;
&lt;br /&gt;
* Add support for log timestamps with millisecond precision.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 14&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Support the include directive in the pgpool.conf file. Separate sub-config files can be included from pgpool.conf.&lt;br /&gt;
&lt;br /&gt;
* pgpool.conf sample files are unified into a single sample file for easier configuration.&lt;br /&gt;
&lt;br /&gt;
* All configuration parameters in the pgpool.conf sample file are commented out to clarify which parameters need to be changed.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/43/en/html/release-4-3-0.html English]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3 beta2 released (2021/11/18) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3 beta2 is now released. Please take a look at the [https://www.pgpool.net/docs/43/en/html/release-4-3beta2.html 4.3 beta2 release notes].&lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable version but just for developers.&lt;br /&gt;
&lt;br /&gt;
V4.3 contains new features and enhancements, including:&lt;br /&gt;
* A new membership mechanism is introduced to Watchdog, allowing quorum/VIP to be kept when some watchdog nodes are removed.&lt;br /&gt;
&lt;br /&gt;
* Allow choosing the standby node with the least replication delay when selecting the load balance node.&lt;br /&gt;
&lt;br /&gt;
* Allow specifying the node id to be promoted in pcp_promote_node.&lt;br /&gt;
&lt;br /&gt;
* Allow configuring Pgpool-II not to trigger failover when PostgreSQL is shut down by the administrator or killed by pg_terminate_backend.&lt;br /&gt;
&lt;br /&gt;
* Add new fields to the pcp_proc_info, SHOW POOL_PROCESSES and SHOW POOL_POOLS commands to display more useful information to administrators.&lt;br /&gt;
&lt;br /&gt;
* Allow pcp_node_info to list information for all backend nodes.&lt;br /&gt;
&lt;br /&gt;
* Add new fields showing the actual PostgreSQL status to the SHOW POOL_NODES command and friends.&lt;br /&gt;
&lt;br /&gt;
* Add a new parameter representing the recovery source hostname to recovery_1st_stage_command and recovery_2nd_stage_command.&lt;br /&gt;
&lt;br /&gt;
* Add support for log timestamps with milliseconds.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 14&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Support the include directive in the pgpool.conf file. You can have separate sub-configuration files included in pgpool.conf.&lt;br /&gt;
&lt;br /&gt;
* The pgpool.conf sample files are unified into a single sample file for easier configuration.&lt;br /&gt;
&lt;br /&gt;
* All configuration parameters in the pgpool.conf sample file are commented out to clarify which parameters need to be changed.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/43/en/html/release-4-3-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.3 beta1 released (2021/11/09) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.3 beta1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable version but just for developers.&lt;br /&gt;
&lt;br /&gt;
V4.3 contains new features and enhancements, including:&lt;br /&gt;
* A new membership mechanism is introduced to Watchdog, allowing quorum/VIP to be kept when some watchdog nodes are removed.&lt;br /&gt;
&lt;br /&gt;
* Allow choosing the standby node with the least replication delay when selecting the load balance node.&lt;br /&gt;
&lt;br /&gt;
* Allow specifying the node id to be promoted in pcp_promote_node.&lt;br /&gt;
&lt;br /&gt;
* Allow configuring Pgpool-II not to trigger failover when PostgreSQL is shut down by the administrator or killed by pg_terminate_backend.&lt;br /&gt;
&lt;br /&gt;
* Add new fields to the pcp_proc_info, SHOW POOL_PROCESSES and SHOW POOL_POOLS commands to display more useful information to administrators.&lt;br /&gt;
&lt;br /&gt;
* Allow pcp_node_info to list information for all backend nodes.&lt;br /&gt;
&lt;br /&gt;
* Add new fields showing the actual PostgreSQL status to the SHOW POOL_NODES command and friends.&lt;br /&gt;
&lt;br /&gt;
* Add a new parameter representing the recovery source hostname to recovery_1st_stage_command and recovery_2nd_stage_command.&lt;br /&gt;
&lt;br /&gt;
* Add support for log timestamps with milliseconds.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 14&#039;s SQL parser.&lt;br /&gt;
&lt;br /&gt;
* Support the include directive in the pgpool.conf file. You can have separate sub-configuration files included in pgpool.conf.&lt;br /&gt;
&lt;br /&gt;
* The pgpool.conf sample files are unified into a single sample file for easier configuration.&lt;br /&gt;
&lt;br /&gt;
* All configuration parameters in the pgpool.conf sample file are commented out to clarify which parameters need to be changed.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/43/en/html/release-4-3-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2 beta1 released (2020/10/27) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2 beta1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable version but just for developers.&lt;br /&gt;
&lt;br /&gt;
V4.2 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Some items in the configuration file pgpool.conf are vastly enhanced for easier configuration and administration.&lt;br /&gt;
&lt;br /&gt;
* Implement logging_collector for easier log management.&lt;br /&gt;
&lt;br /&gt;
* Implement log_disconnections to collect disconnection logs.&lt;br /&gt;
&lt;br /&gt;
* Implement pg_enc and pg_md5 to allow registering multiple passwords at once.&lt;br /&gt;
&lt;br /&gt;
* Allow showing health check statistics with the SHOW POOL_HEALTH_CHECK_STATS command, and statistics of issued SQL with the SHOW POOL_BACKEND_STATS command.&lt;br /&gt;
&lt;br /&gt;
* New PCP command pcp_reload_config is added.&lt;br /&gt;
&lt;br /&gt;
* Now it is possible to omit write_function_list and read_only_function_list by looking at system catalog information.&lt;br /&gt;
&lt;br /&gt;
* Add a new clustering mode, snapshot_isolation_mode, which guarantees not only consistent data modifications across multiple PostgreSQL servers but also read consistency.&lt;br /&gt;
&lt;br /&gt;
* Support LDAP authentication between clients and Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
* Add ssl_crl_file and ssl_passphrase_command to SSL configuration.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 13&#039;s SQL parser. &lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/42/en/html/release-4-2-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.2 alpha1 released (2020/10/06) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.2 alpha1 is now released. &lt;br /&gt;
&lt;br /&gt;
Note that this is not a stable version but just for developers.&lt;br /&gt;
&lt;br /&gt;
V4.2 contains new features and enhancements, including:&lt;br /&gt;
&lt;br /&gt;
* Some items in the configuration file pgpool.conf are vastly enhanced for easier configuration and administration.&lt;br /&gt;
&lt;br /&gt;
* Implement logging_collector for easier log management.&lt;br /&gt;
&lt;br /&gt;
* Implement log_disconnections to collect disconnection logs.&lt;br /&gt;
&lt;br /&gt;
* Implement pg_enc and pg_md5 to allow registering multiple passwords at once.&lt;br /&gt;
&lt;br /&gt;
* Allow showing health check statistics with the SHOW POOL_HEALTH_CHECK_STATS command, and statistics of issued SQL with the SHOW POOL_BACKEND_STATS command.&lt;br /&gt;
&lt;br /&gt;
* New PCP command pcp_reload_config is added.&lt;br /&gt;
&lt;br /&gt;
* Now it is possible to omit write_function_list and read_only_function_list by looking at system catalog information.&lt;br /&gt;
&lt;br /&gt;
* Add a new clustering mode, snapshot_isolation_mode, which guarantees not only consistent data modifications across multiple PostgreSQL servers but also read consistency.&lt;br /&gt;
&lt;br /&gt;
* Support LDAP authentication between clients and Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
* Add ssl_crl_file and ssl_passphrase_command to SSL configuration.&lt;br /&gt;
&lt;br /&gt;
* Import PostgreSQL 13&#039;s SQL parser. &lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/42/en/html/release-4-2-0.html English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1 RC1 released (2019/10/16) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1 RC1 is now released. &lt;br /&gt;
&lt;br /&gt;
This is not intended to be used in production but is close to the release version. So users are encouraged to test it out. &lt;br /&gt;
&lt;br /&gt;
Major enhancements in Pgpool-II 4.1 include:&lt;br /&gt;
&lt;br /&gt;
* Statement level load balancing.&lt;br /&gt;
* Auto failback.&lt;br /&gt;
* Enhance performance in a number of areas.&lt;br /&gt;
** A shared relation cache allows reusing the relation cache among sessions to reduce internal queries against PostgreSQL system catalogs.&lt;br /&gt;
** A separate SQL parser for DML statements eliminates unnecessary parsing effort.&lt;br /&gt;
** Load balancing control for specific queries. &lt;br /&gt;
* Reduce Internal Queries against System Catalogs.&lt;br /&gt;
* Import PostgreSQL 12 SQL parser. &lt;br /&gt;
* etc.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/41/en/html/ English]&lt;br /&gt;
&lt;br /&gt;
=== Pgpool-II 4.1 beta1 released (2019/09/06) ===&lt;br /&gt;
&lt;br /&gt;
Pgpool-II 4.1 beta1 is now released. &lt;br /&gt;
&lt;br /&gt;
This is not a stable version but just for developers.&lt;br /&gt;
&lt;br /&gt;
Major enhancements in Pgpool-II 4.1 include:&lt;br /&gt;
&lt;br /&gt;
* Statement level load balancing.&lt;br /&gt;
* Auto failback.&lt;br /&gt;
* Enhance performance in a number of areas.&lt;br /&gt;
** A shared relation cache allows reusing the relation cache among sessions to reduce internal queries against PostgreSQL system catalogs.&lt;br /&gt;
** A separate SQL parser for DML statements eliminates unnecessary parsing effort.&lt;br /&gt;
** Load balancing control for specific queries. &lt;br /&gt;
* Reduce Internal Queries against System Catalogs.&lt;br /&gt;
* Import PostgreSQL 12 SQL parser. &lt;br /&gt;
* etc.&lt;br /&gt;
&lt;br /&gt;
You can download it from [https://pgpool.net/mediawiki/index.php/Developer_releases here].&lt;br /&gt;
&lt;br /&gt;
Release notes:&lt;br /&gt;
[https://www.pgpool.net/docs/41/en/html/ English]&lt;br /&gt;
&lt;br /&gt;
== Old ==&lt;br /&gt;
See [[Old news|this page]].&lt;br /&gt;
&lt;br /&gt;
= Where can I get commercial support for Pgpool-II? =&lt;br /&gt;
&lt;br /&gt;
Some commercial packages include Pgpool-II support. Consulting and annual support can be purchased from SRA OSS LLC (https://www.sraoss.co.jp/index_en.php).&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Mailing_lists&amp;diff=3901</id>
		<title>Mailing lists</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Mailing_lists&amp;diff=3901"/>
		<updated>2024-07-23T00:38:54Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* New pgpool mailing lists */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== New pgpool mailing lists ==&lt;br /&gt;
All mailing lists are public unless explicitly stated otherwise. Your posts and email address can be viewed by anyone.&lt;br /&gt;
{|&lt;br /&gt;
! Mailing list&lt;br /&gt;
! Description&lt;br /&gt;
! Archives&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.pgpool.net/mailman/listinfo/pgpool-general pgpool-general]&lt;br /&gt;
|General discussion about pgpool.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgpool-general/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.pgpool.net/mailman/listinfo/pgpool-general-jp pgpool-general-jp]&lt;br /&gt;
|General discussion about pgpool in Japanese.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgpool-general-jp/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.pgpool.net/mailman/listinfo/pgpool-hackers pgpool-hackers]&lt;br /&gt;
|Hackers discussion about pgpool.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgpool-hackers/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.pgpool.net/mailman/listinfo/pgpool-committers pgpool-committers]&lt;br /&gt;
|Commit messages for pgpool git.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgpool-committers/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.pgpool.net/mailman/listinfo/pgpool-buildfarm pgpool-buildfarm]&lt;br /&gt;
|Pgpool-II buildfarm results.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgpool-buildfarm/ Archives]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Old pgpool mailing lists (pgFoundry) ==&lt;br /&gt;
{|&lt;br /&gt;
! Mailing list&lt;br /&gt;
! Description&lt;br /&gt;
! Archives&lt;br /&gt;
|-&lt;br /&gt;
|[https://lists.pgfoundry.org/mailman/listinfo/pgpool-general pgpool-general]&lt;br /&gt;
|General discussion about pgpool.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgfoundry/pgpool-general/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://lists.pgfoundry.org/mailman/listinfo/pgpool-hackers pgpool-hackers]&lt;br /&gt;
|Hackers discussion about pgpool.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgfoundry/pgpool-hackers/ Archives]&lt;br /&gt;
|-&lt;br /&gt;
|[https://lists.pgfoundry.org/mailman/listinfo/pgpool-committers pgpool-committers]&lt;br /&gt;
|Commit messages for pgpool CVS.&lt;br /&gt;
|[https://www.pgpool.net/pipermail/pgfoundry/pgpool-committers/ Archives]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3900</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3900"/>
		<updated>2024-07-04T12:16:45Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only memcached itself but also to store the oid map info in it, so that the info can be shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
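: A rough sketch of such a check (the grep pattern here is illustrative); pg_controldata reports whether the cluster was shut down cleanly:&lt;br /&gt;
 $ pg_controldata $PGDATA | grep &#039;cluster state&#039;&lt;br /&gt;
 Database cluster state:               shut down&lt;br /&gt;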
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH statements are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
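: For illustration (table name t1 is hypothetical), only the second cursor below would actually need to go to all DB nodes:&lt;br /&gt;
 DECLARE c1 CURSOR FOR SELECT * FROM t1;             -- read-only, could be load balanced&lt;br /&gt;
 DECLARE c2 CURSOR FOR SELECT * FROM t1 FOR UPDATE;  -- must be sent to all DB nodes&lt;br /&gt;
 FETCH 10 FROM c1;&lt;br /&gt;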
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is possible to use an IPv6 address for the PostgreSQL backend server, the listening addresses of pgpool-II itself, and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-genera: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; means anything other than a SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
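: For illustration (table name t1 is hypothetical):&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET work_mem = &#039;64MB&#039;;   -- sent to both primary and standby nodes&lt;br /&gt;
 SELECT * FROM t1;        -- currently forced to the primary, though any node would return the same result&lt;br /&gt;
 COMMIT;&lt;br /&gt;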
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably a database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also has to list which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten to be added. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode with load_balance_mode off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connected to the primary server, it would not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even when processing a read-only SELECT.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement but returns &amp;quot;Ready for query&amp;quot; only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically, Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
: This has actually been supported since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend server and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing to keep in mind is that it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
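: For reference, the time-based lag can be observed on the primary (PostgreSQL 10 or later) with:&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;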
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we need information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
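: A sketch of what such a configuration might look like (the file name is hypothetical; see the 4.3 documentation for the exact directive syntax):&lt;br /&gt;
 # in pgpool.conf&lt;br /&gt;
 include = &#039;host0_backend.conf&#039;&lt;br /&gt;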
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has it), and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
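: A minimal sketch of the 4.1 configuration (parameter names assumed from the auto_failback feature; treat the exact semantics as assumptions):&lt;br /&gt;
 auto_failback = on&lt;br /&gt;
 auto_failback_interval = 60   # minimum seconds between automatic failback attempts&lt;br /&gt;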
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes have been down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can mark a node as down with an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guess that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way to tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
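: For example, a sketch of the 3.4 parameters (the values are illustrative only):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary,mydb[0-4]:1,mydb[5-9]:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;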
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then the standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself, but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes have been down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, thus users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. Need to enhance it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No explanation needed for this one.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
: See the design proposal for the watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check on a per backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a fail over the role of a node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the fail over ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if it takes a long time to successfully check one backend, and a timeout occurs during the check of the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
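: A sketch (assuming the 4.0 parameter name black_query_pattern_list; queries matching the regular expression are sent only to the primary):&lt;br /&gt;
 black_query_pattern_list = &#039;SELECT \* FROM bank_accounts.*&#039;&lt;br /&gt;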
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3899</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3899"/>
		<updated>2024-07-04T12:16:07Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached, but also need to store the oid map info in it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding which is different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or only sent it to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 address for PostgreSQL backend server, listening addresses of pgpool-II itself and listening addresses of pcp process.&lt;br /&gt;
: However, watchdog process only listens to IPv4 and UNIX domain socket. Following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP, and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes could retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably the database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but it is not yet supported between Pgpool-II and the backend.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some online-recovery reason, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Now we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources, but less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookups.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions in a failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but it is actually unnecessary to connect to standby servers in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions in a failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories in 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories in 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if it is processing a read-only SELECT.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, and &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi statement queries into non multi statement queries, like psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested, at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, that if Pgpool-II cannot process multi-statement queries properly, then it should have an option to prohibit them (or maybe we could disregard the 2nd or later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is a multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at a lower cost than the SQL parser. Still, it is more expensive than simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and the query is not a multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query separately, it is technically impossible as stated above, so we don&#039;t need to worry about it.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is flying over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented in 2.3.2 (released on 2010/2/7) since SSL was intrdoduced. We usually list newer entries first but it was discovered quite recently that the item had been implemented, and we decided to list the item here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, an IPv6 address can be used for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to keep in mind is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
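: As a sketch, replay_lag could be consulted on the primary like this (PostgreSQL 10 or later; the 10-second threshold is only an illustration):&lt;br /&gt;

```sql
-- Per-standby replication delay expressed as time rather than bytes.
-- replay_lag is an interval, so it can be compared against a time threshold.
SELECT application_name, replay_lag
FROM pg_stat_replication
WHERE replay_lag > interval '10 seconds';
```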
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all pending messages should be flushed to the frontend. For this purpose we should keep information on all pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative base to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it is up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even though the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this would not be too difficult to implement. (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication also provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
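: For reference, the pg_stat_wal_receiver check suggested above can be sketched as follows (run on the standby; PostgreSQL 9.6 or later):&lt;br /&gt;

```sql
-- status shows whether this standby is streaming;
-- conninfo shows which primary it is actually receiving WAL from.
SELECT status, conninfo FROM pg_stat_wal_receiver;
```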
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalogs (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can mark a node as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves, and it behaves OK because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
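: A hypothetical pgpool.conf fragment using these parameters (the database and application names are made up; the value syntax follows the Pgpool-II documentation):&lt;br /&gt;

```ini
# Queries on the "otrs" database always go to the primary;
# sessions whose application_name is "reporting" prefer a standby.
database_redirect_preference_list = 'otrs:primary'
app_name_redirect_preference_list = 'reporting:standby'
```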
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (done and will appear in pgpool-II-3.4.0)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool cannot recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain when upgrading to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to reduce it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This one needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 changes the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time to successfully check one backend, and a timeout then occurs while checking the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there is a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3898</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3898"/>
		<updated>2024-07-04T12:13:34Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached, but we also need to store the oid map info in it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , the attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try with a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
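: A sketch of that pg_controldata check (the data directory path is hypothetical):&lt;br /&gt;

```shell
# "Database cluster state: shut down" indicates a clean shutdown;
# any other state suggests the node did not shut down normally.
pg_controldata /var/lib/pgsql/data | grep 'Database cluster state'
```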
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
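: The psql behavior mentioned above can be reproduced like this (the table name is hypothetical):&lt;br /&gt;

```sql
-- After \set FETCH_COUNT, psql runs plain SELECTs through a cursor
-- named "_psql_cursor" instead of fetching the whole result at once.
\set FETCH_COUNT 100
SELECT * FROM big_table;
```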
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, an IPv6 address can be used for the PostgreSQL backend servers, the listening addresses of pgpool-II itself, and the listening addresses of the pcp process.&lt;br /&gt;
: However, watchdog process only listens to IPv4 and UNIX domain socket. Following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by a manual ifconfig etc., no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs that touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any of the DB nodes would still retrieve the latest data.&lt;br /&gt;
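: An illustration of the overkill case described above (streaming replication mode; t1 is a hypothetical table):&lt;br /&gt;

```sql
BEGIN;
SET work_mem TO '64MB';   -- not a SELECT, so treated as a write query,
SELECT count(*) FROM t1;  -- forcing this SELECT to the primary node,
COMMIT;                   -- even though SET reaches all nodes anyway.
```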
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backends is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on the database: for example shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It would be desirable to be able to specify that a relcache entry does not depend on the database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when they should have been added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. tables modified by functions, triggers, and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off. But it is actually unnecessary to connect to standby servers if load_balance_mode is off. If pgpool only connects to the primary server, it does not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even when they process read-only SELECTs.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, while &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, like psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested, at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) down to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or to all nodes in replication/snapshot isolation mode). A few SQL commands such as BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at lower cost than the full SQL parser. Still, it is more expensive than a simple string comparison, so psqlscan is used only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not a multi-statement query.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query within a multi-statement query individually, it is technically impossible as stated above, so we do not need to worry about it.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
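: For illustration, a multi-statement query is sent by the client as a single query string:&lt;br /&gt;
 BEGIN;SELECT 1;END&lt;br /&gt;
: PostgreSQL returns a &amp;quot;Command Complete&amp;quot; for each statement but only one &amp;quot;Ready for query&amp;quot;, so, as described above, Pgpool-II forwards the whole string to the primary node rather than splitting it.&lt;br /&gt;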
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: This would also allow using an alternative command more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
: This has actually been supported since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
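: For example, assuming the same comma-separated syntax PostgreSQL uses:&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.0.10&#039;&lt;br /&gt;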
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One caveat: it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
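: The underlying lag information can be obtained on the primary with a query such as:&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;
: (replay_lag is available in PostgreSQL 10 or later.)&lt;br /&gt;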
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Allow pgpool.conf to include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change these relative paths to resolve against DEFAULT_CONFIGDIR, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
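: Here users.txt would contain one user per line; assuming a username:password format (illustrative):&lt;br /&gt;
 user1:secret1&lt;br /&gt;
 user2:secret2&lt;br /&gt;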
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta information. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice to have a parameter allowing such queries to be sent to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even though the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This brings fewer queries to the system catalogs (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, so users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way to tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
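: For the OTRS case above, a sketch of how these parameters could be used to pin traffic to the primary (the database and application names here are illustrative):&lt;br /&gt;
 database_redirect_preference_list = &#039;otrs:primary&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;otrs_cron:primary&#039;&lt;br /&gt;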
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a very good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node status information from the other pgpool instances.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpool instances execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0. pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed, 2) error codes returned from the commands are completely useless, 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query mode. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No explanation needed for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix; they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time to check one backend and a timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is an elliptic-curve key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3897</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3897"/>
		<updated>2024-07-04T12:12:57Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we would not only use memcached but also need to store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
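: For example, in psql:&lt;br /&gt;
 \set FETCH_COUNT 100&lt;br /&gt;
 SELECT * FROM large_table;&lt;br /&gt;
: psql then fetches the result through the cursor &amp;quot;_psql_cursor&amp;quot; rather than running the SELECT directly.&lt;br /&gt;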
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 addresses for PostgreSQL backend servers, the listening addresses of pgpool-II itself, and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the order of SELECTs and DMLs.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, SELECTs sent to any DB node would retrieve the latest data.&lt;br /&gt;
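: An illustrative transaction where the current behavior is overly conservative:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET work_mem = &#039;64MB&#039;;  -- not a SELECT, so load balancing is disabled&lt;br /&gt;
 SELECT * FROM t1;       -- could safely be load balanced, since SET went to all nodes&lt;br /&gt;
 COMMIT;&lt;br /&gt;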
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called instead.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. tables modified by functions, triggers, and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support GSSAPI yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions during failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions during failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if it&#039;s processing read only SELECT.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if it&#039;s processing read only SELECT.&lt;br /&gt;
: This has been implemented in 4.5.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this causes various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi statement query into single statement queries, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or to all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, so pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not a multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each statement in a multi-statement query individually, it is technically impossible as stated above, so we do not need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels on the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL support was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 addresses for the PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we need information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change them to be relative to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for an user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta info. For now the query is always sent to primary. This is good because it could avoid replication delay for newly created tables. However if primary PostgreSQL is geographically distant, the query could take long time. It would be nice if there&#039;s a parameter to allow send such queries to other than primary node.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even though the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server or not. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016 held at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose. By using it this will be supported in &lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries when creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from Mysql to postgresql 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd postgresql slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balance.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause something to go wrong.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries when creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, thus users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a fail over the role of the node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the fail over ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make main process more stable, it would be better to make separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is an encryption algorithm. Our SSL support lacks this (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase to encrypt the private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3896</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3896"/>
		<updated>2024-07-04T06:01:29Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Support IPv6 network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we should not only use memcached, but also store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 addresses for the PostgreSQL backend servers, the listening addresses of pgpool-II itself and the listening addresses of the pcp process.&lt;br /&gt;
: However, the watchdog process only listens on IPv4 and UNIX domain sockets. The following modules need to be updated.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill in some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup time, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently a &amp;quot;write query&amp;quot; means anything other than a SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
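: For example, in a transaction like the following (a hypothetical session; t1 is an arbitrary table), the SELECT could safely be load balanced because SET modifies no data:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET statement_timeout = &#039;10s&#039;;&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;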
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries for it to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently queries are not load balanced to a standby node with large replication lag. However, if for some reason a standby node recovered by online recovery cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when it was updated. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called instead.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command could be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but it is actually unnecessary to connect to standby servers in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if the prepared statement is a read-only SELECT.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi statement queries correctly (e.g. BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi statement query into single statement queries, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so psqlscan is used only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same is true of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered quite recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 addresses for PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
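: With this feature, for example, listen_addresses can be set to a comma separated list (the addresses here are placeholders):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.1.10&#039;&lt;br /&gt;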
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
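: For reference, the time-based delay of each standby can be checked on the primary with a query like:&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;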
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This brings fewer queries to the system catalog (and thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II-3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and has severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This may cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII; thus users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters were removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No explanation needed.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time to successfully check one backend and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
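: For example, a setting like the following (the database name is just an illustration) sends 30% of read queries on the postgres database to the primary:&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;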
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3895</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3895"/>
		<updated>2024-07-04T05:55:13Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Support IPv6 network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we should not only use memcached but also store the oid map info on it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try with a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
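: For instance (a sketch; $PGDATA is assumed to point at the target node&#039;s data directory), the cluster state can be inspected as follows, where &amp;quot;shut down&amp;quot; indicates a normal shutdown:&lt;br /&gt;
 pg_controldata $PGDATA | grep &#039;Database cluster state&#039;&lt;br /&gt;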
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding which differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use IPv6 address for PostgreSQL backend server, listening addresses of pgpool-II itself and listening addresses of pcp process.&lt;br /&gt;
: However, watchdog process only listens to IPv4 and UNIX domain socket.&lt;br /&gt;
* watchdog communication port (wd_port)&lt;br /&gt;
** wd_create_recv_socket/wd_create_client_socket&lt;br /&gt;
* heartbeat port (heartbeat_port)&lt;br /&gt;
** wd_create_hb_recv_socket/wd_create_hb_send_socket&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally (e.g., by a manual ifconfig), no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem; however, reloading pgpool.conf leaks memory, which definitely is. Also, memory leak checkers such as valgrind emit lots of error messages, which is very annoying. It would be nice to fix this in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header file.&lt;br /&gt;
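: For example, such a header could look like this (a hypothetical sketch; the constant names are made up, only the SQLSTATE values come from the existing code):&lt;br /&gt;
 /* SQLSTATE codes used by pool_send_{error,fatal}_message() */&lt;br /&gt;
 #define POOL_SQLSTATE_INTERNAL_ERROR        &amp;quot;XX000&amp;quot;&lt;br /&gt;
 #define POOL_SQLSTATE_OPERATOR_INTERVENTION &amp;quot;57000&amp;quot;&lt;br /&gt;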
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, subsequent SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs, which is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between the frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on the database: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on the database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config variables requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. For backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called instead.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. when a table is modified by functions, triggers, or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions upon failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connected to the primary server, it would not need to disconnect sessions upon failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even when they execute a read-only SELECT.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single-statement queries, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 in Tokyo on December 1st, 2016, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or to all nodes in replication/snapshot isolation mode). A few SQL statements such as BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, so pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL; it precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so psqlscan is used only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about it.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
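: For example, in pgpool.conf (a sketch with hypothetical paths, following the PostgreSQL-style comma-separated list):&lt;br /&gt;
 unix_socket_directories = &#039;/tmp,/var/run/postgresql&#039;&lt;br /&gt;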
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database goes over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL support was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10), with SSL already supported in it.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for the PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
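: For example, in pgpool.conf (a sketch; the addresses are hypothetical):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.1.10&#039;&lt;br /&gt;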
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
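: The time-based lag this relies on can be inspected on the primary with (PostgreSQL 10 or later):&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;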
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we need information about all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name- and host-specific settings.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this base to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay issues for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice if there were a parameter allowing such queries to be sent to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In a streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 in Tokyo on December 1st, 2016 that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer queries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time when some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
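: A hypothetical pgpool.conf sketch of these parameters (the database and application names are made up):&lt;br /&gt;
 database_redirect_preference_list = &#039;test:primary,analytics:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;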
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, new switch to control the time out is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time when some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is no longer used and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple queries. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
: See the design proposal for watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here].&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments, access to the standard databases, i.e. postgres and template1, is not allowed. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover, the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag specifying that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all backend statuses. Hence, if it takes a long time for one backend check to succeed and a timeout occurs while checking the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3894</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3894"/>
		<updated>2024-07-04T03:55:02Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Support IPv6 network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need to use memcached but also need to store the oid map info on it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if pgpool clients could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
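: For illustration, with &amp;quot;\set FETCH_COUNT 100&amp;quot; psql wraps a SELECT roughly as follows (a sketch; the table name is illustrative):&lt;br /&gt;
 DECLARE _psql_cursor NO SCROLL CURSOR FOR SELECT * FROM t1;&lt;br /&gt;
 FETCH FORWARD 100 FROM _psql_cursor;&lt;br /&gt;
: A SELECT wrapped this way contains no FOR UPDATE/FOR SHARE and could safely be load balanced.&lt;br /&gt;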
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 4.5, it is allowed to use an IPv6 address for the PostgreSQL backend servers, the listening addresses of pgpool-II itself, and the listening addresses of the pcp process.&lt;br /&gt;
: However, watchdog process only listens to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the order of SELECTs and DMLs.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow the SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries touching it to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
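: A hypothetical example (neither the parameter name nor the syntax exists yet):&lt;br /&gt;
 black_table_list = &#039;mydb.public.accounts&#039;&lt;br /&gt;
: Queries touching a listed table would always be sent to the primary server.&lt;br /&gt;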
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between the frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason related to online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However some relcache entries do not depend on databases: for example shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. tables modified by functions, triggers and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connected to the primary server, it would not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even if they process a read only SELECT.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, and &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to primary node (or all nodes in replication/snapshot isolation mode). A few SQL like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at a lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and the query is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query, it is technically impossible as stated above. So we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that the item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
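: For example, following PostgreSQL&#039;s convention, a comma separated list like the following is accepted (the addresses are illustrative):&lt;br /&gt;
 listen_addresses = &#039;localhost, 192.168.1.10&#039;&lt;br /&gt;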
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to care about: it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
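: For reference, the per-standby replay delay can be read on the primary (PostgreSQL 10 or later) with a query like:&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;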
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change them to be resolved against DEFAULT_CONFIGDIR, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from Mysql to postgresql 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. temporarily I have set the weight factor to 0 for the 2nd and 3rd postgresql slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. there&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has already been obsolete since the on memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non blocking connect(2). The timeout parameter of select(2) is fixed to 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, new switch to control the time out is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0. pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned by the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query mode. Need to enhance it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current document is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool specific SET command would be useful. For example, using &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with supporting this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
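: For example, such a setting might look like this (value is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# send 30% of load-balanced queries for database &amp;quot;postgres&amp;quot; to the primary&lt;br /&gt;
database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;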
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this feature. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3794</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3794"/>
		<updated>2023-08-02T04:17:41Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II Frequently Asked Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why configure fails by &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in libpq-dev package. You need to install it before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why records inserted on the primary node do not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? Then it&#039;s a known limitation of streaming replication. The inserted record is there. But if you SELECT the record using the hash index, it will not appear. Hash index changes do not produce WAL record thus they are not reflected to the standby nodes. Solutions are: 1) use btree index instead 2) use pgpool-II native replication.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand you can mix different minor versions of PostgreSQL, for example 9.0.3 and 9.0.4. Pgpool-II assumes that the messages sent from PostgreSQL to pgpool-II are always identical. Different major versions of PostgreSQL may send different messages, which would cause trouble for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms are physically identical. On the other hand, pgpool-II&#039;s replication mode only requires that the database clusters are logically identical. Beware, however, that the online recovery script must not use rsync or some such, which does physical copying among database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node selection for load balancing is decided at the beginning of a session, so all SQL statements are sent to the same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is in an explicit transaction or not. If the statement is in a transaction, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, SELECTs are load balanced even in a transaction if operated in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method to choose a DB node is not LRU or some such. Pgpool-II chooses a DB node randomly, considering the &amp;quot;weight&amp;quot; parameter in pgpool.conf. This means that the chosen DB nodes are not uniformly distributed in the short term. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode, i.e. DECLARE...FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications including psql could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure that you start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or you can use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
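: The corresponding pgpool.conf setting is simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
log_per_node_statement = on&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;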
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II complains that clients disconnect without sending the packet. You can reproduce the problem by using psql: connect to pgpool using psql, then kill -9 psql. You will see a similar message in the log. The message will not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server or checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later uses sr_check_user and sr_check_password for this instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number of connections is specified by the &amp;quot;-c&amp;quot; option) before starting actual transactions. So if the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck because it waits forever for pgpool to accept connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; once that limit is reached, new sessions are queued). PostgreSQL, on the other hand, does not accept more concurrent sessions than max_connections, so in that case you will just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; this is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help you, here is a table which describes the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{| style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between the client and pgpool-II, 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other; for example, you can enable SSL only for #1, only for #2, or for both. Here I explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with openssl. If you build from source code, use --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First create the server certificate. With the command below you will be asked for a PEM pass phrase (it will also be asked when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place. Suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure that you use cp -p to retain the appropriate permissions of server.key.&lt;br /&gt;
Alternatively you can set the permissions later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, the SSL connection between the client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option, because SSL only works over TCP/IP;&lt;br /&gt;
it does not work with Unix domain sockets. &lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema qualified table name (like public.mytable) and you did not install the pgpool_regclass function coming with pgpool. Without pgpool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy max_connections &amp;gt;= (num_init_children * max_pool) and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you need to understand how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;pgpool child receives connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for an existing connection in the pool which&lt;br /&gt;
   has the requested database/user pair, up to max_pool.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt; If not found, opens a new connection to PostgreSQL and registers it to&lt;br /&gt;
   the pool.  If the pool has no empty slot, closes the oldest&lt;br /&gt;
   connection to PostgreSQL and reuses the slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
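: As a worked example (illustrative values): each pgpool child can pool up to max_pool connections, so with num_init_children = 32 and max_pool = 4, pgpool may hold up to 32 * 4 = 128 connections to PostgreSQL, and max_connections must be at least 128.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pgpool.conf (illustrative)&lt;br /&gt;
num_init_children = 32&lt;br /&gt;
max_pool = 4&lt;br /&gt;
# postgresql.conf: must satisfy max_connections &amp;gt;= 32 * 4&lt;br /&gt;
max_connections = 128&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;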
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a connection cache for database A/user B but process 12346 does not, and both 12345 and 12346 are in the idle state (no client is connecting at this point). If a client connects to pgpool process 12345 with database A/user B, then the existing connection of 12345 is reused. On the other hand, if a client connects to pgpool process 12346, 12346 needs to create a new connection. Whether 12345 or 12346 is chosen is not under the control of pgpool. However, in the long run each pgpool child process will be chosen equally often, and it is expected that each process&#039;s pool will be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions if they are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You must have given the -f option as a relative path, i.e. &amp;quot;-f pgpool.conf&amp;quot;, rather than a full path, i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;. Pgpool tries to derive the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without the -n option, it changes the current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers go into down status in streaming replication mode and see PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows which SELECTs on the standby server want to see. The workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
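: The workaround is a one-line change in each standby&#039;s postgresql.conf (restart or reload the standby afterwards):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hot_standby_feedback = on&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;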
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system which pgpool-II is running on gets as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users state that this is observed only on Linux kernel 3.0; 2.6 and 3.2 do not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussions in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, a VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog check fails because its &amp;quot;SELECT 1&amp;quot; fails, and then the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal clients&#039; connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point and a smaller wd_interval may prevent the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on all PostgreSQL servers to be accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without this, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:Related FAQ is here https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later and pgpool-II 3.3.4 or later (or 3.4.0 or later), you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has the built-in function &amp;quot;to_regclass&amp;quot;, which works like pgpool_regclass.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of the various check points for setting up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend complaining to Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 or later mitigates the problem by changing the timeout value for connect (actually the select system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also pgpool-II 3.4 or later has a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. In Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, I recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a feature. Watchdog&#039;s lifecheck will start after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on the standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case where the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs to standby nodes, including cursor statements. Unfortunately cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate on memory query cache and I got outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without the function, pgpool-II ignores the schema name part of a schema qualified table name and the cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error messages like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What do they mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools including Zabbix and Nagios periodically send a packet or ping to the port which pgpool is listening on. Unfortunately those packets do not have correct contents, and pgpool-II complains about them. If you are not sure who is sending such a packet, you can turn on &amp;quot;log_connections&amp;quot; to find the source host and port. If they are from such tools, you could stop the monitoring to avoid the problem, or even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects the connection, and the next time Tomcat tries to send something to pgpool the connection breaks with the error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:   timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:   minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will try every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client time-out on the pgpool side, connections should be closed at the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This will help with connections should they expire while waiting, without supplying a disconnected connection to the application.&lt;br /&gt;
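: Putting the settings above together, a Tomcat jdbc-pool resource might look like this (a sketch only; the resource name, URL and credentials are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;Resource name=&amp;quot;jdbc/mydb&amp;quot; type=&amp;quot;javax.sql.DataSource&amp;quot;&lt;br /&gt;
          factory=&amp;quot;org.apache.tomcat.jdbc.pool.DataSourceFactory&amp;quot;&lt;br /&gt;
          driverClassName=&amp;quot;org.postgresql.Driver&amp;quot;&lt;br /&gt;
          url=&amp;quot;jdbc:postgresql://pgpool-host:9999/mydb&amp;quot;&lt;br /&gt;
          minIdle=&amp;quot;0&amp;quot;&lt;br /&gt;
          timeBetweenEvictionRunsMillis=&amp;quot;5000&amp;quot;&lt;br /&gt;
          minEvictableIdleTimeMillis=&amp;quot;60000&amp;quot;&lt;br /&gt;
          testOnBorrow=&amp;quot;true&amp;quot;&lt;br /&gt;
          validationQuery=&amp;quot;SELECT version();&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;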
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the above, for internal use, to the master node. When a user query runs in extended protocol mode (sent from the JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, the log shows that the query finishes.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after a certain amount of time. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL cancels it. Depending on the size of the database, the online recovery process can take a very long time. Make sure to disable statement_timeout or set it to a long enough value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fail? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detects that node 0 returns &amp;quot;N&amp;quot; (a NOTICE message comes from PostgreSQL) while node 1 returns &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Although pgpool-II expects nodes 0 and 1 to return identical messages, they did not, so pgpool-II raised an error.&lt;br /&gt;
: Probably some log/message settings differ between node 0 and node 1. Please check client_min_messages or similar parameters; they should be identical on all nodes.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby. If one of the DB nodes returns false, that node is the primary and the search is done.&lt;br /&gt;
: Because a node in the middle of promotion may still return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT to each DB node until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
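: The search loop can be sketched as follows (a minimal Python illustration of the algorithm described above, not pgpool-II&#039;s actual code; is_in_recovery stands in for issuing the SELECT to a node):&lt;br /&gt;

```python
import time

def find_primary(nodes, is_in_recovery, search_primary_node_timeout, sleep=1):
    """Return the index of the primary node, or None if the timeout expires.

    nodes: list of node identifiers
    is_in_recovery: callable standing in for "SELECT pg_is_in_recovery()"
    """
    waited = 0
    while True:
        for i, node in enumerate(nodes):
            if not is_in_recovery(node):  # false => this node is the primary
                return i
        if waited >= search_primary_node_timeout:
            return None  # no primary found within the timeout
        time.sleep(sleep)  # a promoting node may still report true; retry
        waited += sleep
```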
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a failover, because it makes PostgreSQL emit the same error code as a postmaster shutdown. Pgpool-II 3.6 or later mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when a failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many processes, each of which corresponds to a client session. When a failover occurs, each process may be iterating over the backends without knowing that one of them has gone down. This may result in incorrect processing, or a segfault in the worst case. For this reason, when a failover occurs, the pgpool-II parent process interrupts the child processes with a signal to make them exit. Note that a switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: In Pgpool-II 3.6 or later, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process that lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 this restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a new PCP worker process is created or finishes executing a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid and pool_passwd respectively. From now on the string md5(password+username) is denoted &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from steps 3 and 4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, with salts s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1) and sends it to b1; likewise it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If both b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
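: The hash computations in the steps above can be sketched as follows (an illustration only, not pgpool-II&#039;s implementation; in the real protocol the salt is 4 random bytes and the response on the wire carries an &amp;quot;md5&amp;quot; prefix):&lt;br /&gt;

```python
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def stored_secret(password: str, user: str) -> str:
    # S = md5(password + username), as kept in pg_authid / the password file
    return md5_hex((password + user).encode())

def md5_response(secret: str, salt: bytes) -> str:
    # What the frontend sends back: "md5" + md5(S + salt)
    return "md5" + md5_hex(secret.encode() + salt)

def authenticate(secret: str, salt: bytes, response: str) -> bool:
    # The proxy recomputes md5(S + salt) and compares (steps 3-4 above)
    return response == md5_response(secret, salt)
```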
&lt;br /&gt;
=== &#039;&#039;&#039;Why does Pgpool-II not automatically recognize that a database has come back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean that it is connected to the current primary node. It may be connected to a different primary node, or it may no longer be a standby at all. If Pgpool-II automatically recognized such a standby as online, SELECTs sent to that standby could return different results from the primary, which would be a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does nothing to recognize a standby node as online. It just reloads the configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically make a standby server online if it&#039;s safe enough. See configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, when the timeout fires, PostgreSQL raises a FATAL error, which triggers a failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, a failover will not happen when the FATAL error occurs, but the session will still be terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to values smaller than idle_in_transaction_session_timeout, so pooled connections are discarded before the timeout fires. However, even when the FATAL error does not occur, connection pools are removed whenever one of these conditions is met, which may affect performance.&lt;br /&gt;
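: For example, if idle_in_transaction_session_timeout is 60 seconds on the PostgreSQL side, a pgpool.conf sketch of the second workaround could look like this (the values are illustrative, not recommendations):&lt;br /&gt;

```
# pgpool.conf -- keep pooled connections shorter-lived than
# PostgreSQL's idle_in_transaction_session_timeout (assumed 60s here)
connection_life_time = 30    # seconds
child_life_time = 30         # seconds
client_idle_limit = 30       # seconds
```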
&lt;br /&gt;
=== &#039;&#039;&#039;How can I check the status of the PostgreSQL backends connected by Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of its columns, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once that is determined, you can examine the output of pg_stat_activity by matching it against the &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How to run Pgpool-II with non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II from RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit pgpool.service file to use postgres user to start Pgpool-II&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy ip/ifconfig/arping commands to somewhere where the user has access permissions and add setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These tools are not designed to work with each other. You should use Pgpool-II without repmgr, or repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails in CentOS6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II does not support GSSAPI authentication yet, but GSSAPI is requested by default in CentOS6. Therefore, connection attempts fail on CentOS6.&lt;br /&gt;
: A workaround is to set an environment variable on the client to disable GSSAPI encryption:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog standby does not take over master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even numbers (i.e. 2, 4, 6...) of watchdog clusters. Odd numbers of clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2 node watchdog cluster needs to have both 2 nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome because no new primary PostgreSQL exists if the existing primary goes down. Probably 2 node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of even-numbered clusters of 4 or more watchdog nodes will benefit from keeping this parameter off, because it prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot; errors?&#039;&#039;&#039; ===&lt;br /&gt;
: This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, i.e. ASCII &#039;R&#039;. &#039;R&#039; is the normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: This tells you that you need to fix pg_hba.conf on backend 1.&lt;br /&gt;
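: The hexadecimal codes in such messages are ASCII values of frontend/backend protocol message types. A small helper to decode them (the table below is illustrative and lists only a few common kinds):&lt;br /&gt;

```python
# Decode the message-kind bytes shown in "kind does not match" errors.
KIND_NAMES = {
    "R": "Authentication request (normal response)",
    "E": "ErrorResponse (backend complains)",
    "N": "NoticeResponse",
    "C": "CommandComplete",
}

def decode_kind(hex_code: str) -> str:
    ch = chr(int(hex_code, 16))
    return f"{ch}: {KIND_NAMES.get(ch, 'unknown message type')}"
```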
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords; neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set a clear text password in pool_passwd.&lt;br /&gt;
: related bug track entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does show pool_nodes not show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons.&lt;br /&gt;
: The first is that sr_check_user does not have enough privilege to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
: The other is the setting of the backend_application_name parameter of the standby node in pgpool.conf. It must match the application_name in the primary_conninfo parameter in postgresql.conf.&lt;br /&gt;
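: A sketch of matching settings (the hostname and the name &amp;quot;server1&amp;quot; are illustrative):&lt;br /&gt;

```
# pgpool.conf (entry for standby node 1)
backend_application_name1 = 'server1'

# postgresql.conf on the standby (PostgreSQL 12 or later; older versions
# set primary_conninfo in recovery.conf instead)
primary_conninfo = 'host=primary-host port=5432 application_name=server1'
```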
&lt;br /&gt;
=== &#039;&#039;&#039;For client authentication I want to avoid maintaining pool_passwd file. What&#039;s the recommended way to do that?&#039;&#039;&#039; ===&lt;br /&gt;
: See this email thread.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2023-August/008958.html&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3783</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3783"/>
		<updated>2023-04-19T10:45:07Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only memcached, but also to store the oid map info in it so that the info can be shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down cleanly. Can we detect that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We also need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
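: For example, in psql (the table name below is hypothetical):&lt;br /&gt;

```
-- In psql: with FETCH_COUNT set, SELECTs go through a cursor internally,
-- so in replication mode they would be sent to all DB nodes.
\set FETCH_COUNT 100
SELECT * FROM some_table;  -- psql uses the cursor "_psql_cursor"
```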
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no node holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the order of SELECTs and DMLs.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. It would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-genera: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is global ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on a database: for example shared catalogs and misc info including the PostgreSQL version. For such info, having per-database relcache entries is not only a waste of resources but also less efficient. It would be desirable to be able to specify that a relcache entry does not depend on a database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also has to list which config variables belong to it. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover if client sends request with &amp;quot;gssencmode=prefer&amp;quot; Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on a failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connected to the primary server, it would not need to disconnect sessions on a failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
=== Allow load balance for PREPARE/EXECUTE/DEALLOCATE ===&lt;br /&gt;
: Pgpool-II does not load balance these queries even when they process a read-only SELECT.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II did not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parsed the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) to decide how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement but &amp;quot;Ready for query&amp;quot; only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented in branches from master (to be 4.5) down to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or to all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, so pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL, which precisely detects multi-statements at a lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each statement within a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about it.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL support was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10), and SSL was supported in it from the start.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, an IPv6 address can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
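: For example (a sketch; the addresses are made up), after this change pgpool.conf should accept a comma separated list, as PostgreSQL does:&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.10.1&#039;&lt;br /&gt;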
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, such as &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
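: The underlying time-based lag can be checked on the primary with a query like this (replay_lag is available in PostgreSQL 10 or later):&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;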
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change them to be resolved against DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay issues for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question actually connects to the proper primary server or not. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016 held at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer queries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first start up we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, so users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
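: For example (a sketch; the database and application names are made up):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary,mydb:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;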
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes (done; will appear in pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first start up we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0. pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current document format is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 changes the document format to SGML. (This has been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make main process more stable, it would be better to make separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time to check one backend successfully and a timeout occurs while checking the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase to encrypt the private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3782</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3782"/>
		<updated>2023-04-19T02:34:05Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Recognize multi statemnet queries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached, but we also need to store the oid map info in it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
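: For example, a recovery script could check the cluster state of the stopped node like this (a sketch; the data directory path is made up):&lt;br /&gt;
 pg_controldata /var/lib/pgsql/data | grep &#039;Database cluster state&#039;&lt;br /&gt;
: If the state is &amp;quot;shut down&amp;quot;, the node was stopped normally and pg_rewind can be used; a state such as &amp;quot;in production&amp;quot; indicates a running server or a crash.&lt;br /&gt;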
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or only sent it to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
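: For example, in psql (t1 is an arbitrary table used for illustration):&lt;br /&gt;
 \set FETCH_COUNT 100&lt;br /&gt;
 SELECT * FROM t1;   -- psql now fetches the result via the cursor &amp;quot;_psql_cursor&amp;quot;&lt;br /&gt;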
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by a manual ifconfig etc., no one is holding the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at start up, this is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
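: For example, in the following transaction the SELECT is currently sent to the primary, although the preceding SET alone would not make standby data stale (t1 is an arbitrary table used for illustration):&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET random_page_cost = 2.0;&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;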
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but between Pgpool-II and the backend it is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason during online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Now we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but it is actually unnecessary to connect to standby servers in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi-statement queries into single statements, as psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented in branches from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands such as BEGIN/END/SAVEPOINT/DEALLOCATE need special handling. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; was imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still it is more expensive than simple string comparison, so psqlscan is used only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about it.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: The hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: This also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels on the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
: This has actually been supported since SSL was introduced in 2.3.2 (released on 2010/2/7). We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so it is listed here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One caveat: it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
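: For reference, the time-based lag that replay_lag reports can be checked on the primary like this (PostgreSQL 10 or later; replay_lag is an interval):&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;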
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should keep information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this base to DEFAULT_CONFIGDIR, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice to have a parameter that allows sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so this shouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby is connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but it cannot be used for the very first startup.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can mark a node as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert has not yet propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II-3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This may cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: the test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but it cannot be used for the very first startup.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it, so I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be improved.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are feature requests and bugs in the existing watchdog that require more than a simple code fix; they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
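: A sketch using the parameter names proposed above; these names are taken from the proposal mail and the names actually adopted may differ:&lt;br /&gt;
 backend_healthcheck_username0 = &#039;hc_user&#039;     # hypothetical per-backend user&lt;br /&gt;
 backend_healthcheck_password0 = &#039;secret&#039;      # hypothetical&lt;br /&gt;
 backend_healthcheck_database0 = &#039;hc_db&#039;       # hypothetical&lt;br /&gt;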
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout is the total time for checking all backend statuses. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying the load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
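: A configuration sketch using the syntax from the example above; the app_name line is an additional illustrative example:&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;myapp:standby(0.7)&#039;&lt;br /&gt;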
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3781</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3781"/>
		<updated>2023-04-19T02:33:06Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Recognize multi statemnet queries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose, we need not only to use memcached but also to store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We also need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent the query only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
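: To sketch the distinction, only the first of these cursors actually needs to go to all nodes (it locks rows); the second could in principle be load balanced:&lt;br /&gt;
 DECLARE c1 CURSOR FOR SELECT * FROM t1 FOR UPDATE;&lt;br /&gt;
 DECLARE c2 CURSOR FOR SELECT * FROM t1;&lt;br /&gt;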
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf leaks memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. It would be nice to fix this in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-genera: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
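: A purely hypothetical configuration sketch (neither the parameter name nor the syntax exists yet):&lt;br /&gt;
 black_table_list = &#039;mydb.public.accounts,mydb.public.orders&#039;   # hypothetical&lt;br /&gt;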
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason during online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten to be added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. when a table is modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
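: A configuration sketch: the unix_socket_directories line uses the comma-separated form already accepted in 4.4, while the pcp_socket_dir line is the proposal, not an existing parameter:&lt;br /&gt;
 unix_socket_directories = &#039;/tmp,/var/run/postgresql&#039;&lt;br /&gt;
 pcp_socket_dir = &#039;/tmp,/var/run/postgresql&#039;&lt;br /&gt;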
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or to all nodes in replication/snapshot isolation mode). A few SQL commands such as BEGIN/END/SAVEPOINT/DEALLOCATE are handled as exceptions. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is a multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, for queries larger than 10kB the minimal parser is used only when the query is not a multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and the streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
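: A sketch of the resulting pgpool.conf setting (the exact syntax should be checked against the 4.4 docs; the addresses are illustrative):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.1.10&#039;&lt;br /&gt;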
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing we need to care about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
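: A configuration sketch; the parameter added in 4.4 is believed to be delay_threshold_by_time, and its name and units should be confirmed in the docs:&lt;br /&gt;
 delay_threshold_by_time = 10&lt;br /&gt;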
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this so that relative paths are resolved against DEFAULT_CONFIGDIR, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter allowing such queries to be sent to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In a streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up to date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even though the node has resumed replication with the master and is up to date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question actually connects to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, pgpool_status file is changed to a plain ASCII file and users could specify down node by using ordinary text editors.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug ID 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which sits behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but that insert has not yet propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor of the 2nd and 3rd PostgreSQL slaves to 0 and it behaves OK, because it then reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
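: A configuration sketch using those parameters (the database and application names are illustrative):&lt;br /&gt;
 database_redirect_preference_list = &#039;otrs:primary&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;cronjob:primary&#039;&lt;br /&gt;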
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; appeared in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: the test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to reduce it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make main process more stable, it would be better to make separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
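: With per-backend parameters this could look like the following sketch (the numeric-suffix form, e.g. health_check_timeout0 for backend 0, is assumed here and should be checked against the docs):&lt;br /&gt;
 health_check_timeout = 20&lt;br /&gt;
 health_check_timeout0 = 40&lt;br /&gt;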
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3780</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3780"/>
		<updated>2023-04-19T02:29:04Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also we need to store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser; we could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH is sent to all DB nodes in replication mode, because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, since PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined together as constants in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, a subsequent SELECT sent to any DB node would still retrieve the latest data.&lt;br /&gt;
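: For instance, in a sequence like the following (a hypothetical example), the SELECT is currently forced to the primary only because of the preceding SET, even though the SET is replicated to all nodes:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET work_mem = &#039;64MB&#039;;  -- sent to all nodes&lt;br /&gt;
 SELECT * FROM t1;        -- could still be load balanced&lt;br /&gt;
 COMMIT;&lt;br /&gt;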
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably a database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently queries are not load balanced to a standby node with large replication lag. But if, for some reason during online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get the master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources, but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added to it. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, &amp;quot;pgpool show all&amp;quot; could be called when &amp;quot;show pool_status&amp;quot; is requested.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions in failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to the standby servers is actually unnecessary in that case. If pgpool only connected to the primary server, it would not need to disconnect sessions in failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories since the 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi statement queries into non multi statement queries, like what psql is doing, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit multi statement queries (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to become 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to the proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode), with a few exceptions in SQL like BEGIN/END/SAVEPOINT/DEALLOCATE. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than a simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database goes over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since SSL was introduced in 2.3.2 (released on 2010/2/7). We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
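: For reference, the time-based replay lag can be checked on the primary with a query like this (PostgreSQL 10 or later):&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;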
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory in which pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question actually connects to the proper primary server or not. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference held on December 1st 2016 at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;auto_failback&amp;quot;.&lt;br /&gt;
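: For reference, whether a standby is connected, and to which upstream server, can be checked on the standby with pg_stat_wal_receiver (PostgreSQL 9.6 or later):&lt;br /&gt;
 SELECT status, conninfo FROM pg_stat_wal_receiver;&lt;br /&gt;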
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26:  I have recently moved a database from Mysql to postgresql 9.1.5 which is behind a pgpool-II-3.1.4 . Everything went fine until i observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populate the database.&lt;br /&gt;
: After some debugging i found/guess that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket he has to insert info in about 10 tables, and i guess that the 2-nd, 3-rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I`m aware of the fact that if this entire operation would be performed transactionally (only on master) the issue is solved, but unfortunately i cannot modify the app.&lt;br /&gt;
: So i want to know if there is any way that i can tell to pgpool something like :&lt;br /&gt;
: any request from this ip do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. temporary i have set the weight factor to 0 to the 2-nd and 3-rd postgresql slaves and it behaves ok, because reads and writes only from master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. there&#039;s also different request regarding load balance.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
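: For example, per-database load balancing can now be controlled with a setting like this (the database names are illustrative):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary,otrs:primary&#039;&lt;br /&gt;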
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. It might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by non blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us little gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query mode. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments access to the standard databases, i.e. postgres and template1, is not allowed. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, using &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the fail over ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has this) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3779</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3779"/>
		<updated>2023-04-19T02:28:27Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we should not only use memcached but also store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
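: For reference, the shutdown state can be checked like this (a cleanly stopped node typically reports &amp;quot;shut down&amp;quot; as its cluster state):&lt;br /&gt;
 pg_controldata $PGDATA | grep &#039;Database cluster state&#039;&lt;br /&gt;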
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We also need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi statement queries into non multi statement queries, like what psql is doing, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit multi statement queries (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
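: A rough sketch of the proposed check (the function name is hypothetical, and a real implementation would inspect the parse tree rather than use a regex):&lt;br /&gt;

```python
import re

# Sketch: decide whether the SELECT inside DECLARE ... CURSOR could be
# load balanced. Only SELECTs without FOR UPDATE / FOR SHARE are safe.
# A real implementation would use the SQL parser, not a regex.

def cursor_select_is_load_balanceable(sql):
    return re.search(r'\bfor\s+(update|share)\b', sql, re.IGNORECASE) is None

print(cursor_select_is_load_balanceable(
    'DECLARE c CURSOR FOR SELECT a FROM t1'))             # True
print(cursor_select_is_load_balanceable(
    'DECLARE c CURSOR FOR SELECT a FROM t1 FOR UPDATE'))  # False
```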
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by a manual ifconfig etc., no one holds the VIP and clients are not able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
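: A sketch of the idea (names are hypothetical): replay the statements of the transaction in order, and at commit keep only those caches whose SELECT ran after the last DML on the same table:&lt;br /&gt;

```python
# Sketch of order-aware query cache invalidation at commit time.
# Each statement is (kind, table); kind is 'select' or 'dml'.
# A cache entry created by a SELECT stays valid unless a later DML
# touches the same table before COMMIT. All names are hypothetical.

def valid_caches_at_commit(statements):
    cache_ok = {}
    for kind, table in statements:
        if kind == 'select':
            cache_ok[table] = True          # cache created (or refreshed)
        elif kind == 'dml' and table in cache_ok:
            cache_ok[table] = False         # a later DML invalidates it
    return [t for t, ok in cache_ok.items() if ok]

txn = [('dml', 't1'), ('select', 't1')]     # INSERT then SELECT, as above
print(valid_caches_at_commit(txn))          # ['t1'] -- cache can be kept
```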
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-genera: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes could still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but between Pgpool-II and backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently a relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
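: The difference can be sketched in a few lines (a Python dict stands in for a real hash table):&lt;br /&gt;

```python
# Sketch: array scan vs. hash lookup for a relation cache.
# The array lookup is O(n) per query; the hash lookup is O(1) on average.

array_cache = [('t%d' % i, {'oid': i}) for i in range(10000)]
hash_cache = dict(array_cache)

def lookup_array(name):
    for relname, entry in array_cache:      # linear scan, as today
        if relname == name:
            return entry
    return None

def lookup_hash(name):
    return hash_cache.get(name)             # constant time on average

print(lookup_array('t9999') == lookup_hash('t9999'))   # True
```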
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions on failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but it is actually unnecessary to connect to standby servers in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions on failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple pcp_socket_dir ===&lt;br /&gt;
: Pgpool-II has supported multiple unix_socket_directories in 4.4 release. I think pcp_socket_dir should also support multiple directories.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi statement query into non multi statement queries, like what psql is doing, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
: ---------------------------------------------------------------------------------------&lt;br /&gt;
: This has been implemented from master (to be 4.5) back to 4.1 (as of 2023/4/19).&lt;br /&gt;
: Now Pgpool-II correctly recognizes multi-statements and distributes the query to the proper PostgreSQL nodes.&lt;br /&gt;
: Basically Pgpool-II forwards multi-statement queries to the primary node (or all nodes in replication/snapshot isolation mode). A few SQL commands like BEGIN/END/SAVEPOINT/DEALLOCATE are handled as exceptions. This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd32f5ef996cad36d5b1554e92a33ea7a815419a&lt;br /&gt;
: Another challenge is how to deal with the minimal parser. The minimal parser happily gives up parsing a multi-statement query once it finds UPDATE/INSERT/DELETE in streaming replication mode, and pgpool fails to recognize that the query is multi-statement. To fix this, &amp;quot;psqlscan&amp;quot; is imported from PostgreSQL, which precisely detects multi-statements at lower cost than the SQL parser. Still, it is more expensive than simple string comparison, so we use psqlscan only when the query string is large (currently defined as 10kB). As a result, the minimal parser is only used when the query is larger than 10kB and the query is not multi-statement.&lt;br /&gt;
: This part is implemented in:&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=48da8715bf403965507eef0321c0ab10054ac71c&lt;br /&gt;
https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=64f670ca4abae749e1a95cc57b6a508a8611e44d&lt;br /&gt;
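: The detection itself can be sketched as a tiny scanner (a much simplified stand-in for psqlscan; it ignores dollar quoting, comments, and other cases psqlscan handles):&lt;br /&gt;

```python
# Very simplified sketch of multi-statement detection: look for a
# semicolon outside quotes that is followed by more non-space text.
# psqlscan handles far more (dollar quoting, comments, escapes, etc.).

def is_multi_statement(query):
    in_single = False      # inside a '...' string literal
    in_double = False      # inside a "..." quoted identifier
    seen_semicolon = False
    for ch in query:
        if seen_semicolon and not ch.isspace():
            return True                       # text after ';' found
        if ch == "'" and not in_double:
            in_single = not in_single
        elif ch == '"' and not in_single:
            in_double = not in_double
        elif ch == ';' and not in_single and not in_double:
            seen_semicolon = True
    return False

print(is_multi_statement('BEGIN;SELECT 1;END'))   # True
print(is_multi_statement("SELECT 'a;b'"))         # False
print(is_multi_statement('SELECT 1;'))            # False (trailing ';')
```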
: As for handling each query in a multi-statement query individually, it is technically impossible as stated above, so we don&#039;t need to worry about this.&lt;br /&gt;
: We can now safely and correctly handle multi-statement queries.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database goes over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since SSL was introduced in 2.3.2 (released on 2010/2/7). We usually list newer entries first, but it was discovered quite recently that the item had been implemented, so we decided to list the item here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only single IP or host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be careful about: it&#039;s only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
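: replay_lag comes back as an interval; a sketch of turning its text form into seconds for comparison against a time-based threshold (the helper name and the assumed HH:MM:SS text format are illustrative only):&lt;br /&gt;

```python
# Sketch: convert a pg_stat_replication.replay_lag interval, rendered
# as 'HH:MM:SS[.ffffff]' text, into seconds so it can be compared
# against a time-based delay threshold. Helper name and the assumed
# text format are illustrative only.

def lag_to_seconds(interval_text):
    hours, minutes, seconds = interval_text.split(':')
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

print(lag_to_seconds('00:00:10.5'))   # 10.5
print(lag_to_seconds('01:02:03'))     # 3723.0
```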
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for an user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st, 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is changed to a plain ASCII file and users can specify a down node by using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; were not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is, such a suite could be a very complex system because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by non-blocking connect(2). The timeout parameter of select(2) is fixed to 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, new switch to control the time out is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to enhance it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover, the role of the node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all backend statuses. Hence, if it takes a long time to successfully check one backend and a timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3775</id>
		<title>Roadmap</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3775"/>
		<updated>2023-01-27T08:13:14Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Upcoming minor releases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming minor releases == &lt;br /&gt;
&lt;br /&gt;
PgPool Global Development Group will make at least one minor release quarterly according to a predefined schedule.&lt;br /&gt;
&lt;br /&gt;
If there are important bug fixes or security issues, more releases will be made between these scheduled dates.&lt;br /&gt;
&lt;br /&gt;
The current schedule for upcoming releases is: &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;s&amp;gt;November 17th, 2022&amp;lt;/s&amp;gt; December 22nd, 2022&lt;br /&gt;
* &amp;lt;s&amp;gt;February 16th, 2023&amp;lt;/s&amp;gt; January 23rd, 2023&lt;br /&gt;
* May 18th, 2023&lt;br /&gt;
* August 17th, 2023&lt;br /&gt;
* November 16th, 2023&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3774</id>
		<title>Roadmap</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3774"/>
		<updated>2023-01-27T08:12:56Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Upcoming minor releases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming minor releases == &lt;br /&gt;
&lt;br /&gt;
PgPool Global Development Group will make at least one minor release quarterly according to a predefined schedule.&lt;br /&gt;
&lt;br /&gt;
If there are important bug fixes or security issues, more releases will be made between these scheduled dates.&lt;br /&gt;
&lt;br /&gt;
The current schedule for upcoming releases is: &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;s&amp;gt;November 17th, 2022&amp;lt;/s&amp;gt; December 22nd, 2022&lt;br /&gt;
* &amp;lt;s&amp;gt;February 16th, 2023&amp;lt;/s&amp;gt; January 23rd, 2023&lt;br /&gt;
* May 18th, 2023&lt;br /&gt;
* August 17th, 2023&lt;br /&gt;
* November 16th, 2023&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3761</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3761"/>
		<updated>2022-12-28T00:42:41Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Why show pool_nodes does not show replication delay even I set delay_threshold_by_time to 1? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does configure fail with &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in the libpq-dev package. You need to install it before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do records inserted on the primary node not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? Then it&#039;s a known limitation of streaming replication. The inserted record is there, but if you SELECT it using the hash index, it will not appear. Hash index changes do not produce WAL records, thus they are not reflected on the standby nodes. Solutions are: 1) use a btree index instead, or 2) use pgpool-II native replication.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand, you can mix different minor versions of PostgreSQL, for example 9.0.3 and 9.0.4. Pgpool-II assumes that the messages sent from PostgreSQL to pgpool-II are always identical. Different major versions of PostgreSQL may send different messages, and this would cause trouble for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. On the other hand, pgpool-II&#039;s replication mode only requires that the database clusters be logically identical. Beware, however, that the online recovery script must not use rsync or the like, which perform physical copying between database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node used for load balancing is decided at the beginning of a session, so all SQL statements are sent to the same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is in an explicit transaction or not. If it is, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, SELECTs are load balanced even inside a transaction when operating in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method used to choose a DB node is not LRU or anything similar. Pgpool-II chooses a DB node randomly, weighted by the &amp;quot;weight&amp;quot; parameter in pgpool.conf. This means the chosen DB node is not uniformly distributed among DB nodes in the short term. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
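: The weight-based random choice described above can be sketched as follows. This is a simplified illustration of the idea only, not the actual pgpool-II implementation; the choose_node function is made up for this sketch.

```python
import random

# Simplified sketch of weight-based node selection (illustrative only):
# each new session draws one DB node at random, biased by its weight.
def choose_node(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

# With weights 0.7 and 0.3, node 0 is picked about 70% of the time,
# but any short run of sessions can deviate noticeably from that ratio.
counts = [0, 0]
for _ in range(10000):
    counts[choose_node([0.7, 0.3])] += 1
```

: Over 10000 draws the split approaches the configured ratio, but over just a handful of sessions it can easily look lopsided, which is why it is best to judge load balancing only after many queries.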
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode; i.e., DECLARE..FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, may use a cursor for SELECT. For example, since PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure that you start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or you can use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II complains that clients disconnected without sending that packet. You can reproduce the problem using psql: connect to pgpool using psql, then kill -9 the psql process. You will see a similar message in the log. The message will not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server or checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later uses sr_check_user and sr_check_password for this instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. So if the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck, waiting forever for pgpool to accept connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; once that number is reached, new sessions are queued). On the other hand, PostgreSQL does not accept more concurrent sessions than max_connections, so in that case you just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help, here is a table describing the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{|style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between the client and pgpool-II, and 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other; for example, you can enable SSL for #1 only, for #2 only, or for both. Here we explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with OpenSSL. If you build from source code, use the --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, create a server certificate. The command below will ask you for a PEM pass phrase (which will also be asked for when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove the PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place; suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure that you use cp -p to retain the permissions of server.key.&lt;br /&gt;
Alternatively, you can set the permissions later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm that the SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, the SSL connection between client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option, because SSL only works over TCP/IP;&lt;br /&gt;
it does not work over Unix domain sockets. &lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pgpool_regclass function that comes with pgpool. Without pgpool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy the formula max_connections &amp;gt;= (num_init_children * max_pool) and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you need to understand how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for a connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;A pgpool child receives a connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for an existing connection in the pool which&lt;br /&gt;
   has the requested database/user pair, up to max_pool.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt;If not found, open a new connection to PostgreSQL and register it in&lt;br /&gt;
   the pool. If the pool has no empty slot, close the oldest&lt;br /&gt;
   connection to PostgreSQL and reuse the slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends a session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
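: As a concrete sketch of the formula above (the values are illustrative assumptions, not recommendations): with num_init_children = 32 and max_pool = 4, each of the 32 pgpool children can hold up to 4 pooled backend connections, so each PostgreSQL backend must allow at least 32 * 4 = 128 connections.

```
# pgpool.conf (illustrative values)
num_init_children = 32
max_pool = 4

# postgresql.conf on each backend
max_connections = 128    # at least num_init_children * max_pool
```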
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a connection cache for database A/user B, process 12346 does not, and both 12345 and 12346 are idle (no client is connecting at this point). If a client connects to pgpool process 12345 with database A/user B, the existing connection of 12345 is reused. On the other hand, if the client connects to pgpool process 12346, 12346 needs to create a new connection. Which of 12345 or 12346 is chosen is not under pgpool&#039;s control. However, in the long run each pgpool child process will be chosen equally often, and it is expected that each process&#039;s pool will be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions if they are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You must have given the -f option as a relative path, i.e. &amp;quot;-f pgpool.conf&amp;quot;, rather than a full path, i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;. Pgpool derives the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without the -n option, it changes the current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which does not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers going to down status in streaming replication mode, along with PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows which SELECTs on the standby server want to see. The workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system pgpool-II is running on gets as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users state that this is observed only with Linux kernel 3.0; 2.6 or 3.2 does not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussion in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog check fails because its &amp;quot;SELECT 1&amp;quot; fails, and then the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal client connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point, and a smaller wd_interval, may mitigate the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on all PostgreSQL servers accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:Related FAQ is here https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later with pgpool-II 3.3.4 or later, or 3.4.0 or later, you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has the built-in function &amp;quot;to_regclass&amp;quot;, which works like pgpool_regclass.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of various check points to set up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend complaining to Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 or later mitigates the problem by changing the timeout value for connect (actually the select system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also pgpool-II 3.4 or later has a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. In Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, we recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with the debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. The watchdog&#039;s lifecheck starts only after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on the standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case where the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs, including cursor statements, to the standby nodes. Unfortunately, cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate on memory query cache and I got outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without the function, pgpool-II ignores the schema name part of a schema-qualified table name and the cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error message like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What does it mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools, including Zabbix and Nagios, periodically send a packet or ping to the port pgpool is listening on. Unfortunately those packets do not have correct contents, and pgpool-II complains about them. If you are not sure who is sending such packets, you can turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they come from such tools, you can stop the monitoring to avoid the problem or, even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects idle connections, and the next time Tomcat tries to send something to pgpool it fails with this error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However, this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:   timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:   minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will check every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of the two numbers below the client time-out on the pgpool side, connections should be closed on the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This helps when connections expire while waiting, so a disconnected connection is not handed to the application.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the one above to the master node for internal use. When a user query runs in extended protocol mode (sent from the JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, it does log that the query finished.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after certain minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL sends signal 2 to it and kills it. Depending on the size of the database, online recovery can take a very long time. Make sure to disable statement_timeout or set it to a long enough time.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fails ? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detects that node 0 returns &amp;quot;N&amp;quot; (a NOTICE message from PostgreSQL) while node 1 returns &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Though pgpool-II expects both node 0 and node 1 to return identical messages, here they do not, so pgpool-II throws an error.&lt;br /&gt;
: Probably certain log/message settings differ between node 0 and node 1. Please check client_min_messages or similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If a DB node returns false, that node is the primary node and the search is done.&lt;br /&gt;
: Because a node that is being promoted may still return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT query to each DB node until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
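: The retry loop described above can be sketched in shell; this is an illustrative model, not pgpool-II code, and check_node is a stub standing in for &amp;quot;SELECT pg_is_in_recovery()&amp;quot; (&amp;quot;f&amp;quot; means primary):&lt;br /&gt;

```shell
# Stub: node 0 answers "t" (standby), node 1 answers "f" (primary).
check_node() {
  case "$1" in 0) echo t ;; 1) echo f ;; esac
}

# Poll every node once per second until a primary is found or the
# timeout (a stand-in for search_primary_node_timeout) is exceeded.
find_primary() {
  timeout=$1
  elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    for node in 0 1; do
      if [ "$(check_node "$node")" = f ]; then
        echo "$node"
        return 0
      fi
    done
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

find_primary 3   # prints the primary node id: 1
```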
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a failover, because it makes PostgreSQL emit the same error code as a postmaster shutdown. Pgpool-II 3.6 or later mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use these functions, you need to identify not only the backend pid but also the backend server it runs on.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many processes, each corresponding to a client session. When failover occurs, a process may be iterating over the backends without knowing that one of them has gone down. This may result in incorrect processing or, in the worst case, a segfault. For this reason, when failover occurs, the pgpool-II parent process interrupts the child processes with a signal to make them exit. Note that a switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: From Pgpool-II 3.6, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process which lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 this restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits once execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a PCP worker process is created or finishes executing a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid and pool_passwd respectively. From now on I denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random number salt &amp;quot;s0&amp;quot; to frontend.&lt;br /&gt;
# Frontend replies back to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from steps 3 and 4 match, pgpool goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, with salts s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1) and sends it to b1; likewise it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If both b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
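: The hash computation in the steps above can be sketched with md5sum. This is illustrative only: the user, password and salt are made up, and the salt is shown as a hex string while the real protocol uses 4 raw bytes:&lt;br /&gt;

```shell
# Hypothetical credentials and salt, for illustration only.
user=alice pass=secret salt=0a1b2c3d

# S = md5(password + username): the stored secret (pg_authid / pool_passwd).
S=$(printf '%s%s' "$pass" "$user" | md5sum | awk '{print $1}')

# Wire response = "md5" + md5(S + salt), where S is the 32-char hex digest.
resp=md5$(printf '%s%s' "$S" "$salt" | md5sum | awk '{print $1}')

echo "$resp"
```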
&lt;br /&gt;
=== &#039;&#039;&#039;Why Pgpool-II does not automatically recognize a database comes back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it is not necessarily connected to the current primary node. It may be connected to a different primary node, or it may not be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs sent to the standby node could return different results from the primary, which would be a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does not do anything for recognizing the standby node as online. It just reloads configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically make a standby server online if it&#039;s safe enough. See configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, when the timeout occurs, PostgreSQL raises a FATAL error, which triggers failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, failover does not happen when the FATAL error occurs, but the session is still terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to less than idle_in_transaction_session_timeout, so that the pooled connection is discarded before the timeout fires. However, the connection pools are then removed whenever one of these conditions is satisfied, even when no FATAL error would have occurred, which may affect performance.&lt;br /&gt;
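: As a minimal sketch (the values are illustrative, not recommendations), assuming idle_in_transaction_session_timeout is 300 seconds on the PostgreSQL side, the pgpool.conf side might look like:&lt;br /&gt;

```
# pgpool.conf -- illustrative values only
fail_over_on_backend_error = off    # workaround 1: no failover on the FATAL error
connection_life_time = 240          # workaround 2: recycle pooled connections
child_life_time = 240               # before the 300-second timeout can fire
client_idle_limit = 240
```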
&lt;br /&gt;
=== &#039;&#039;&#039;How can I know the status of the PostgreSQL backends connected by Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity by matching it against its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using the dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How to run Pgpool-II with non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II using RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and set the setuid flag on the copies.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit the pgpool.service file so that Pgpool-II is started as the postgres user&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy the ip/ifconfig/arping commands to a location the user can access, and set the setuid flag on them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These tools are not aware of each other. You should use Pgpool-II without repmgr, or repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails in CentOS6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II doesn&#039;t support GSSAPI authentication yet, but GSSAPI is requested on CentOS 6. Therefore, the connection attempt will fail on CentOS 6.&lt;br /&gt;
: A workaround is to set an environment variable to disable GSSAPI encryption in the client:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog standby does not take over master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason why you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even-numbered (i.e. 2, 4, 6...) watchdog clusters. Odd-numbered clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2-node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome, because no new primary PostgreSQL is elected if the existing primary goes down. Probably 2-node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of clusters with 4 or more (an even number of) watchdog nodes benefit from leaving this parameter off, because it prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
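: For a 2-node watchdog cluster that should keep the pre-4.1 behavior, the setting is simply:&lt;br /&gt;

```
# pgpool.conf (Pgpool-II 4.1 or later)
enable_consensus_with_half_votes = on
```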
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What does &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot; mean?&#039;&#039;&#039; ===&lt;br /&gt;
: This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into the pgpool log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: This tells you that you need to fix pg_hba.conf on backend 1.&lt;br /&gt;
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords; neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set a clear text password in pool_passwd.&lt;br /&gt;
: Related bug tracker entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
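: A hedged sketch of such a pool_passwd entry (the user name and password below are made up): in Pgpool-II 4.0 or later, a clear text password is stored with the TEXT prefix:&lt;br /&gt;

```
# pool_passwd -- hypothetical entry; the TEXT prefix marks a clear text password
azureuser:TEXTmysecretpassword
```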
&lt;br /&gt;
=== &#039;&#039;&#039;Why does show pool_nodes not show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons.&lt;br /&gt;
: The first is that sr_check_user does not have enough privilege to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
: The other possible reason is the setting of the backend_application_name parameter for the standby node in pgpool.conf. It must match the application_name in the primary_conninfo parameter in the standby&#039;s postgresql.conf.&lt;br /&gt;
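: A minimal sketch of the matching settings, assuming a standby named &amp;quot;server1&amp;quot; at backend index 1 (all names here are hypothetical):&lt;br /&gt;

```
# postgresql.conf on the standby (hypothetical host and user names):
#   primary_conninfo = 'host=primary-host port=5432 user=repl application_name=server1'
# pgpool.conf -- the name for backend index 1 must match:
backend_application_name1 = 'server1'
```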
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3760</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3760"/>
		<updated>2022-12-28T00:39:30Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why configure fails by &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in libpq-dev package. You need to install it before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why records inserted on the primary node do not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? Then it&#039;s a known limitation of streaming replication. The inserted record is there. But if you SELECT the record using the hash index, it will not appear. Hash index changes do not produce WAL record thus they are not reflected to the standby nodes. Solutions are: 1) use btree index instead 2) use pgpool-II native replication.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand, you can mix different minor versions, for example 9.0.3 and 9.0.4. Pgpool-II assumes that every backend always sends identical messages. Different major versions of PostgreSQL may send different messages, and this would cause trouble for pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. On the other hand, pgpool-II&#039;s replication mode only requires the database clusters to be logically identical. Beware, however, that the online recovery script must not use rsync or similar tools, which do physical copying between database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node used for load balancing is selected at the beginning of a session, and all SQL statements are sent to that DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is inside an explicit transaction. If it is, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, a SELECT is load balanced even inside a transaction when operating in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method used to choose a DB node is not LRU or the like. Pgpool-II chooses a DB node randomly, taking the &amp;quot;weight&amp;quot; parameter in pgpool.conf into account. This means that the chosen DB nodes are not uniformly distributed in the short term. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode, i.e. DECLARE...FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure that you start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or you can use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II is complaining that a client disconnected without sending the packet. You can reproduce the problem using psql: connect to pgpool with psql, then kill -9 the psql process. You will see a similar message in the log. The message does not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server and checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later uses sr_check_user and sr_check_password for this instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates its concurrent connections (the number is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. So if the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck, because it waits forever for pgpool to accept the connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; once that limit is reached, new sessions are queued). PostgreSQL, on the other hand, simply refuses sessions beyond max_connections, so in that case you just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help, here is a table describing the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{|style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between the client and pgpool-II, and 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other: you can enable SSL for only #1, only #2, or both. Here I explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with openssl. If you build from source code, use --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First create a server certificate. In the command below you will be asked for a PEM pass phrase (it will also be asked for when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place. Suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure that you use cp -p to retain the appropriate permissions on server.key.&lt;br /&gt;
Alternatively, you can set the permissions later:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, the SSL connection between the client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option: SSL only works over TCP/IP,&lt;br /&gt;
not over Unix domain sockets. &lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pool_regclass function coming with pgpool. Without pool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy max_connections &amp;gt;= (num_init_children * max_pool) and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: You need to understand how pgpool uses these variables. Here is the internal processing inside pgpool.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for a connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;A pgpool child receives a connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for an existing connection in the pool that&lt;br /&gt;
   has the requested database/user pair, up to max_pool.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt;If not found, open a new connection to PostgreSQL and register it in&lt;br /&gt;
   the pool. If the pool has no empty slot, close the oldest&lt;br /&gt;
   connection to PostgreSQL and reuse its slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends a session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
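The arithmetic behind the formula follows directly from the steps above: each of the num_init_children children can hold up to max_pool backend connections, and step 7 keeps them open even after the client disconnects. A minimal sketch with hypothetical example values:

```python
# Each of the num_init_children pgpool children may cache up to max_pool
# backend connections (one per distinct database/user pair) and keeps them
# open even after its client disconnects. In the worst case every child has
# a full pool, so PostgreSQL's max_connections must cover the product.
num_init_children = 32   # hypothetical example value
max_pool = 4             # hypothetical example value

required_max_connections = num_init_children * max_pool
print(required_max_connections)  # 128
```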
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in a pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a connection cache for database A/user B, process 12346 does not, and both 12345 and 12346 are idle (no client is connecting at this point). If a client connects to pgpool process 12345 with database A/user B, the existing connection of 12345 is reused. On the other hand, if a client connects to pgpool process 12346, 12346 needs to create a new connection. Whether 12345 or 12346 is chosen is not under pgpool&#039;s control. However, in the long run each pgpool child process will be chosen equally often, so it is expected that each process&#039;s pool will be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions if they are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You must have given the -f option as a relative path (i.e. &amp;quot;-f pgpool.conf&amp;quot;) rather than a full path (i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;). Pgpool tries to locate the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without the -n option, it changes the current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers go into down status in streaming replication mode and see PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows that SELECTs on a standby server want to see. A workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system pgpool-II is running on spikes as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users state that this is observed only on Linux kernel 3.0; 2.6 and 3.2 do not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussion in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog health check fails because its &amp;quot;select 1&amp;quot; query fails, and then the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal clients&#039; connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point and a smaller wd_interval may prevent the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on every PostgreSQL server accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:A related FAQ entry is here: https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later with pgpool-II 3.3.4 or later, or 3.4.0 or later, you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has a built-in pgpool_regclass-like function, &amp;quot;to_regclass&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of various check points to set up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend contacting Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 or later mitigates the problem by changing the timeout value for connect() (actually the select() system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also, pgpool-II 3.4 or later has a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. In Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, I recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with the debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. Watchdog&#039;s life check starts after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case when the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs, including cursor statements, to the standby node. Unfortunately, cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate on memory query cache and I got outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without the function, pgpool-II ignores the schema name part of the schema-qualified table name and the cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error message like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What does it mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools including Zabbix and Nagios periodically send a packet or ping to the port pgpool is listening on. Unfortunately, those packets do not have the correct contents, and pgpool-II complains about them. If you are not sure who is sending such packets, you can turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they are from such tools, you can stop the monitoring to avoid the problem or, even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects the connection, and the next time Tomcat tries to send something to pgpool it breaks with the error message above.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However, this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:    timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:    minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will try every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client time-out on the pgpool side, connections should be closed at the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This will help with connections should they expire while waiting, without supplying a disconnected connection to the application.&lt;br /&gt;
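To see how the numbers above interact, here is a small check. The two Tomcat-side values are the defaults quoted above; the pgpool-side client_idle_limit value is an assumed example, not from the original answer:

```python
# Tomcat's eviction thread runs every timeBetweenEvictionRunsMillis and
# closes connections idle for at least minEvictableIdleTimeMillis, so a
# connection is closed on the Tomcat side at most the sum of the two after
# its last use. Keeping that sum below pgpool's client_idle_limit means
# Tomcat always closes the connection before pgpool does.
time_between_eviction_runs_ms = 5000   # Tomcat default quoted above
min_evictable_idle_time_ms = 60000     # Tomcat default quoted above
client_idle_limit_s = 90               # assumed pgpool.conf value (example)

worst_case_close_ms = time_between_eviction_runs_ms + min_evictable_idle_time_ms
# Tomcat closes first as long as this holds:
assert client_idle_limit_s * 1000 > worst_case_close_ms
```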
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the above to the master node for internal use. When a user query runs in extended protocol mode (sent from the JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, it logs that the query finished.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after certain minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL cancels it. Depending on the size of the database, the online recovery process can take a very long time. Make sure to disable statement_timeout or set it to a long enough time.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fails ? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detects that node 0 returned &amp;quot;N&amp;quot; (a NOTICE message from PostgreSQL) while node 1 returned &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Pgpool-II expects both node 0 and node 1 to return identical messages, but here they did not, so pgpool-II threw an error.&lt;br /&gt;
: Probably certain log/message settings differ between node 0 and node 1. Please check client_min_messages or similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If one of the DB nodes returns false, that node is the primary node and the search is done.&lt;br /&gt;
: Because a node that is still being promoted can return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT query to each DB node again until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
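The search loop described above can be sketched as follows. This is a simplified model, not pgpool-II&#039;s actual code; the is_in_recovery callback stands in for issuing the pg_is_in_recovery() query to a node:

```python
import time

def find_primary(nodes, is_in_recovery, timeout_seconds):
    """Poll every node until one reports it is not in recovery (the primary),
    or the total sleep time reaches search_primary_node_timeout."""
    waited = 0
    while True:
        for node in nodes:
            if not is_in_recovery(node):
                return node          # this node answered false: it is the primary
        if waited >= timeout_seconds:
            return None              # no primary found within the timeout
        time.sleep(1)                # a promoting node may still report true
        waited += 1
```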
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a failover, because it makes PostgreSQL emit an error code identical to a postmaster shutdown. Pgpool-II 3.6 or greater mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many processes, where each process corresponds to a client session. When failover occurs, each process may be iterating over a loop for each backend without knowing a backend went down. This may result in incorrect processing, or a segfault in the worst case. For this reason, when failover occurs, the pgpool-II parent process interrupts the child processes using a signal to make them exit. Note that switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: From Pgpool-II 3.6, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle single PCP command at a time and all PCP commands were handled by a single PCP child process which lives throughout the lifespan of pgpool-II main process. In pgpool II 3.5 the restriction of single PCP command is removed and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool a new PCP child process is forked and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a new PCP worker process is created or completes execution for a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid or pool_passwd. From now on I denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from #3 and #4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, and the salts are s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1), and sends it to b1; likewise, it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If b1 and b2 both accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
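The hash chain in the steps above can be sketched in a few lines. This is a simplified model of PostgreSQL-style md5 auth, not pgpool&#039;s actual code; the user name, password, and salt values are made up for illustration:

```python
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# S = md5(password + username), as stored in pool_passwd / pg_authid
S = md5_hex(b"secret" + b"alice")

def md5_response(secret: str, salt: bytes) -> str:
    # answer to a salt challenge: "md5" followed by md5(S + salt)
    return "md5" + md5_hex(secret.encode() + salt)

s0 = b"\x01\x02\x03\x04"                 # salt pgpool sent to the frontend
frontend_answer = md5_response(S, s0)    # step 3: what the frontend replies
pgpool_check = md5_response(S, s0)       # step 4: pgpool's own computation
assert frontend_answer == pgpool_check   # match: auth proceeds to the backends

# steps 5-6: pgpool re-answers each backend's distinct salt from the same S
s1, s2 = b"\xaa\xbb\xcc\xdd", b"\x11\x22\x33\x44"
answer_b1 = md5_response(S, s1)
answer_b2 = md5_response(S, s2)
```

Note that pgpool can answer each backend&#039;s distinct salt only because it holds S itself in pool_passwd, which is why the file is required for md5 auth.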
&lt;br /&gt;
=== &#039;&#039;&#039;Why Pgpool-II does not automatically recognize a database comes back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible, but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean it connects to the current primary node. It may connect to a different primary node, or it may not even be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs to the standby node would return different results from the primary, which is a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does not do anything to recognize the standby node as online. It just reloads the configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically make a standby server online if it&#039;s safe enough. See configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, after the timeout occurs, PostgreSQL raises a FATAL error, which triggers failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: These are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, failover will not happen if the FATAL error occurs, but the session will still be terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to less than idle_in_transaction_session_timeout. With this, the session will not be terminated by the FATAL error. However, connection pools are removed whenever one or more of these conditions is satisfied, even if the FATAL error does not occur, which may affect performance.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I know PostgreSQL backend status connected by Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity by matching its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text, majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How to run Pgpool-II with non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II using RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit the pgpool.service file to start Pgpool-II as the postgres user&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy the ip/ifconfig/arping commands to a location the user can access, and add the setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These two pieces of software are not aware of each other. You should use Pgpool-II without repmgr, or repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails in CentOS6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II doesn&#039;t support GSSAPI authentication yet, but GSSAPI is requested in CentOS 6. Therefore, the connection attempt will fail on CentOS 6.&lt;br /&gt;
: A workaround is to set an environment variable to disable GSSAPI encryption in the client:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog standby does not take over master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason why you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even numbers (i.e. 2, 4, 6...) of watchdog clusters. Odd numbers of clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2 node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome because no new primary PostgreSQL exists if the existing primary goes down. Probably 2 node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of 4 or more even-numbered watchdog clusters will benefit from leaving this parameter off because it now prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I got an error &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot;. What does it mean?&#039;&#039;&#039; ===&lt;br /&gt;
:This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from the backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: From this you can tell that you need to fix backend 1&#039;s pg_hba.conf.&lt;br /&gt;
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords. Neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set a clear text password in pool_passwd.&lt;br /&gt;
: Related bug tracker entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does show pool_nodes not show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons.&lt;br /&gt;
: The first is that sr_check_user does not have enough privileges to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query, etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3759</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3759"/>
		<updated>2022-12-28T00:38:23Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II Frequently Asked Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does configure fail with &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in the libpq-dev package. You need to install it before running configure.&lt;br /&gt;
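: On Debian/Ubuntu, for example:&lt;br /&gt;
: &amp;lt;pre&amp;gt;sudo apt-get install libpq-dev&amp;lt;/pre&amp;gt;&lt;br /&gt;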
&lt;br /&gt;
=== &#039;&#039;&#039;Why do records inserted on the primary node not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? Then it&#039;s a known limitation of streaming replication. The inserted record is there, but if you SELECT the record using the hash index, it will not appear. Hash index changes do not produce WAL records, thus they are not reflected on the standby nodes. Solutions are: 1) use a btree index instead 2) use pgpool-II native replication.&lt;br /&gt;
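: For solution 1), replacing the hash index with a btree index might look like this (the index, table and column names here are made-up examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DROP INDEX my_hash_idx;&lt;br /&gt;
CREATE INDEX my_btree_idx ON my_table USING btree (my_col);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;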
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand you can mix different minor versions of PostgreSQL, for example 9.0.3 and 9.0.4. Pgpool-II assumes that messages from PostgreSQL to pgpool-II are always identical. Different major versions of PostgreSQL may send different messages, and this would cause trouble for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. On the other hand, pgpool-II&#039;s replication mode only requires that the database clusters be logically identical. Beware, however, that your online recovery script must not use rsync or similar tools, which do physical copying between database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node selection for load balancing is decided at the beginning of a session, so all SQL statements are sent to the same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is in an explicit transaction or not. If the statement is in a transaction, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, SELECTs are load balanced even in a transaction when operating in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method for choosing a DB node is not LRU or anything similar. Pgpool-II chooses a DB node randomly, weighted by the &amp;quot;weight&amp;quot; parameter in pgpool.conf. This means the chosen DB nodes are not uniformly distributed in the short term. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
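: For example, to send roughly twice as many load-balanced sessions to node 1 as to node 0, the weights in pgpool.conf might be set like this (a sketch; the node numbering is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backend_weight0 = 1&lt;br /&gt;
backend_weight1 = 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;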
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode, i.e. DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure that you start pgpool-II with the &amp;quot;-n&amp;quot; option to get the pgpool-II log (or you can use syslog in pgpool-II 3.1 or later).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II complains that clients disconnected without sending the packet. You can reproduce the problem using psql: connect to pgpool using psql, then kill -9 psql. You will see a similar message in the log. The message will not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server or checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later uses sr_check_user and sr_check_password for this instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number of connections is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. So if the number of concurrent connections specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck because it waits forever for pgpool to accept the connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; when that limit is reached, new sessions are queued). PostgreSQL, on the other hand, does not accept more concurrent sessions than max_connections, so in that case you just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help, here is a table describing the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{| style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
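: To register an md5 entry in pool_passwd, pgpool&#039;s pg_md5 tool can be used, for example (the user name and password are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pg_md5 --md5auth --username=myuser mypassword&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;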
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between client and pgpool-II 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other. For example, you can enable SSL only for #1, only for #2, or for both. Here we explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with openssl. If you build from source code, use --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First create the server certificate. In the command below you will be asked for a PEM pass phrase (it will also be asked for when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place. Suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure that you use cp -p to retain the appropriate permissions of server.key.&lt;br /&gt;
Alternatively you can set the permissions later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, SSL connection between client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option: SSL only works with TCP/IP;&lt;br /&gt;
it does not work over Unix domain sockets.&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pgpool_regclass function that comes with pgpool. Without pgpool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy the formula max_connections &amp;gt;= (num_init_children * max_pool), and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: You need to understand how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;pgpool child receives connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for existing connection in the pool which&lt;br /&gt;
   has requested database/user pair up to max_pool.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt;If not found, opens a new connection to PostgreSQL and registers it in&lt;br /&gt;
   the pool.  If the pool has no empty slot, closes the oldest&lt;br /&gt;
   connection to PostgreSQL and reuses the slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Does some query processing until the client sends a session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Closes the connection to the client but keeps the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Goes back to #1.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
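: In other words, each of the num_init_children pgpool children may keep up to max_pool backend connections cached, so PostgreSQL can see up to num_init_children * max_pool connections at once. A concrete sketch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pgpool.conf&lt;br /&gt;
num_init_children = 32&lt;br /&gt;
max_pool = 4&lt;br /&gt;
# postgresql.conf must then satisfy 32 * 4 = 128:&lt;br /&gt;
max_connections = 128&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;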
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a cached connection for database A/user B, process 12346 does not, and both 12345 and 12346 are idle (no client is connecting at this point). If a client connects to pgpool process 12345 with database A/user B, the existing connection of 12345 is reused. On the other hand, if the client connects to pgpool process 12346, 12346 needs to create a new connection. Whether 12345 or 12346 is chosen is not under pgpool&#039;s control. However, in the long run each pgpool child process will be chosen equally often, and it is expected that each process&#039;s pool will be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why are my SELECTs not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions if they are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because they might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You must have given the -f option as a relative path, i.e. &amp;quot;-f pgpool.conf&amp;quot;, rather than a full path, i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;. Pgpool derives the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without -n, it changes the current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
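: For example, when running as a daemon (without -n), give the full path:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgpool -f /usr/local/etc/pgpool.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;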
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers go into down status in streaming replication mode and see PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows that SELECTs on the standby server still need to see. A workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
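: A minimal sketch of the workaround in the standby&#039;s postgresql.conf (alternatively, raising max_standby_streaming_delay can also reduce these cancellations, at the cost of allowing more replication delay):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hot_standby_feedback = on&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;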
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system pgpool-II is running on gets as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users state that this is observed only on Linux kernel 3.0; 2.6 and 3.2 do not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussion in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When the watchdog is enabled and the number of connections reaches num_init_children, a VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the number of connections reaches num_init_children, the watchdog check fails because its &amp;quot;SELECT 1&amp;quot; query fails, and then the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal client connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point and a smaller wd_interval may prevent the problem somewhat.&lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on every PostgreSQL server accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:Related FAQ is here https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later and pgpool-II 3.3.4 or later (or 3.4.0 or later), you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has the built-in function &amp;quot;to_regclass&amp;quot;, which works like pgpool_regclass.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of various check points to set up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend complaining to Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 or later mitigates the problem by changing the timeout value for connect (actually the select system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also, pgpool-II 3.4 or later has a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. In Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, we recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. Watchdog&#039;s lifecheck will start after all of the pgpools have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on the standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case where the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs to the standby nodes, including cursor statements. Unfortunately, cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate on memory query cache and I got outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without the function, pgpool-II ignores the schema name part of the schema-qualified table name and the cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error messages like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What do they mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools including Zabbix and Nagios periodically send a packet or ping to the port which pgpool is listening on. Unfortunately those packets do not have correct contents, and pgpool-II complains about it. If you are not sure who is sending such a packet, you could turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they come from such tools, you could stop the monitoring to avoid the problem, or even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to a non-zero value, pgpool disconnects the connection, and the next time Tomcat tries to send something to pgpool it breaks with the error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However, this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:   timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:   minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will try every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client time-out on the pgpool side, connections should be closed on the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This will help when connections expire while waiting, without supplying a disconnected connection to the application.&lt;br /&gt;
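:  Putting those attributes together, a Tomcat JDBC pool resource definition might look like this sketch (the resource name, JDBC URL and credentials are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;Resource name=&amp;quot;jdbc/mydb&amp;quot; auth=&amp;quot;Container&amp;quot; type=&amp;quot;javax.sql.DataSource&amp;quot;&lt;br /&gt;
          factory=&amp;quot;org.apache.tomcat.jdbc.pool.DataSourceFactory&amp;quot;&lt;br /&gt;
          url=&amp;quot;jdbc:postgresql://pgpool-host:9999/mydb&amp;quot;&lt;br /&gt;
          username=&amp;quot;myuser&amp;quot; password=&amp;quot;mypassword&amp;quot;&lt;br /&gt;
          minIdle=&amp;quot;0&amp;quot;&lt;br /&gt;
          timeBetweenEvictionRunsMillis=&amp;quot;5000&amp;quot;&lt;br /&gt;
          minEvictableIdleTimeMillis=&amp;quot;60000&amp;quot;&lt;br /&gt;
          testOnBorrow=&amp;quot;true&amp;quot;&lt;br /&gt;
          validationQuery=&amp;quot;SELECT version();&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;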
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the above to the master node for internal use. When a user query runs in extended protocol mode (sent from a JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, it logs that the query finished.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after a certain number of minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL cancels it. Depending on the size of the database, the online recovery process can take a very long time. Make sure to disable statement_timeout or set it to a long enough value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fail? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detects that node 0 returned &amp;quot;N&amp;quot; (a NOTICE message from PostgreSQL) while node 1 returned &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Pgpool-II expects node 0 and node 1 to return identical messages; here they did not, so pgpool-II threw an error.&lt;br /&gt;
: Probably certain log/message settings differ between node 0 and node 1. Please check client_min_messages or similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If one of the DB nodes returns false, that node is the primary node and the search is done.&lt;br /&gt;
: Because a node that is still being promoted could return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT to each DB node again until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a failover, because it makes PostgreSQL emit the same error code as a postmaster shutdown. Pgpool-II 3.6 or greater mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when failover happens?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpool-II consists of many processes, each of which corresponds to a client session. When failover occurs, each process may be iterating over the backends without knowing that one of them has gone down. This may result in incorrect processing, or a segfault in the worst case. For this reason, when failover occurs, the pgpool-II parent process interrupts the child processes with a signal to make them exit. Note that a switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: From Pgpool-II 3.6 or greater, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process which lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 this restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal: they are generated whenever a new PCP worker process is created or completes execution of a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid or pool_passwd. From now on I denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies back to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values from #3 and #4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, and the salts are s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1) and sends it to b1; it likewise calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
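The hash computations in the steps above can be illustrated in Python. This is a sketch only: the real wire protocol prefixes the response with the string &quot;md5&quot; and uses a 4-byte binary salt, and the credentials here are made up:

```python
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# Step 1: the stored secret S = md5(password + username),
# as kept in pg_authid / pool_passwd
def stored_secret(password: str, username: str) -> str:
    return md5_hex(password.encode() + username.encode())

# Steps 3-4 (and 6): the challenge response md5(S + salt)
def challenge_response(secret: str, salt: bytes) -> str:
    return md5_hex(secret.encode() + salt)

S = stored_secret("secret", "t-ishii")
frontend_answer = challenge_response(S, b"\x01\x02\x03\x04")
# pgpool recomputes the same value from pool_passwd and compares
assert frontend_answer == challenge_response(S, b"\x01\x02\x03\x04")
```

Note that pgpool never needs the clear text password: the stored secret S is enough to answer the backends' challenges on the client's behalf.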
&lt;br /&gt;
=== &#039;&#039;&#039;Why doesn&#039;t Pgpool-II automatically recognize that a database has come back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible, but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean it connects to the current primary node. It may connect to a different primary node, or it may not even be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs sent to the standby node could return different results from the primary, which would be a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does not do anything for recognizing the standby node as online. It just reloads configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically make a standby server online if it&#039;s safe enough. See configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately after the time out occurs, PostgreSQL raises a FATAL error, which triggers failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover.&lt;br /&gt;
* Disable fail_over_on_backend_error. With this, failover will not happen if the FATAL error occurs, but the session will still be terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to less than idle_in_transaction_session_timeout. This avoids the session being terminated by the FATAL error. However, even if the FATAL error does not occur, the connection pools are removed whenever one or more of those items satisfy the specified condition, which may affect performance.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I check the status of PostgreSQL backends connected by Pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity matching against its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How to run Pgpool-II with non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II using RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. But root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit pgpool.service file to use postgres user to start Pgpool-II&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy ip/ifconfig/arping commands to somewhere where the user has access permissions and add setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These tools are not designed to work with each other. You should use Pgpool-II without repmgr, or use repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails in CentOS6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II doesn&#039;t support GSSAPI authentication yet, but GSSAPI is requested in CentOS 6. Therefore, the connection attempt will fail on CentOS 6.&lt;br /&gt;
: A workaround is to set an environment variable to disable GSSAPI encryption on the client:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog standby does not take over master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even numbers (i.e. 2, 4, 6...) of watchdog clusters. Odd-number clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2-node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome, because no new primary PostgreSQL will exist if the existing primary goes down. Probably 2-node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of clusters with 4 or more (even-numbered) watchdog nodes will benefit from keeping this parameter off, because it now prevents possible split brain when half of the watchdog nodes go down.&lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting the error &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot;?&#039;&#039;&#039; ===&lt;br /&gt;
:This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: This tells you that you need to fix pg_hba.conf on backend 1.&lt;br /&gt;
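The code points in the DETAIL line can be decoded with a couple of lines (an illustrative snippet, not part of pgpool):

```python
# "main(52) slot[1] (45)": the numbers are message-kind bytes in hexadecimal
main_kind = chr(0x52)    # 'R' -- normal authentication-related response
slot1_kind = chr(0x45)   # 'E' -- ErrorResponse, here from backend 1
print(main_kind, slot1_kind)
```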
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords. Neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set a clear text password in pool_passwd.&lt;br /&gt;
: related bug track entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does show pool_nodes not show replication delay even though I set delay_threshold_by_time to 1?&#039;&#039;&#039; ===&lt;br /&gt;
: There are two possible reasons. The first is that sr_check_user does not have enough privilege to query the pg_stat_replication view. Please consult the [https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS PostgreSQL manual] for more details.&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3758</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=FAQ&amp;diff=3758"/>
		<updated>2022-12-28T00:25:34Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Why does not Pgpool-II automatically recognize a database comes back online? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why configure fails by &amp;quot;pg_config not found&amp;quot; on my Ubuntu box?&#039;&#039;&#039; ===&lt;br /&gt;
: pg_config is in libpq-dev package. You need to install it before running configure.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why records inserted on the primary node do not appear on the standby nodes?&#039;&#039;&#039; ===&lt;br /&gt;
: Are you using streaming replication and a hash index on the table? Then it&#039;s a known limitation of streaming replication. The inserted record is there, but if you SELECT the record using the hash index, it will not appear. Hash index changes do not produce WAL records, thus they are not reflected on the standby nodes. Solutions are: 1) use a btree index instead, or 2) use pgpool-II native replication.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different versions of PostgreSQL as pgpool-II backends?&#039;&#039;&#039; ===&lt;br /&gt;
: You cannot mix different major versions of PostgreSQL, for example 8.4.x and 9.0.x. On the other hand, you can mix different minor versions of PostgreSQL, for example 9.0.3 and 9.0.4. Pgpool-II assumes that the messages sent from each PostgreSQL backend to pgpool-II are always identical. Different major versions of PostgreSQL may send different messages, and this would cause trouble for Pgpool-II.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I mix different platforms of PostgreSQL as pgpool-II backends, for example Linux and Windows?&#039;&#039;&#039; ===&lt;br /&gt;
: In streaming replication mode, no, because streaming replication requires that the primary and standby platforms be physically identical. On the other hand, pgpool-II&#039;s replication mode only requires the database clusters to be logically identical. Beware, however, that the online recovery script must not use rsync or the like, which do physical copying among database clusters. You want to use pg_dumpall instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;It seems my pgpool-II does not do load balancing. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: First of all, pgpool-II&#039;s load balancing is &amp;quot;session based&amp;quot;, not &amp;quot;statement based&amp;quot;. That means the DB node selection for load balancing is decided at the beginning of a session, so all SQL statements are sent to the same DB node until the session ends.&lt;br /&gt;
&lt;br /&gt;
: Another point is whether the statement is in an explicit transaction or not. If the statement is in a transaction, it will not be load balanced in replication mode. In pgpool-II 3.0 or later, SELECTs are load balanced even inside a transaction when operating in master/slave mode.&lt;br /&gt;
&lt;br /&gt;
: Note that the method to choose a DB node is not LRU or the like. Pgpool-II chooses a DB node randomly, considering the &amp;quot;weight&amp;quot; parameter in pgpool.conf. This means that in the short term the chosen DB nodes are not uniformly distributed. You might want to inspect the effect of load balancing after ~100 queries have been sent.&lt;br /&gt;
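The weighted random choice can be sketched as follows (illustrative Python, not pgpool-II&#039;s actual implementation; `choose_node` is a made-up name):

```python
import random

def choose_node(weights, rnd=random.random):
    """Pick a backend id with probability proportional to its weight,
    as pgpool-II does once per session (not per statement)."""
    total = sum(weights)
    r = rnd() * total
    acc = 0.0
    for node, w in enumerate(weights):
        acc += w
        if r < acc:
            return node
    return len(weights) - 1  # guard against floating point edge cases

# Over many sessions the distribution approaches the weight ratio;
# within a handful of sessions it can look quite lopsided.
counts = [0, 0]
for _ in range(1000):
    counts[choose_node([1, 1])] += 1
```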
&lt;br /&gt;
: Also, cursor statements are not load balanced in replication mode, i.e. DECLARE..FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE. Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I observe the effect of load balancing?&#039;&#039;&#039; ===&lt;br /&gt;
: We recommend enabling the &amp;quot;log_per_node_statement&amp;quot; directive in pgpool.conf for this. Here is an example of the log:&lt;br /&gt;
: &amp;lt;pre&amp;gt;2011-05-07 08:42:42 LOG:   pid 22382: DB node id: 1 backend pid: 22409 statement: SELECT abalance FROM pgbench_accounts WHERE aid = 62797;&amp;lt;/pre&amp;gt;&lt;br /&gt;
: The &amp;quot;DB node id: 1&amp;quot; shows which DB node was chosen for this load balancing session.&lt;br /&gt;
&lt;br /&gt;
: Please make sure that you start pgpool-II with &amp;quot;-n&amp;quot; option to get pgpool-II log. (or you can use syslog in pgpool-II 3.1 or later)&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;ProcessFrontendResponse: failed to read kind from frontend. frontend abnormally exited&amp;quot; in my pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Well, your clients might be ill-behaved:-) PostgreSQL&#039;s protocol requires clients to send a particular packet before they disconnect. pgpool-II is complaining that a client disconnected without sending the packet. You can reproduce the problem using psql: connect to pgpool using psql, then kill -9 the psql process. You will see a similar message in the log. The message will not appear if you quit psql normally. Another possibility is an unstable network connection between your client machine and pgpool-II. Check the cable and network interface card.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool-II in streaming replication mode. It seems it works but I find following errors in the log. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;E&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: s_do_auth: unknown response &amp;quot;[&amp;quot; before processing BackendKeyData&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: pool_read2: EOF encountered with backend&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: make_persistent_db_connection: s_do_auth failed&lt;br /&gt;
2011-07-19 08:21:59 ERROR: pid 10727: find_primary_node: make_persistent_connection failed&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: pgpool-II tries to connect to PostgreSQL to execute some functions such as pg_current_xlog_location(), which is used for detecting the primary server and checking replication delay. The messages above indicate that pgpool-II failed to connect with user = health_check_user and password = health_check_password. You need to set them properly even if health_check_period = 0.&lt;br /&gt;
&lt;br /&gt;
: Note that pgpool-II 3.1 or later will use sr_check_user and sr_check_password for it instead.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I run pgbench to test pgpool-II, pgbench hangs. If I directly run pgbench against PostgreSQL, it works fine. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgbench creates concurrent connections (the number of connections is specified by the &amp;quot;-c&amp;quot; option) before starting the actual transactions. So if the number of concurrent transactions specified by &amp;quot;-c&amp;quot; exceeds num_init_children, pgbench will get stuck, because it will wait forever for pgpool to accept the connections (remember that pgpool-II accepts up to num_init_children concurrent sessions; once that number is reached, new sessions are queued). On the other hand, PostgreSQL does not accept more concurrent sessions than max_connections, so in that case you will just see PostgreSQL errors rather than connection blocking. If you want to test pgpool-II&#039;s connection queuing, you can use psql instead of pgbench. In the example session below, num_init_children = 1 (this is not a recommended setting in the real world; it is just for simplicity).&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;$ psql test &amp;lt;-- connect to pgpool from terminal #1&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &lt;br /&gt;
$ psql test &amp;lt;-- tries to connect to pgpool from terminal #2 but it is blocked.&lt;br /&gt;
test=# SELECT 1; &amp;lt;--- do something from terminal #1 psql&lt;br /&gt;
test=# \q &amp;lt;-- quit psql session on terminal #1&lt;br /&gt;
psql (9.1.1) &amp;lt;-- now psql on terminal #2 accepts session&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
test=# &amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I created pool_hba.conf and pool_passwd to enable md5 authentication through pgpool-II but it does not work. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Probably you made a mistake somewhere. To help, here is a table which describes the error patterns depending on the settings of pg_hba.conf, pool_hba.conf and pool_passwd.&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&lt;br /&gt;
{|style=&amp;quot;background:white; color:black&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|pg_hba.conf&lt;br /&gt;
|pool_hba.conf&lt;br /&gt;
|pool_passwd&lt;br /&gt;
|result&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|md5 auth&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|md5&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|MD5 authentication is unsupported in replication, master-slave and parallel mode&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|yes&lt;br /&gt;
|no auth&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|md5&lt;br /&gt;
|no&lt;br /&gt;
|&amp;quot;MD5&amp;quot; authentication with pgpool failed for user &amp;quot;XX&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|trust&lt;br /&gt;
|trust&lt;br /&gt;
|yes/no&lt;br /&gt;
|no auth&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I set up SSL for pgpool-II?&#039;&#039;&#039; ===&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;SSL support for pgpool-II consists of two parts: 1) between the client and pgpool-II, and 2) between pgpool-II and PostgreSQL. #1 and #2 are independent of each other. For example, you can enable SSL only for #1, only for #2, or for both. I will explain #1 (for #2, please take a look at the PostgreSQL documentation).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure that pgpool is built with openssl. If you build from source code, use --with-openssl option.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First create a server certificate. In the command below you will be asked for a PEM pass phrase (it will also be asked when pgpool starts up).&lt;br /&gt;
If you want to start pgpool without being asked for the pass phrase, you can remove it later.&lt;br /&gt;
([[sample server certficate create session]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -new -text -out server.req&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Remove PEM pass phrase if you want.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl rsa -in privkey.pem -out server.key&lt;br /&gt;
Enter pass phrase for privkey.pem:&lt;br /&gt;
writing RSA key&lt;br /&gt;
$ rm privkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Turn the certificate into a self-signed certificate.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openssl req -x509 -in server.req -text -key server.key -out server.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy server.key and server.crt to an appropriate place. Suppose we copy them to /usr/local/etc.&lt;br /&gt;
Make sure that you use cp -p to retain the appropriate permissions of server.key.&lt;br /&gt;
Alternatively you can set permission later.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod og-rwx /usr/local/etc/server.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Set the certificate and key location in pgpool.conf.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssl = on&lt;br /&gt;
ssl_key = &#039;/usr/local/etc/server.key&#039;&lt;br /&gt;
ssl_cert = &#039;/usr/local/etc/server.crt&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Restart pgpool.&lt;br /&gt;
To confirm SSL connection between client and pgpool is working, connect to pgpool using psql.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
psql -h localhost -p 9999 test&lt;br /&gt;
psql (9.1.1)&lt;br /&gt;
SSL connection (cipher: AES256-SHA, bits: 256)&lt;br /&gt;
Type &amp;quot;help&amp;quot; for help.&lt;br /&gt;
&lt;br /&gt;
test=# \q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you see &amp;quot;SSL connection...&amp;quot;, SSL connection between client and pgpool is working.&lt;br /&gt;
Please make sure to use the &amp;quot;-h localhost&amp;quot; option, because SSL only works over TCP/IP;&lt;br /&gt;
it does not work over Unix domain sockets.&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m using pgpool-II in replication mode. I expected that pgpool-II replaces current_timestamp call with time constants in my INSERT query, but actually it doesn&#039;t. Why?&#039;&#039;&#039; ===&lt;br /&gt;
:Probably your INSERT query uses a schema-qualified table name (like public.mytable) and you did not install the pgpool_regclass function that comes with pgpool. Without pgpool_regclass, pgpool-II only deals with table names without schema qualification.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why must max_connections satisfy the formula max_connections &amp;gt;= (num_init_children * max_pool), and not just max_connections &amp;gt;= num_init_children?&#039;&#039;&#039; ===&lt;br /&gt;
: You need to understand how pgpool uses these variables. Here is the internal processing inside pgpool:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Wait for connection request from clients.&lt;br /&gt;
&amp;lt;li&amp;gt;pgpool child receives connection request from a client.&lt;br /&gt;
&amp;lt;li&amp;gt;The pgpool child looks for existing connection in the pool which&lt;br /&gt;
   has requested database/user pair up to max_pool.&lt;br /&gt;
&amp;lt;li&amp;gt;If found, reuse it.&lt;br /&gt;
&amp;lt;li&amp;gt; If not found, opens a new connection to PostgreSQL and registers it in&lt;br /&gt;
   the pool.  If the pool has no empty slot, closes the oldest&lt;br /&gt;
   connection to PostgreSQL and reuse the slot.&lt;br /&gt;
&amp;lt;li&amp;gt;Do some query processing until the client sends session close request.&lt;br /&gt;
&amp;lt;li&amp;gt;Close the connection to the client but keep the connection to&lt;br /&gt;
   PostgreSQL for future use.&lt;br /&gt;
&amp;lt;li&amp;gt;Go to #1&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
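The arithmetic behind the formula follows from step 5: each of the num_init_children pgpool children may hold up to max_pool PostgreSQL connections in its private pool, so in the worst case PostgreSQL sees num_init_children * max_pool connections at once. A toy calculation (illustrative only; the function name is made up):

```python
def required_max_connections(num_init_children, max_pool):
    # worst case: every pgpool child fills its private pool
    return num_init_children * max_pool

# e.g. 32 children with 4 pool slots each:
# PostgreSQL needs max_connections >= 128
print(required_max_connections(32, 4))
```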
&lt;br /&gt;
=== &#039;&#039;&#039;Is the connection pool cache shared among pgpool processes?&#039;&#039;&#039; ===&lt;br /&gt;
:No, the connection pool cache is in each pgpool process&#039;s private memory and is not shared with other pgpool processes. This is how the connection cache is managed: suppose pgpool process 12345 has a connection cache for database A/user B, process 12346 does not, and both 12345 and 12346 are in the idle state (no client is connecting at this point). If a client connects to pgpool process 12345 with database A/user B, then the existing connection of 12345 is reused. On the other hand, if a client connects to pgpool process 12346, 12346 needs to create a new connection. Which of 12345 or 12346 is chosen is not under pgpool&#039;s control. However, in the long run, each pgpool child process will be chosen equally often, and it is expected that each process&#039;s pool will be reused equally.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why my SELECTs are not cached?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:Certain libraries such as iBatis and MyBatis always roll back transactions if they are not explicitly committed. Pgpool never caches SELECT results from a rolled-back transaction because the results might be inconsistent.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use # comments or blank lines in pool_passwd?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
: The answer is simple. No (just like /etc/passwd).&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot use MD5 authentication if I start pgpool without the -n option. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: You must have given the -f option as a relative path, i.e. &amp;quot;-f pgpool.conf&amp;quot;, rather than a full path, i.e. &amp;quot;-f /usr/local/etc/pgpool.conf&amp;quot;. Pgpool tries to derive the full path of pool_passwd (which is necessary for MD5 auth) from the pgpool.conf path. This is fine with the -n option. However, if pgpool starts without the -n option, it changes the current directory to &amp;quot;/&amp;quot;, which is a necessary step for daemonizing. As a result, pgpool tries to open &amp;quot;/pool_passwd&amp;quot;, which will not succeed.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I see standby servers in down status in streaming replication mode and PostgreSQL messages &amp;quot;terminating connection due to conflict&amp;quot;. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: If you see the following messages along with those, it is likely that vacuum on the primary server removed rows which SELECTs on the standby server still need to see. The workaround is setting &amp;quot;hot_standby_feedback = on&amp;quot; in your standby server&#039;s postgresql.conf.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  terminating connection due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC HINT:  In a moment you should be able to reconnect to the database and repeat your command.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Connection reset by peer&lt;br /&gt;
2013-04-07 19:38:10 UTC ERROR:  canceling statement due to conflict with recovery&lt;br /&gt;
2013-04-07 19:38:10 UTC DETAIL:  User query might have needed to see row versions that must be removed.&lt;br /&gt;
2013-04-07 19:38:10 UTC LOG:  could not send data to client: Broken pipe&lt;br /&gt;
2013-04-07 19:38:10 UTC FATAL:  connection to client lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Every few minutes the load of the system pgpool-II runs on gets as high as 5-10. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Multiple users state that this is observed only on Linux kernel 3.0; kernels 2.6 and 3.2 do not show the behavior. We suspect that there is a problem with the 3.0 kernel. See more discussions in &amp;quot;[pgpool-general: 1528] Mysterious Load Spikes&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When watchdog is enabled and the connection number reaches num_init_children, VIP switchover occurs. Why?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
:When the connection number reaches num_init_children, the watchdog life check fails because &#039;SELECT 1&#039; fails, and then the VIP is transferred to another pgpool. Unfortunately, there is no way to discriminate normal clients&#039; connections from the watchdog&#039;s connection. A larger num_init_children and wd_life_point, and a smaller wd_interval, may mitigate the problem somewhat. &lt;br /&gt;
&lt;br /&gt;
:The next major version, pgpool-II 3.3, will support a new monitoring method which uses UDP heartbeat packets instead of queries such as &#039;SELECT 1&#039; to resolve the problem.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why do I need to install pgpool_regclass? &#039;&#039;&#039; ===&lt;br /&gt;
:  If you are using PostgreSQL 8.0 or later, installing the pgpool_regclass function on all PostgreSQL servers to be accessed by pgpool-II is strongly recommended, as it is used internally by pgpool-II. Without it, handling of duplicate table names in different schemas might cause trouble (temporary tables aren&#039;t a problem).&lt;br /&gt;
:A related FAQ entry is here: https://www.pgpool.net/mediawiki/index.php?title=FAQ&amp;amp;action=submit#I.27m_using_pgpool-II_in_replication_mode._I_expected_that_pgpool-II_replaces_current_timestamp_call_with_time_constants_in_my_INSERT_query.2C_but_actually_it_doesn.27t._Why.3F&lt;br /&gt;
: If you are using PostgreSQL 9.4.0 or later and pgpool-II 3.3.4 or later, or 3.4.0 or later, you don&#039;t need to install pgpool_regclass, since PostgreSQL 9.4 has a built-in function &amp;quot;to_regclass&amp;quot; that works like pgpool_regclass.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;md5 authentication does not work. Please help&#039;&#039;&#039; ===&lt;br /&gt;
: There&#039;s an excellent summary of various check points to set up md5 authentication. Please take a look at it.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m running pgpool/PostgreSQL on Amazon AWS and occasionally I get network errors. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: It&#039;s a known problem with AWS. We recommend complaining to Amazon support.&lt;br /&gt;
: pgpool-II 3.3.4, 3.2.9 or later mitigate the problem by changing the timeout value for connect() (actually the select() system call) from 1 second to 10 seconds.&lt;br /&gt;
: Also pgpool-II 3.4 or later has a switch to control the timeout value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I cannot run pcp command on my Ubuntu box. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp commands need libpcp.so. On Ubuntu it is included in the &amp;quot;libpgpool0&amp;quot; package.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;On line recovery failed. How can I debug this?&#039;&#039;&#039; ===&lt;br /&gt;
: pcp_recovery_node executes recovery_1st_stage_command and/or recovery_2nd_stage_command depending on your configuration. Those scripts are supposed to be executed on the master PostgreSQL node (the first live node in replication mode, or the primary node in streaming replication mode). &amp;quot;BackendError&amp;quot; means there&#039;s something wrong in pgpool and/or PostgreSQL. To verify this, we recommend the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;start pgpool with debug option&lt;br /&gt;
&amp;lt;li&amp;gt;execute pcp_recovery_node&lt;br /&gt;
&amp;lt;li&amp;gt;examine the pgpool log and the master PostgreSQL log&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039; Watchdog doesn&#039;t start if not all &amp;quot;other&amp;quot; nodes are alive&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a feature. Watchdog&#039;s lifecheck will start after all of the pgpool nodes have started. Until then, failover of the virtual IP never occurs.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;If I start a transaction, pgpool-II also starts a transaction on standby nodes. Why?&#039;&#039;&#039;===&lt;br /&gt;
: This is necessary to deal with the case when the JDBC driver wants to use cursors. Pgpool-II takes the liberty of distributing SELECTs, including cursor statements, to standby nodes. Unfortunately, cursor statements need to be executed in an explicit transaction.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I use schema qualified table names, pgpool-II does not invalidate the in-memory query cache and I get outdated data. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It seems you did not install the &amp;quot;pgpool_regclass&amp;quot; function. Without the function, pgpool-II ignores the schema name part of the schema-qualified table name and the cache invalidation fails.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I periodically get error messages like &amp;quot;read_startup_packet: incorrect packet length&amp;quot;. What does it mean?&#039;&#039;&#039;===&lt;br /&gt;
: Monitoring tools including Zabbix and Nagios periodically send a packet or ping to the port which pgpool is listening on. Unfortunately those packets do not have correct contents, and pgpool-II complains about them. If you are not sure who is sending such packets, you could turn on &amp;quot;log_connections&amp;quot; to learn the source host and port. If they are from such tools, you could stop the monitoring to avoid the problem, or even better, change the monitoring method to send a legal query, for example &amp;quot;SELECT 1&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;I&#039;m getting repeated errors like this every few minutes on Tomcat: &amp;quot;An I/O Error occurred while sending to the backend&amp;quot; Why?&#039;&#039;&#039;===&lt;br /&gt;
: Tomcat creates persistent connections to pgpool. If you set client_idle_limit to non 0, pgpool disconnects the connection and next time when Tomcat tries to send something to pgpool it breaks with the error message.&lt;br /&gt;
: One solution is to set client_idle_limit to 0. However, this will leave lots of idle connections.&lt;br /&gt;
: Another solution provided by Lachezar Dobrev is:&lt;br /&gt;
: You might solve that by adding a time-out on the Tomcat side. https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html&lt;br /&gt;
:   What you should set is (AFAIK):&lt;br /&gt;
:    minIdle (default is 10, set to 0)&lt;br /&gt;
:   timeBetweenEvictionRunsMillis (default 5000)&lt;br /&gt;
:   minEvictableIdleTimeMillis    (default 60000)&lt;br /&gt;
:This will try every 5 seconds and close any connections that were not used in the last 60 seconds. If you keep the sum of both numbers below the client time-out on the pgpool side, connections should be closed at the Tomcat side before they time out on the pgpool side.&lt;br /&gt;
: It is also beneficial to set the&lt;br /&gt;
:    testOnBorrow (default false, set to true)&lt;br /&gt;
:    validationQuery (default none, set to &#039;SELECT version();&#039; no quotes)&lt;br /&gt;
:  This will help with connections should they expire while waiting, without supplying a disconnected connection to the application.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;When I check pg_stat_activity view, I see a query like &amp;quot;SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.oid = pgpool_regclass(&#039;pgbench_accounts&#039;) AND c.relpersistence = &#039;u&#039;&amp;quot; in active state for very long time. Why?&#039;&#039;&#039;===&lt;br /&gt;
: It&#039;s a limitation of pg_stat_activity. You can safely ignore it.&lt;br /&gt;
: Pgpool-II issues queries like the above to the master node for internal use. When a user query runs in extended protocol mode (sent from the JDBC driver, for example), pgpool-II&#039;s query also runs in that mode. To make pg_stat_activity recognize that the query has finished, pgpool-II would need to send a packet called &amp;quot;Sync&amp;quot;, which unfortunately breaks the user&#039;s query (more precisely, the unnamed portal). Thus pgpool-II sends a &amp;quot;Flush&amp;quot; packet instead, but then pg_stat_activity does not recognize the end of the query.&lt;br /&gt;
: Interestingly, if you enable log_duration, the log shows that the query finishes.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Online recovery always fails after certain minutes. Why? &#039;&#039;&#039; ===&lt;br /&gt;
: It is possible that PostgreSQL&#039;s statement_timeout kills the online recovery process. The process is executed as a SQL statement, and if it runs too long, PostgreSQL cancels it. Depending on the size of the database, the online recovery process can take a very long time. Make sure to disable statement_timeout or set it to a long enough value.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why &amp;quot;SET default_transaction_isolation TO DEFAULT&amp;quot; fails ? &#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ psql -h localhost -p 9999 -c &#039;SET default_transaction_isolation to DEFAULT;&#039;&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
ERROR: kind mismatch among backends. Possible last query was: &amp;quot;SET default_transaction_isolation to DEFAULT;&amp;quot; kind details are: 0[N: statement: SET default_transaction_isolation to DEFAULT;] 1[C]&lt;br /&gt;
HINT: check data consistency among db nodes&lt;br /&gt;
connection to server was lost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
: Pgpool-II detects that node 0 returns &amp;quot;N&amp;quot; (a NOTICE message coming from PostgreSQL) while node 1 returns &amp;quot;C&amp;quot; (which means the command finished).&lt;br /&gt;
: Although pgpool-II expects nodes 0 and 1 to return identical messages, they actually do not, so pgpool-II raised an error.&lt;br /&gt;
: Probably certain log/message settings differ between node 0 and node 1. Please check client_min_messages or similar settings.&lt;br /&gt;
: They should be identical.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II find the primary node?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II issues &amp;quot;SELECT pg_is_in_recovery()&amp;quot; to each DB node. If it returns true, the node is a standby node. If one of the DB nodes returns false, that node is the primary node and the search is done.&lt;br /&gt;
: Because a node in the middle of promotion could still return true for the SELECT, if no primary node is found and &amp;quot;search_primary_node_timeout&amp;quot; is greater than 0, pgpool-II sleeps 1 second and continues to issue the SELECT query to each DB node again until the total sleep time exceeds search_primary_node_timeout.&lt;br /&gt;
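: The search loop can be sketched like this. This is an illustrative Python sketch only; the function names are made up and it is not pgpool-II&#039;s actual code:&lt;br /&gt;

```python
import time

# Hypothetical sketch of the primary search described above: poll every
# node with "SELECT pg_is_in_recovery()"; a node answering false is the
# primary. Retry once per second until search_primary_node_timeout.

def find_primary(nodes, is_in_recovery, timeout_sec, sleep=time.sleep):
    waited = 0
    while True:
        for node in nodes:
            if not is_in_recovery(node):   # false means this node is the primary
                return node
        if waited >= timeout_sec:          # total sleep time exceeded, give up
            return None
        sleep(1)
        waited += 1
```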
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use pg_cancel_backend() or pg_terminate_backend()?&#039;&#039;&#039;===&lt;br /&gt;
: You can safely use pg_cancel_backend().&lt;br /&gt;
: Be warned that pg_terminate_backend() will cause a fail over because it makes PostgreSQL emit an identical error code as postmaster shutdown. Pgpool-II 3.6 or greater mitigates the problem. See [https://www.pgpool.net/docs/latest/en/html/restrictions.html the manual] for more details.&lt;br /&gt;
&lt;br /&gt;
: Remember that pgpool-II manages multiple PostgreSQL servers. To use the function, you need to identify not only the backend pid but also the backend server.&lt;br /&gt;
: If the query is running on the primary server, you can call the function like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
/*NO LOAD BALANCE*/ SELECT pg_cancel_backend(pid)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
: The SQL comment prevents the SELECT from being load balanced to a standby. Of course, you could also issue the SELECT directly against the primary server.&lt;br /&gt;
: If the query is running on one of the standby servers, you need to issue the SELECT directly against that standby server.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why is my client disconnected from pgpool-II when failover happens?&#039;&#039;&#039;===&lt;br /&gt;
: pgpool-II consists of many processes, where each process corresponds to a client session. When failover occurs, each process may iterate over a loop for each backend without knowing that a backend has gone down. This may result in incorrect processing, or a segfault in the worst case. For this reason, when failover occurs, the pgpool-II parent process interrupts child processes using a signal to make them exit. Note that switchover using pcp_detach_node has the same effect.&lt;br /&gt;
: From Pgpool-II 3.6 onward, however, a failover does not cause the disconnection under certain conditions. See the manual for more details.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why am I getting &amp;quot;LOG:  forked new pcp worker ..,&amp;quot; and &amp;quot;LOG:  PCP process with pid: xxxx exit with SUCCESS.&amp;quot; messages in pgpool log?&#039;&#039;&#039; ===&lt;br /&gt;
: Prior to pgpool-II 3.5, pgpool could only handle a single PCP command at a time, and all PCP commands were handled by a single PCP child process which lived throughout the lifespan of the pgpool-II main process. In pgpool-II 3.5 the single-PCP-command restriction was removed, and pgpool-II can now handle multiple simultaneous PCP commands. For every PCP command issued to pgpool, a new PCP child process is forked, and that process exits after execution of the PCP command is complete. So these log messages are perfectly normal and are generated whenever a new PCP worker process is created or completes execution of a PCP command.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How does pgpool-II handle md5 authentication?&#039;&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
# PostgreSQL and pgpool store md5(password+username) in pg_authid or pool_passwd. From now on we denote the string md5(password+username) as &amp;quot;S&amp;quot;.&lt;br /&gt;
# When md5 auth is requested, pgpool sends a random salt &amp;quot;s0&amp;quot; to the frontend.&lt;br /&gt;
# The frontend replies back to pgpool with md5(S+s0).&lt;br /&gt;
# pgpool extracts S from pool_passwd and calculates md5(S+s0). If the values in #3 and #4 match, it goes to the next step.&lt;br /&gt;
# Each backend sends a salt to pgpool. Suppose we have two backends b1 and b2, and the salts are s1 and s2.&lt;br /&gt;
# pgpool extracts S from pool_passwd, calculates md5(S+s1), and sends it to b1; likewise it calculates md5(S+s2) and sends it to b2.&lt;br /&gt;
# If both b1 and b2 accept the authentication, the whole md5 auth process succeeds.&lt;br /&gt;
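: The hashing in the steps above can be reproduced with a few lines of Python. This is a simplified sketch of the scheme as described; the real wire protocol has additional details (such as an &amp;quot;md5&amp;quot; prefix on the transmitted digest) that are omitted here, and the credentials are made up:&lt;br /&gt;

```python
import hashlib

def md5hex(data: bytes) -> str:
    """Hex digest of md5, as stored in pool_passwd / pg_authid."""
    return hashlib.md5(data).hexdigest()

# Step 1: both sides store S = md5(password + username).
password, username = "secret", "alice"     # example credentials (made up)
S = md5hex((password + username).encode())

# Steps 2-4: pgpool sends a salt s0; the frontend answers md5(S + s0),
# and pgpool computes the same value from pool_passwd to compare.
s0 = b"\x01\x02\x03\x04"                   # example 4-byte salt
frontend_answer = md5hex(S.encode() + s0)
pgpool_expected = md5hex(S.encode() + s0)
assert frontend_answer == pgpool_expected  # authentication succeeds
```

: Steps 5-7 repeat the same salted-hash computation per backend, with each backend&#039;s own salt.&lt;br /&gt;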
&lt;br /&gt;
=== &#039;&#039;&#039;Why doesn&#039;t Pgpool-II automatically recognize that a database has come back online?&#039;&#039;&#039; ===&lt;br /&gt;
: It would be technically possible but we don&#039;t think it&#039;s a safe feature.&lt;br /&gt;
: Consider a streaming replication configuration. When a standby comes back online, it does not necessarily mean it connects to the current primary node. It may connect to a different primary node, or it may not even be a standby any more. If Pgpool-II automatically recognized such a standby as online, SELECTs to the standby node could return different results from the primary, which is a disaster for database applications.&lt;br /&gt;
: Also please note that &amp;quot;pgpool reload&amp;quot; does not do anything for recognizing the standby node as online. It just reloads configuration files.&lt;br /&gt;
: Please note that in Pgpool-II 4.1 or later, it is possible to automatically make a standby server online if it&#039;s safe enough. See configuration parameter &amp;quot;auto_failback&amp;quot; for more information.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;After enabling idle_in_transaction_session_timeout, Pgpool-II sets the DB node status to all down&#039;&#039;&#039; ===&lt;br /&gt;
: idle_in_transaction_session_timeout was introduced in PostgreSQL 9.6. It is intended to cancel idle transactions. Unfortunately, after the timeout occurs, PostgreSQL raises a FATAL error, which triggers failover in Pgpool-II if fail_over_on_backend_error is on.&lt;br /&gt;
: Here are some workarounds to avoid the unwanted failover:&lt;br /&gt;
* Disable fail_over_on_backend_error. By this, failover will not happen if the FATAL error occurs, but the session will be terminated.&lt;br /&gt;
* Set connection_life_time, child_life_time and client_idle_limit to less than idle_in_transaction_session_timeout. This prevents the session from being terminated even if the FATAL error occurs. However, even when the FATAL error does not occur, the connection pools are removed if one or more of these items satisfy the specified condition, which may affect performance.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How can I know the status of the PostgreSQL backends connected by Pgpool-II? &#039;&#039;&#039;===&lt;br /&gt;
: The backend status shown in pg_stat_activity can be examined by using the &amp;quot;show pool_pools&amp;quot; command. One of the columns shown by &amp;quot;show pool_pools&amp;quot;, &amp;quot;pool_backendpid&amp;quot;, is the process id of the corresponding PostgreSQL backend process. Once it is determined, you can examine the output of pg_stat_activity by matching it against its &amp;quot;pid&amp;quot; column.&lt;br /&gt;
: You can do this automatically by using dblink extension of PostgreSQL. Here is a sample query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT * FROM dblink(&#039;dbname=test host=xxx port=11000 user=t-ishii password=xxx&#039;, &#039;show pool_pools&#039;) as t1 (pool_pid int, start_time text, pool_id int, backend_id int, database text, username text, create_time text,majorversion int, minorversion int, pool_counter int, pool_backendpid int, pool_connected int), pg_stat_activity p WHERE p.pid = t1.pool_backendpid;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: You can execute the SQL above on either PostgreSQL or Pgpool-II. The first argument of dblink is a connection string to connect to Pgpool-II, not PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Where can I get Debian packages for Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: You can get Debian packages here: https://apt.postgresql.org/pub/repos/apt/pool/main/p/pgpool2/&lt;br /&gt;
: For older releases you can find the packages at: https://atalia.postgresql.org/morgue/p/pgpool2/&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;How to run Pgpool-II as a non-root user? &#039;&#039;&#039; ===&lt;br /&gt;
: If you install Pgpool-II using RPM packages, Pgpool-II runs as root by default.&lt;br /&gt;
: You can also run Pgpool-II as a non-root user. However, root privilege is required to control the virtual IP, so you have to copy the ip/ifconfig/arping commands and add the setuid flag to them.&lt;br /&gt;
&lt;br /&gt;
: The following is an example of running Pgpool-II as the postgres user.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Edit pgpool.service file to use postgres user to start Pgpool-II&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp /usr/lib/systemd/system/pgpool.service /etc/systemd/system/pgpool.service&lt;br /&gt;
&lt;br /&gt;
# vi /etc/systemd/system/pgpool.service&lt;br /&gt;
...&lt;br /&gt;
User=postgres&lt;br /&gt;
Group=postgres&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of /var/{lib,run}/pgpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chown postgres:postgres /var/{lib,run}/pgpool&lt;br /&gt;
# cp /usr/lib/tmpfiles.d/pgpool-II-pgxx.conf /etc/tmpfiles.d&lt;br /&gt;
# vi /etc/tmpfiles.d/pgpool-II-pgxx.conf&lt;br /&gt;
===&lt;br /&gt;
d /var/run/pgpool 0755 postgres postgres -&lt;br /&gt;
===&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Change owner of Pgpool-II config files &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R postgres:postgres /etc/pgpool-II/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Copy ip/ifconfig/arping commands to somewhere where the user has access permissions and add setuid flag to them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkdir /var/lib/pgsql/sbin&lt;br /&gt;
# chown postgres:postgres /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 700 /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ifconfig /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/arping /var/lib/pgsql/sbin&lt;br /&gt;
# cp /sbin/ip /var/lib/pgsql/sbin&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ip&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/ifconfig&lt;br /&gt;
# chmod 4755 /var/lib/pgsql/sbin/arping &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Can I use repmgr with Pgpool-II? &#039;&#039;&#039; ===&lt;br /&gt;
: No. These two pieces of software are not designed to work together. You should use Pgpool-II without repmgr or use repmgr without Pgpool-II. See this message for more details: https://www.pgpool.net/pipermail/pgpool-general/2019-August/006743.html&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Connection fails in CentOS6&#039;&#039;&#039; ===&lt;br /&gt;
: Pgpool-II doesn&#039;t support GSSAPI authentication yet, but GSSAPI is requested in CentOS6. Therefore, the connection attempt will fail on CentOS6. &lt;br /&gt;
: A workaround is to set an environment variable to disable GSSAPI encryption in the client: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
export PGGSSENCMODE=disable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Watchdog standby does not take over master when the master goes down&#039;&#039;&#039; ===&lt;br /&gt;
: If you have an even number of watchdog nodes, you need to turn on the enable_consensus_with_half_votes parameter, which is new in 4.1. The reason why you need this is explained in the 4.1 release note:&lt;br /&gt;
&amp;lt;q&amp;gt;&lt;br /&gt;
This changes the behavior of the decision of quorum existence and failover consensus on even numbers (i.e. 2, 4, 6...) of watchdog nodes. Odd-numbered clusters (3, 5, 7...) are not affected. When this parameter is off (the default), a 2-node watchdog cluster needs both nodes alive to have a quorum. If the quorum does not exist and 1 node goes down, then 1) the VIP will be lost, 2) the failover script is not executed and 3) no watchdog master exists. Especially #2 could be troublesome, because no new primary PostgreSQL exists if the existing primary goes down. Probably 2-node watchdog cluster users want to turn on this parameter to keep the existing behavior. On the other hand, users of clusters with 4 or more (even-numbered) watchdog nodes will benefit from leaving this parameter off, because it now prevents possible split brain when half of the watchdog nodes go down. &lt;br /&gt;
&amp;lt;/q&amp;gt;&lt;br /&gt;
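: The quorum rule described in the release note can be sketched as follows. This is an illustrative Python sketch of the rule as we read it, not Pgpool-II&#039;s actual code:&lt;br /&gt;

```python
# Hypothetical sketch: with enable_consensus_with_half_votes off (the
# default), strictly more than half of the watchdog nodes must be alive
# for a quorum to exist; with it on, exactly half is also enough. The
# difference only matters for even-numbered clusters.

def has_quorum(total_nodes, alive_nodes, consensus_with_half_votes=False):
    if consensus_with_half_votes:
        return alive_nodes * 2 >= total_nodes
    return alive_nodes * 2 > total_nodes
```

: For example, in a 2-node cluster with one node down, a quorum exists only when the parameter is turned on; in a 3-node cluster, 2 live nodes have a quorum either way.&lt;br /&gt;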
&lt;br /&gt;
=== &#039;&#039;&#039;I got an error &amp;quot;kind does not match between main(52) slot[1] (45)&amp;quot;. What does it mean?&#039;&#039;&#039; ===&lt;br /&gt;
:This kind of error can happen for multiple reasons. Here &amp;quot;52&amp;quot; is an ASCII code point in hexadecimal notation, that is, ASCII &#039;R&#039;. &#039;R&#039; is a normal response from a backend. &amp;quot;45&amp;quot; is &#039;E&#039; in ASCII, which means PostgreSQL is complaining about something. In summary, backend 0 accepted the connection request normally, while backend 1 complained. To solve the problem, you need to look into pgpool.log. For example, if you set a &amp;quot;reject&amp;quot; entry for the connection request in backend 1&#039;s pg_hba.conf:&lt;br /&gt;
&lt;br /&gt;
 local	all	foo	reject&lt;br /&gt;
&lt;br /&gt;
: and try to connect to pgpool, you will get the error. You should be able to find something like below in pgpool log:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: LOG:  pool_read_kind: error message from 1 th backend:pg_hba.conf rejects connection for host &amp;quot;[local]&amp;quot;, user &amp;quot;foo&amp;quot;, database &amp;quot;test&amp;quot;, no encryption&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: ERROR:  unable to read message kind&lt;br /&gt;
 2021-05-23 15:38:38: child pid 375: DETAIL:  kind does not match between main(52) slot[1] (45)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: This tells you that you need to fix pg_hba.conf on backend 1.&lt;br /&gt;
&lt;br /&gt;
: Other types of errors include:&lt;br /&gt;
:&amp;lt;ul&amp;gt;&lt;br /&gt;
:&amp;lt;li&amp;gt; Backend 1&#039;s pg_hba.conf setting refuses the connection from pgpool&lt;br /&gt;
:&amp;lt;li&amp;gt; max_connections parameter of PostgreSQL is not identical among backends&lt;br /&gt;
:&amp;lt;/ul&amp;gt;&lt;br /&gt;
: Note that &amp;quot;main&amp;quot; in the error message is &amp;quot;master&amp;quot; in Pgpool-II 4.1 or before. Also note that the detailed error info (&amp;quot;error message from 1 th backend:...&amp;quot;) is not available in Pgpool-II 3.6 or before.&lt;br /&gt;
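: The ASCII decoding used above can be verified in a couple of lines of Python:&lt;br /&gt;

```python
# Decode the hexadecimal "kind" bytes from the error message
# "kind does not match between main(52) slot[1] (45)".
main_kind = chr(int("52", 16))   # 'R': normal response from backend 0
slot1_kind = chr(int("45", 16))  # 'E': ErrorResponse from backend 1
print(main_kind, slot1_kind)     # prints: R E
```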
&lt;br /&gt;
=== &#039;&#039;&#039;I am getting an authentication error when Pgpool-II connects to Azure PostgreSQL. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: Azure PostgreSQL only accepts clear text passwords. Neither md5 authentication nor SCRAM-SHA-256 can be used. You need to set a clear text password in pool_passwd.&lt;br /&gt;
: Related bug tracker entries:&lt;br /&gt;
: &amp;lt;ul&amp;gt;&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=737&lt;br /&gt;
: &amp;lt;li&amp;gt; https://www.pgpool.net/mantisbt/view.php?id=699&lt;br /&gt;
: &amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== pgpoolAdmin Frequently Asked Questions ==&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;pgpoolAdmin does not show any node in pgpool status and node status. Why?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin uses PHP&#039;s PostgreSQL extension (pg_connect, pg_query, etc.). Probably the extension does not work as expected. Please check the apache error log. Also please check the FAQ item below.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Why does node status in pgpoolAdmin show &amp;quot;down&amp;quot; status even if PostgreSQL is up and running?&#039;&#039;&#039; ===&lt;br /&gt;
: pgpoolAdmin checks PostgreSQL status by connecting with user = &amp;quot;health_check_user&amp;quot; and database = template1. Thus you should allow pgpoolAdmin to access PostgreSQL with that user and database without a password. You can check the PostgreSQL log to verify this. If health_check_user does not exist, you will see something like:&lt;br /&gt;
: &amp;lt;pre&amp;gt;20148 2011-07-06 16:41:59 JST FATAL:  role &amp;quot;foo&amp;quot; does not exist&amp;lt;/pre&amp;gt;&lt;br /&gt;
: If the user is protected by password, you will see:&lt;br /&gt;
&amp;lt;dl&amp;gt;&amp;lt;dd&amp;gt;&amp;lt;pre&amp;gt;20220 2011-07-06 16:42:16 JST FATAL:  password authentication failed for user &amp;quot;foo&amp;quot;&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20221 2011-07-06 16:42:16 JST LOG:  unexpected EOF within message length word&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  could not receive data from client: Connection reset by peer&lt;br /&gt;
20246 2011-07-06 16:42:26 JST LOG:  unexpected EOF within message length word&amp;lt;/pre&amp;gt;&amp;lt;/dl&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3739</id>
		<title>Roadmap</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Roadmap&amp;diff=3739"/>
		<updated>2022-12-20T23:12:22Z</updated>

		<summary type="html">&lt;p&gt;Ishii: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming minor releases == &lt;br /&gt;
&lt;br /&gt;
PgPool Global Development Group will make at least one minor release quarterly according to a predefined schedule.&lt;br /&gt;
&lt;br /&gt;
If there are important bug fixes or security issues, more releases will be made between these scheduled dates.&lt;br /&gt;
&lt;br /&gt;
The current schedule for upcoming releases is: &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;s&amp;gt;November 17th, 2022&amp;lt;/s&amp;gt; December 22nd, 2022&lt;br /&gt;
* February 16th, 2023&lt;br /&gt;
* May 18th, 2023&lt;br /&gt;
* August 17th, 2023&lt;br /&gt;
* November 16th, 2023&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3688</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3688"/>
		<updated>2022-11-06T01:45:49Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Do not disturb sessions in failover when load_balance_mode is off */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need to use memcached but also need to store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
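A minimal sketch of the idea, assuming the usual pg_controldata output format (the sample line below is illustrative, not taken from a real cluster): a cleanly stopped cluster reports "shut down" in its "Database cluster state" line, which is what online recovery would check before choosing pg_rewind.

```python
# Sketch only: parse the "Database cluster state" line of pg_controldata
# output to decide whether pg_rewind is safe. The SAMPLE text stands in
# for running pg_controldata against a real data directory.
SAMPLE = "Database cluster state:               shut down"

def was_shut_down_cleanly(controldata_line):
    # pg_controldata reports "shut down" only after a clean shutdown;
    # a crashed or running cluster reports a different state string.
    state = controldata_line.split(":", 1)[1].strip()
    return state == "shut down"

print(was_shut_down_cleanly(SAMPLE))  # True
```

On a real node, the parsed line would come from running pg_controldata on the target cluster directory.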
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding which differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; for each statement, while &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi-statement queries into single-statement queries, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
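To make the splitting problem concrete, here is a minimal illustrative sketch (not pgpool-II code) of breaking a simple-protocol query string into its component statements, which per-statement recognition would require. It only understands single-quoted literals and ignores dollar quoting and comments:

```python
# Naive statement splitter: semicolons end a statement unless they
# appear inside a single-quoted SQL string literal.
def split_statements(query):
    stmts, buf, in_quote = [], [], False
    for ch in query:
        if ch == "'":
            in_quote = not in_quote
        if ch == ";" and not in_quote:
            stmts.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    tail = "".join(buf).strip()
    if tail:
        stmts.append(tail)
    return stmts

print(split_statements("BEGIN;SELECT 1;END"))  # ['BEGIN', 'SELECT 1', 'END']
```

Even with such a splitter, the protocol issue above remains: the backend still returns only one "Ready for query" for the whole batch, so pgpool cannot simply replay the pieces one by one.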
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend server and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP, and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but rather sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently a &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between the frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can currently get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having per-database relcache entries is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
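A toy illustration of the gain (the names here are hypothetical, not pgpool-II internals): a linear scan walks the whole array in the worst case, while a hash table reaches the same entry directly.

```python
# Toy comparison: the current relation cache is a flat array scanned
# linearly; a hash table gives O(1) average lookup for the same data.
entries = [("public.t%d" % i, {"oid": i}) for i in range(10000)]
relcache_array = list(entries)   # current style: linear scan
relcache_hash = dict(entries)    # proposed style: hashed lookup

def lookup_linear(name):
    for key, entry in relcache_array:
        if key == name:
            return entry
    return None

# Both return the same entry; the hash lookup avoids scanning 10000 items.
assert lookup_linear("public.t9999") == relcache_hash.get("public.t9999")
print(relcache_hash["public.t9999"]["oid"])  # 9999
```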
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command could be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover of standby servers when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions in failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off. But it is actually unnecessary to connect to standby servers if load_balance_mode is off. If pgpool only connected to the primary server, it would not need to disconnect sessions in failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database is sent over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered quite recently that the item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend server and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be aware of is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default value to use an absolute path.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016 held at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, so users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; were not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, using &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However after a fail over, the role of the node could be changed.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if it takes a long time for a backend check to succeed and a timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3687</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3687"/>
		<updated>2022-11-06T01:43:31Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need memcached, but also need to store the oid map info on it so that the info is shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding that differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, and &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi statement queries into single statements, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checks if the SELECT uses FOR UPDATE/FOR SHARE and if not, enable load balance (or only sends to the master node if load balance is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
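: For example, a hypothetical pair of statements (cursor and table names are illustrative) that must not be load balanced in replication mode, because the underlying SELECT locks rows:&lt;br /&gt;
 DECLARE c1 CURSOR FOR SELECT * FROM t1 FOR UPDATE;&lt;br /&gt;
 FETCH 10 FROM c1;&lt;br /&gt;
: A cursor whose SELECT has no FOR UPDATE/FOR SHARE clause could safely be load balanced.&lt;br /&gt;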
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf leaks memory. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory and is definitely a problem. Also, memory leak check tools like valgrind emit lots of annoying error messages. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably the database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but it is not yet supported between Pgpool-II and backend.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason related to online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having per-database relcache entries is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when it was updated. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. tables modified by functions, triggers and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
=== Do not disturb sessions in failover when load_balance_mode is off ===&lt;br /&gt;
: In streaming replication mode, if load_balance_mode is off, it would be desirable not to disconnect sessions during failover of standby servers. Currently Pgpool-II connects to all backends even if load_balance_mode is off, but connecting to standby servers is actually unnecessary in that case. If pgpool only connects to the primary server, it does not need to disconnect sessions during failover of standby servers.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and the streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented in 2.3.2 (released on 2010/2/7), since SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be careful about: it&#039;s only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature allowing pgpool.conf to include other files, which specify backend and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries while creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can mark a node as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info in about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS: temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. there&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the in-memory query cache was implemented. We should remove it (this is already in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is, such a suite could be a very complex system because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries while creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain when upgrading to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them per backend.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc.? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all the backends. Hence, if it takes a long time to successfully check one backend, and a timeout then occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
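: For example, a hypothetical pgpool.conf setting using the weight syntax described above (only the &amp;quot;postgres:primary(0.3)&amp;quot; form is taken from this item; treat it as an illustrative sketch, not exact documented syntax):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;
: Here queries on the postgres database would be sent to the primary with a weight ratio of 0.3.&lt;br /&gt;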
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3686</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3686"/>
		<updated>2022-11-06T01:35:31Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* [WIP] Support multiple UNIX domain socket directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only to use memcached but also to store the oid map info on it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
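: For example, pg_controldata reports the cluster state, which a recovery script could check before attempting pg_rewind (a sketch; the exact output wording may vary between PostgreSQL versions):&lt;br /&gt;
 pg_controldata $PGDATA | grep &#039;Database cluster state&#039;&lt;br /&gt;
 Database cluster state:               shut down&lt;br /&gt;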
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, Pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding which differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (e.g. BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, splitting a multi-statement query into separate single-statement queries, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a cursor for SELECT. For example, since PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
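: For example, a psql session like the following would be affected (illustrative; the DECLARE shown is what psql is believed to issue internally):&lt;br /&gt;
 \set FETCH_COUNT 100&lt;br /&gt;
 SELECT * FROM t1;  -- psql runs something like: DECLARE _psql_cursor NO SCROLL CURSOR FOR SELECT * FROM t1&lt;br /&gt;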
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use IPv6 addresses for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill in some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined together as constants in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If a write query is issued in an explicit transaction, the following SELECTs are not load balanced but sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECT, which is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
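: For example, in the following transaction the SELECT is currently forced to the primary even though the preceding SET is not a real write (t1 is an arbitrary table):&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET work_mem TO &#039;64MB&#039;;  -- sent to all nodes anyway&lt;br /&gt;
 SELECT * FROM t1;        -- currently sent to the primary only&lt;br /&gt;
 COMMIT;&lt;br /&gt;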
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries touching it to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
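: A hypothetical configuration sketch (the parameter name is illustrative only and does not exist yet):&lt;br /&gt;
 black_table_list = &#039;mydb.public.accounts,mydb.public.orders&#039;&lt;br /&gt;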
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between the frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently, queries are not load balanced to a standby node with large replication lag. But if, for some reason related to online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on a database: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on a database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config variables requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command which is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: unix_socket_group and unix_socket_permissions also need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL support was introduced. We usually list newer entries first, but it was discovered only recently that this item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use IPv6 addresses for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
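: An example setting in the PostgreSQL style (a sketch; hostnames/addresses are illustrative):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.0.2&#039;&lt;br /&gt;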
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing we need to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
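: A pgpool.conf sketch (assuming the 4.4 parameter is named delay_threshold_by_time and takes seconds; check the 4.4 manual for the exact name and unit):&lt;br /&gt;
 delay_threshold_by_time = 10&lt;br /&gt;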
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all pending messages should be flushed to the frontend. For this purpose we need information on all pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this so relative paths are resolved against DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay issues for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In a streaming master/slave configuration there could be an option to automatically reattach a node if it is up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even after the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time when some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert has still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor of the 2nd and 3rd PostgreSQL slaves to 0 and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; appeared in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the in-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then the standby pgpool starts up, the standby pgpool cannot recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause something to go wrong.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about the &amp;quot;watchdog&amp;quot;: the test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time when some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it, so I would like to obsolete parallel query in a future pgpool-II release. (Related parameters were removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned by the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix: they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover, the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time to succeed and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty in supporting this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
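: For example, the following pgpool.conf setting (a sketch) redirects queries on database &amp;quot;postgres&amp;quot; to the primary with load balance weight 0.3:&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;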
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm used in SSL/TLS. Our SSL support lacks this (PostgreSQL already has it) and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3685</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3685"/>
		<updated>2022-11-06T01:34:21Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need to use memcached, but we also need to store the oid map info in it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer authentication.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi-statement queries into single-statement queries, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the second and later queries instead).&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE ... FETCH statements are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 addresses for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem then. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently, most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently, a &amp;quot;write query&amp;quot; includes anything other than SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently, queries are not load balanced to a standby node with large replication lag. But if, for some reason related to online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently a relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and miscellaneous info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config variables requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to it. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. For backward compatibility, when &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called instead.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. a table modified by functions, triggers, or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: The hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It also allows using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
&lt;br /&gt;
=== Support multiple unix_socket_directories and related parameters ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
: Also, unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: These have been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 addresses for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
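: For example (hypothetical addresses), a comma-separated setting could look like:&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.0.10&#039;&lt;br /&gt;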
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to care about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this to be relative to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it is up to date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even though the node has resumed replication with the master and is up to date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference held on December 1st, 2016 at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
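: For example (a sketch; pg_stat_wal_receiver is a standard PostgreSQL 9.6+ view), the standby&#039;s upstream connection could be checked with:&lt;br /&gt;
 SELECT status, conninfo FROM pg_stat_wal_receiver;&lt;br /&gt;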
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries when creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for auditing purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost zero users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the in-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently, load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries when creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain when upgrading to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed, 2) the error codes returned from the commands are completely useless, 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix and call for a complete revisit of its core architecture.&lt;br /&gt;
: See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here].&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc.? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, should SGML be our direction?&lt;br /&gt;
: Pgpool-II 3.6 changes the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has already been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time and the timeout then occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH (Elliptic-curve Diffie-Hellman) is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3684</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3684"/>
		<updated>2022-11-06T01:07:22Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also need to store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding which is different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, while &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single-statement queries, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
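: The limitation above, deciding behavior from only the first element of a multi-statement simple query, can be sketched roughly as follows (illustrative Python, not Pgpool-II&#039;s actual C code; quoting rules are deliberately simplified).

```python
# Illustrative sketch of the limitation above: only the first statement of
# a multi-statement simple query is examined to decide behavior. This is
# not Pgpool-II's actual code; quote handling is deliberately simplified.
def first_statement(query):
    """Return the text up to the first top-level semicolon."""
    in_quote = False
    for i, ch in enumerate(query):
        if ch == "'":
            in_quote = not in_quote
        elif ch == ';' and not in_quote:
            return query[:i].strip()
    return query.strip()

def decide_by_first_token(query):
    """Classify the whole query by its first statement's leading keyword."""
    stmt = first_statement(query)
    return stmt.split(None, 1)[0].upper() if stmt else ''
```

: For BEGIN;SELECT 1;END this yields BEGIN, so the SELECT and END parts are invisible to the decision, which is exactly the problem described.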
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the order of SELECTs and DMLs.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup time, this is not a big problem. However, reloading pgpool.conf will leak memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently, most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header file.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but rather sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently, &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes could retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but Cert authentication between Pgpool-II and backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when adding them. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called instead.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently this will not scale if there are many cache entries. Using a hash table should provide quicker lookups.&lt;br /&gt;
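: The difference can be sketched as follows (illustrative Python, not Pgpool-II&#039;s actual data structures): a linear scan over an array costs O(n) per lookup, while a hash table keyed by (database, relation) is O(1) expected.

```python
# Illustrative sketch, not Pgpool-II's actual structures: compare a linear
# scan over an array of relcache entries with a hash-table (dict) lookup
# keyed by (database, relation).
class RelCacheArray:
    def __init__(self):
        self.entries = []              # list of ((db, rel), info)

    def lookup(self, db, rel):         # O(n) scan over all entries
        for key, info in self.entries:
            if key == (db, rel):
                return info
        return None

    def store(self, db, rel, info):
        self.entries.append(((db, rel), info))

class RelCacheHash:
    def __init__(self):
        self.entries = {}              # dict keyed by (db, rel)

    def lookup(self, db, rel):         # O(1) expected
        return self.entries.get((db, rel))

    def store(self, db, rel, info):
        self.entries[(db, rel)] = info
```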
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
=== Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and the streaming replication delay check worker process used SSL if requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing to note is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently, relative paths for ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default value to use an absolute path.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st, 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose. By using it this will be supported in &lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer queries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries when creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for auditing purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently, load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
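: The idea, passing a configurable timeout to select(2) instead of a hard-coded 1 second, can be sketched in Python (illustrative only; pgpool&#039;s actual code is C and watches the fd of a non-blocking connect(2); here a pipe with no data stands in for a slow connection).

```python
import os
import select

# Illustrative sketch: wait for an fd to become ready, with a configurable
# timeout instead of a hard-coded 1 second. A pipe with nothing written to
# it stands in for a connection that is slow to complete.
def wait_readable(fd, timeout_sec):
    """Return True if fd becomes readable within timeout_sec seconds."""
    readable, _, _ = select.select([fd], [], [], timeout_sec)
    return bool(readable)

r, w = os.pipe()
timed_out = not wait_readable(r, 0.1)   # nothing written yet: times out
os.write(w, b'x')
ready = wait_readable(r, 0.1)           # data available: readable at once
os.close(r)
os.close(w)
```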
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries when creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is no longer used and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
: See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here].&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 changed the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover, the role of the node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time allowed for checking the status of all backends. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there is a fundamental difficulty with supporting it in Pgpool-II.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3683</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3683"/>
		<updated>2022-11-06T01:06:16Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need memcached, but also need to store the oid map info in it so that the info can be shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to consider the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi-statement queries into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by a manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than a SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
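: For example, in the following (illustrative) transaction the only non-SELECT statement is a SET, which is replicated to all nodes, so the SELECT could safely be load balanced:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET statement_timeout = &#039;10s&#039;;&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;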
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but between Pgpool-II and backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, when &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. tables modified by functions, triggers and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command could be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
:https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=b7bcf0d7b833559962cde8c5f4dfe3f5c07dda3c&lt;br /&gt;
=== Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions need to be supported.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=bc03514b124de01176d5ded220f33cabff742ade&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since 2.3.2 (released on 2010/2/7), when SSL support was introduced. We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
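: For example (the addresses below are hypothetical):&lt;br /&gt;
 listen_addresses = &#039;192.168.0.10,192.168.0.11&#039;&lt;br /&gt;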
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
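: A sketch of such a setting; the parameter name delay_threshold_by_time is our understanding of the 4.4 implementation, so check the 4.4 documentation for the exact name and units:&lt;br /&gt;
 delay_threshold_by_time = 10&lt;br /&gt;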
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently relative paths for ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this so they are relative to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has it), and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up to date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up to date. pgpool already knows how far a slave is behind the master, so this shouldn&#039;t be too difficult to implement. (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference held on December 1st, 2016 at PGConf.ASIA 2016 in Tokyo that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;auto_failback&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert has not yet propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves, and it behaves OK because everything reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
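: For illustration (the database and application names are examples), settings of this form direct queries on the test database to a standby and queries from psql to the primary:&lt;br /&gt;
 database_redirect_preference_list = &#039;test:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;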
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (done and will appear in pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the in-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for pgpool main to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to keep up with newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with the simple query protocol. We need to improve this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1, so users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc.? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
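: Using the parameter naming proposed above (the names in the actual release may differ), a per-backend setting might look like (the user name is hypothetical):&lt;br /&gt;
 backend_healthcheck_username0 = &#039;hc_user&#039;&lt;br /&gt;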
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, should SGML be our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6: we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, using &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
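: For example, assuming the PGPOOL SET syntax described in the 3.6 documentation, a client session could issue something like:&lt;br /&gt;
 PGPOOL SET client_min_messages = debug5;&lt;br /&gt;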
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of the node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement this, but it is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has already been implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time and the timeout occurs while the next backend is being checked, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with supporting this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
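: A sketch of such a setting, assuming the 4.0 parameter black_query_pattern_list (semicolon-separated regular expressions; matching queries are sent only to the primary; the table name here is hypothetical):&lt;br /&gt;
 black_query_pattern_list = &#039;SELECT.*FROM audit_log.*;&#039;&lt;br /&gt;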
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already supports this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=EOL_information&amp;diff=3679</id>
		<title>EOL information</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=EOL_information&amp;diff=3679"/>
		<updated>2022-09-27T01:47:11Z</updated>

		<summary type="html">&lt;p&gt;Ishii: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pgpool-II release support policy ===&lt;br /&gt;
&lt;br /&gt;
The Pgpool-II project aims to fully support a major release for &#039;&#039;&#039;five years&#039;&#039;&#039;. (If you need longer support, please contact [https://www.sraoss.co.jp/prod_serv/support/pgsql-mainte_en.php SRA OSS LLC].)&lt;br /&gt;
&lt;br /&gt;
After a release falls out of full support, we may (at our committers&#039; discretion) continue to apply further critical fixes to the source code, on a best-effort basis. No formal releases or binary packages will be produced by the project, but the updated source code will be available from our source code control system.&lt;br /&gt;
&lt;br /&gt;
This policy will be followed on a best-effort basis. In extreme cases it may not be possible to support a release for the planned lifetime; for example if a serious bug is found that cannot be resolved in a given major version without significant risk to the stability of the code or loss of application compatibility. In such cases, early retirement of a major version may be required.&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! Version&lt;br /&gt;
! First release date&lt;br /&gt;
! EOL date&lt;br /&gt;
|-&lt;br /&gt;
|4.3&lt;br /&gt;
|2021/12&lt;br /&gt;
|2026/12&lt;br /&gt;
|-&lt;br /&gt;
|4.2&lt;br /&gt;
|2020/11&lt;br /&gt;
|2025/11&lt;br /&gt;
|-&lt;br /&gt;
|4.1&lt;br /&gt;
|2019/10&lt;br /&gt;
|2024/10&lt;br /&gt;
|-&lt;br /&gt;
|4.0&lt;br /&gt;
|2018/10&lt;br /&gt;
|2023/10&lt;br /&gt;
|-&lt;br /&gt;
|3.7&lt;br /&gt;
|2017/11&lt;br /&gt;
|2022/11&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.6&lt;br /&gt;
|2016/11&lt;br /&gt;
|2021/11&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.5&lt;br /&gt;
|2016/1&lt;br /&gt;
|2021/1&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.4&lt;br /&gt;
|2014/11&lt;br /&gt;
|2019/11&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.3&lt;br /&gt;
|2013/7&lt;br /&gt;
|2018/7&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.2&lt;br /&gt;
|2012/8&lt;br /&gt;
|2017/8&lt;br /&gt;
|- style=&amp;quot;background:silver&amp;quot;&lt;br /&gt;
|3.1&lt;br /&gt;
|2011/9&lt;br /&gt;
|2016/9&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3678</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3678"/>
		<updated>2022-09-27T01:44:18Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Developer&amp;#039;s documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf PostgreSQLによるクラスタ運用および負荷分散術] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and whats been happening in Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf PostgreSQL クラスタ環境の管理機能を大幅に強化！Pgpool-II 4.0 のご紹介] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf 誕生から 15 年を迎えた Pgpool-II の過去と現在、そして未来] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf 信頼性を向上させ、PostgreSQL 10 に対応した Pgpool-II 3.7 のご紹介] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/09/query-cache-improvement-in-pgpool-ii-44.html Query cache improvement in Pgpool-II 4.4] By Tatsuo Ishii (2022/9/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-route-table.html How to make Pgpool-II Leader Switchover Seamless on AWS - Updating Route Table] By Bo Peng (2022/9/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-in-cloud.html Configuring and Managing VIP for Pgpool-II on AWS] By Bo Peng (2022/9/10)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgo-v5-installation.html Installing Crunchy Postgres Operator v5 on EKS] By Bo Peng (2022/3/29)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgpool-debian.html Installing Pgpool-II on Debian/Ubuntu] By Bo Peng (2022/3/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/02/auto-failback.html Pgpool-II Configuration Parameters - auto_failback] By Bo Peng (2022/2/28)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part3.html What&#039;s new in Pgpool-II 4.3? (part3)] By Tatsuo Ishii (2022/2/11)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part2.html What&#039;s new in Pgpool-II 4.3? (part2)] By Tatsuo Ishii (2022/2/6)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]](English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3677</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3677"/>
		<updated>2022-09-27T01:37:42Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf PostgreSQLによるクラスタ運用および負荷分散術] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and whats been happening in Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf PostgreSQL クラスタ環境の管理機能を大幅に強化！Pgpool-II 4.0 のご紹介] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf 誕生から 15 年を迎えた Pgpool-II の過去と現在、そして未来] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf 信頼性を向上させ、PostgreSQL 10 に対応した Pgpool-II 3.7 のご紹介] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/09/query-cache-improvement-in-pgpool-ii-44.html Query cache improvement in Pgpool-II 4.4] By Tatsuo Ishii (2022/9/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-route-table.html How to make Pgpool-II Leader Switchover Seamless on AWS - Updating Route Table] By Bo Peng (2022/9/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-in-cloud.html Configuring and Managing VIP for Pgpool-II on AWS] By Bo Peng (2022/9/10)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgo-v5-installation.html Installing Crunchy Postgres Operator v5 on EKS] By Bo Peng (2022/3/29)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgpool-debian.html Installing Pgpool-II on Debian/Ubuntu] By Bo Peng (2022/3/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/02/auto-failback.html Pgpool-II Configuration Parameters - auto_failback] By Bo Peng (2022/2/28)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part3.html What&#039;s new in Pgpool-II 4.3? (part3)] By Tatsuo Ishii (2022/2/11)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part2.html What&#039;s new in Pgpool-II 4.3? (part2)] By Tatsuo Ishii (2022/2/6)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting a specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]] (English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3676</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3676"/>
		<updated>2022-09-27T01:34:10Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf Cluster operation and load balancing techniques with PostgreSQL] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and what&#039;s been happening in Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf Introducing Pgpool-II 4.0: greatly enhanced management of PostgreSQL cluster environments] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf The past, present, and future of Pgpool-II at its 15th anniversary] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf Introducing Pgpool-II 3.7: improved reliability and PostgreSQL 10 support] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (Japanese, PDF) (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-route-table.html How to make Pgpool-II Leader Switchover Seamless on AWS - Updating Route Table] By Bo Peng (2022/9/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-in-cloud.html Configuring and Managing VIP for Pgpool-II on AWS] By Bo Peng (2022/9/10)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgo-v5-installation.html Installing Crunchy Postgres Operator v5 on EKS] By Bo Peng (2022/3/29)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgpool-debian.html Installing Pgpool-II on Debian/Ubuntu] By Bo Peng (2022/3/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/02/auto-failback.html Pgpool-II Configuration Parameters - auto_failback] By Bo Peng (2022/2/28)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part3.html What&#039;s new in Pgpool-II 4.3? (part3)] By Tatsuo Ishii (2022/2/11)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part2.html What&#039;s new in Pgpool-II 4.3? (part2)] By Tatsuo Ishii (2022/2/6)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting a specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]] (English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3675</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3675"/>
		<updated>2022-09-27T01:26:54Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf PostgreSQLによるクラスタ運用および負荷分散術] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and whats been happening in Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf PostgreSQL クラスタ環境の管理機能を大幅に強化！Pgpool-II 4.0 のご紹介] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf 誕生から 15 年を迎えた Pgpool-II の過去と現在、そして未来] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf 信頼性を向上させ、PostgreSQL 10 に対応した Pgpool-II 3.7 のご紹介] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/09/configuring-vip-in-cloud.html Configuring and Managing VIP for Pgpool-II on AWS] By Bo Peng (2022/9/10)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgo-v5-installation.html Installing Crunchy Postgres Operator v5 on EKS] By Bo Peng (2022/3/29)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/03/pgpool-debian.html Installing Pgpool-II on Debian/Ubuntu] By Bo Peng (2022/3/19)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2022/02/auto-failback.html Pgpool-II Configuration Parameters - auto_failback] By Bo Peng (2022/2/28)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part3.html What&#039;s new in Pgpool-II 4.3? (part3)] By Tatsuo Ishii (2022/2/11)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part2.html What&#039;s new in Pgpool-II 4.3? (part2)] By Tatsuo Ishii (2022/2/6)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]](English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3674</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3674"/>
		<updated>2022-08-28T03:14:02Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: It would also allow using an alternative command that is more suitable than ping in certain system configurations.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only need to use memcached, but also need to store the oid map info on it, so that the info is shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we detect that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (e.g. BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) to decide how to behave.&lt;br /&gt;
: Of course this causes various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; message for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, splitting a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the second and later queries instead).&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a cursor for SELECT. For example, since PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, an IPv6 address can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal loss of the virtual IP interface when watchdog is enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no node holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache entry for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill in some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix this in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should all be defined as constants in a single header file.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-genera: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset the in-memory query cache in shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently, &amp;quot;write query&amp;quot; includes anything other than SELECT. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, a SELECT sent to any DB node would still retrieve the latest data.&lt;br /&gt;
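: A minimal sketch of the case in question (the table name t1 and the SET parameter are illustrative only): today every statement below is routed to the primary, although the SELECT could safely be load balanced, since the preceding SET is replicated to all nodes:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SET application_name = &#039;myapp&#039;;&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;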
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, queries on it are always sent to the primary server. Probably a database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between the frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently, queries are not load balanced to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on a database: for example shared catalogs and misc info including the PostgreSQL version. For such info, keeping a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to mark a relcache entry as database-independent.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also has to list which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added to it. Probably we should keep only &amp;quot;pgpool show all&amp;quot; because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, &amp;quot;pgpool show all&amp;quot; could be called when &amp;quot;show pool_status&amp;quot; is requested.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. a table modified by functions, triggers or rules. There should be a way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
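: A minimal sketch of the idea in Python (purely illustrative; the class and method names are not pgpool-II&#039;s actual C internals): keying the cache by relation name replaces the O(n) array scan with an O(1) average-case hash lookup.&lt;br /&gt;

```python
# Hypothetical sketch of a relation cache keyed by name instead of
# stored in a flat array. Names here are illustrative only.

class RelCache:
    def __init__(self):
        self._by_name = {}          # relation name -> cached metadata

    def add(self, name, info):
        self._by_name[name] = info

    def lookup(self, name):
        # A flat array forces a scan over every entry (O(n));
        # a dict keyed by relation name is O(1) on average.
        return self._by_name.get(name)

cache = RelCache()
cache.add("public.t1", {"has_pk": True})
print(cache.lookup("public.t1"))
```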
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when the quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels on the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if both used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented since SSL was introduced in 2.3.2 (released on 2010/2/7). We usually list newer entries first, but it was discovered only recently that this item had already been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, PCP process still only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to keep in mind is that it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
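: A small sketch of the time-based comparison in Python (assumptions: replay_lag arrives as interval text like &#039;00:00:12.3&#039;; the query string and function names are illustrative, not the pgpool-II implementation):&lt;br /&gt;

```python
# Hypothetical sketch: compare pg_stat_replication.replay_lag against a
# time-based threshold. The SQL below would be sent to the primary;
# here we only show the comparison logic on sample interval text.

LAG_QUERY = "SELECT replay_lag FROM pg_stat_replication"

def interval_to_seconds(text):
    """Convert an interval such as '00:00:10.5' (HH:MM:SS.fff) to seconds."""
    h, m, s = text.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def standby_lagging(replay_lag_text, threshold_seconds):
    """True if the standby's replay lag exceeds the time threshold."""
    return interval_to_seconds(replay_lag_text) > threshold_seconds

print(standby_lagging("00:00:12.3", 10))   # 12.3s of lag vs a 10s limit
```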
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all pending messages should be flushed to the frontend. For this purpose we need information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change the base of these relative paths to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if a parameter allowed sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even after the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can mark nodes as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from Mysql to postgresql 9.1.5 which is behind a pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd postgresql slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for auditing purposes. (Done; will appear in pgpool-II-3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this is already in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter dedicated to the primary node search.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it must include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: the suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to reduce it.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout specified by health_check_timeout is the total time for checking all backend statuses. Hence, if checking one backend takes a long time before succeeding, and the timeout then expires while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
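: The misattribution under a shared timeout can be sketched in Python (the function names and timing values are illustrative, not pgpool-II code):&lt;br /&gt;

```python
# Hypothetical sketch: one shared health-check budget can blame a healthy
# node when an earlier slow-but-successful check eats most of the budget.

def check_all_shared(budget, check_times):
    """Return the index of the node blamed under a single shared timeout,
    or None if all checks fit within the budget."""
    elapsed = 0.0
    for i, t in enumerate(check_times):
        elapsed += t
        if elapsed > budget:
            return i            # this node is (possibly wrongly) failed over
    return None

def check_all_per_node(per_node_timeout, check_times):
    """With a per-node timeout, only a node whose own check exceeds the
    limit is blamed."""
    for i, t in enumerate(check_times):
        if t > per_node_timeout:
            return i
    return None

# node 0 is slow but succeeds in 9s; node 1 replies in 2s
times = [9.0, 2.0]
print(check_all_shared(10.0, times))    # blames node 1, which is healthy
print(check_all_per_node(10.0, times))  # no node times out
```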
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
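: A parsing sketch in Python for entries of this shape (the grammar here is guessed from the &amp;quot;postgres:primary(0.3)&amp;quot; example above, not the authoritative pgpool-II syntax):&lt;br /&gt;

```python
# Hypothetical sketch: parse "dbname:target(weight)" entries, where the
# weight in parentheses is optional and defaults to 1.0.
import re

ENTRY_RE = re.compile(r"^([^:]+):(primary|standby)(?:\(([\d.]+)\))?$")

def parse_entry(entry):
    m = ENTRY_RE.match(entry.strip())
    if not m:
        raise ValueError(f"bad entry: {entry!r}")
    dbname, target, weight = m.groups()
    return dbname, target, float(weight) if weight else 1.0

print(parse_entry("postgres:primary(0.3)"))  # ('postgres', 'primary', 0.3)
```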
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is an encryption algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3673</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3673"/>
		<updated>2022-08-28T03:13:17Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use a custom script to communicate with trusted servers ===&lt;br /&gt;
: A hard-coded timeout for ping (3 seconds) is not always appropriate.&lt;br /&gt;
: This would also allow using an alternative command more suitable than ping in certain system configurations.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also we need to store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try with a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
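: A sketch of that check in Python (assumptions: the text would come from running pg_controldata against the target data directory; here we parse a captured sample, and the function names are illustrative):&lt;br /&gt;

```python
# Hypothetical sketch: decide whether pg_rewind is usable by reading the
# "Database cluster state" line that pg_controldata reports.

SAMPLE = """\
pg_control version number:            1300
Database cluster state:               shut down
pg_control last modified:             Mon Aug 29 10:00:00 2022
"""

def cluster_state(controldata_output):
    for line in controldata_output.splitlines():
        if line.startswith("Database cluster state:"):
            return line.split(":", 1)[1].strip()
    return None

def rewind_safe(controldata_output):
    # pg_rewind requires the target to have been cleanly shut down
    return cluster_state(controldata_output) in (
        "shut down", "shut down in recovery")

print(rewind_safe(SAMPLE))  # True
```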
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; only once. Thus, splitting a multi statement query into single statements, as psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checks if the SELECT uses FOR UPDATE/FOR SHARE and if not, enable load balance (or only sends to the master node if load balance is disabled).&lt;br /&gt;
: Note that some applications, including psql, use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 address for PostgreSQL backend server and bind address of pgpool-II itself.&lt;br /&gt;
: However, watchdog process only binds to IPv4 and UNIX domain socket.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When virtual IP interface is dropped abnormally by manual ifconfig etc., there are no one holding VIP, and clients aren&#039;t able to connect pgpool-II. Watchdog of active pgpool should monitor the interface or VIP, and handle its down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the order of SELECTs and DMLs.&lt;br /&gt;
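: The ordering rule can be sketched in Python (a minimal model, not pgpool-II&#039;s internals: a cache entry survives commit only if its SELECT ran after the transaction&#039;s last DML on that table):&lt;br /&gt;

```python
# Hypothetical sketch: track statement order in a transaction so a SELECT
# cached *after* the last DML on the same table need not be invalidated.

def surviving_cache_entries(statements):
    """statements: list of (kind, table) in execution order,
    where kind is 'SELECT' or 'DML'. Returns the tables whose cached
    SELECT already saw the transaction's final DML."""
    fresh = set()
    for kind, table in statements:
        if kind == "DML":
            fresh.discard(table)   # any cache taken before this DML is stale
        else:  # SELECT
            fresh.add(table)       # this cache entry is fresh so far
    return fresh

txn = [("DML", "t1"), ("SELECT", "t1")]   # INSERT then SELECT: cache is fresh
print(surviving_cache_entries(txn))
```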
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different sources. They should be defined as constants in a single header together.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== [WIP] Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but are sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently a &amp;quot;write query&amp;quot; means anything other than a SELECT, which is overkill for some classes of queries: for example, since SET commands are sent to both the primary and standby nodes, a subsequent SELECT could retrieve the latest data from any DB node.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries for it to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between the frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently a standby node with large replication lag is simply excluded from load balancing. However, if after online recovery the recovered standby node cannot connect to the primary node for some reason, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: We can already get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and miscellaneous info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten when it was updated. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be a way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case is: when the quorum is lost, an admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker process used SSL when requested by the backend.&lt;br /&gt;
: This has been implemented in 2.3.2 (released on 2010/2/7), when SSL was introduced. We usually list newer entries first, but it was discovered only recently that this item had been implemented, so we decided to list it here.&lt;br /&gt;
: Note that the streaming replication delay check worker process was introduced in 3.0 (released on 2010/9/10). SSL was already supported in the streaming replication delay check worker.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
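: For example, in pgpool.conf (a sketch; the exact syntax should be checked against the 4.4 documentation):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.0.1,192.168.0.2&#039;&lt;br /&gt;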
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing we need to care about is that it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
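: A minimal sketch of the 4.4-style setting (the parameter name delay_threshold_by_time and its units should be verified against the 4.4 documentation):&lt;br /&gt;
 delay_threshold_by_time = 10&lt;br /&gt;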
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should keep information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change the base of these relative paths to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has it), and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held in PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;auto_failback&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
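: For example, a pgpool_status file marking the second of three backends as down might look like this (a sketch; one status per backend, in node order):&lt;br /&gt;
 up&lt;br /&gt;
 down&lt;br /&gt;
 up&lt;br /&gt;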
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purpose. (done and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and has severe limitations, including no automatic cache invalidation. It has already been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is, such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about the &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix; they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time and a timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
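: A minimal sketch, assuming the 4.0 parameters black_query_pattern_list/white_query_pattern_list (names and regex syntax should be verified against the 4.0 documentation); queries matching the black pattern are not load balanced:&lt;br /&gt;
 black_query_pattern_list = &#039;SELECT \* FROM t1&#039;&lt;br /&gt;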
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
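: For example, using the syntax mentioned above (a sketch):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;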
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3609</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3609"/>
		<updated>2022-03-05T12:30:14Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Support unix_socket_directories and related parameters ===&lt;br /&gt;
: unix_socket_group and unix_socket_permissions&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only memcached but also to store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: Problem is, how PostgreSQL backend handles the multi statement queries. For example, when client sends BEGIN;SELECT 1;END, backend returns &amp;quot;Command Complete&amp;quot; respectively and &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi statement queries to non multi statement queries like what psql is doing will not work.&lt;br /&gt;
: At the developer unconference held in PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi statement queries properly, then it should have an option to prohibit multi statement queries (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checks if the SELECT uses FOR UPDATE/FOR SHARE and if not, enable load balance (or only sends to the master node if load balance is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients are not able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP, and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup time, it is not a big problem. However, reloading pgpool.conf will leak memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced but are sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently a &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently no load balancing is performed to a standby node with a large replication lag. However, if for some reason the standby node recovered by online recovery cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Now we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined for each database. However, some relcache entries do not depend on databases: for example, shared catalogs and miscellaneous info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookups.&lt;br /&gt;
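: As a rough sketch of the idea (in Python, with hypothetical names; pgpool itself is written in C), a hash table replaces the linear scan:&lt;br /&gt;

```python
# Sketch: the relation cache today is an array searched linearly.
# A dict (hash table) gives O(1) average lookup instead of O(n).
# Names here are illustrative, not pgpool's actual C identifiers.

class RelCacheArray:
    def __init__(self):
        self.entries = []          # list of (relname, info) pairs

    def lookup(self, relname):
        for name, info in self.entries:   # O(n) scan
            if name == relname:
                return info
        return None

class RelCacheHash:
    def __init__(self):
        self.entries = {}          # relname mapped to info

    def lookup(self, relname):
        return self.entries.get(relname)  # O(1) average

array_cache = RelCacheArray()
hash_cache = RelCacheHash()
for i in range(1000):
    array_cache.entries.append(("rel%d" % i, i))
    hash_cache.entries["rel%d" % i] = i

assert array_cache.lookup("rel999") == hash_cache.lookup("rel999") == 999
```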
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed, e.g. to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend server and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing to keep in mind is that it&#039;s only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default value to use an absolute path.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how much the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, and users can mark a node as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: any request from this IP, do not load balance.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor of the 2nd and 3rd PostgreSQL slaves to 0 and it behaves OK, because reads and writes go only to the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes (done; appeared in pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsolete since the on-memory query cache was implemented. We should remove it (this is already in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of health checking and retries while creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us little gain compared with the work needed to maintain/enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check on a per-backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of the node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all backend statuses. Hence, if it takes a long time to successfully check one backend and the timeout then occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
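: The misattribution described in this item can be sketched as follows (Python; timings and function names are purely illustrative, not pgpool code):&lt;br /&gt;

```python
# Sketch: with a single health_check_timeout covering ALL backends,
# a slow (but healthy) first backend eats the budget and the next
# backend is wrongly declared failed. Durations are illustrative.

def check_with_shared_timeout(durations, total_timeout):
    """Return indexes of backends flagged as failed under one shared budget."""
    failed = []
    elapsed = 0.0
    for i, d in enumerate(durations):
        elapsed += d
        if elapsed > total_timeout:
            failed.append(i)       # timeout fires while checking node i
    return failed

def check_with_per_node_timeout(durations, node_timeout):
    """Per-backend timeout: only genuinely slow nodes are flagged."""
    return [i for i, d in enumerate(durations) if d > node_timeout]

# Node 0 is slow but succeeds in 9s; node 1 answers in 2s.
durations = [9.0, 2.0]
# Shared 10s budget: node 1 is blamed even though it is healthy.
assert check_with_shared_timeout(durations, 10.0) == [1]
# A 10s per-node timeout correctly flags nothing.
assert check_with_per_node_timeout(durations, 10.0) == []
```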
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3608</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3608"/>
		<updated>2022-03-05T12:24:44Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also need to store the oid map info on it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try with a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
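: A minimal sketch of that check (Python for illustration; the sample line mimics real pg_controldata output, but the parsing details are an assumption):&lt;br /&gt;

```python
# Sketch: decide whether pg_rewind is safe by checking the cluster
# state reported by pg_controldata. In practice you would run
#   pg_controldata /path/to/datadir
# and parse its stdout; the sample string below is illustrative.

def cluster_was_shut_down_cleanly(controldata_output):
    """Return True if the 'Database cluster state' line says 'shut down'."""
    for line in controldata_output.splitlines():
        if line.startswith("Database cluster state:"):
            state = line.split(":", 1)[1].strip()
            return state == "shut down"
    return False

sample = "Database cluster state:               shut down"
assert cluster_was_shut_down_cleanly(sample) is True
assert cluster_was_shut_down_cleanly(
    "Database cluster state:               in production") is False
```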
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding which differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
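: A small Python demonstration of why Shift_JIS is &amp;quot;unsafe&amp;quot; for a byte-oriented parser (the second byte of some characters is 0x5C, an ASCII backslash):&lt;br /&gt;

```python
# Sketch: the second byte of some Shift_JIS multibyte characters is
# 0x5C, which is ASCII backslash, so a naive byte-oriented SQL parser
# misreads it as an escape character.

kanji = "\u8868"                      # the character 'table/front' (hyou)
encoded = kanji.encode("shift_jis")
assert encoded == b"\x95\x5c"         # second byte is a backslash
assert b"\\" in encoded

# psql's trick (which pgpool could copy): replace the second byte of
# each multibyte character with a safe placeholder before parsing,
# then restore it afterwards.
```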
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split multi-statement queries into single statements, like psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested, at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
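: For illustration, a naive split (Python; not pgpool code) shows both what recognition would require and why simple splitting is insufficient; a real implementation must use the SQL lexer:&lt;br /&gt;

```python
# Sketch: a naive split of a multi-statement simple-protocol query,
# ignoring quoting and comments, just to show what pgpool would have
# to recognize. A real implementation must use the SQL lexer, since
# a semicolon may appear inside string literals.

def naive_split(query):
    parts = [p.strip() for p in query.split(";")]
    return [p for p in parts if p]

assert naive_split("BEGIN;SELECT 1;END") == ["BEGIN", "SELECT 1", "END"]
# The naive approach breaks on semicolons inside literals:
assert naive_split("SELECT 'a;b'") == ["SELECT 'a", "b'"]
```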
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
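: A rough sketch of such a check (Python regex for illustration only; pgpool would inspect the parse tree rather than the raw query text):&lt;br /&gt;

```python
import re

# Sketch: decide whether a DECLARE ... CURSOR query carries a row
# locking clause, in which case it must go to the primary/master node.
# Illustrative only; pgpool actually walks the SQL parse tree.

LOCKING = re.compile(r"\bFOR\s+(UPDATE|NO\s+KEY\s+UPDATE|SHARE|KEY\s+SHARE)\b",
                     re.IGNORECASE)

def can_load_balance_cursor(query):
    """True if the query has no FOR UPDATE / FOR SHARE clause."""
    return LOCKING.search(query) is None

assert can_load_balance_cursor(
    "DECLARE c CURSOR FOR SELECT a FROM t") is True
assert can_load_balance_cursor(
    "DECLARE c CURSOR FOR SELECT a FROM t FOR UPDATE") is False
```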
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend server and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process only binds to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients are not able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill in some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
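: A sketch of the needed bookkeeping (Python; the data layout and names are hypothetical, not pgpool&#039;s actual cache implementation):&lt;br /&gt;

```python
# Sketch: track statement order inside a transaction so that a SELECT
# issued AFTER the last DML on a table may be cached safely at COMMIT,
# while a SELECT issued before a DML must be invalidated.

def cacheable_selects(statements):
    """statements: list of (kind, table) where kind is 'select' or 'dml'.
    Return indexes of SELECTs whose results are still valid at COMMIT."""
    last_dml = {}                      # table mapped to position of last DML
    selects = {}                       # statement index mapped to table
    for pos, (kind, table) in enumerate(statements):
        if kind == "dml":
            last_dml[table] = pos
        else:
            selects[pos] = table
    return [pos for pos, table in selects.items()
            if pos > last_dml.get(table, -1)]

tx = [("dml", "t1"), ("select", "t1"), ("select", "t2")]
# The SELECT at index 1 ran after the INSERT, so its result reflects
# the new row and can be cached; so can the untouched-table SELECT.
assert cacheable_selects(tx) == [1, 2]
```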
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, it is not a big problem. However, reloading pgpool.conf will leak memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs, which is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently queries are not load balanced to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. a table modified by functions, triggers or rules. There should be a way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command would be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing we need to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
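: For example (illustrative query; replay_lag is the column mentioned above), the time-based lag can be checked on the primary with:&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;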
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which can specify per-backend and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this base to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice to have a parameter that allows sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server or not. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose and can be used to support this.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of the health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file has been changed to a plain ASCII file, so users can mark nodes as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd, ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
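: For example, illustrative settings in pgpool.conf using these two parameters could look like:&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;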
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for auditing purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpools.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This may cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it must include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some DB nodes are down, because of the health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can it be used with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is no longer used and should be removed, 2) the error codes returned from the commands are completely useless, and 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 changes the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if checking one backend takes a long time and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3607</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3607"/>
		<updated>2022-03-05T12:19:00Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Support IPv6 network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached, but also need to store the oid map info in it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , the attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We also need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if pgpool clients could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; for each statement, while &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, like psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, may use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend servers and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the watchdog process binds only to IPv4 and UNIX domain sockets.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by a manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
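: By contrast, if the SELECT ran before the INSERT in the same transaction, the cached result would be stale, so the invalidation at commit would still be needed:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 COMMIT;&lt;br /&gt;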
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for Cert authentication between the frontend and Pgpool-II, but Cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on a database: for example, shared catalogs and miscellaneous info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It would be desirable to be able to specify that a relcache entry does not depend on a database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also has to list which config variables belong to it. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added to it. We should probably keep only &amp;quot;pgpool show all&amp;quot;, since it does not require maintaining pool_process_reporting.c. To keep backward compatibility, a request for &amp;quot;show pool_status&amp;quot; could be handled by calling &amp;quot;pgpool show all&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. a table modified by functions, triggers or rules. There should be a way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command would be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and the UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to be careful about: it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should keep information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change them to be relative to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has it), and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped off from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time when some of the DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, and users can mark a node as down using an ordinary text editor.&lt;br /&gt;
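: As a hedged sketch (the exact on-disk format should be verified against your installation), editing the ASCII pgpool_status file could look like this, assuming one status word per backend line:&lt;br /&gt;

```shell
# Hedged sketch: since 3.4 pgpool_status is plain ASCII; the format is
# assumed here to be one status word (up/down) per backend line.
# Mark backend 1 as down before the very first startup.
STATUS_FILE=$(mktemp)                 # stand-in for logdir/pgpool_status
printf up\\ndown\\nup\\n > $STATUS_FILE
cat $STATUS_FILE
```
&lt;br /&gt;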
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II-3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: PS. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because reads and writes go only to the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
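: For illustration, a hypothetical pgpool.conf fragment using these parameters (database and application names are made up; the weight-ratio form in parentheses is the extension described below, available from 4.0):&lt;br /&gt;

```
# hypothetical values for illustration only
database_redirect_preference_list = 'postgres:primary(0.3),myapp_db:standby'
app_name_redirect_preference_list = 'psql:primary'
```
&lt;br /&gt;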
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (done, and will appear in pgpool-II-3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and has severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This might cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time when some of the DB nodes are down, because of health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain when upgrading to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it, so I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed, 2) the error codes returned from the commands are completely useless, 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. We need to enhance this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and instead require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc.? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all the backend statuses. Hence, if it takes a long time to successfully check one backend and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase to encrypt the private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address or host name, or &#039;*&#039;, is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3606</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3606"/>
		<updated>2022-03-05T12:17:18Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we must not only use memcached but also store the oid map info on it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
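: A minimal shell sketch of this check, using a captured sample line in place of live output (on a real node, run pg_controldata against the actual data directory):&lt;br /&gt;

```shell
# Hedged sketch: pg_controldata reports a line like
#   Database cluster state: shut down
# for a cleanly stopped cluster; other states suggest pg_rewind may not apply.
# SAMPLE stands in for real output of: pg_controldata $PGDATA
SAMPLE=Database\ cluster\ state:\ shut\ down
STATE=$(echo $SAMPLE | sed s/.*state:\ //)
echo $STATE
```
&lt;br /&gt;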
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this causes various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, like psql does, will not work.&lt;br /&gt;
: Simon Riggs suggested at the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo that if Pgpool-II cannot process a multi-statement query properly, it should have an option to prohibit multi-statement queries (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and the UNIX domain socket. The same applies to watchdog.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients are unable to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
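: For reference, PostgreSQL&#039;s existing form of this setting in postgresql.conf, which pgpool could mirror (paths are illustrative):&lt;br /&gt;

```
unix_socket_directories = '/tmp, /var/run/postgresql'
```
&lt;br /&gt;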
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node instead. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any of the DB nodes would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but it is not yet supported between Pgpool-II and the backend.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with a large replication lag. But if, for some reason related to online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Now we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also requires listing which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added to it. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, when &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called internally.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same applies to the streaming replication delay check worker process. It would be nice if both processes used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be a way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is possible to use an IPv6 address for the PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and the UNIX domain socket.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose we can use pg_stat_replication.replay_lag. One thing to be careful about: it is only available in PostgreSQL 10 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should keep information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
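: For illustration, a hypothetical sketch of such an include, mirroring the style of PostgreSQL&#039;s include directive (the actual 4.3 syntax may differ):&lt;br /&gt;
 # in pgpool.conf: pull host-specific settings from a separate file&lt;br /&gt;
 include = &#039;host_specific.conf&#039;&lt;br /&gt;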
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this so that relative paths are resolved against DEFAULT_CONFIGDIR, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalog to obtain meta information. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up to date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even after the node has resumed replication with the master and is up to date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement. (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
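: For example, the following query on the standby (pg_stat_wal_receiver is available since PostgreSQL 9.6) shows whether the WAL receiver is streaming and which primary it is connected to:&lt;br /&gt;
 SELECT status, conninfo FROM pg_stat_wal_receiver;&lt;br /&gt;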
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will result in fewer queries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can mark a node as down using an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket, it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert has not yet propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
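: For illustration, a pgpool.conf sketch of these two parameters (database, application, and node names here are examples only):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary,mydb:standby&#039;&lt;br /&gt;
 app_name_redirect_preference_list = &#039;psql:primary&#039;&lt;br /&gt;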
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a pretty good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool cannot recognize that the node is detached. The standby pgpool should get node information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB goes down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II takes a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also, it is a pain to upgrade to newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us small gain compared with the work needed to maintain and enhance it, so I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. This needs to be improved.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
: Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix; they require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1, so users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
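: For illustration, a sketch using the hypothetical parameter names from the message above (the names actually chosen may differ):&lt;br /&gt;
 backend_healthcheck_username0 = &#039;healthcheck_user&#039;&lt;br /&gt;
 backend_healthcheck_database0 = &#039;mydb&#039;&lt;br /&gt;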
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to create a separate process responsible for health checking.&lt;br /&gt;
: This has already been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all backend statuses. Hence, if it takes a long time to check one backend and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
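: For example, with the ratio syntax above, about 30% of read queries for the postgres database are sent to the primary and the rest to the standbys:&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;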
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;set application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a single host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
: This has been implemented in 4.4.&lt;br /&gt;
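: For example (illustrative addresses):&lt;br /&gt;
 listen_addresses = &#039;localhost,192.168.0.2&#039;&lt;br /&gt;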
: commit:&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fd0efceae011c8d2c2f7c2b26dc0a738f055972e&lt;br /&gt;
: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=e661dd1fd561792500ec0a7a4fc05c33891c2dec&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3592</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3592"/>
		<updated>2022-02-12T01:04:42Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/en/html/ Pgpool-II 3.6]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/ja/html/ Pgpool-II 3.6]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf PostgreSQL cluster operation and load balancing techniques] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and whats been happening in Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf Introducing Pgpool-II 4.0: greatly enhanced management of PostgreSQL cluster environments!] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf Pgpool-II at its 15th anniversary: past, present, and future] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf Introducing Pgpool-II 3.7: improved reliability and PostgreSQL 10 support] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (Japanese, PDF) (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part3.html What&#039;s new in Pgpool-II 4.3? (part3)] By Tatsuo Ishii (2022/2/11)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/02/whats-new-in-pgpool-ii-43-part2.html What&#039;s new in Pgpool-II 4.3? (part2)] By Tatsuo Ishii (2022/2/6)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]](English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3591</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3591"/>
		<updated>2022-02-12T01:02:48Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/en/html/ Pgpool-II 3.6]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/ja/html/ Pgpool-II 3.6]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf Cluster operation and load balancing techniques with PostgreSQL] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and what&#039;s been happening in the Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf Greatly enhanced PostgreSQL cluster management: introducing Pgpool-II 4.0] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf The past, present, and future of Pgpool-II, 15 years after its birth] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf Introducing Pgpool-II 3.7: improved reliability and PostgreSQL 10 support] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (Japanese, PDF) (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3?] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting a specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]](English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3590</id>
		<title>Documentation</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=Documentation&amp;diff=3590"/>
		<updated>2022-02-12T01:01:57Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Blog posts by Pgpool-II developers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Official documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Pgpool-II manual (English)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/en/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/en/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/en/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/en/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/en/html/ Pgpool-II 3.6]&lt;br /&gt;
** Pgpool-II manual (Japanese)&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/ja/html/ Pgpool-II 4.3] (latest)&lt;br /&gt;
*** [https://www.pgpool.net/docs/42/ja/html/ Pgpool-II 4.2]&lt;br /&gt;
*** [https://www.pgpool.net/docs/41/ja/html/ Pgpool-II 4.1]&lt;br /&gt;
*** [https://www.pgpool.net/docs/40/ja/html/ Pgpool-II 4.0]&lt;br /&gt;
*** [https://www.pgpool.net/docs/37/ja/html/ Pgpool-II 3.7]&lt;br /&gt;
*** [https://www.pgpool.net/docs/36/ja/html/ Pgpool-II 3.6]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/pgpool-zh_cn.html pgpool-II manual] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-en.html pgpool-II tutorial] (English)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-ja.html pgpool-II tutorial] (Japanese)--&amp;gt;&lt;br /&gt;
&amp;lt;!-- ** [https://www.pgpool.net/docs/latest/tutorial-zh_cn.html pgpool-II tutorial] (Simplified Chinese)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;pgpoolAdmin&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_en.html pgpoolAdmin manual] (English)&lt;br /&gt;
** [https://www.pgpool.net/docs/pgpoolAdmin/index_ja.html pgpoolAdmin manual] (Japanese)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Pgpool-II&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
** Aurora Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aurora.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aurora.html Japanese]&lt;br /&gt;
** Kubernetes Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-kubernetes.html English] [https://www.pgpool.net/docs/latest/ja/html/example-kubernetes.html Japanese]&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_on_k8s/blob/master/docs/index.md English]&lt;br /&gt;
** Pgpool-II Exporter&lt;br /&gt;
*** [https://github.com/pgpool/pgpool2_exporter English]&lt;br /&gt;
&lt;br /&gt;
== Developer&#039;s documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.pgcon.org/events/pgcon_2020/sessions/session/45/slides/44/HA_Cluster_on_K8s.pdf PostgreSQL HA Cluster with Query Load Balancing on Kubernetes] at [https://www.pgcon.org/2020/ PGCon 2020 Ottawa] (English, PDF) (2020/05/27)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/dbtech2019-sraoss-postgresql-cluster.pdf Cluster operation and load balancing techniques with PostgreSQL] at [https://www.db-tech-showcase.com/dbts/tokyo db tech showcase Tokyo 2019] (Japanese, PDF) (2019/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/PGConf.ASIA.Bali.2019-PENG.pdf Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1 -] at [https://2019.pgconf.asia/ PGConf.ASIA 2019 Bali] (English, PDF) (2019/09/10)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2019/Introducing%20PostgreSQL%20SQL%20Parser.pdf Introducing PostgreSQL SQL Parser - Use of PostgreSQL Parser in other Applications -] at [https://www.pgcon.org/2019/ PGCon 2019 Ottawa] (English, PDF) (2019/05/31)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-peng.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 2] at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgconf.asia-JA-20181211-day1-B4-ishii.pdf Celebrating its 15th Anniversary: Pgpool-II Past, Present and Future - Part 1]  at [https://www.pgconf.asia/EN/2018/day1/#B4 PGConf.ASIA 2018] (English, PDF) (2018/12/11)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/PostgreSQL-HA-with-Pgpool-II-20180925.pdf PostgreSQL HA with Pgpool-II and what&#039;s been happening in the Pgpool world lately...] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (English, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/Pgpool-II-4-20180925.pdf Greatly enhanced PostgreSQL cluster management: introducing Pgpool-II 4.0] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2018/pgpool-past-now-and-future-20180925.pdf The past, present, and future of Pgpool-II, 15 years after its birth] at [https://www.sraoss.co.jp/event_seminar/2018/0925.php Pgpool-II Day - Pgpool-II 4.0 Anniversary -] (Japanese, PDF) (2018/9/25)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2017/Pgpool-II-3.7.pdf Introducing Pgpool-II 3.7: improved reliability and PostgreSQL 10 support] at [https://www.pgconf.asia/JA/2017/day-2/#B5 PGConf.ASIA 2017] (Japanese, PDF) (2017/12/06)&lt;br /&gt;
* [https://www.pgpool.net/download.php?f=Pgpool-II-history.pdf Pgpool-II: Past, Present and Future] at [https://www.pgconf.asia/EN/2016/day-2/#B3 PGConf.ASIA 2016] (Japanese, PDF) (2016/12/07)&lt;br /&gt;
* [https://pgpool.net/mediawiki/images/2016-02-Moscow-pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: PostgreSQL clusters using streaming replication and pgpool-II&amp;quot;] at [https://pgconf.ru/en/2016/89695 &amp;quot;PGConf.Russia 2016&amp;quot;] (English, PDF) (2016/02/03)&lt;br /&gt;
* [https://www.sraoss.co.jp/event_seminar/2015/pgpool-II-3.5.pdf &amp;quot;How to manage a herd of elephants: Introducing new features of pgpool-II 3.5&amp;quot;] at [https://www.eventdove.com/event/106042 &amp;quot;PostgreSQL Conference China 2015&amp;quot;] (English, PDF) (2015/11/21)&lt;br /&gt;
* [https://pgpool.net/mediawiki/index.php?title=pgpool-II_3.5_features&amp;amp;redirect=no pgpool-II 3.5 new features] (English, Wiki)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II-3.5.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II 3.5 How it will look like?&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2015.pdf 6th Postgres Cluster Hackers Summit, pgCon 2015 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2014.pdf 5th Postgres Cluster Hackers Summit, pgCon 2014 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2013.pdf 4th Postgres Cluster Hackers Summit, pgCon 2013 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
* [https://www.pgpool.net/pgpool-web/contrib_docs/ClusterMeeting2012.pdf 3rd Postgres Cluster Hackers Summit, pgCon 2012 &amp;quot;pgpool-II Development Status Updates&amp;quot;] (English, PDF)&lt;br /&gt;
&lt;br /&gt;
== Blog posts by Pgpool-II developers ==&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2022/01/whats-new-in-pgpool-ii-43.html What&#039;s new in Pgpool-II 4.3] By Tatsuo Ishii (2022/1/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/enable-shared-relcache.html  Pgpool-II Configuration Parameters - enable_shared_relcache] By Bo Peng (2021/9/22)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/09/reserved-connections.html  Pgpool-II Configuration Parameters - reserved_connections] By Bo Peng (2021/9/21)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/08/failover-triggered-by-postgresql.html Failover triggered by PostgreSQL shutdown] By Tatsuo Ishii (2021/8/24)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/deploying-pgpool2-exporter-with-docker.html Deploying Pgpool-II Exporter with Docker] By Bo Peng (2021/7/26)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/07/postgres-disaster-recovery-on-k8s-zalando.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 2)] By Bo Peng (2021/7/04)&lt;br /&gt;
&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/06/promoting-specied-node-in-pgpool-ii.html Promoting specified node in Pgpool-II] By Tatsuo Ishii (2021/6/18)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/05/postgres-disaster-recovery-on-k8s.html Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes (Part 1)] By Bo Peng (2021/5/31)&lt;br /&gt;
&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/04/pgpool-logging-debugging.html  Pgpool-II Logging and Debugging] By Bo Peng (2021/4/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/04/visibility-with-query-cache.html Visibility with query cache] By Tatsuo Ishii (2021/4/19)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/03/logging-pgpool-on-k8s.html Logging of Pgpool-II on Kubernetes] By Bo Peng (2021/3/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/02/clustering-modes-in-pgpool.html  Pgpool-II&#039;s Clustering Modes] By Bo Peng (2021/2/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2021/02/what-pool-means-in-pgpool-ii.html What &amp;quot;pool&amp;quot; means in Pgpool-II?] By Tatsuo Ishii (2021/2/6)&lt;br /&gt;
* [https://b-peng.blogspot.com/2021/01/statistics-in-pgpool.html Various Ways to Retrieve Pgpool-II&#039;s Statistics] By Bo Peng (2021/1/31)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/12/load-balancing-in-pgpool.html Query Load Balancing in Pgpool-II] By Bo Peng (2020/12/29)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/12/timeouts-in-pgpool-ii-connections.html Timeouts in Pgpool-II connections] By Tatsuo Ishii (2020/12/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/11/pgpool2-on-k8s.html  Deploy Pgpool-II on Kubernetes to Achieve Query Load Balancing and Monitoring] By Bo Peng (2020/11/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/10/pgpool2-exporter.html Monitoring PostgreSQL Cluster via Pgpool-II with Prometheus] By Bo Peng (2020/10/31)&lt;br /&gt;
* [https://www.highgo.ca/2020/10/08/configuring-pgpool-ii-watchdog-its-going-to-be-a-lot-easier/ Configuring Pgpool-II watchdog: It’s going to be a lot easier] By Muhammad Usama (2020/10/08)&lt;br /&gt;
* [https://www.highgo.ca/2020/09/30/pgpool-ii-4-2-features/ pgpool II 4.2 features] By Ahsan Hadi (2020/09/30)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/09/fixing-language-in-pgpool-ii-42.html Fixing language in Pgpool-II 4.2] By Tatsuo Ishii (2020/09/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/09/how-to-configure-scram-and-md5.html How to Configure SCRAM and MD5 Authentication in Pgpool-II] By Bo Peng (2020/09/28)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/08/new-statistics-data-in-pgpool-ii.html New statistics data in Pgpool-II] By Tatsuo Ishii (2020/08/30)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/08/authentication-in-pgpool.html  Authentication in Pgpool-II] By Bo Peng (2020/08/27)&lt;br /&gt;
* [https://b-peng.blogspot.com/2020/07/connection-pooling-in-pgpool.html  Connection Pooling in Pgpool-II] By Bo Peng (2020/07/31)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2020/07/snapshot-isolation-mode.html Snapshot Isolation Mode] By Tatsuo Ishii (2020/07/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/06/25/authenticating-pgpool-ii-with-ldap/ Authenticating pgpool II with LDAP] By Ahsan Hadi (2020/06/25)&lt;br /&gt;
* [https://www.highgo.ca/2020/02/25/setting-up-ssl-certificate-authentication-with-pgpool-ii/ Setting up SSL certificate authentication with Pgpool-II]  By Muhammad Usama (2020/02/25)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/07/when-primary-server-is-far-away-from.html When primary server is far away from standby server] By Tatsuo Ishii (2019/07/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/07/19/pgpool-ii-4-1-taking-the-bull-by-its-horn/ Pgpool II 4.1 taking the bull by its horn] By Ahsan Hadi (2019/07/19)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/04/statement-level-load-balancing.html Statement level load balancing] By Tatsuo Ishii (2019/04/01)&lt;br /&gt;
* [https://pgsqlpgpool.blogspot.com/2019/03/shared-relation-cache.html Shared Relation Cache] By Tatsuo Ishii (2019/03/24)&lt;br /&gt;
* [https://www.highgo.ca/2019/09/06/can-you-gain-performance-with-pgpool-ii-as-a-load-balancer/ Can you gain performance with Pgpool-II as a load balancer?] By Muhammad Usama (2019/04/02)&lt;br /&gt;
* &#039;&#039;&#039;old blog posts&#039;&#039;&#039;&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/settinng-up-everything-at-one-time.html Setting up everything at one time]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2013/06/pgpool-ii-33-alpha-1-is-out.html pgpool-II 3.3 alpha1 is out!]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/10/pgpool-ii-now.html pgpool-II + now()]&lt;br /&gt;
** [https://pgsqlpgpool.blogspot.jp/2012/08/pgpool-ii-talk-at-postgresql-conference.html Pgpool-II talk at PostgreSQL Conference Europe 2012]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== User contributed documentation ==&lt;br /&gt;
* &#039;&#039;&#039;Documentation&#039;&#039;&#039;&lt;br /&gt;
** [https://www.pgpool.net/pgpool-web/contrib_docs/pgpool-II_for_beginners.pdf Gerd Koenig&#039;s &amp;quot;pgpool-II for beginners&amp;quot;] (English)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/1 What is pgpool-II] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool/2 Creating a replication system using pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2010/20100702-03char10.pdf Making master/slave systems work better with pgpool-II] (English, PDF)&lt;br /&gt;
** [[Relationship_between_max_pool,_num_init_children,_and_max_connections|Relationship between max_pool, num_init_children, and max_connections]](English, 2012/8/25)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20130212_pgpool_seminar_sraoss.pdf New features of pgpool-II, multifunctional middleware for PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://www.sraoss.co.jp/event_seminar/2013/20131115_dbshowtech.pdf Construct scale out configuration with PostgreSQL] (Japanese, PDF)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-pgpool_setup/1 Let&#039;s try pgpool-II easy setup function] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-3.3-watchdog/1 About pgpool-II 3.3 watchdog] (Japanese)&lt;br /&gt;
** [https://lets.postgresql.jp/documents/technical/pgpool-II-tcp-tuning/1 Improvements of connection performance in pgpool-II] (Japanese)&lt;br /&gt;
** [https://www.postgresql.jp/events/jpug-pgcon2013-files/C1_jpugpgcon2013_slide Construct high-availability, high-performance system with pgpool-II] (Japanese, PDF)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.6&#039;&#039;&#039;&lt;br /&gt;
** Basic Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-basic.html English] [https://www.pgpool.net/docs/latest/ja/html/example-basic.html Japanese]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/docs/latest/en/html/example-watchdog.html English] [https://www.pgpool.net/docs/latest/ja/html/example-watchdog.html Japanese]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/docs/latest/en/html/example-cluster.html English] [https://www.pgpool.net/docs/latest/ja/html/example-cluster.html Japanese]&lt;br /&gt;
** AWS Configuration Example&lt;br /&gt;
*** [https://www.pgpool.net/docs/latest/en/html/example-aws.html English] [https://www.pgpool.net/docs/latest/ja/html/example-aws.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of Streaming Replication with pgpool-II (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Simple Streaming replication setting with pgpool-II&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index.html English (2012/01/31) ]  [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting/index-ja.html Japanese (2012/6/1) ]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting_3.0/index-ja.html Japanese]&lt;br /&gt;
** multiple server version&lt;br /&gt;
*** For pgpool-II 3.3 and PostgreSQL 9.3: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.3/index-ja.html Japanese (2014/04/07)]&lt;br /&gt;
*** For pgpool-II 3.1 and PostgreSQL 9.1: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English (2012/01/31)] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese (2012/6/1)]&lt;br /&gt;
*** For pgpool-II 3.0 and PostgreSQL 9.0: [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2/index.html English] [https://www.pgpool.net/pgpool-web/contrib_docs/simple_sr_setting2_3.0/index-ja.html Japanese]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.3 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html English (2014/04/07)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/ja.html Japanese (2014/04/07)]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Tutorials of pgpool-II 3.2 (Obsolete)&#039;&#039;&#039;&lt;br /&gt;
** On memory query cache: [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/memqcache/ja.html Japanese (2012/07/20)]&lt;br /&gt;
** Watchdog:&lt;br /&gt;
*** Basic: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/en.html English (2012/07/20)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog/ja.html Japanese (2012/07/20)]&lt;br /&gt;
*** master-slave mode: [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html English (2012/10/22)] [https://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/ja.html Japanese (2012/10/15)]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3589</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3589"/>
		<updated>2022-01-31T08:31:41Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Pgpool-II TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only to use memcached but also to store the oid map info on it, so that the info can be shared among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However, it only works when the target node was shut down normally. Can we detect that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata output.&lt;br /&gt;
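: A minimal sketch of such a check (assumes the &amp;quot;Database cluster state&amp;quot; field of pg_controldata&#039;s standard output; a captured sample line stands in for a live cluster):&lt;br /&gt;

```shell
# Decide whether pg_rewind is usable by checking the cluster state.
# Against a live data directory you would capture:  pg_controldata "$PGDATA"
# A sample output line stands in here for a real cluster.
sample='Database cluster state:               shut down'
state=$(printf '%s\n' "$sample" | sed -n 's/^Database cluster state: *//p')
if [ "$state" = "shut down" ]; then
    result="pg_rewind usable (clean shutdown)"
else
    result="needs full base backup (state: $state)"
fi
echo "$result"
```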
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the documentation, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single statements, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a cursor for SELECT. For example, since PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, IPv6 addresses can be used for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets. The same can be said of watchdog.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients cannot connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs touching t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem there. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools such as valgrind emit lots of error messages, which is very annoying. So it would be nice to fix this in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but are sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of replication delay. Currently &amp;quot;write query&amp;quot; means anything other than SELECT, which is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would still retrieve the latest data.&lt;br /&gt;
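: For example (illustrative SQL only; table names are made up):&lt;br /&gt;

```sql
BEGIN;
SET search_path TO app;  -- counted as a write today, but SET is sent to
                         -- every node, so standbys see it too
SELECT * FROM t1;        -- currently forced to the primary by the SET above;
                         -- it could safely be load balanced to a standby
COMMIT;
```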
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, queries touching it are always sent to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
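: A hypothetical pgpool.conf sketch (these parameter names do not exist in Pgpool-II; they only illustrate the proposal):&lt;br /&gt;

```
# Hypothetical parameters -- not implemented; illustration of the proposal only.
# Queries touching these tables always go to the primary:
black_table_list = 'mydb.public.accounts,mydb.public.orders'
# Queries touching these tables may be load balanced to standbys:
white_table_list = 'mydb.public.reports'
```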
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently, load balancing to a standby node with large replication lag is simply disabled. But if, for some reason after online recovery, the recovered standby node cannot connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on a database: for example, shared catalogs and misc info including the PostgreSQL version. For such info, a per-database relcache entry is not only a waste of resources but also less efficient. It would be desirable to be able to specify that a relcache entry does not depend on a database.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate coding ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to that group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added to it. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. For backward compatibility, a &amp;quot;show pool_status&amp;quot; request could simply invoke &amp;quot;pgpool show all&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if both used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. when a table is modified by functions, triggers, or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command could be executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One caveat is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a flush message is received, all of the pending messages should be flushed to the frontend. For this purpose we should keep information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Allow pgpool.conf to include other files, which specify backend- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory pgpool is run from. Resolve these relative paths against DEFAULT_CONFIGDIR instead, and change the default values to absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice to have a parameter that allows sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
: In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that, due to a minor network outage, a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far a slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st, 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge whether the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose and could be used to support this.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will bring less inquiry to the system catalogue (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time when some DB nodes are down, because of health check retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here but for the very first starting up, we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info.&lt;br /&gt;
: As of 3.4, pgpool_status file is changed to a plain ASCII file and users could specify down node by using ordinary text editors.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bug id 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5 behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem was that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a very good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has been obsolete since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node status information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balance in an explicit transaction is only allowed in master-slave mode. It should be allowed in the replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is that such a suite could be a very complex system, because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If a long health check timeout and/or many health check retries are configured, starting up pgpool-II will take a long time when some DB nodes are down, because of the health checks and connection retries to the backends.&lt;br /&gt;
: pgpool_status should help here, but it is not available for the very first startup.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, nor can the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to keep up with newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is no longer used and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query mode. This needs to be enhanced.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: No need to say for this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and require a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [[https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check on a per-backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover, the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of failovers ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time to successfully check one backend and a timeout then occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
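: For illustration, a hypothetical pgpool.conf fragment using this ratio syntax (the database and application names here are made up):&lt;br /&gt;

```
database_redirect_preference_list = 'postgres:primary(0.3)'
app_name_redirect_preference_list = 'psql:primary'
```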
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Using a passphrase-encrypted private key is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3588</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3588"/>
		<updated>2022-01-31T08:31:18Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* Allow to specify replication delay by time */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we not only use memcached but also we need to store the oid map info on it to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , the attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was normally shut down. Can we recognize that?&lt;br /&gt;
: Probably yes by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also, pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
: It would be nice if a pgpool client could use an encoding different from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could adopt a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this causes various problems. It would be nice if pgpool-II could understand each part of a multi statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns a &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi statement query into single statement queries, as psql does, will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st, 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH are sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, can use a CURSOR for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
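: As a rough sketch (the cursor name is from the description above; the table name is made up), what psql does with FETCH_COUNT set looks approximately like this:&lt;br /&gt;

```sql
-- In psql:  \set FETCH_COUNT 100
-- Then "SELECT * FROM t1;" is executed roughly as:
BEGIN;
DECLARE _psql_cursor NO SCROLL CURSOR FOR SELECT * FROM t1;
FETCH FORWARD 100 FROM _psql_cursor;  -- repeated until no rows remain
CLOSE _psql_cursor;
COMMIT;
```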
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use IPv6 addresses for PostgreSQL backend servers and for the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still binds only to IPv4 and UNIX domain sockets. The same can be said of watchdog.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, by manual ifconfig etc., no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its loss.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently a new query cache for table t1 created in a transaction is removed at commit if there are DMLs that touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf will leak memory, which is definitely a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215)&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, the following SELECTs are not load balanced; rather, they are sent to the primary node. This is intended to let SELECTs retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending subsequent SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP address, a host name, or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, always send queries to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between the frontend and Pgpool-II, but cert authentication between Pgpool-II and the backend is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently there is no load balancing to a standby node with large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port, and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having per-database relcache entries is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were forgotten. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels over the wire in plain text. The same can be said of the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL when requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event; e.g. a table modified by functions, triggers or rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookup.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use (pg_last_committed_xact()).timestamp instead of pg_current_wal_lsn(), and pg_last_xact_replay_timestamp() instead of pg_last_wal_replay_lsn(). One thing we need to be careful about is that to use pg_last_committed_xact(), track_commit_timestamp (available in PostgreSQL 9.5 or later) must be enabled. If it is not enabled, pg_last_committed_xact() raises an error. Also, the function returns NULL if no transaction has been committed since the system started.&lt;br /&gt;
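: A minimal sketch of the queries involved, assuming track_commit_timestamp is on (the function names are the ones mentioned above):&lt;br /&gt;

```sql
-- On the primary: timestamp of the most recently committed transaction
-- (raises an error unless track_commit_timestamp = on;
--  NULL if nothing has been committed since startup).
SELECT (pg_last_committed_xact()).timestamp;

-- On the standby: timestamp of the last replayed transaction.
SELECT pg_last_xact_replay_timestamp();

-- A time-based delay would be the difference between the two values.
```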
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag. One thing we need to be careful about is that it is only available in PostgreSQL 10.0 or later.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
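: For reference, a sketch of the query on the primary (pg_stat_replication is available in PostgreSQL 10 or later):&lt;br /&gt;

```sql
-- Per-standby replay lag as an interval, one row per connected standby.
SELECT application_name, replay_lag
FROM pg_stat_replication;
```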
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name and host specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were also set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change them to be resolved against DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has it), and supporting CRLs should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query can take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st, 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose and is used to support this.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This will mean fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If a long health check timeout and/or many health check retries are configured, starting up pgpool-II will take a long time when some DB nodes are down, because of the health checks and connection retries to the backends.&lt;br /&gt;
: pgpool_status should help here, but it is not available for the very first startup.&lt;br /&gt;
: It would be nice if we could tell pgpool-II which nodes are down.&lt;br /&gt;
: As of 3.4, the pgpool_status file is a plain ASCII file, so users can mark down nodes with an ordinary text editor.&lt;br /&gt;
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert is still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware of the fact that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There is also a different request regarding load balancing:&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a very good tool for making code simple and robust. It would be nice if pgpool could use it. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for audit purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node status information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of the parameter, we should add a new parameter for searching for the primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar test suite. The problem is, such a suite could be a very complex system because it should include not only pgpool-II itself but also multiple PostgreSQL instances. Also don&#039;t forget about &amp;quot;watchdog&amp;quot;: such a test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed to 1 second, which is not long enough in a flaky network environment like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: Also it is a pain to upgrade to a newer version of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us small gain compared with the work needed to maintain and enhance it. So I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is not used any more and should be removed; 2) the error codes returned from the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is pretty large compared with simple query. Need to enhance this.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This goes without saying.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. But there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for the watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check per backend base ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1. So users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation is plain HTML, which is a real pain to maintain. Like PostgreSQL, is SGML our direction?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has been already implemented in 3.6. We employ SGML).&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This is being discussed in pgpool-II 3.6 development. (This item has been implemented in 3.6)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II to not send read queries to the primary. However, after a failover, the role of a node could change.&lt;br /&gt;
: To solve the problem, we need a new flag to specify that read queries are always sent to standbys regardless of the failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect to clients when a fail over happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process which is responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking all backend statuses. Hence, if it takes a long time to check one backend successfully and the timeout occurs while checking the next backend, that node is regarded as failed and failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there&#039;s a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches specified regular expression, send the query to either primary or standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow to specify load balance weight ratio for database_redirect_preference_list, and app_name_redirect_preference_list like: &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
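: As an illustration, a pgpool.conf sketch of this feature might look as follows (parameter names come from the entry above; the weight syntax and the value &amp;quot;report:standby&amp;quot; are illustrative, following the pgpool-hackers proposal linked above):&lt;br /&gt;
 # Send 30% of read queries on database &amp;quot;postgres&amp;quot; to the primary,&lt;br /&gt;
 # the rest to the standbys (illustrative ratio):&lt;br /&gt;
 database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;
 # Queries from an application named &amp;quot;report&amp;quot; always go to a standby:&lt;br /&gt;
 app_name_redirect_preference_list = &#039;report:standby&#039;&lt;br /&gt;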
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks this (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this. Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
	<entry>
		<id>https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3587</id>
		<title>TODO</title>
		<link rel="alternate" type="text/html" href="https://pgpool.net/mediawiki/index.php?title=TODO&amp;diff=3587"/>
		<updated>2022-01-31T08:30:32Z</updated>

		<summary type="html">&lt;p&gt;Ishii: /* TODOs already done */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== Pgpool-II TODO list ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to use multiple pgpool-II instances with in-memory query cache enabled ===&lt;br /&gt;
: For this purpose we need not only to use memcached but also to store the oid map info in it, to share the info among pgpool-II instances.&lt;br /&gt;
: According to https://www.pgpool.net/pipermail/pgpool-hackers/2018-November/003143.html , an attempt to put the oid map into memcached failed for reliability and performance reasons. Maybe we should try a more reliable in-memory storage engine, such as Redis.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use pg_rewind in online recovery ===&lt;br /&gt;
: pg_rewind could speed up online recovery. However it only works when the target node was shut down normally. Can we recognize that?&lt;br /&gt;
: Probably yes, by looking at pg_controldata.&lt;br /&gt;
&lt;br /&gt;
=== Support peer auth ===&lt;br /&gt;
: Apparently pool_hba.conf should recognize it if we are going to support it. Also pgpool-II should forward it to PostgreSQL. We need to think about the case where pg_hba.conf does not use peer auth.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use client encoding ===&lt;br /&gt;
:It would be nice if a pgpool client could use an encoding which differs from the PostgreSQL server encoding.&lt;br /&gt;
: To implement this, the parser should be able to handle &amp;quot;unsafe&amp;quot; encodings such as Shift_JIS. psql replaces the second byte of each multibyte character to fool the parser. We could employ a similar strategy.&lt;br /&gt;
&lt;br /&gt;
=== Recognize multi-statement queries ===&lt;br /&gt;
: As stated in the document, pgpool-II does not recognize multi-statement queries correctly (BEGIN;SELECT 1;END). Pgpool-II only parses the first element of the query (&amp;quot;BEGIN&amp;quot; in this case) and decides how to behave.&lt;br /&gt;
: Of course this brings various problems. It would be nice if pgpool-II could understand each part of a multi-statement query.&lt;br /&gt;
: The problem is how the PostgreSQL backend handles multi-statement queries. For example, when a client sends BEGIN;SELECT 1;END, the backend returns &amp;quot;Command Complete&amp;quot; for each statement, but &amp;quot;Ready for query&amp;quot; is returned only once. Thus, trying to split a multi-statement query into single-statement queries like psql does will not work.&lt;br /&gt;
: At the developer unconference held at PGConf.ASIA 2016 on December 1st 2016 in Tokyo, Simon Riggs suggested that if Pgpool-II cannot process multi-statement queries properly, it should have an option to prohibit them (or maybe we could disregard the 2nd and later queries instead).&lt;br /&gt;
=== Cursor statements are not load balanced, sent to all DB nodes in replication mode ===&lt;br /&gt;
: DECLARE..FETCH is sent to all DB nodes in replication mode. This is because the SELECT might come with FOR UPDATE/FOR SHARE.&lt;br /&gt;
: It would be nice if pgpool-II checked whether the SELECT uses FOR UPDATE/FOR SHARE and, if not, enabled load balancing (or sent it only to the master node if load balancing is disabled).&lt;br /&gt;
: Note that some applications, including psql, could use a cursor for SELECT. For example, from PostgreSQL 8.2, if &amp;quot;\set FETCH_COUNT n&amp;quot; is executed, psql unconditionally uses a cursor named &amp;quot;_psql_cursor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Support IPv6 network ===&lt;br /&gt;
: As of 3.4, it is allowed to use an IPv6 address for the PostgreSQL backend server and the bind address of pgpool-II itself.&lt;br /&gt;
: However, the PCP process still only binds to IPv4 and UNIX domain sockets. The same is true for watchdog.&lt;br /&gt;
&lt;br /&gt;
=== Handle abnormal down of virtual IP interface when watchdog enabled ===&lt;br /&gt;
: When the virtual IP interface is dropped abnormally, e.g. by manual ifconfig, no one holds the VIP and clients aren&#039;t able to connect to pgpool-II. The watchdog of the active pgpool should monitor the interface or VIP and handle its going down.&lt;br /&gt;
&lt;br /&gt;
=== Do not invalidate query cache created in a transaction in some cases ===&lt;br /&gt;
: Currently, a new query cache for table t1 created in a transaction is removed at commit if there are DMLs which touch t1 in the same transaction. Apparently this is overkill for some cases:&lt;br /&gt;
 BEGIN;&lt;br /&gt;
 INSERT INTO t1 VALUES(1);&lt;br /&gt;
 SELECT * FROM t1;&lt;br /&gt;
 COMMIT;&lt;br /&gt;
: To enhance this, we need to teach pgpool-II about the &amp;quot;order of SELECTs and DMLs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fix memory leak in pool_config.c ===&lt;br /&gt;
: The module in charge of parsing pgpool.conf has a memory leak problem. Since pgpool usually reads pgpool.conf just once at startup, this is not a big problem. However, reloading pgpool.conf leaks memory, which definitely is a problem. Also, memory leak check tools like valgrind emit lots of error messages, which is very annoying. So it would be nice to fix the problem in the future.&lt;br /&gt;
&lt;br /&gt;
=== Put together a definition of error codes into a single header file ===&lt;br /&gt;
: Currently most error codes used by pool_send_{error,fatal}_message() etc. (e.g. &amp;quot;XX000&amp;quot;, &amp;quot;XX001&amp;quot;, &amp;quot;57000&amp;quot;) are hard-coded in different source files. They should be defined as constants together in a single header.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s latch module ===&lt;br /&gt;
: Pgpool already has a similar module, but PostgreSQL&#039;s one seems more sophisticated and reliable.&lt;br /&gt;
&lt;br /&gt;
=== Support multiple UNIX domain socket directories ===&lt;br /&gt;
: PostgreSQL already does this. See [https://www.pgpool.net/pipermail/pgpool-hackers/2016-February/001433.html pgpool-hackers: 1433].&lt;br /&gt;
&lt;br /&gt;
=== Implement &amp;quot;log_timezone&amp;quot; ===&lt;br /&gt;
: (From pgpool-general: 5215) &lt;br /&gt;
&amp;lt;blockquote&amp;gt;I&#039;d like to propose that an addition be made to pgpool to allow for a log timestamp to be written to the log with a timezone other than the locally defined timezone.  Where this is helpful is when we use an external tool like logstashforwarder, where we want the logs to be absorbed with a timestamp with a UTC timezone.  Postgres offers this feature (&#039;log_timezone&#039;), which we use, and it would be nice to allow pgpool to behave in the same way.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset in memory query cache in the shared memory without restarting Pgpool-II ===&lt;br /&gt;
: See discussion: https://www.pgpool.net/pipermail/pgpool-general/2017-May/005565.html&lt;br /&gt;
&lt;br /&gt;
=== Do not prevent load balancing in explicit transactions in certain cases ===&lt;br /&gt;
: If write queries are issued in an explicit transaction, following SELECTs are not load balanced but sent to the primary node. This is intended to allow SELECTs to retrieve the latest data regardless of the replication delay. Currently &amp;quot;write query&amp;quot; includes anything other than SELECTs. This is overkill for some classes of queries: for example, since SET commands are sent to both primary and standby nodes, sending SELECTs to any DB node would retrieve the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use comma separated IP address or host names in listen_addresses ===&lt;br /&gt;
: Currently only a single IP or host name or &#039;*&#039; is allowed. PostgreSQL already allows multiple listen addresses.&lt;br /&gt;
&lt;br /&gt;
=== Add black/white table list for load balancing control ===&lt;br /&gt;
: If a table is in the black list, queries are always sent to the primary server. Probably database.schema.table notation is preferable.&lt;br /&gt;
&lt;br /&gt;
=== Support Cert authentication between Pgpool-II and PostgreSQL ===&lt;br /&gt;
: Pgpool-II 4.0 added support for cert authentication between frontend and Pgpool-II, but between Pgpool-II and backend it is not yet supported.&lt;br /&gt;
&lt;br /&gt;
=== Detach the standby node with large replication lag ===&lt;br /&gt;
&lt;br /&gt;
: Currently, no load balancing is performed to a standby node with a large replication lag. But if, for some reason after online recovery, the recovered standby node can&#039;t connect to the primary node, the standby node should be detached.&lt;br /&gt;
&lt;br /&gt;
=== Allow to get primary node info in failback_command script. ===&lt;br /&gt;
&lt;br /&gt;
: Currently we can get master node info in the failback_command script; it would be more useful to also get the hostname, port and database cluster directory of the new primary node.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify whether a relcache entry is for a global table ===&lt;br /&gt;
: Currently the relcache is defined per database. However, some relcache entries do not depend on databases: for example, shared catalogs and misc info including the PostgreSQL version. For such info, having a per-database relcache entry is not only a waste of resources but also less efficient. It is desirable to be able to specify that a relcache entry does not depend on databases.&lt;br /&gt;
&lt;br /&gt;
=== Grouping config requires duplicate codings ===&lt;br /&gt;
: pool_config_variable.c manages each config variable along with the group it belongs to. However, each group definition also lists which config variables belong to the group. This is redundant and should be avoided.&lt;br /&gt;
&lt;br /&gt;
=== Duplicate functionality: &amp;quot;show pool_status&amp;quot; and &amp;quot;pgpool show all&amp;quot; ===&lt;br /&gt;
: Both commands produce almost the same output, except that &amp;quot;show pool_status&amp;quot; lacks some variables because certain config variables were never added. Probably we should keep only &amp;quot;pgpool show all&amp;quot;, because it does not require maintaining pool_process_reporting.c. To keep backward compatibility, if &amp;quot;show pool_status&amp;quot; is requested, &amp;quot;pgpool show all&amp;quot; could be called.&lt;br /&gt;
&lt;br /&gt;
=== Allow SSL in health check etc. ===&lt;br /&gt;
: The health check process connects to the PostgreSQL backend without using SSL. This means the password for connecting to the PostgreSQL database travels the wire in plain text. The same is true for the streaming replication delay check worker process. It would be nice if the health check and streaming replication delay check worker processes used SSL if requested by the backend.&lt;br /&gt;
&lt;br /&gt;
=== Add PCP command to invalidate particular query cache ===&lt;br /&gt;
: Sometimes it is not possible to invalidate a cache entry because Pgpool-II fails to detect the table modification event, e.g. tables modified by functions, triggers and rules. There should be some way for administrators to invalidate such query cache entries.&lt;br /&gt;
&lt;br /&gt;
=== Use a hash table for the relation cache ===&lt;br /&gt;
: Currently we use a simple array for the relation cache. Apparently it will not scale if there are many cache entries. Using a hash table should provide quicker lookups.&lt;br /&gt;
&lt;br /&gt;
=== Support GSSAPI ===&lt;br /&gt;
: Pgpool-II does not support it yet. Moreover, if a client sends a request with &amp;quot;gssencmode=prefer&amp;quot;, Pgpool-II fails.&lt;br /&gt;
&lt;br /&gt;
=== Allow to reset statistics counters without restarting Pgpool-II  ===&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2021-February/007478.html&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use (pg_last_committed_xact()).timestamp instead of pg_current_wal_lsn(), and pg_last_xact_replay_timestamp() instead of pg_last_wal_replay_lsn(). One thing we need to care about: to use pg_last_committed_xact(), track_commit_timestamp (available in PostgreSQL 9.5 or later) must be enabled. If it is not enabled, pg_last_committed_xact() raises an error. Also the function returns NULL if no transaction has been committed since the system started.&lt;br /&gt;
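: The idea above can be sketched in SQL (a sketch only; it assumes track_commit_timestamp = on, and pg_last_xact_replay_timestamp() is only meaningful on a standby):&lt;br /&gt;
 -- On the primary: commit time of the most recent transaction&lt;br /&gt;
 -- (raises an error unless track_commit_timestamp is enabled).&lt;br /&gt;
 SELECT (pg_last_committed_xact()).timestamp AS last_commit_time;&lt;br /&gt;
 -- On the standby: commit time of the last replayed transaction.&lt;br /&gt;
 SELECT pg_last_xact_replay_timestamp() AS last_replay_time;&lt;br /&gt;
: The difference between the two values approximates the replication delay as a time interval.&lt;br /&gt;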
&lt;br /&gt;
=== Allow to exec a command when quorum state is changed ===&lt;br /&gt;
: A possible use case: when quorum is lost, the admin wants to prevent applications from sending queries through pgpool or directly to PostgreSQL. In this case a registered command is executed to shut down the primary PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
== TODOs already done ==&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify replication delay by time ===&lt;br /&gt;
: delay_threshold specifies the replication delay upper limit in bytes. It would be more intuitive to specify the replication delay by time, like &#039;10 seconds&#039;. For this purpose, we can use pg_stat_replication.replay_lag.&lt;br /&gt;
: One thing we need to care about: it&#039;s only available in PostgreSQL 10.0 or later.&lt;br /&gt;
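: As a sketch, the replay_lag approach reduces to a single query on the primary (PostgreSQL 10 or later):&lt;br /&gt;
 -- replay_lag is an interval column of pg_stat_replication,&lt;br /&gt;
 -- one row per connected standby.&lt;br /&gt;
 SELECT application_name, replay_lag FROM pg_stat_replication;&lt;br /&gt;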
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Track the point to flush messages ===&lt;br /&gt;
: When a Flush message is received, all the pending messages should be flushed to the frontend. For this purpose we should have information on all of the pending messages. Currently we just flush messages depending on the message kind.&lt;br /&gt;
: Related discussion: https://www.pgpool.net/pipermail/pgpool-general/2022-January/008026.html&lt;br /&gt;
: This has been implemented in Pgpool-II 4.4.&lt;br /&gt;
&lt;br /&gt;
=== Include other file in pgpool.conf file ===&lt;br /&gt;
: Add a feature so that pgpool.conf can include other files, which specify backend-name- and host-specific setting values.&lt;br /&gt;
: This has been implemented in Pgpool-II 4.3.&lt;br /&gt;
&lt;br /&gt;
=== Allow to use schema qualifications in black_function_list and white_function_list ===&lt;br /&gt;
: Currently schema qualifications are silently ignored in these parameters.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=fc9fa8dbe21dd561b049ad94377a1c0246aec493&lt;br /&gt;
&lt;br /&gt;
=== Set ps status in extended query ===&lt;br /&gt;
: Currently the ps status, such as &amp;quot;SELECT&amp;quot; etc., is only set when a simple query is executed. It would be nice if the ps status were set while executing extended queries.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Change relative path of ssl_key and ssl_cert to DEFAULT_CONFIGDIR ===&lt;br /&gt;
&lt;br /&gt;
: Currently the relative paths of ssl_key and ssl_cert are resolved against the directory where pgpool is run. Change this relative path to DEFAULT_CONFIGDIR, and change the default values to use absolute paths.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Add support for a user/password input file to pg_md5 ===&lt;br /&gt;
: https://www.pgpool.net/mantisbt/view.php?id=422&lt;br /&gt;
 pg_md5 -m -f conf/pgpool.conf --input-file=users.txt&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Support for CRL (Certificate Revocation List) ===&lt;br /&gt;
: Our SSL support lacks this (PostgreSQL already has this) and supporting CRL should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
&lt;br /&gt;
=== Allow to send relation cache query to other than primary node ===&lt;br /&gt;
: Pgpool-II needs to access PostgreSQL&#039;s system catalogs to obtain meta info. For now the query is always sent to the primary. This is good because it avoids replication delay for newly created tables. However, if the primary PostgreSQL is geographically distant, the query could take a long time. It would be nice if there were a parameter to allow sending such queries to a node other than the primary.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Automatically reattach a node in streaming master/slave configuration ===&lt;br /&gt;
:In streaming master/slave configuration there could be an option to automatically reattach a node if it&#039;s up-to-date with the master (0 bytes behind). It often happens that due to a minor network outage a slave node is dropped from pgpool and stays down even if the node has resumed replication with the master and is up-to-date. pgpool already knows how far the slave is behind the master, so I guess this wouldn&#039;t be too difficult to implement? (from bugtrack #17)&lt;br /&gt;
: Another concern is whether the standby in question is actually connected to the proper primary server. It is possible that the standby is up and running but is connected to a different primary server. Simon Riggs suggested at the developer unconference on December 1st 2016, held at PGConf.ASIA 2016 in Tokyo, that pg_stat_wal_receiver, which is new in PostgreSQL 9.6, can be used to safely judge that the standby in question is actually connected to the appropriate primary server.&lt;br /&gt;
: pg_stat_replication provides ideal information for this purpose.&lt;br /&gt;
: This has been implemented as &amp;quot;auto_failback&amp;quot; in 4.1.&lt;br /&gt;
&lt;br /&gt;
=== Move relation cache to shared memory ===&lt;br /&gt;
: This brings fewer inquiries to the system catalog (thus better performance) and more real-time cache invalidation.&lt;br /&gt;
: This has been implemented in 4.1 as &amp;quot;enable_shared_relcache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time if some of the DB nodes are down, because of health checking and retries in creating connections to the backend.&lt;br /&gt;
: pgpool_status should help here, but for the very first startup we cannot use it.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info.&lt;br /&gt;
: As of 3.4, the pgpool_status file is changed to a plain ASCII file and users can specify down nodes using an ordinary text editor.&lt;br /&gt;
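: For illustration, the ASCII pgpool_status file lists one status word per backend node in node-id order, so marking node 1 (the second backend) as down could look like this (contents are illustrative):&lt;br /&gt;
 up&lt;br /&gt;
 down&lt;br /&gt;
 up&lt;br /&gt;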
&lt;br /&gt;
=== Ability to load balance based on Client IP, database, table etc. ===&lt;br /&gt;
: From bugid 26: I have recently moved a database from MySQL to PostgreSQL 9.1.5, which is behind pgpool-II 3.1.4. Everything went fine until I observed that some &amp;quot;tickets&amp;quot; are not created correctly by the application (OTRS) that populates the database.&lt;br /&gt;
: After some debugging I found/guessed that the problem is the following:&lt;br /&gt;
: when a cron job wants to create a ticket, it has to insert info into about 10 tables, and I guess that the 2nd, 3rd ... inserts depend on the first. The problem is that this operation is not performed transactionally, so after the first insert, when the app tries to perform the other inserts, it first tries to select &amp;quot;the first insert&amp;quot;, but this first insert has still not propagated to all nodes, and the error occurs.&lt;br /&gt;
: I&#039;m aware that if this entire operation were performed transactionally (only on the master) the issue would be solved, but unfortunately I cannot modify the app.&lt;br /&gt;
: So I want to know if there is any way that I can tell pgpool something like:&lt;br /&gt;
: do not load balance any request from this IP.&lt;br /&gt;
&lt;br /&gt;
: P.S. Temporarily I have set the weight factor to 0 for the 2nd and 3rd PostgreSQL slaves and it behaves OK, because it reads and writes only from the master.&lt;br /&gt;
&lt;br /&gt;
: P.P.S. There&#039;s also a different request regarding load balancing.&lt;br /&gt;
: https://www.pgpool.net/pipermail/pgpool-general/2014-June/003032.html&lt;br /&gt;
&lt;br /&gt;
: This item has been implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL&#039;s exception handling ===&lt;br /&gt;
: PostgreSQL&#039;s exception handling (the elog family) is a good tool for making code simple and robust. It would be nice if pgpool could use this. This has already been done in 3.4.&lt;br /&gt;
=== Allow to print user name in the logging ===&lt;br /&gt;
: This will be useful for auditing purposes. (Done; will appear in pgpool-II 3.4.0.)&lt;br /&gt;
&lt;br /&gt;
=== Remove on disk query cache ===&lt;br /&gt;
: The old on-disk query cache has almost no users and severe limitations, including no automatic cache invalidation. It has already been obsoleted since the on-memory query cache was implemented. We should remove it (this has already been done in git master and will appear in 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Restart watchdog process when it abnormally exits ===&lt;br /&gt;
: It would be nice for the pgpool main process to restart the watchdog process when it dies abnormally.&lt;br /&gt;
&lt;br /&gt;
=== Synchronize backend nodes information with watchdog when standby pgpool starts up ===&lt;br /&gt;
: For example, when a certain node is detached from the active pgpool and then a standby pgpool starts up, the standby pgpool can&#039;t recognize that the node is detached. The standby pgpool should get node status information from the other pgpool.&lt;br /&gt;
&lt;br /&gt;
=== Avoid multiple pgpools from executing failover.sh simultaneously.  ===&lt;br /&gt;
: In master-slave mode with watchdog, when a backend DB is down, all pgpools execute failover.sh. This could cause problems.&lt;br /&gt;
&lt;br /&gt;
=== Add new parameter for searching primary node timeout ===&lt;br /&gt;
: pgpool-II uses &amp;quot;recovery_timeout&amp;quot; as the timeout for searching for the primary node after failover. Since this is an abuse of that parameter, we should add a new parameter dedicated to the primary node search.&lt;br /&gt;
&lt;br /&gt;
=== Allow to load balance even in an explicit transaction in replication mode ===&lt;br /&gt;
: Currently, load balancing in an explicit transaction is only allowed in master-slave mode. It should be allowed in replication mode as well.&lt;br /&gt;
&lt;br /&gt;
=== Add testing framework ===&lt;br /&gt;
: PostgreSQL has a nice regression test suite. It would be nice if pgpool-II had a similar suite. The problem is that such a suite could be a very complex system, because it must include not only pgpool-II itself but also multiple PostgreSQL instances. Also, don&#039;t forget about the &amp;quot;watchdog&amp;quot;: the test suite should even be able to manage multiple pgpool-II instances.&lt;br /&gt;
&lt;br /&gt;
=== Add switch to control select(2) time out in connecting to PostgreSQL ===&lt;br /&gt;
: In connect_inet_domain_socket_by_port(), select(2) is issued to watch events on the fd created by a non-blocking connect(2). The timeout parameter of select(2) is fixed at 1 second, which is not long enough in flaky network environments like AWS (https://www.pgpool.net/pipermail/pgpool-general/2014-May/002880.html).&lt;br /&gt;
: To solve the problem, a new switch to control the timeout is desired (done for pgpool-II 3.4.0).&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify which node is dead when starting up ===&lt;br /&gt;
: If we set a longer health check timeout and/or many health check retries, starting up pgpool-II will take a long time when some DB nodes are down, because of the health checking and retries while creating connections to the backends.&lt;br /&gt;
: pgpool_status should help here, but it cannot be used for the very first startup.&lt;br /&gt;
: It would be nice if we could tell pgpool-II about down-node info (pgpool-II 3.4.0 changes the pgpool_status format to ASCII, so users can edit the file if needed).&lt;br /&gt;
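: As a hedged illustration (assuming the ASCII format holds one status word per line, in backend-ID order; check the file your installation actually writes), marking the second of three nodes as down before startup might look like this in pgpool_status:&lt;br /&gt;
:  up&lt;br /&gt;
:  down&lt;br /&gt;
:  up&lt;br /&gt;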
&lt;br /&gt;
=== Remove parallel query ===&lt;br /&gt;
: Parallel query has severe restrictions: certain queries cannot be used, and it does not work with the extended protocol (i.e. JDBC).&lt;br /&gt;
: It is also a pain to keep up with newer versions of PostgreSQL&#039;s SQL parser (yes, pgpool-II uses PostgreSQL&#039;s parser code). In short, parallel query gives us a small gain compared with the work needed to maintain and enhance it, so I would like to obsolete parallel query in a future pgpool-II release. (Related parameters have been removed from pgpool.conf in 3.4.0; pgpool-II 3.5.0 will remove the actual code.)&lt;br /&gt;
&lt;br /&gt;
=== Enhance pcp commands ===&lt;br /&gt;
: There are a number of drawbacks in the pcp commands, including: 1) the timeout parameter is no longer used and should be removed; 2) the error codes returned by the commands are completely useless; 3) multiple commands cannot be accepted simultaneously.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance performance of extended protocol case ===&lt;br /&gt;
: When the extended protocol (i.e. JDBC etc.) is used, pgpool-II&#039;s overhead is quite large compared with simple query mode. This needs to be improved.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Import PostgreSQL 9.5&#039;s parser ===&lt;br /&gt;
: This needs no explanation.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Watchdog feature enhancement ===&lt;br /&gt;
:Watchdog is a very important feature of pgpool-II, as it is used to eliminate the single point of failure and provide HA. However, there are a few feature requests and bugs in the existing watchdog that require more than a simple code fix, and call for a complete revisit of its core architecture.&lt;br /&gt;
:See the design proposal for watchdog enhancement [https://pgpool.net/mediawiki/index.php/watchdog_feature_enhancement here]&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify user name, password and database name for health check on a per-backend basis ===&lt;br /&gt;
: In some environments it is not allowed to access the standard databases, i.e. postgres and template1, so users need to specify them on a per-backend basis.&lt;br /&gt;
: Maybe we need backend_healthcheck_username0 etc? See https://www.pgpool.net/pipermail/pgpool-hackers/2015-June/000942.html for more details.&lt;br /&gt;
: This has been already done in 3.5.&lt;br /&gt;
&lt;br /&gt;
=== Enhance documents ===&lt;br /&gt;
: The current documentation format is plain HTML, which is a real pain to maintain. Should we follow PostgreSQL and move to SGML?&lt;br /&gt;
: Pgpool-II 3.6 is going to change the document format to SGML. (This has already been implemented in 3.6; we employ SGML.)&lt;br /&gt;
&lt;br /&gt;
=== Add SET command ===&lt;br /&gt;
: A Pgpool-specific SET command would be useful. For example, &amp;quot;SET debug = 1&amp;quot; could produce debug info on the fly for a particular session.&lt;br /&gt;
: This was discussed during pgpool-II 3.6 development. (This item has been implemented in 3.6.)&lt;br /&gt;
&lt;br /&gt;
=== Send read query only to standbys even after fail over ===&lt;br /&gt;
: We can configure pgpool-II not to send read queries to the primary. However, after a failover the role of a node can change.&lt;br /&gt;
: To solve the problem, we need a new flag specifying that read queries are always sent to standbys regardless of failover ([pgpool-general: 1621] backend weight after failover).&lt;br /&gt;
: (This has been already implemented in 3.4 as &amp;quot;database_redirect_preference_list&amp;quot; and &amp;quot;app_name_redirect_preference_list&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
=== Do not disconnect clients when a failover happens ===&lt;br /&gt;
: At this moment we don&#039;t know how to implement it, but this is a desirable feature.&lt;br /&gt;
: This has been already implemented in 3.6.&lt;br /&gt;
&lt;br /&gt;
=== Create separate process for health checking ===&lt;br /&gt;
: To make the main process more stable, it would be better to have a separate process responsible for health checking.&lt;br /&gt;
: This has been already implemented in 3.7.&lt;br /&gt;
=== Health-check timeout for each backend node ===&lt;br /&gt;
&lt;br /&gt;
: Currently, the timeout value specified by health_check_timeout means the total time for checking the status of all backends. Hence, if it takes a long time for one backend check to succeed and the timeout occurs while checking the next backend, that node is regarded as failed and is failed over even though it is healthy. To resolve this issue, we need a health-check timeout for each backend.&lt;br /&gt;
: This has been implemented in 3.7.&lt;br /&gt;
&lt;br /&gt;
=== Support SCRAM authentication ===&lt;br /&gt;
: PostgreSQL 10.0 supports SCRAM authentication. It seems there is a fundamental difficulty with this.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-May/002331.html for more details.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to choose load balance behavior per SQL statement ===&lt;br /&gt;
: If a query string matches a specified regular expression, send the query to either the primary or a standby.&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
&lt;br /&gt;
=== Allow to specify load balance weight ratio for load balance parameters ===&lt;br /&gt;
: Allow specifying a load balance weight ratio for database_redirect_preference_list and app_name_redirect_preference_list, like &amp;quot;postgres:primary(0.3)&amp;quot;.&lt;br /&gt;
: See https://www.pgpool.net/pipermail/pgpool-hackers/2017-December/002650.html&lt;br /&gt;
: This has been implemented in 4.0.&lt;br /&gt;
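: As an illustrative pgpool.conf sketch using the &amp;quot;postgres:primary(0.3)&amp;quot; syntax quoted above (see the 4.0 documentation for the exact rules):&lt;br /&gt;
:  # send reads on database &amp;quot;postgres&amp;quot; to the primary with weight 0.3,&lt;br /&gt;
:  # leaving the remainder to the load-balanced standby nodes&lt;br /&gt;
:  database_redirect_preference_list = &#039;postgres:primary(0.3)&#039;&lt;br /&gt;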
&lt;br /&gt;
=== Support for SSL using ECDH ===&lt;br /&gt;
: ECDH is a key exchange algorithm. Our SSL support lacks it (PostgreSQL already has it), and supporting ECDH should make Pgpool-II more secure.&lt;br /&gt;
: This has been implemented in 4.1.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=51bc494aaa7fd191e14038204d18effe2efb0ec8&lt;br /&gt;
&lt;br /&gt;
=== Allow to set application name in log_line_prefix ===&lt;br /&gt;
: Currently, the application name (%a) can only be set if the startup packet includes it. It would be nice if Pgpool-II trapped &amp;quot;SET application_name ...&amp;quot; in the current session and allowed log_line_prefix to use it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=d8434662d63c115280779941af61252663f9c134&lt;br /&gt;
&lt;br /&gt;
=== Support for SSL passphrase ===&lt;br /&gt;
: Encrypting the private key with a passphrase is more secure. PostgreSQL already has this; Pgpool-II should import it.&lt;br /&gt;
: This has been implemented in 4.2.&lt;br /&gt;
: commit: https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commit;h=6ea154454c38d3f1191772f6a3aa01aa60a69c86&lt;/div&gt;</summary>
		<author><name>Ishii</name></author>
	</entry>
</feed>