As the number of accepted client connections grows, the number of Pgpool-II child processes that can accept new connections from clients decreases and finally reaches 0. In this situation new clients need to wait until a child process becomes free. Under heavy load, the queue of waiting clients can get longer and longer and finally hit the system's limit (you might see a "535 times the listen queue of a socket overflowed" error). In this case you need to increase the queue limit. There are several ways to deal with this problem.
The obvious way to deal with the problem is to increase the number of child processes. This can be done by tweaking num_init_children. However, increasing the number of child processes requires more CPU and memory resources. You also have to be very careful about the max_connections parameter of PostgreSQL, because once the number of child processes exceeds max_connections, PostgreSQL refuses to accept new connections and failover will be triggered.
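As a sketch, a pgpool.conf fragment along these lines raises the child process count while keeping PostgreSQL's connection limit in mind (the values are illustrative, not recommendations):

```
# pgpool.conf -- illustrative values only
num_init_children = 100     # number of preforked Pgpool-II child processes
max_pool = 1                # connection pool slots per child process

# On the PostgreSQL side (postgresql.conf), keep roughly:
#   num_init_children * max_pool <= max_connections - superuser_reserved_connections
# e.g. max_connections = 120
```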
Another drawback of increasing num_init_children is the so-called "thundering herd" problem. When a new connection request comes in, the kernel wakes up all sleeping child processes to issue the accept() system call. This triggers a fight among the processes to grab the socket and can put heavy load on the system. To mitigate the problem, you can set serialize_accept to on so that only one process grabs the accepting socket. However, note that performance may drop when the number of concurrent clients is small.
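A minimal pgpool.conf fragment for this mitigation might look like the following (whether it helps depends on your workload):

```
# pgpool.conf -- mitigate the thundering herd with a large num_init_children
serialize_accept = on   # only one child process calls accept() at a time

# With few concurrent clients, serialize_accept = off (the default)
# may perform better; benchmark both settings under realistic load.
```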
In Pgpool-II 4.4 or later, it is possible to use process_management_mode for more efficient management. By setting process_management_mode to dynamic, the number of Pgpool-II child processes can be decreased when the number of concurrent clients is small, saving resources. On the other hand, when the number of concurrent clients gets larger, the number of child processes increases to meet the higher demand for connections. Note, however, that connection establishment may take longer because new processes need to be started to provide more child processes.
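A hypothetical pgpool.conf fragment for dynamic mode could look like this; the spare-children parameters below are assumed from Pgpool-II 4.4, and the values are illustrative:

```
# pgpool.conf -- dynamic process management (Pgpool-II 4.4 or later)
process_management_mode = dynamic
min_spare_children = 5      # start more children when idle ones fall below this
max_spare_children = 10     # reap children when idle ones exceed this
```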
See also Section 3.3.3 for understanding process_management_mode.
Another solution is to enlarge the connection request queue. This can be done by increasing listen_backlog_multiplier.
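For example, a pgpool.conf fragment along these lines (the value is illustrative) enlarges the backlog of the listening socket:

```
# pgpool.conf -- enlarge the connection request queue
# The requested backlog is listen_backlog_multiplier * num_init_children.
listen_backlog_multiplier = 4

# Note: the kernel may cap the effective backlog
# (e.g. net.core.somaxconn on Linux), so raise that limit too if needed.
```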
However, none of the above solutions guarantees that the connection request queue will not fill up. If client connection requests arrive faster than queries can be processed, the queue will eventually fill up. For example, heavy queries that take a long time can easily trigger the problem.
The solution is to set reserved_connections so that overflowing connection requests are rejected, as PostgreSQL already does. This gives a visible error to applications ("Sorry, too many clients already") and forces them to retry. So the solution should only be used when you cannot foresee the upper limit of the system load.
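A minimal pgpool.conf sketch of this setting (the value is illustrative):

```
# pgpool.conf -- reject overflowing clients instead of queueing them
# When reserved_connections is greater than 0, client connections beyond
# (num_init_children - reserved_connections) receive an error and must retry,
# instead of waiting for a free child process.
reserved_connections = 1
```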