[Pgpool-hackers] Patch to avoid crashes after adding new node in Master/Slave mode

Tatsuo Ishii ishii at sraoss.co.jp
Tue Aug 23 06:06:16 UTC 2011


> On Sat, Aug 20, 2011 at 12:19 AM, Tatsuo Ishii <ishii at sraoss.co.jp> wrote:
> 
>> > Thanks for analyzing the problem and patches.
>> >
>> >>     The ability to add new slave nodes without requiring a restart was
>> >> added in commit [1].
>> >>
>> >>     One problem that was overlooked was that in main(), when
>> >> initializing my_backend_status[], only NUM_BACKENDS elements of that
>> >> array are initialized and used for the rest of the life of the
>> >> pool_worker_child process. Since NUM_BACKENDS increases every time a
>> >> new slave backend is added, but the corresponding element in the
>> >> private array is still NULL, pool_worker_child crashes when trying to
>> >> check VALID_BACKEND(i) in establish_persistent_connection().
>> >>
>> >>     The attached micro patch avoids the crashes by initializing all
>> >> MAX_NUM_BACKENDS elements of the private array instead of just
>> >> NUM_BACKENDS.
>> >>
>> >>     This will also avoid crashes of pcp_attach_node when it tries to
>> >> check VALID_BACKEND() in send_failback_request(). It crashes because,
>> >> as above, its private array is initialized only up to NUM_BACKENDS and
>> >> the ID of the new slave node being attached is greater than the
>> >> current NUM_BACKENDS.
>> >>
>> >>     The bigger concern, which this patch does not address, is that of
>> >> keeping my_backend_status[] up-to-date in pool_worker_child process
>> >> and any other auxiliary processes.
>> >
>> > OK. For this I will add a flag in the shared memory area which is set
>> > by the main process. The pool_worker_child process and other auxiliary
>> > processes (currently only the pcp process) will check the flag and
>> > restart themselves if the flag is set.
>>
>> The included patches let the pgpool main process send a SIGUSR1 signal
>> to the worker process and the pcp process so that they restart
>> themselves whenever it is convenient.
>>
>>
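A rough sketch of such a restart mechanism (an illustration only, not the
actual patch; the handler and helper names are made up): the child records
the SIGUSR1 request in a signal handler and exits at a safe point in its
main loop, so that the main process respawns it and the new child sees the
updated backend configuration.

#include <signal.h>
#include <stdlib.h>
#include <string.h>

/* Illustration only; not the actual pgpool-II patch. */
static volatile sig_atomic_t restart_request = 0;

static void restart_handler(int sig)
{
    (void) sig;
    restart_request = 1;
}

/* Installed once at child startup. */
static void install_restart_handler(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = restart_handler;
    sigaction(SIGUSR1, &sa, NULL);
}

/* Called at a "convenient" point, e.g. at the top of the main loop. */
static void restart_if_requested(void)
{
    if (restart_request)
        exit(1);    /* the main process reaps the child and forks a new one */
}
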
> Hi Tatsuo,
> 
>     Thanks for the quick turnaround times.
> 
>     After posting my original concern
> 
>>>     The bigger concern, which this patch does not address, is that of
>>> keeping my_backend_status[] up-to-date in pool_worker_child process and
>>> any other auxiliary processes.
> 
>         I thought a bit more, and noticed that my_backend_status[] is an
> array of pointers. So as long as the shared memory that they point to does
> not get freed/reused we can rely on the my_backend_status[] elements to be
> perfectly up-to-date whenever the status of a backend is changed by anyone.
> 
>     I may be wrong, but I think there's no need to send signals to the
> PCP child and the worker process since they can see the updated status
> values in my_backend_status[].

Thanks for the comments. check_replication_time_lag() in
pool_worker_child.c has a loop:

	for (i=0;i<NUM_BACKENDS;i++)
	{
		if (!VALID_BACKEND(i))
			continue;
			:
			:
		sts = do_query(slots[i]->con, query, &res, PROTO_MAJOR_V3);

Here slots[] is an array of connections to the backends and has been
initialized before check_replication_time_lag() gets called. So in the
worst case, slots[] was initialized for only 2 backends; after that, and
before the for loop begins, the number of backends is increased to 3.
do_query() will then look into the uninitialized array element slots[2]
and fail. So I think VALID_BACKEND should refer to a private copy of the
backend status, e.g. my_private_status, so that it does not return true
when the third backend is consulted (i.e. i == 2).
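
To make the failure mode concrete, here is a simplified sketch with
stand-in names and types (not the actual pgpool-II source): the loop bound
comes from the shared NUM_BACKENDS while slots[] was filled when only two
backends existed, so checking a private status copy is what keeps the loop
away from the uninitialized slot.

#include <stddef.h>
#include <stdio.h>

/* Simplified sketch of the failure mode; stand-in names and types. */
#define MAX_NUM_BACKENDS 128

typedef enum { CON_UNUSED, CON_UP, CON_DOWN } BACKEND_STATUS;

static BACKEND_STATUS *my_backend_status[MAX_NUM_BACKENDS]; /* private pointers, set at startup */
static void *slots[MAX_NUM_BACKENDS];                       /* per-backend connections, filled at startup */

static void check_replication_time_lag_sketch(int num_backends)
{
    int i;

    for (i = 0; i < num_backends; i++)  /* num_backends may have grown since startup */
    {
        /*
         * Consulting the shared status would report a freshly attached
         * node as usable and then dereference the uninitialized slots[i];
         * a private copy still says "not usable", so the new node is
         * skipped until this process is reinitialized.
         */
        if (my_backend_status[i] == NULL || *my_backend_status[i] != CON_UP)
            continue;

        printf("would run the lag query on backend %d via connection %p\n",
               i, slots[i]);
    }
}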

However, I forgot to initialize my_backend_status by calling
initialize_private_backend_status(), so my_backend_status refers to the
shared memory area rather than to a private copy of it.
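
For illustration, the committed fix amounts to something like the sketch
below (continuing the stand-in declarations from the previous sketch; this
is not the actual initialize_private_backend_status() code): at child
startup, the shared status of every possible slot is copied into a private
array, and my_backend_status[] is pointed at that private copy.

/* Illustration only, reusing the stand-in declarations above. */
static BACKEND_STATUS shared_backend_status[MAX_NUM_BACKENDS];  /* stand-in for the shared memory area */
static BACKEND_STATUS private_backend_status[MAX_NUM_BACKENDS]; /* per-process copy */

static void initialize_private_backend_status_sketch(void)
{
    int i;

    /*
     * Cover all MAX_NUM_BACKENDS slots, not just the current NUM_BACKENDS,
     * so that a node added later still has a valid entry instead of a
     * NULL pointer.
     */
    for (i = 0; i < MAX_NUM_BACKENDS; i++)
    {
        private_backend_status[i] = shared_backend_status[i];
        my_backend_status[i] = &private_backend_status[i];
    }
}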

I have committed the fix.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

