View Issue Details

ID: 0000482
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2019-05-21 16:10
Reporter: nagata
Assigned To: t-ishii
Priority: normal
Severity: minor
Reproducibility: always
Status: closed
Resolution: open
Product Version: 3.6.15
Target Version:
Fixed in Version: 3.6.17
Summary: 0000482: Segfault when failback occurs while executing a query.
Description: In streaming-replication mode, I found another segfault.

 The crash site was MASTER_CONNECTION or TSTATE, and the segfault occurred after pcp_attach_node rather than pcp_detach_node.

My guess is as follows:

1. When a new connection was accepted, the backend of node 0 was down, so slot[0] was NULL.
2. While a query was being processed, failback happened and node 0 became the new primary.
3. Then, accessing slots[MASTER_NODE_ID (= 0)] caused the segfault.

The backtrace is here:
=======
(gdb) bt
#0 0x00005651d9733872 in backend_cleanup (backend=<optimized out>, frontend_invalid=frontend_invalid@entry=0 '\000', frontend=0x5651d9a7b960 <child_frontend>) at protocol/child.c:468
#1  0x00005651d97366bd in do_child (fds=fds@entry=0x5651dac73340) at protocol/child.c:417
#2  0x00005651d97105a7 in fork_a_child (fds=0x5651dac73340, id=1) at main/pgpool_main.c:659
#3  0x00005651d97110dd in reaper () at main/pgpool_main.c:2690
#4  0x00005651d9717f8f in PgpoolMain (discard_status=<optimized out>, clear_memcache_oidmaps=<optimized out>) at main/pgpool_main.c:451
#5  0x00005651d970eb32 in main (argc=<optimized out>, argv=0x7ffd862f9da8) at main/main.c:349
(gdb) l
463 bool cache_connection = false;
464
465 if (backend == NULL)
466 return false;
467
468 sp = MASTER_CONNECTION(backend)->sp;
469
470 /*
471 * cach connection if connection cache configuration parameter is enabled
472 * and frontend connection is not invalid
======

... and another case:
===================
(gdb) bt
#0  0x000055e08d299097 in ReadyForQuery (frontend=frontend@entry=0x55e08e961508, backend=backend@entry=0x7fe0fefdda18, send_ready=send_ready@entry=1 '\001',
    cache_commit=cache_commit@entry=1 '\001') at protocol/pool_proto_modules.c:1909
#1  0x000055e08d29ae70 in ProcessBackendResponse (frontend=frontend@entry=0x55e08e961508, backend=backend@entry=0x7fe0fefdda18, state=state@entry=0x7ffe62767fdc,
    num_fields=num_fields@entry=0x7ffe62767fda) at protocol/pool_proto_modules.c:2904
#2  0x000055e08d28cef9 in pool_process_query (frontend=0x55e08e961508, backend=0x7fe0fefdda18, reset_request=reset_request@entry=0) at protocol/pool_process_query.c:321
#3  0x000055e08d2876aa in do_child (fds=fds@entry=0x55e08e960340) at protocol/child.c:414
#4  0x000055e08d2615a7 in fork_a_child (fds=0x55e08e960340, id=5) at main/pgpool_main.c:659
#5  0x000055e08d2620dd in reaper () at main/pgpool_main.c:2690
#6  0x000055e08d268f8f in PgpoolMain (discard_status=<optimized out>, clear_memcache_oidmaps=<optimized out>) at main/pgpool_main.c:451
#7  0x000055e08d25fb32 in main (argc=<optimized out>, argv=0x7ffe62775508) at main/main.c:349
(gdb) l
1904 return POOL_END;
1905
1906 /*
1907 * Set transaction state for each node
1908 */
1909 state = TSTATE(backend,
1910 MASTER_SLAVE ? PRIMARY_NODE_ID : REAL_MASTER_NODE_ID);
1911
1912 for (i = 0; i < NUM_BACKENDS; i++)
1913 {
=========
Steps To Reproduce:
1. Detach the primary node (node 0) with pcp_detach_node so that the primary node id becomes 1.
2. Connect to pgpool and execute pg_sleep(10).
3. While the query is executing, attach node 0 with pcp_attach_node so that the primary node id goes back to 0.
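The steps above can be scripted roughly as follows. This is a sketch: the hosts, ports (9999 for pgpool, 9898 for PCP), and user names are assumptions to adjust for your setup, and the `-n` node-id option follows current pcp command syntax. By default the script only prints the commands; set PCP_DRY_RUN=0 to run them against a live cluster.

```shell
#!/bin/sh
# Reproduction sketch for bug 0000482 (hypothetical hosts/ports/users).
: "${PCP_DRY_RUN:=1}"

run() {
	echo "+ $*"
	if [ "$PCP_DRY_RUN" = 0 ]; then "$@"; fi
}
run_bg() {
	echo "+ $* &"
	if [ "$PCP_DRY_RUN" = 0 ]; then "$@" &
	fi
}

# 1. Detach node 0 so that node 1 becomes primary.
run pcp_detach_node -h localhost -p 9898 -U pgpool -w -n 0

# 2. Start a long-running query through pgpool.
run_bg psql -h localhost -p 9999 -U postgres -c 'SELECT pg_sleep(10);'

# 3. While the query runs, attach node 0 so the primary id flips back to 0.
run pcp_attach_node -h localhost -p 9898 -U pgpool -w -n 0
wait
```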
Additional Information: Discussed in [pgpool-hackers: 3258] Re: Segfault in a race condition.
See that thread for details.
Tags: No tags attached.

Activities

t-ishii   2019-04-02 11:16   developer   ~0002498
Last edited: 2019-04-02 11:17

I attached a patch to bug 481 which should fix this problem as well.

administrator   2019-05-21 16:10   ~0002617

Released in Pgpool-II 3.6.17.
http://www.pgpool.net/docs/latest/en/html/release.html

I am going to close this issue.

Issue History

Date Modified Username Field Change
2019-03-26 21:08 nagata New Issue
2019-04-01 16:08 t-ishii Assigned To => t-ishii
2019-04-01 16:08 t-ishii Status new => assigned
2019-04-02 11:16 t-ishii Note Added: 0002498
2019-04-02 11:16 t-ishii Status assigned => feedback
2019-04-02 11:17 t-ishii Note Edited: 0002498
2019-05-17 13:13 t-ishii Status feedback => resolved
2019-05-17 13:13 t-ishii Fixed in Version => 3.6.17
2019-05-21 16:10 administrator Status resolved => closed
2019-05-21 16:10 administrator Note Added: 0002617