[Pgpool-general] EXPLAIN statement fails on 'kind mismatch'

Tatsuo Ishii ishii at sraoss.co.jp
Mon Dec 7 00:40:21 UTC 2009


> >>> >> hello
> >>> >>
> >>> >> after 2 weeks of pgpool-II (2.25) working well,
> >>> >> I now get:
> >>> >> 'kind mismatch among backends. Possible last query was: "EXPLAIN SELECT *
> >>> >> FROM ycore.transfer_record WHERE transfer_record_id>400" kind details are: 0[D] 1[C]'
> >>> >>
> >>> >> just when we wanted to move to production.
> >>> >>
> >>> >> everything else still works, including 'EXPLAIN' on other tables
> >>> >> (and the same query on this table worked until now).
> >>> >> I have full logs of pgpool and the 2 postgres backends.
> >>> >> How can I find out what's wrong?
> >>> >
> >>> > Can you show me the output of EXPLAIN executed directly on the
> >>> > postgres backends? My bet is that there is a dead-tuple difference
> >>> > between the backends, which causes execution plan differences.
> >>> > --
> >>> > Tatsuo Ishii
> >>> > SRA OSS, Inc. Japan
> >>> >
> >>>
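
('D' and 'C' in the error are protocol message kinds, DataRow and CommandComplete, so the
two backends returned different numbers of rows for the same EXPLAIN.) One way to compare
the statistics the planner sees on each node is to query pg_class and pg_stat_user_tables
directly on both backends. A minimal sketch, assuming PostgreSQL 8.3 or later for
n_dead_tup; connect to the spy database on each backend separately:

  -- catalog statistics that drive the planner's cost estimates
  SELECT relname, relpages, reltuples
    FROM pg_class
   WHERE relname IN ('transfer_record', 'idx_tr_transfer_record_id');

  -- live/dead tuple counters and last vacuum/analyze times
  SELECT n_live_tup, n_dead_tup, last_vacuum, last_analyze
    FROM pg_stat_user_tables
   WHERE schemaname = 'ycore' AND relname = 'transfer_record';
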
> >>> backend 0:
> >>>
> >>> explain select * from ycore.transfer_record where transfer_record_id >
> >>> 400;
> >>>                                        QUERY PLAN
> >>> -----------------------------------------------------------------------------------------
> >>>  Bitmap Heap Scan on transfer_record  (cost=4.60..20.16 rows=45 width=356)
> >>>    Recheck Cond: (transfer_record_id > 400)
> >>>    ->  Bitmap Index Scan on idx_tr_transfer_record_id  (cost=0.00..4.59 rows=45 width=0)
> >>>          Index Cond: (transfer_record_id > 400)
> >>> (4 rows)
> >>>
> >>>
> >>> backend 1:
> >>>
> >>> explain select * from ycore.transfer_record where transfer_record_id >
> >>> 400;
> >>>                             QUERY PLAN
> >>> -------------------------------------------------------------------
> >>>  Seq Scan on transfer_record  (cost=0.00..24.48 rows=49 width=331)
> >>>    Filter: (transfer_record_id > 400)
> >>> (2 rows)
> >>>
> >>>
> >>>
> >>> I can see there is a difference, but I don't know how to check what
> >>> caused it, or how to avoid it next time.
> >>
> >> Please make sure that you have the idx_tr_transfer_record_id index on
> >> backend 1. If you already do, please try running VACUUM ANALYZE on the
> >> database directly on backend 1.
> >> --
> >> Tatsuo Ishii
> >> SRA OSS, Inc. Japan
> >>
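
A minimal way to check for the index and refresh statistics directly on backend 1, as a
sketch (run in the spy database; connection options omitted):

  -- idx_tr_transfer_record_id should be listed on both backends
  SELECT indexname, indexdef
    FROM pg_indexes
   WHERE schemaname = 'ycore' AND tablename = 'transfer_record';

  -- refresh planner statistics and reclaim dead rows
  VACUUM ANALYZE ycore.transfer_record;
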
> >
> > I have the 'idx_tr_transfer_record_id' index on both backends.
> >
> > I ran vacuum analyze as user postgres on backend 1:
> >
> > vacuumdb --verbose --analyze --table transfer_record spy
> > INFO:  vacuuming "ycore.transfer_record"
> > INFO:  scanned index "transfer_record_pkey" to remove 14 row versions
> > DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  scanned index "idx_tr_core_req_req_id" to remove 14 row versions
> > DETAIL:  CPU 0.00s/0.00u sec elapsed 0.53 sec.
> > INFO:  scanned index "idx_tr_transfer_record_id" to remove 14 row versions
> > DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  "transfer_record": removed 14 row versions in 6 pages
> > DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  index "transfer_record_pkey" now contains 445 row versions in 4
> > pages
> > DETAIL:  14 index row versions were removed.
> > 0 index pages have been deleted, 0 are currently reusable.
> > CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  index "idx_tr_core_req_req_id" now contains 445 row versions in 4
> > pages
> > DETAIL:  14 index row versions were removed.
> > 0 index pages have been deleted, 0 are currently reusable.
> > CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  index "idx_tr_transfer_record_id" now contains 445 row versions in
> > 4 pages
> > DETAIL:  14 index row versions were removed.
> > 0 index pages have been deleted, 0 are currently reusable.
> > CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  "transfer_record": found 9 removable, 446 nonremovable row versions
> > in 19 pages
> > DETAIL:  1 dead row versions cannot be removed yet.
> > There were 53 unused item pointers.
> > 9 pages contain useful free space.
> > 0 pages are entirely empty.
> > CPU 0.00s/0.00u sec elapsed 0.55 sec.
> > INFO:  vacuuming "pg_toast.pg_toast_17250"
> > INFO:  index "pg_toast_17250_index" now contains 0 row versions in 1 pages
> > DETAIL:  0 index row versions were removed.
> > 0 index pages have been deleted, 0 are currently reusable.
> > CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  "pg_toast_17250": found 0 removable, 0 nonremovable row versions in
> > 0 pages
> > DETAIL:  0 dead row versions cannot be removed yet.
> > There were 0 unused item pointers.
> > 0 pages contain useful free space.
> > 0 pages are entirely empty.
> > CPU 0.00s/0.00u sec elapsed 0.00 sec.
> > INFO:  analyzing "ycore.transfer_record"
> > INFO:  "transfer_record": scanned 19 of 19 pages, containing 445 live rows
> > and 1 dead rows; 445 rows in sample, 445 estimated total rows
> >
> >
> > After this, everything remains the same
> > (EXPLAIN output and kind mismatch unchanged).

Can you show me the vacuum analyze output on backend 0?
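For comparison, the same commands already used on backend 1 can be run against node 0
(connection options for backend 0 omitted):

  vacuumdb --verbose --analyze --table transfer_record spy
  psql -d spy -c "EXPLAIN SELECT * FROM ycore.transfer_record WHERE transfer_record_id > 400;"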
--
Tatsuo Ishii
SRA OSS, Inc. Japan

