[Pgpool-general] Direct INSERT to backends - Parallel Mode

calimlimvl at nationalbookstore.com.ph
Wed Sep 3 10:06:09 UTC 2008


>> Good morning to you all,
>>
>> I've been migrating data from MySQL to PostgreSQL through a Pgpool-II
>> server. I noticed that it's faster to INSERT data directly into the
>> backends, most probably because of Pgpool-II's overhead. Since I already
>> know where the data should go anyway, is it possible to just INSERT the
>> appropriate sets of data into their respective backend nodes? And if I do
>> that, would Parallel Query Mode still work properly, provided that the
>> FUNCTION and dist_def definitions I created are correct?
>
> Yes.
>
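
For reference, the dist_def side of what I mean looks roughly like the sketch
below, modelled on the pgpool-II parallel query tutorial. The database, table,
column, and range values are made-up examples rather than my real schema, and
the exact pgpool_catalog layout may depend on the pgpool-II version:

  -- Register a partitioned table in the System DB, distributing rows on
  -- an integer key across three backend nodes.
  INSERT INTO pgpool_catalog.dist_def
         (dbname, schema_name, table_name, col_name,
          col_list, type_list, dist_def_func)
  VALUES ('mydb', 'public', 'mytable', 'id',
          ARRAY['id', 'name'],
          ARRAY['integer', 'text'],
          'pgpool_catalog.dist_def_mytable');

  -- Distribution function: maps a key value to a backend node id (0, 1 or 2).
  -- The ranges are invented; the point is that a row whose key falls in
  -- node 0's range can be INSERTed directly into backend 0 and Parallel
  -- Query Mode will still find it there.
  CREATE OR REPLACE FUNCTION pgpool_catalog.dist_def_mytable(val ANYELEMENT)
  RETURNS INTEGER AS '
    SELECT CASE WHEN $1 >= 1    AND $1 < 1000 THEN 0
                WHEN $1 >= 1000 AND $1 < 2000 THEN 1
                ELSE 2
           END;
  ' LANGUAGE sql;
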
>> Also, do you have any suggestions on how to make the migration faster? I've
>> been INSERTing data for weeks and have been working on getting Pgpool-II's
>> Parallel Query Mode running for months already. And lastly, once again:
>> when should I use replicate_def in Parallel Mode? Please help me so that I
>> can finish this task and move on to another project. Thank you so much in
>> advance.
>
> Replicate_def can be used for smaller tables when you want to join a big
> table with a small table to improve performance.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
>
>
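
If I understand that correctly, putting a small lookup table into
replicate_def amounts to something like the sketch below (again with made-up
names), so that every backend keeps a full copy that can be joined locally
with the partitioned tables:

  -- Register a small lookup table in replicate_def; pgpool-II then expects
  -- an identical copy of it on every backend node, so joins against a
  -- dist_def-partitioned table can run on each node.
  INSERT INTO pgpool_catalog.replicate_def
         (dbname, schema_name, table_name, col_list, type_list)
  VALUES ('mydb', 'public', 'lookup1',
          ARRAY['code', 'description'],
          ARRAY['integer', 'text']);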

What if I have two large tables, say largetable1 and largetable2, partitioned
across three backends, and I also have a smalltable1? If I don't place
smalltable1 in replicate_def and never define it at all, what is going to be
the difference? By the way, smalltable1 is something like a lookup table for
details of fields in largetable1 and largetable2: for example, smalltable1 has
branchcode, largetable1 has branchcode and branchname, and when I join them I
get both. Thanks again for your reply. :)
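
To make that concrete, the kind of join I mean is roughly the following
(names taken from my description above, so this is only a sketch):

  -- Join the partitioned largetable1 with the small lookup table on
  -- branchcode; the result carries the columns of both tables,
  -- branchcode and branchname among them.
  SELECT l.*, s.*
  FROM   largetable1 l
  JOIN   smalltable1 s ON l.branchcode = s.branchcode;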

Viril


