[pgpool-general: 2051] Re: Suggestion about two datacenters connected through WAN

Mistina Michal Michal.Mistina at virte.sk
Mon Aug 19 16:14:18 JST 2013

Hi Tatsuo.
Thank you for the reply.
>> Yes, I am using DRBD. But there is a layer - Pacemaker, which should 
>> handle DRBD.
>> It works on what it calls "resources". By a resource we mean a 
>> service. In my case the services are: VirtualIP address, PostgreSQL, 
>> FileSystem (XFS), DRBD, LVM. Pacemaker controls the order of stopping 
>> and starting services on each node. The order of stopping services in 
>> my environment is like this (on each node): turn off the VirtualIP 
>> address -> stop PostgreSQL -> unmount the file system -> deactivate 
>> the LVM volume groups. The DRBD device was in the primary role on 
>> node 1. As the last thing (after everything is stopped) DRBD is 
>> promoted to the primary role on node 2 and demoted to the secondary 
>> role on node 1. Then each service is started in reverse order on node 2.
>> This worked until I attached a streaming replication slave.
>> I read that pgpool can also run as a resource, so it is started as 
>> the last service after PostgreSQL.
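
For reference, the stop/start ordering described above would correspond to Pacemaker constraints roughly like the following crm configure sketch. All resource names here are assumptions for illustration, not taken from the actual cluster:

```
# Hypothetical crm configure fragment. Pacemaker starts the group
# members left to right (LVM -> filesystem -> PostgreSQL -> virtual IP)
# and stops them in the reverse order; the group runs only where the
# DRBD master/slave resource holds the primary role.
group pg_group lvm_vg pg_fs postgresql virtual_ip
ms drbd_ms drbd_pg meta master-max=1 clone-max=2 notify=true
colocation pg_on_drbd inf: pg_group drbd_ms:Master
order pg_after_drbd inf: drbd_ms:promote pg_group:start
```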

>So the PostgreSQL database cluster directory is on the file system
>managed by Pacemaker? That means that on the secondary node the file
>system is initially not mounted, which means PostgreSQL on the secondary
>node cannot start because there's no file system. Am I missing something?

Yes, you are correct. The PostgreSQL data directory resides on the file
system managed by Pacemaker. At any given moment PostgreSQL runs on only one
node, and the file system is mounted on only one node. File system
corruption can occur only under split-brain conditions, when both nodes are
automatically promoted to the DRBD primary role, each unaware that the other
is already primary. If Pacemaker fails to stop the PostgreSQL server on the
first node, it does not go on to start it on the secondary node, where the
file system is not mounted.

>> We needed some level of "dumb resistance", which means that if somebody 
>> accidentally turns off or reboots one node, the services automatically 
>> start on node 2. If somebody then reboots node 2 while node 1 is up, 
>> the services are automatically started on node 1.
>> Do you think 4 nodes can be handled by pgpool with some level of that 
>> "dumb resistance"? I saw pgpool has an option to set a weight on 
>> nodes. Is it a good way to designate which servers are in the primary 
>> technical center?
>> TC1: node 1, node 2
>> TC2: node 3, node 4

>The "weight" parameter has nothing to do with making a particular server
>"the primary technical center". Do you have any well-defined policy for
>which node to promote when the primary node goes down? I mean, you have 3
>candidates (node 2, 3, 4) to be promoted and you need to decide which one
>should be promoted in that case. If you have a well-defined policy, which a
>computer program can implement, you could set up a "dumb resistance" system.

You mean within Pacemaker? The Pacemaker in TC1 does not know about the
Pacemaker in TC2, so TC1 does not influence what happens in TC2. Within
TC1, stopping resources with Pacemaker failed at the step of stopping the
PostgreSQL resource. Pacemaker therefore did not continue stopping the
other resources (unmounting the file system, ...) and did not start the
resources on node 2.

If you mean project policies, they are defined like this:
1. Automatic fail-over should first occur within TC1.
2. If the whole TC1 goes down (e.g. it is flooded), TC2 should take over.
If it is possible to do this automatically, we should use it. If not, a
manual fail-over can do that.
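
These two rules could be implemented by a promotion-policy script, along the lines of the computer-implementable policy Tatsuo mentions. A minimal sketch, assuming hypothetical node names node1/node2 in TC1 and node3/node4 in TC2:

```shell
#!/bin/sh
# Hypothetical promotion policy: prefer a surviving TC1 node;
# fall back to TC2 only when the whole of TC1 is down.
# Node names are assumptions for illustration.

TC1="node1 node2"
TC2="node3 node4"

# pick_candidate FAILED_NODE "ALIVE NODES"
# Prints the node to promote, or returns 1 if none is available.
pick_candidate() {
    failed="$1"; alive="$2"
    # Rule 1: fail-over should first occur within TC1.
    for n in $TC1; do
        [ "$n" = "$failed" ] && continue
        case " $alive " in *" $n "*) echo "$n"; return 0 ;; esac
    done
    # Rule 2: the whole TC1 is gone, so TC2 takes over.
    for n in $TC2; do
        case " $alive " in *" $n "*) echo "$n"; return 0 ;; esac
    done
    return 1
}

# Example: node1 fails while the other three nodes are alive;
# the policy keeps the fail-over inside TC1.
pick_candidate node1 "node2 node3 node4"   # prints "node2"
```

Such a script could be hooked into pgpool's failover_command, which runs on backend failure.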

I was searching for technologies to achieve the aforementioned conditions
and would like to know whether pgpool itself can somehow do that. But it
seems that manual fail-over can be done only with streaming replication,
and automatic fail-over only with pgpool on top of streaming replication.
So everything depends on streaming replication.
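
For the automatic case, the four-node layout would map onto a pgpool.conf fragment roughly like this (host names are assumptions; note that backend_weight only affects load balancing, not which node is promoted, as Tatsuo points out above):

```
# Hypothetical pgpool.conf sketch for the four-node layout.
backend_hostname0 = 'node1'   # TC1
backend_port0     = 5432
backend_weight0   = 1
backend_hostname1 = 'node2'   # TC1
backend_port1     = 5432
backend_weight1   = 1
backend_hostname2 = 'node3'   # TC2
backend_port2     = 5432
backend_weight2   = 1
backend_hostname3 = 'node4'   # TC2
backend_port3     = 5432
backend_weight3   = 1

# Streaming-replication mode; on backend failure pgpool runs a
# site-specific script that implements the promotion policy.
master_slave_mode     = on
master_slave_sub_mode = 'stream'
failover_command = '/etc/pgpool-II/failover.sh %d %h %H'
```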

Best regards,
Michal Mistina

