Pgpool sits in front of TC1, so Pgpool would be available at all times:
1. TC1 goes down.
2. Pgpool determines that the PostgreSQL master is unavailable and creates a
trigger file which makes the PostgreSQL in TC2 R/W (roughly as sketched below).
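
That trigger-file step is usually wired up through Pgpool's failover_command.
A minimal sketch, assuming PostgreSQL 9.x streaming replication, passwordless
ssh for the postgres user, and a recovery.conf on the standby whose
trigger_file points at /var/lib/pgsql/trigger (all of these paths and names
are my assumptions, not taken from this thread):

  # pgpool.conf (excerpt)
  failover_command = '/etc/pgpool-II/failover.sh %d %H'

  #!/bin/bash
  # /etc/pgpool-II/failover.sh
  # %d = id of the failed node, %H = hostname of the new master candidate
  failed_node_id=$1
  new_master_host=$2
  logger "pgpool failover: node $failed_node_id failed, promoting $new_master_host"
  # Creating the trigger file named in recovery.conf promotes the standby
  # to a read-write master.
  ssh postgres@"$new_master_host" "touch /var/lib/pgsql/trigger"
  exit 0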

But in my case PostgreSQL and Pgpool are installed on the same server, so
when TC1 goes down, Pgpool goes down with it.
I don't understand the fail-over procedure when a secondary Pgpool is
installed in TC2. Does the Pgpool in TC2 have the same configuration? Isn't
this configuration prone to split-brain situations when a network failure
occurs on the WAN link? In that case the Pgpool in TC2 would promote its
PostgreSQL to master, and there would be a R/W master in TC1 as well as in
TC2. Or am I wrong?
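
For what it is worth, pgpool-II 3.2 and later ship a watchdog mode in which
the two Pgpool instances monitor each other and share a delegate virtual IP,
and the trusted_servers list lets an isolated Pgpool notice that it is the
one cut off from the network before it triggers a fail-over. A minimal
sketch for the TC2 instance, with every hostname and address made up for
illustration:

  # pgpool.conf on the TC2 Pgpool (excerpt)
  use_watchdog = on
  wd_hostname = 'pgpool-tc2.example.com'
  wd_port = 9000
  delegate_IP = '10.0.0.100'              # virtual IP that clients connect to
  trusted_servers = 'gw-tc1,gw-tc2'       # ping targets used to detect own isolation
  other_pgpool_hostname0 = 'pgpool-tc1.example.com'
  other_pgpool_port0 = 9999
  other_wd_port0 = 9000

Over a WAN link the delegate IP usually cannot float between the two sites,
so this mitigates a split-brain of the Pgpool layer but does not by itself
guarantee that only one PostgreSQL master exists.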

Best regards,
Michal Mistina
-----Original Message-----
From: pgpool-general-bounces at pgpool.net
[mailto:pgpool-general-bounces at pgpool.net] On Behalf Of Mistina Michal
Sent: Monday, August 19, 2013 9:14 AM
To: Tatsuo Ishii
Cc: pgpool-general at pgpool.net
Subject: [pgpool-general: 2051] Re: Suggestion about two datacenters
connected through WAN

Hi Tatsuo.
Thank you for the reply.
>> Yes, I am using DRBD. But there is a layer above it, Pacemaker, which
>> handles DRBD.
>> It works with what it calls "resources"; by a resource we mean a
>> service. In my case the services are: virtual IP address, PostgreSQL,
>> file system (XFS), DRBD, LVM. Pacemaker controls the order in which
>> services are stopped and started on each node. The order of stopping
>> services in my environment is (on each node): turn off the virtual IP
>> address -> stop PostgreSQL -> unmount the file system -> deactivate the
>> LVM volume group. The DRBD device was in the primary role on node 1. As
>> the last step (after everything is stopped) DRBD is promoted to the
>> primary role on node 2 and demoted to the secondary role on node 1.
>> Then each service is started in reverse order on node 2.
>>
>> This worked until I attached a streaming replication slave.
>>
>> I read that pgpool can also run as a resource, so it would be started as
>> the last service, after PostgreSQL.

>So the PostgreSQL database cluster directory is on the file system
>managed by Pacemaker? That means the file system is initially not mounted
>on the secondary node, so PostgreSQL on the secondary node cannot start
>because there is no file system. Am I missing something?

Yes, you are correct. The PostgreSQL data directory resides on the file
system managed by Pacemaker. At any given moment only one PostgreSQL
instance is running, on one node, and the file system is mounted only on
that node. File system corruption can only occur under split-brain
conditions, when both nodes are automatically promoted to the DRBD primary
role without being aware that the other node is in the primary role as well.
If Pacemaker fails to stop the PostgreSQL server on the first node, it does
not continue and does not start it on the secondary node, where the file
system is not mounted.
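
For illustration, the stack described above might look roughly like the
following crm configuration. Every resource name, device path and address
here is an assumption for the sake of the example; a Pacemaker group starts
its members left to right and stops them in reverse, which matches the stop
sequence quoted above:

  primitive p_drbd ocf:linbit:drbd params drbd_resource="r0" \
      op monitor interval="29s" role="Master" \
      op monitor interval="31s" role="Slave"
  ms ms_drbd p_drbd meta master-max="1" clone-max="2" notify="true"
  primitive p_lvm ocf:heartbeat:LVM params volgrpname="vg_pg"
  primitive p_fs ocf:heartbeat:Filesystem \
      params device="/dev/vg_pg/lv_pg" directory="/var/lib/pgsql" fstype="xfs"
  primitive p_pgsql ocf:heartbeat:pgsql params pgdata="/var/lib/pgsql/data"
  primitive p_vip ocf:heartbeat:IPaddr2 params ip="192.168.1.50" cidr_netmask="24"
  # stop order of the group is right to left: VIP off -> stop PostgreSQL ->
  # unmount the XFS file system -> deactivate the volume group
  group g_postgres p_lvm p_fs p_pgsql p_vip
  colocation col_pg_on_drbd inf: g_postgres ms_drbd:Master
  order ord_drbd_before_pg inf: ms_drbd:promote g_postgres:start
  # pgpool could be appended to the group as the last member (for example as
  # lsb:pgpool2 or a custom OCF agent) so that it starts after PostgreSQL;
  # treat that as an assumption, not a reference to a stock resource agent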

>> We needed some level of "dumb resistance", meaning that if somebody
>> accidentally turns off or reboots one node, the services automatically
>> start on node 2. If somebody then reboots node 2 while node 1 is up, the
>> services are automatically started on node 1.
>>
>> Do you think 4 nodes can be handled by pgpool with some level of that
>> "dumb resistance"? I saw that pgpool has an option to set up weights on
>> nodes. Is that a good way to mark which servers are in the primary
>> technical center?
>> TC1: node 1, node 2
>> TC2: node 3, node 4
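
For reference, the weights are set per backend in pgpool.conf and only
influence how read-only queries are load-balanced; they do not decide which
node gets promoted. A sketch of how two of the four backends might be
declared (hostnames and paths are made up):

  backend_hostname0 = 'node1.tc1'
  backend_port0 = 5432
  backend_weight0 = 1
  backend_data_directory0 = '/var/lib/pgsql/data'
  backend_hostname1 = 'node2.tc1'
  backend_port1 = 5432
  backend_weight1 = 1
  backend_data_directory1 = '/var/lib/pgsql/data'
  # node 3 and node 4 in TC2 would be declared the same way as backends 2
  # and 3, e.g. with backend_weight2 = 0 and backend_weight3 = 0 so that
  # SELECTs are not load-balanced across the WAN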

>The "weight" parameter is nothing to do with making particular server
>as
"the primary technical center". Do you have any well defined policy to
promote which node when the primary node goes down? I mean, you have 3
candidates (node2, 3, 4) to be promoted and you need to decide which one
should be promoted in the case. If you have a well defined policy, which a
computer program can implement, you could set up "dumb Resistance" system.
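
Such a policy can be written down as a small program, for example as the
fail-over script that pgpool calls. A minimal sketch under the assumption
that the remaining TC1 node should always be preferred and TC2 is only used
when nothing in TC1 answers (hostnames, trigger file path and the preference
order are all made up):

  #!/bin/bash
  # called from pgpool's failover_command when the primary goes down
  # preference order: node2 (TC1) first, then node3 and node4 (TC2)
  candidates="node2.tc1 node3.tc2 node4.tc2"
  for host in $candidates; do
      if ssh -o ConnectTimeout=5 postgres@"$host" true; then
          # creating the trigger file named in recovery.conf promotes this standby
          ssh postgres@"$host" "touch /var/lib/pgsql/trigger"
          logger "promoted $host to the new PostgreSQL master"
          exit 0
      fi
  done
  exit 1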

You mean within Pacemaker? The Pacemaker in TC1 does not know about the
Pacemaker in TC2, so TC1 does not influence what happens in TC2. Within TC1,
Pacemaker's attempt to stop the resources failed at the step of stopping the
PostgreSQL resource, so it did not continue stopping the other resources
(unmounting the file system, ...) and did not start the resources on node 2.

If you mean project policies, they are written like this:
1. Automatic fail-over should first take place within TC1.
2. If the whole TC1 goes down (e.g. it is flooded), TC2 should take over. If
there is a possibility to do that automatically, we should use it. If not, a
manual fail-over will do.

I was searching for technologies to achieve the aforementioned conditions
and would like to know whether pgpool itself can do that somehow. But it
seems that manual fail-over can be done only with streaming replication, and
automatic fail-over can be done only with pgpool on top of streaming
replication. So everything works only with streaming replication.
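
For completeness, a manual fail-over with plain streaming replication is
just a promotion on the standby side; on PostgreSQL 9.1 or later either of
the following works (the paths are assumptions):

  # as the postgres user on the standby that should become the new master
  pg_ctl promote -D /var/lib/pgsql/data
  # or, equivalently, create the file named by trigger_file in recovery.conf
  touch /var/lib/pgsql/trigger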


Best regards,
Michal Mistina


