View Issue Details

ID: 0000543
Project: Pgpool-II
Category: Enhancement
View Status: public
Last Update: 2019-09-06 07:18
Reporter: Carlos Mendez
Assigned To:
Priority: urgent
Severity: major
Reproducibility: always
Status: closed
Resolution: open
Product Version: 3.7.1
Summary: 0000543: VIP unable to switch to another PGPOOL
Description

Currently we have three Pgpool-II nodes configured:
PGPOOL1 10.241.166.21
PGPOOL2 10.241.166.22
PGPOOL3 10.241.166.23
VIP 10.241.166.25

As a test, we stopped the services on node 1, but the VIP was not started on another PGPOOL node.

In the logs we can see the following messages:

2019-09-02 16:03:03: pid 8465: LOG: I am the cluster leader node but we do not have enough nodes in cluster
2019-09-02 16:03:03: pid 8465: DETAIL: waiting for the quorum to start escalation process


Is it necessary to have a different configuration?
According to the messages, the VIP cannot be started because the process is waiting for a quorum.
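For a three-node watchdog cluster, each node's pgpool.conf must list both of the other nodes, otherwise the survivors cannot see each other after a failure and quorum is never reached. The fragment below is an illustrative sketch for PGPOOL2 only: the IP addresses are taken from this report, but every port number and parameter value is an assumption, not the reporter's actual configuration.

```
# pgpool.conf watchdog section on PGPOOL2 (illustrative sketch;
# IPs from this report, all other values are assumptions)
use_watchdog = on
wd_hostname = '10.241.166.22'
wd_port = 9000
delegate_IP = '10.241.166.25'

# Both remote nodes must be listed on every node.
other_pgpool_hostname0 = '10.241.166.21'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = '10.241.166.23'
other_pgpool_port1 = 9999
other_wd_port1 = 9000

# Heartbeat lifecheck targets for both remote nodes.
heartbeat_destination0 = '10.241.166.21'
heartbeat_destination_port0 = 9694
heartbeat_destination1 = '10.241.166.23'
heartbeat_destination_port1 = 9694
```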

**********************************************************************************************
PGPOOL 1 LOG

2019-09-02 14:18:24: pid 29046: LOG: PCP process with pid: 29139 exit with SUCCESS.
2019-09-02 14:18:24: pid 29046: LOG: PCP process with pid: 29139 exits with status 256
2019-09-02 14:18:25: pid 29046: LOG: forked new pcp worker, pid=29143 socket=7
2019-09-02 14:18:25: pid 29143: FATAL: authentication failed for user "postgres"
2019-09-02 14:18:25: pid 29143: DETAIL: username and/or password does not match
2019-09-02 14:18:25: pid 29046: LOG: PCP process with pid: 29143 exit with SUCCESS.
2019-09-02 14:18:25: pid 29046: LOG: PCP process with pid: 29143 exits with status 256
2019-09-02 14:18:38: pid 28941: LOG: watchdog: lifecheck started
2019-09-02 14:38:16: pid 29046: LOG: forked new pcp worker, pid=31540 socket=7
2019-09-02 14:38:16: pid 28927: LOG: new IPC connection received
2019-09-02 14:38:16: pid 29046: LOG: PCP process with pid: 31540 exit with SUCCESS.
2019-09-02 14:38:16: pid 29046: LOG: PCP process with pid: 31540 exits with status 0
2019-09-02 16:02:54: pid 28927: LOG: Watchdog is shutting down
2019-09-02 16:02:54: pid 9298: LOG: watchdog: de-escalation started
2019-09-02 16:02:54: pid 9298: LOG: successfully released the delegate IP:"10.241.166.24"
2019-09-02 16:02:54: pid 9298: DETAIL: 'if_down_cmd' returned with success
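Apart from the VIP problem, the PGPOOL 1 log above also shows a PCP authentication failure for user "postgres". PCP credentials live in pcp.conf as `username:md5hash` lines, normally generated with pgpool's pg_md5 tool. The sketch below only illustrates that entry format; it assumes the hash is a plain MD5 of the PCP password, and the password shown is a placeholder, not taken from this report.

```python
import hashlib

def pcp_conf_entry(user: str, password: str) -> str:
    """Build a pcp.conf line: the username, a colon, and the
    MD5 hex digest of the PCP password (what pg_md5 prints)."""
    digest = hashlib.md5(password.encode()).hexdigest()
    return f"{user}:{digest}"

# Placeholder credentials for illustration only.
print(pcp_conf_entry("postgres", "example-password"))
```

If the entry in pcp.conf does not match the password the health-check/PCP client sends, pgpool logs exactly the "authentication failed for user" FATAL seen above.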



***********************************************************************************************
PGPOOL 2 LOG

2019-09-02 14:17:08: pid 9274: DETAIL: set SO_REUSEPORT
2019-09-02 16:02:54: pid 9268: LOG: remote node "10.241.166.21:9999 Linux pgpool_01" is shutting down
2019-09-02 16:02:54: pid 9268: LOG: watchdog cluster has lost the coordinator node
2019-09-02 16:02:54: pid 9268: LOG: unassigning the remote node "10.241.166.21:9999 Linux pgpool_01" from watchdog cluster master
2019-09-02 16:02:54: pid 9268: LOG: We have lost the cluster master node "10.241.166.21:9999 Linux pgpool_01"
2019-09-02 16:02:54: pid 9268: LOG: watchdog node state changed from [STANDBY] to [JOINING]
2019-09-02 16:02:58: pid 9268: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2019-09-02 16:02:59: pid 9268: LOG: I am the only alive node in the watchdog cluster
2019-09-02 16:02:59: pid 9268: HINT: skiping stand for coordinator state
2019-09-02 16:02:59: pid 9268: LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
2019-09-02 16:02:59: pid 9268: LOG: I am announcing my self as master/coordinator watchdog node
2019-09-02 16:03:03: pid 9268: LOG: I am the cluster leader node
2019-09-02 16:03:03: pid 9268: DETAIL: our declare coordinator message is accepted by all nodes
2019-09-02 16:03:03: pid 9268: LOG: setting the local node "pgpool_02:9999 Linux pgpool_02" as watchdog cluster master
2019-09-02 16:03:03: pid 9268: LOG: I am the cluster leader node but we do not have enough nodes in cluster
2019-09-02 16:03:03: pid 9268: DETAIL: waiting for the quorum to start escalation process
2019-09-02 16:03:03: pid 9267: LOG: Pgpool-II parent process received watchdog quorum change signal from watchdog
2019-09-02 16:03:03: pid 9268: LOG: new IPC connection received
2019-09-02 16:03:03: pid 9268: LOG: new IPC connection received


************************************************************************************************
PGPOOL 3 LOG


2019-09-02 16:02:54: pid 8465: LOG: remote node "10.241.166.21:9999 Linux pgpool_01" is shutting down
2019-09-02 16:02:54: pid 8465: LOG: watchdog cluster has lost the coordinator node
2019-09-02 16:02:54: pid 8465: LOG: unassigning the remote node "10.241.166.21:9999 Linux pgpool_01" from watchdog cluster master
2019-09-02 16:02:54: pid 8465: LOG: We have lost the cluster master node "10.241.166.21:9999 Linux pgpool_01"
2019-09-02 16:02:54: pid 8465: LOG: watchdog node state changed from [STANDBY] to [JOINING]
2019-09-02 16:02:58: pid 8465: LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
2019-09-02 16:02:59: pid 8465: LOG: I am the only alive node in the watchdog cluster
2019-09-02 16:02:59: pid 8465: HINT: skiping stand for coordinator state
2019-09-02 16:02:59: pid 8465: LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
2019-09-02 16:02:59: pid 8465: LOG: I am announcing my self as master/coordinator watchdog node
2019-09-02 16:03:03: pid 8465: LOG: I am the cluster leader node
2019-09-02 16:03:03: pid 8465: DETAIL: our declare coordinator message is accepted by all nodes
2019-09-02 16:03:03: pid 8465: LOG: setting the local node "pgpool_03:9999 Linux pgpool_03" as watchdog cluster master
2019-09-02 16:03:03: pid 8465: LOG: I am the cluster leader node but we do not have enough nodes in cluster
2019-09-02 16:03:03: pid 8465: DETAIL: waiting for the quorum to start escalation process
2019-09-02 16:03:03: pid 8465: LOG: new IPC connection received
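Note what the logs on nodes 2 and 3 actually say: each one reports "I am the only alive node in the watchdog cluster". Two of three nodes survived, which would be a majority, but because they never see each other, each counts only one live node and the watchdog refuses to escalate. The sketch below shows the majority-quorum rule as I understand it (an assumption: Pgpool-II requires a strict majority of configured watchdog nodes before acquiring the VIP):

```python
def has_quorum(alive_nodes: int, total_nodes: int) -> bool:
    """Quorum exists only with a strict majority of configured nodes."""
    return alive_nodes > total_nodes // 2

# Three configured nodes: two survivors would be enough for quorum...
print(has_quorum(2, 3))  # True
# ...but each survivor here sees only itself, so escalation waits.
print(has_quorum(1, 3))  # False
```

So the "waiting for the quorum" message is a symptom: the underlying problem is that the two surviving nodes cannot reach each other over the watchdog/heartbeat channel.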
Tags: pgpool 3.7.1, settings

Activities

t-ishii

2019-09-05 09:19

developer   ~0002819

Seems like a duplicate of issue 0000541. If so, please close this.

Carlos Mendez

2019-09-05 23:56

reporter   ~0002821

Hi,
Yes, the issue was fixed in ticket 0000541; this can be closed.

Regards

Issue History

Date Modified Username Field Change
2019-09-03 06:07 Carlos Mendez New Issue
2019-09-03 06:07 Carlos Mendez Tag Attached: pgpool 3.7.1
2019-09-03 06:07 Carlos Mendez Tag Attached: settings
2019-09-05 09:19 t-ishii Note Added: 0002819
2019-09-05 09:19 t-ishii Status new => feedback
2019-09-05 23:56 Carlos Mendez Note Added: 0002821
2019-09-05 23:56 Carlos Mendez Status feedback => new
2019-09-06 07:18 t-ishii Status new => closed