View Issue Details

ID: 0000305
Project: Pgpool-II
Category: Bug
View Status: public
Last Update: 2018-03-21 22:23
Reporter: xrensgory
Assigned To: Muhammad Usama
Priority: normal
Severity: block
Reproducibility: always
Status: feedback
Resolution: reopened
Product Version: 3.6.2
Target Version:
Fixed in Version:
Summary: 0000305: inconsistent node status with watchdog configuration
Description: I have 2 servers, with PostgreSQL and pgpool on each.
Watchdog is configured with a delegate IP. All steps were done according to this tutorial: http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html

                  delegated_ip
                     /\
PgPool_primary<---Heartbeat--->PgPool_standby
PgSql_primary<---Streaming_replication--->PgSql_standby

This configuration works well, except in one situation:

If server A, which runs PgSql_primary and is the pgpool watchdog coordinator, is powered off, the second server B does not execute the failover command as expected, and the standby node stays in the standby state.

Steps To Reproduce:
1. Set up a master-slave configuration with two pgpool instances with watchdog.
2. Shut down the node that runs the primary DB instance and the MASTER watchdog.

[root@calculate ~]# echo 'show pool_nodes;' | psql -h 10.38.164.34 -U postgres -p 9999 -d zabbix -x
-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | true
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | down
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0
Tags: master slave, streaming replication, watchdog

Activities

xrensgory

2017-04-20 22:26

reporter  

pgpool.conf (34,732 bytes)

xrensgory

2017-04-20 22:27

reporter  

pcp.conf (902 bytes)

xrensgory

2017-04-20 22:27

reporter  

failover.sh (448 bytes)

xrensgory

2017-04-20 22:54

reporter   ~0001441

Initial Step:

[root@calculate ~]# echo 'show pool_nodes;' | psql -h 10.38.164.34 -U postgres -p 9999 -d zabbix -x
-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | up
lb_weight | 0.500000
role | primary
select_cnt | 0
load_balance_node | true
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0

./pool_status.sh
Current Watchdog Coordinator: [fsrumosdt0033]
Current Primary DB node: [fsrumosdt0033]
Total number of db nodes: [2]
Number of live db nodes: [2]
Is primary exist: [1]

[root@fsrumosdt0033 pgpool-II]# poweroff

[root@calculate ~]# echo 'show pool_nodes;' | psql -h 10.38.164.34 -U postgres -p 9999 -d zabbix -x
-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | down
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | up
lb_weight | 0.500000
role | primary
select_cnt | 0
load_balance_node | true
replication_delay | 0

At this moment step0_pgpool.log was captured.

Next, power on the old_primary_node:

[root@fsrumosap0021 ~]# ./vmware-vsphere-cli-distrib/apps/vm/vmcontrol.pl --operation poweron --vmname DT0033

virtual machine 'DT0033' under host fsrumose0014.fs01.vwf.vwfs-ad powered on

Then we start to recover the old_primary_node:

export PGUSER=
export PGPASSWORD=
export PCPPASSFILE=/etc/pgpool-II/.pcppass
source /opt/rh/rh-postgresql94/enable

PGDATA='/var/opt/rh/rh-postgresql94/lib/pgsql/data'

sudo /etc/init.d/rh-postgresql94-postgresql stop

rm -rf ${PGDATA}/*
source /opt/rh/rh-postgresql94/enable
pg_basebackup -P -R -X stream -c fast -h 10.38.164.31 -D ${PGDATA}

cat << EOF > /var/opt/rh/rh-postgresql94/lib/pgsql/data/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replicator password=replicator host=10.38.164.31 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
trigger_file = '/var/opt/rh/rh-postgresql94/lib/pgsql/data/failover'
EOF

sed -i.bak 's/10.38.164.31/10.38.164.30/' ${PGDATA}/postgresql.conf
sudo /etc/init.d/rh-postgresql94-postgresql start
pcp_attach_node -h 10.38.164.34 -p 9898 -U postgres -w


[root@calculate ~]# echo 'show pool_nodes;' | psql -h 10.38.164.34 -U postgres -p 9999 -d zabbix -x
-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | up
lb_weight | 0.500000
role | primary
select_cnt | 0
load_balance_node | true
replication_delay | 0

At this moment the nodes have switched their roles. Everything works well.

[root@fsrumosdt0033 pgpool-II]# ./pool_status.sh
Current Watchdog Coordinator: [fsrumosdt0034]
Current Primary DB node: [fsrumosdt0034]
Total number of db nodes: [2]
Number of live db nodes: [2]
Is primary exist: [1]


[root@fsrumosdt0034 ~]# poweroff


-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | true
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | down
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0
2017-04-20 16:47:07: pid 4982: LOG: execute command: /etc/pgpool-II/failover.sh 1 0 fsrumosdt0033 /var/opt/rh/rh-postgresql94/lib/pgsql/data
+ export PCPPASSFILE=/etc/pgpool-II/.pcppass
+ PCPPASSFILE=/etc/pgpool-II/.pcppass
+ FALLING_NODE=1
+ OLDPRIMARY_NODE=0
+ NEW_PRIMARY=fsrumosdt0033
+ PGDATA=/var/opt/rh/rh-postgresql94/lib/pgsql/data
+ TRIGGER_FILE=failover
+ '[' 1 = 0 ']'
+ exit 0
2017-04-20 16:47:07: pid 4984: LOG: new IPC connection received
2017-04-20 16:47:07: pid 4984: LOG: received the failover command lock request from local pgpool-II on IPC interface
2017-04-20 16:47:07: pid 4984: LOG: local pgpool-II node "fsrumosdt0033:9999 Linux fsrumosdt0033.fs01.vwf.vwfs-ad" is requesting to release [FAILOVER] lock for failover ID 0
2017-04-20 16:47:07: pid 4984: LOG: local pgpool-II node "fsrumosdt0033:9999 Linux fsrumosdt0033.fs01.vwf.vwfs-ad" has released the [FAILOVER] lock for failover ID 0
2017-04-20 16:47:07: pid 4984: LOG: new IPC connection received
2017-04-20 16:47:07: pid 4984: LOG: received the failover command lock request from local pgpool-II on IPC interface
2017-04-20 16:47:07: pid 4984: LOG: local pgpool-II node "fsrumosdt0033:9999 Linux fsrumosdt0033.fs01.vwf.vwfs-ad" is requesting to release [FOLLOW MASTER] lock for failover ID 0
2017-04-20 16:47:07: pid 4984: LOG: local pgpool-II node "fsrumosdt0033:9999 Linux fsrumosdt0033.fs01.vwf.vwfs-ad" has released the [FOLLOW MASTER] lock for failover ID 0
2017-04-20 16:47:07: pid 4982: LOG: failover: set new primary node: -1
2017-04-20 16:47:07: pid 4982: LOG: failover: set new master node: 0
2017-04-20 16:47:07: pid 5796: LOG: worker process received restart request


The failover command was executed,
but OLDPRIMARY_NODE was somehow defined as the standby node id.
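
For reference, those arguments come from the failover_command placeholders in pgpool.conf. A minimal sketch, assuming failover_command is configured the way the tutorial does it (the exact line is in the attached pgpool.conf):

# pgpool.conf (sketch)
failover_command = '/etc/pgpool-II/failover.sh %d %P %H %R'
# %d = node id of the failed backend            -> FALLING_NODE    (1 here)
# %P = node id of the old primary node          -> OLDPRIMARY_NODE (0 here)
# %H = host name of the new master node         -> NEW_PRIMARY     (fsrumosdt0033)
# %R = database cluster path of the new master  -> PGDATA

So with the arguments logged above, pgpool passed node 0 as the old primary even though node 1 (the actual primary) was the node that went down, which is why the script's FALLING_NODE = OLDPRIMARY_NODE check never fires.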

xrensgory

2017-04-20 22:56

reporter  

fsrumosdt0034_step0_pgpool.log (12,461 bytes)

Muhammad Usama

2017-04-20 23:46

developer   ~0001442

Can you please share the pgpool log files, preferably with debug enabled?

Muhammad Usama

2017-04-20 23:55

developer   ~0001443

Ok, thanks, I see you have already attached the log.

Muhammad Usama

2017-04-21 00:09

developer   ~0001444

As per the log, pgpool-II is correctly detecting the backend node failure and executing the failover. However, since the standby was not promoted even after the failover command was executed, you need to verify that the failover command (failover.sh) was actually able to create the trigger file, and that the recovery.conf settings on the PostgreSQL standby expect the trigger file at the same location and with the same name.
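
A quick way to check that, as a sketch (paths taken from the recovery steps posted above; adjust the host and PGDATA if they differ, and note that creating the real trigger file would actually promote the standby):

# verify the ssh path and write permission without creating the real trigger file
ssh -T postgres@fsrumosdt0033 touch /var/opt/rh/rh-postgresql94/lib/pgsql/data/failover_test
ssh -T postgres@fsrumosdt0033 rm /var/opt/rh/rh-postgresql94/lib/pgsql/data/failover_test

# confirm the standby's recovery.conf points at exactly the file failover.sh touches
grep trigger_file /var/opt/rh/rh-postgresql94/lib/pgsql/data/recovery.conf
# expected: trigger_file = '/var/opt/rh/rh-postgresql94/lib/pgsql/data/failover'

When the real trigger file is created during a failover, the standby's PostgreSQL log should report that the trigger file was found and that it is exiting recovery.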

xrensgory

2017-04-21 00:45

reporter  

fsrumosdt0033_step1_pgpool.log (221,255 bytes)

xrensgory

2017-04-21 00:49

reporter   ~0001445

I attached pgpool.log from the old_primary_node.
The point is:

2017-04-20 16:47:07: pid 4982: LOG: execute command: /etc/pgpool-II/failover.sh 1 0 fsrumosdt0033 /var/opt/rh/rh-postgresql94/lib/pgsql/data
+ export PCPPASSFILE=/etc/pgpool-II/.pcppass
+ PCPPASSFILE=/etc/pgpool-II/.pcppass
+ FALLING_NODE=1
+ OLDPRIMARY_NODE=0
+ NEW_PRIMARY=fsrumosdt0033
+ PGDATA=/var/opt/rh/rh-postgresql94/lib/pgsql/data
+ TRIGGER_FILE=failover
+ '[' 1 = 0 ']


+ FALLING_NODE=1
+ OLDPRIMARY_NODE=0

So the script doesn't create a trigger file on the standby node: FALLING_NODE (1) is not equal to OLDPRIMARY_NODE (0), so the if branch is skipped and the script simply exits.

xrensgory

2017-04-22 00:11

reporter   ~0001452

Hello
Any ideas?

Muhammad Usama

2017-04-22 00:23

developer   ~0001453

Looking at the script, the most probable cause of it not creating the trigger file is that you might not have configured a passwordless SSH connection, since the script uses SSH to create the trigger file and that requires passwordless SSH to be enabled on the host.

So one thing to do is to verify that you can successfully execute the create-trigger-file SSH command (the command in the failover.sh script) without it requiring a password.
Secondly, since the standby is on the same host as Pgpool-II, you can also modify the failover script so that it does not use SSH when it needs to create the trigger file on the same host.
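
For example, something along these lines (a hypothetical sketch only, based on the failover.sh attached here; the hostname comparison and the su handling may need adjusting for your environment):

#!/bin/bash -x
# failover.sh (sketch): create the trigger file locally when the new primary
# is this very host, and fall back to SSH only for a remote host.
FALLING_NODE=$1      # %d
OLDPRIMARY_NODE=$2   # %P
NEW_PRIMARY=$3       # %H
PGDATA=$4            # %R
TRIGGER_FILE=failover

if [ "$FALLING_NODE" = "$OLDPRIMARY_NODE" ]; then
    if [ "$NEW_PRIMARY" = "$(hostname -s)" ]; then
        # the node to promote is on this host: touch the trigger file directly
        su postgres -c "touch $PGDATA/$TRIGGER_FILE"
    else
        # remote node: keep using passwordless ssh as before
        su postgres -c "ssh -T postgres@$NEW_PRIMARY touch $PGDATA/$TRIGGER_FILE"
    fi
fi
exit 0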

xrensgory

2017-04-22 01:31

reporter   ~0001454

Passwordless SSH works well between the servers:

[root@fsrumosdt0033 ~]# su - postgres
Last login: Fri Apr 21 19:12:04 MSK 2017 from 10.38.164.30 on pts/1
Last login: Fri Apr 21 19:13:04 MSK 2017 on pts/0
-bash-4.1$ ssh -T postgres@10.38.164.30 hostname
fsrumosdt0033.fs01.vwf.vwfs-ad
-bash-4.1$ ssh -T postgres@10.38.164.31 hostname
fsrumosdt0034.fs01.vwf.vwfs-ad


[root@fsrumosdt0034 ~]# su - postgres
Last login: Thu Apr 20 16:29:59 MSK 2017 on pts/0
Last login: Fri Apr 21 19:13:34 MSK 2017 on pts/0
-bash-4.1$ ssh -T postgres@fsrumosdt0033 hostname
fsrumosdt0033.fs01.vwf.vwfs-ad
-bash-4.1$ ssh -T postgres@fsrumosdt0034 hostname
fsrumosdt0034.fs01.vwf.vwfs-ad

I also simply modified the failover script:
#!/bin/bash -x

FALLING_NODE=$1
OLDPRIMARY_NODE=$2 # %P
NEW_PRIMARY=$3 # %H
PGDATA=$4 # %R
TRIGGER_FILE=failover

if [ $FALLING_NODE = $OLDPRIMARY_NODE ]; then
    if [ $UID -eq 0 ]
    then
        su postgres -c "ssh -T postgres@$NEW_PRIMARY touch $PGDATA/$TRIGGER_FILE"
    else
        ssh -T postgres@$NEW_PRIMARY touch $PGDATA/$TRIGGER_FILE
    fi
    exit 0;
fi;
exit 0;

Now lets check:

-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | up
lb_weight | 0.500000
role | primary
select_cnt | 0
load_balance_node | false
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | true
replication_delay | 0

[root@fsrumosdt0033 pgpool-II]# ./pool_status.sh
Current Watchdog Coordinator: [fsrumosdt0033]
Current Primary DB node: [fsrumosdt0033]
Total number of db nodes: [2]
Number of live db nodes: [2]
Is primary exist: [1]

1. fsrumosdt0033 is primary DB node and Watchdog coordinator

So kill'em
[root@fsrumosdt0033 pgpool-II]# poweroff

Broadcast message from root@fsrumosdt0033.fs01.vwf.vwfs-ad
    (/dev/pts/0) at 19:23 ...

The system is going down for power off NOW!


Pgpool at fsrumosdt0034 executed the failover command:

2017-04-21 19:24:29: pid 13658: LOG: execute command: /etc/pgpool-II/failover.sh 0 0 fsrumosdt0034 /var/opt/rh/rh-postgresql94/lib/pgsql/data
+ FALLING_NODE=0
+ OLDPRIMARY_NODE=0
+ NEW_PRIMARY=fsrumosdt0034
+ PGDATA=/var/opt/rh/rh-postgresql94/lib/pgsql/data
+ TRIGGER_FILE=failover
+ '[' 0 = 0 ']'
+ '[' 26 -eq 0 ']'
+ ssh -T postgres@fsrumosdt0034 touch /var/opt/rh/rh-postgresql94/lib/pgsql/data/failover
+ exit 0
2017-04-21 19:24:29: pid 13660: LOG: new IPC connection received
2017-04-21 19:24:29: pid 13660: LOG: received the failover command lock request from local pgpool-II on IPC interface
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" is requesting to release [FAILOVER] lock for failover ID 0
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" has released the [FAILOVER] lock for failover ID 0
2017-04-21 19:24:29: pid 13660: LOG: new IPC connection received
2017-04-21 19:24:29: pid 13660: LOG: received the failover command lock request from local pgpool-II on IPC interface
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" is requesting to release [FOLLOW MASTER] lock for failover ID 0
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" has released the [FOLLOW MASTER] lock for failover ID 0
2017-04-21 19:24:29: pid 13658: LOG: failover: set new primary node: -1
2017-04-21 19:24:29: pid 13658: LOG: failover: set new master node: 1
2017-04-21 19:24:29: pid 14485: LOG: worker process received restart request
2017-04-21 19:24:29: pid 13660: LOG: new IPC connection received
2017-04-21 19:24:29: pid 13660: LOG: received the failover command lock request from local pgpool-II on IPC interface
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" is requesting to resign from a lock holder for failover ID 0
2017-04-21 19:24:29: pid 13660: LOG: local pgpool-II node "fsrumosdt0034:9999 Linux fsrumosdt0034.fs01.vwf.vwfs-ad" has resigned from the lock holder


In the PGDATA directory we see a recovery.done file with the current timestamp in my timezone:
-rw------- 1 postgres postgres 226 Apr 21 19:20 recovery.done

A promote action said that postgres is not in standby mode:
-bash-4.1$ pg_ctl promote -D /var/opt/rh/rh-postgresql94/lib/pgsql/data
pg_ctl: cannot promote server; server is not in standby mode

Seems OK.

From the client side:
[root@calculate ~]# echo 'show pool_nodes;' | psql -h 10.38.164.34 -U postgres -p 9999 -d zabbix -x
-[ RECORD 1 ]-----+--------------
node_id | 0
hostname | fsrumosdt0033
port | 5432
status | down
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | false
replication_delay | 0
-[ RECORD 2 ]-----+--------------
node_id | 1
hostname | fsrumosdt0034
port | 5432
status | up
lb_weight | 0.500000
role | standby
select_cnt | 0
load_balance_node | true
replication_delay | 0



I believe the problem is in this record from the pgpool log:

2017-04-21 19:24:29: pid 13658: LOG: failover: set new primary node: -1
2017-04-21 19:24:29: pid 13658: LOG: failover: set new master node: 1

Thank you in advance

xrensgory

2017-04-24 20:13

reporter   ~0001457

Could you please try to reproduce the issue according to this tutorial?
http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave/en.html

xrensgory

2017-04-25 20:30

reporter   ~0001462

Muhammad,
is there any news?

Thank you

Muhammad Usama

2017-04-25 22:14

developer   ~0001466

Hi, sorry for the delay, I was a little bit stuck in some other issues.

I have tried to reproduce this issue using the same configuration as mentioned in the tutorial, but still no luck. I will perform one more test tonight to make sure I have covered all scenarios.
Meanwhile, can you please check if the PostgreSQL server log on fsrumosdt0034 reports any warnings/errors when it is promoted from standby to primary?
And also, if you can share the pgpool.log with log_min_messages = DEBUG2 set, it would be helpful in getting to the bottom of this issue.
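
For reference, that is a single line in pgpool.conf (a minimal sketch, assuming your pgpool.conf lives under /etc/pgpool-II like the other files attached here; reload or restart pgpool-II afterwards so it takes effect):

# /etc/pgpool-II/pgpool.conf
log_min_messages = DEBUG2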

xrensgory

2017-04-26 03:40

reporter  

fsrumosdt0034_pgpool.log (1,757,775 bytes)

xrensgory

2017-04-26 03:46

reporter   ~0001472

Hello,
I attached pgpool.log from fsrumosdt0034 with the log_min_messages = DEBUG2 option enabled.

From the PostgreSQL side there are no issues:

LOG: database system is ready to accept read only connections
LOG: started streaming WAL from primary at C/49000000 on timeline 62
LOG: replication terminated by primary server
DETAIL: End of WAL reached on timeline 62 at C/490003F0.
FATAL: could not send end-of-streaming message to primary: no COPY in progress
    
LOG: record with zero length at C/490003F0
LOG: started streaming WAL from primary at C/49000000 on timeline 62
LOG: replication terminated by primary server
DETAIL: End of WAL reached on timeline 62 at C/49000490.
FATAL: could not send end-of-streaming message to primary: no COPY in progress
    
LOG: record with zero length at C/49000490
FATAL: could not connect to the primary server: could not connect to server: No route to host
        Is the server running on host "10.38.164.30" and accepting
        TCP/IP connections on port 5432?
    
FATAL: could not connect to the primary server: could not connect to server: No route to host
        Is the server running on host "10.38.164.30" and accepting
        TCP/IP connections on port 5432?

In my test scenario I completely shut down the master node with poweroff:

Current Watchdog Coordinator: [fsrumosdt0033]
Current Primary DB node: [fsrumosdt0033]
Total number of db nodes: [2]
Number of live db nodes: [2]
Is primary exist: [1]


[root@fsrumosdt0033 pg_log]# poweroff

xrensgory

2017-05-10 19:08

reporter   ~0001502

Hello.
Any updates?

xrensgory

2017-05-19 21:03

reporter   ~0001520

Did you succeed in reproducing the problem?

xrensgory

2017-05-31 02:30

reporter   ~0001526

Hello.
Any updates?

Muhammad Usama

2017-06-01 03:20

developer   ~0001529

Hi,
Sorry, I got stuck in some issues. I am looking into this and will update you tomorrow.

Thanks

Muhammad Usama

2017-06-06 20:42

developer   ~0001530

Hi

I have regenerated the complete setup again and performed the tests, but every time the standby node is successfully promoted to master. I still suspect that the problem is with failover.sh, most probably around the permissions to create the trigger file. The reason I suspect that is that the PostgreSQL log of the standby server you shared above does not have any message saying it has found the trigger file and is exiting from recovery.

For example, below is the PostgreSQL log of a standby after the failover is performed by Pgpool-II.

....
...
2017-06-06 16:28:02.809 PKT [17555] FATAL: could not connect to the primary server: could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 5432?
    could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?
cp: cannot stat ‘/home/work/community/installed/pg/stream/archive/00000001000000000000000B’: No such file or directory
2017-06-06 16:28:07.815 PKT [17561] FATAL: could not connect to the primary server: could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 5432?
    could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?
cp: cannot stat ‘/home/work/community/installed/pg/stream/archive/00000001000000000000000B’: No such file or directory
2017-06-06 16:28:12.823 PKT [17539] LOG: trigger file found: /home/work/community/installed/pg/stream/standby/trigger . <<==============(TRIGGER FOUND LOG)
2017-06-06 16:28:12.823 PKT [17539] LOG: redo done at 0/B000028
cp: cannot stat ‘/home/work/community/installed/pg/stream/archive/00000001000000000000000B’: No such file or directory
2017-06-06 16:28:12.829 PKT [17539] LOG: restored log file "00000002.history" from archive
2017-06-06 16:28:12.831 PKT [17539] LOG: restored log file "00000003.history" from archive
cp: cannot stat ‘/home/work/community/installed/pg/stream/archive/00000004.history’: No such file or directory
2017-06-06 16:28:12.833 PKT [17539] LOG: selected new timeline ID: 4
cp: cannot stat ‘/home/work/community/installed/pg/stream/archive/00000001.history’: No such file or directory
2017-06-06 16:28:12.869 PKT [17539] LOG: archive recovery complete
2017-06-06 16:28:12.873 PKT [17538] LOG: database system is ready to accept connections

Now, since in your setup the standby PostgreSQL server never finds the trigger file, it never comes out of recovery mode, and hence Pgpool-II does not find a master node.
So you may need to verify your setup and debug why the trigger file is either not created or not detected by the PostgreSQL standby server; a minimal manual check is sketched below.
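
As a sketch (using the arguments pgpool logged above; the log path assumes the default pg_log directory under PGDATA, so adjust the file name to your installation):

# replay the failover command by hand with the same arguments pgpool passed
/etc/pgpool-II/failover.sh 0 0 fsrumosdt0034 /var/opt/rh/rh-postgresql94/lib/pgsql/data

# did the trigger file appear, and is it owned/readable by the postgres user?
ls -l /var/opt/rh/rh-postgresql94/lib/pgsql/data/failover

# the standby's PostgreSQL log should then contain a "trigger file found" line
tail -n 100 /var/opt/rh/rh-postgresql94/lib/pgsql/data/pg_log/*.log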

Also, as far as Pgpool-II is concerned, it just has to execute the user-configured failover_command with the correct arguments at the time of the PostgreSQL backend node failure; the actual promotion of the standby node to master is the responsibility of the failover_command. As per the Pgpool-II logs, it is correctly performing all of these tasks. So please verify the pgpool.conf, recovery.conf, and failover.sh scripts, and verify the permissions, since I can't find another problem in Pgpool-II that could cause this.

Please let me know if I am missing or overlooking something.

Regards
Muhammad Usama

xrensgory

2018-03-21 22:23

reporter   ~0001976

Please delete this issue from the bug tracker.

Issue History

Date Modified Username Field Change
2017-04-20 22:26 xrensgory New Issue
2017-04-20 22:26 xrensgory File Added: pgpool.conf
2017-04-20 22:26 xrensgory Tag Attached: failover
2017-04-20 22:26 xrensgory Tag Attached: master slave
2017-04-20 22:26 xrensgory Tag Attached: streaming replication
2017-04-20 22:26 xrensgory Tag Attached: watchdog
2017-04-20 22:27 xrensgory File Added: pcp.conf
2017-04-20 22:27 xrensgory File Added: failover.sh
2017-04-20 22:54 xrensgory Note Added: 0001441
2017-04-20 22:56 xrensgory File Added: fsrumosdt0034_step0_pgpool.log
2017-04-20 23:46 Muhammad Usama Note Added: 0001442
2017-04-20 23:55 Muhammad Usama Note Added: 0001443
2017-04-21 00:09 Muhammad Usama Note Added: 0001444
2017-04-21 00:45 xrensgory File Added: fsrumosdt0033_step1_pgpool.log
2017-04-21 00:49 xrensgory Note Added: 0001445
2017-04-22 00:11 xrensgory Note Added: 0001452
2017-04-22 00:23 Muhammad Usama Note Added: 0001453
2017-04-22 01:31 xrensgory Note Added: 0001454
2017-04-24 20:13 xrensgory Note Added: 0001457
2017-04-25 20:30 xrensgory Note Added: 0001462
2017-04-25 22:14 Muhammad Usama Note Added: 0001466
2017-04-26 03:40 xrensgory File Added: fsrumosdt0034_pgpool.log
2017-04-26 03:46 xrensgory Note Added: 0001472
2017-05-09 14:34 t-ishii Assigned To => Muhammad Usama
2017-05-09 14:34 t-ishii Status new => assigned
2017-05-10 19:08 xrensgory Note Added: 0001502
2017-05-19 21:03 xrensgory Note Added: 0001520
2017-05-31 02:30 xrensgory Note Added: 0001526
2017-06-01 03:20 Muhammad Usama Note Added: 0001529
2017-06-06 20:42 Muhammad Usama Note Added: 0001530
2017-08-29 09:51 pengbo Status assigned => closed
2018-03-21 22:20 xrensgory Tag Detached: failover
2018-03-21 22:23 xrensgory Status closed => feedback
2018-03-21 22:23 xrensgory Resolution open => reopened
2018-03-21 22:23 xrensgory Note Added: 0001976