[pgpool-general: 6305] Re: pgpool-general Digest, Vol 85, Issue 17
mandy
dw_qiuchunxiao at sina.com
Thu Nov 22 12:09:41 JST 2018
Hello,
Are you sure all the standby nodes are alive?
If they are, you could stop pgpool first and then start it again with 'pgpool -C -D'.
The '-D' option discards the existing pgpool status file, so it is re-created on startup.
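As a sketch, that restart sequence might look like the commands below; this is a command fragment, not runnable without a live Pgpool-II installation, and the "fast" stop mode is an assumption (adjust to your setup):

```shell
# Stop pgpool first ("fast" mode disconnects clients immediately).
pgpool -m fast stop

# Start it again: -C clears the on-disk query cache oid maps, and
# -D discards the pgpool_status file so backend status is re-detected.
pgpool -C -D
```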
Hope it works.
mandy
From: pgpool-general-request
Date: 2018-11-22 11:00
To: pgpool-general
Subject: pgpool-general Digest, Vol 85, Issue 17
Send pgpool-general mailing list submissions to
pgpool-general at pgpool.net
To subscribe or unsubscribe via the World Wide Web, visit
http://www.sraoss.jp/mailman/listinfo/pgpool-general
or, via email, send a message with subject or body 'help' to
pgpool-general-request at pgpool.net
You can reach the person managing the list at
pgpool-general-owner at pgpool.net
When replying, please edit your Subject line so it is more specific
than "Re: Contents of pgpool-general digest..."
Today's Topics:
1. [pgpool-general: 6304] Re: PGPool does not balance between
slaves (Tatsuo Ishii)
----------------------------------------------------------------------
Message: 1
Date: Thu, 22 Nov 2018 10:31:24 +0900 (JST)
From: Tatsuo Ishii <ishii at sraoss.co.jp>
To: franklinbr at gmail.com
Cc: pgpool-general at pgpool.net
Subject: [pgpool-general: 6304] Re: PGPool does not balance between
slaves
Message-ID: <20181122.103124.1332066456578854588.t-ishii at sraoss.co.jp>
Content-Type: Text/Plain; charset=us-ascii
Probably the status of the PostgreSQL servers is out of sync with what
Pgpool-II recognizes. Stop Pgpool-II and remove the "pgpool_status" file,
which should be located under the "logdir" directory specified in
pgpool.conf. Then start Pgpool-II again; the pgpool_status file is
automatically re-created on startup.
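The steps above can be sketched in shell. This sketch uses a throwaway directory in place of a live installation (the logdir value and file layout here are assumptions) and omits the actual Pgpool-II stop/start commands, which you would run around the removal:

```shell
# Sketch: find logdir in pgpool.conf and remove the stale status file.
# A temporary demo directory stands in for a real installation.
DEMO=$(mktemp -d)
printf "logdir = '%s'\n" "$DEMO" > "$DEMO/pgpool.conf"
touch "$DEMO/pgpool_status"                # pretend this is the stale file

# Extract the logdir value from pgpool.conf (strip the quotes).
LOGDIR=$(sed -n "s/^logdir *= *'\(.*\)'/\1/p" "$DEMO/pgpool.conf")

# Remove the cached backend status; Pgpool-II re-creates it on next start.
rm -f "$LOGDIR/pgpool_status"
test ! -e "$LOGDIR/pgpool_status" && echo "pgpool_status removed"
```

In a real installation you would stop Pgpool-II before the `rm` and start it again afterwards.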
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
> Hello guys,
>
> I have a Pgpool-II setup with four PostgreSQL servers: one master and
> three slaves in synchronous replication mode.
> I configured Pgpool-II for streaming replication mode, based on the file
> $prefix/etc/pgpool.conf.sample-stream.
> But when I execute "show pool_nodes" in psql, I get this result:
>
>
> postgres=# show pool_nodes;
>  node_id |  hostname   | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
> ---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
>  0       | 10.0.58.124 | 5432 | up     | 0.250000  | primary | 0          | true              | 0
>  1       | 10.0.58.123 | 5433 | down   | 0.250000  | standby | 0          | false             | 0
>  2       | 10.0.58.130 | 5433 | unused | 0.250000  | standby | 0          | false             | 0
>  3       | 10.0.58.132 | 5433 | unused | 0.250000  | standby | 0          | false             | 0
> (4 rows)
>
>
> Then I wrote a shell script to execute queries en masse, but they all ran
> on the master only (node_id 0).
>
>
> relevant parts of the configuration
> ----------------------------------------------------------------
> backend_hostname0 = '10.0.58.124'
> backend_port0 = 5432
> backend_weight0 = 1
> backend_data_directory0 = '/storage1/data'
> backend_flag0 = 'ALWAYS_MASTER'
>
> backend_hostname1 = '10.0.58.123'
> backend_port1 = 5433
> backend_weight1 = 1
> backend_data_directory1 = '/storage1/data'
> backend_flag1 = 'ALLOW_TO_FAILOVER'
>
> backend_hostname2 = '10.0.58.130'
> backend_port2 = 5433
> backend_weight2 = 1
> backend_data_directory2 = '/storage1/data'
> backend_flag2 = 'ALLOW_TO_FAILOVER'
>
> backend_hostname3 = '10.0.58.132'
> backend_port3 = 5433
> backend_weight3 = 1
> backend_data_directory3 = '/storage1/data'
> backend_flag3 = 'ALLOW_TO_FAILOVER'
>
> load_balance_mode = on
> master_slave_mode = on
> master_slave_sub_mode = 'stream'
> ----------------------------------------------------------------
>
>
> Any Tips ?
>
>
> --
> foobar
------------------------------
_______________________________________________
pgpool-general mailing list
pgpool-general at pgpool.net
http://www.pgpool.net/mailman/listinfo/pgpool-general
End of pgpool-general Digest, Vol 85, Issue 17
**********************************************