[pgpool-general: 2756] Re: bash script can't see env variables when run by pgpool

Gintautas Sulskus gingaz at gmail.com
Fri Apr 11 19:40:52 JST 2014


Hello Yugo,

no worries, your replies are much appreciated :)

The problem was in the recovery process on the remote failed node. If I
remember correctly, the scp operation asked for host authorisation
("yes/no") and hung the script. A more verbose error message providing
clues to pinpoint the problem quickly would be really helpful.
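
For anyone who hits the same thing: the cure on my side was to make scp
non-interactive. Below is a minimal sketch of a first-stage recovery script,
assuming the usual three arguments pgpool passes (primary data directory,
host of the failed node, data directory on the failed node) and a
passwordless ssh key for the postgres user; the file names are illustrative
and this is not my actual script:

    #!/bin/bash
    # pg_1st_recovery - illustrative sketch only
    PRIMARY_DATA=$1   # data directory of the primary node
    DEST_HOST=$2      # host of the node being recovered
    DEST_DATA=$3      # data directory on the node being recovered

    # BatchMode=yes makes scp fail instead of prompting for a password;
    # StrictHostKeyChecking=no skips the interactive "yes/no" host-key
    # question that hung my recovery.
    scp -o BatchMode=yes -o StrictHostKeyChecking=no \
        "$PRIMARY_DATA"/recovery_base.tar.gz \
        "$DEST_HOST":"$DEST_DATA"/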

I updated to pgpoolAdmin 3.3.1 and pgpool 3.3.3 and have noticed an issue:
if I update pgpool.conf via pgpoolAdmin, it erases the wd_lifecheck_method,
wd_interval, wd_heartbeat_port and wd_heartbeat_keepalive entries, and
pgpool then fails to start:
2014-04-11 10:38:59 ERROR: pid 32410: pool_config: wd_lifecheck_method must
be either "heartbeat" or "query"
2014-04-11 10:38:59 ERROR: pid 32410: Unable to get configuration.
Exiting...
I have to edit them back into the file by hand. Does this happen to anyone
else?
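
For reference, this is roughly the block I have to re-add by hand each time
(the values are illustrative, not defaults I vouch for):

    # watchdog life-check settings that pgpoolAdmin drops
    wd_lifecheck_method = 'heartbeat'   # 'heartbeat' or 'query'
    wd_interval = 10                    # life-check interval (seconds)
    wd_heartbeat_port = 9694            # UDP port for heartbeat packets
    wd_heartbeat_keepalive = 2          # interval between heartbeat packets (seconds)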

pgpoolAdmin questions:
1. Could you please explain what the disconnect/return buttons are supposed
to do? I assumed they would invoke the failover/failback functionality,
e.g. I could "disconnect" a server and then, on "return", it would
resynchronise with the cluster automatically (via pcp_recovery_node).

If my assumption is wrong, is it possible to call pcp_recovery_node via the
interface? (A command-line sketch of what I mean follows after question 2.)

2. I could not find any documentation on how to enable the
stop/start/restart buttons in pgpoolAdmin; currently they are disabled.
I am not sure which settings enable them.
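
Regarding question 1: the fallback I use today is to call pcp_recovery_node
from the shell. If I understand the 3.3.x pcp tools correctly, the
positional form is roughly the following (host, port, credentials and node
id are placeholders for my own values):

    # pcp_recovery_node <timeout> <pgpool_host> <pcp_port> <pcp_user> <pcp_passwd> <node_id>
    pcp_recovery_node 20 localhost 9898 pcpadmin secret 1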

Thanks for your patience! :)

Cheers,
Gintautas



On Thu, Apr 10, 2014 at 8:38 AM, Yugo Nagata <nagata at sraoss.co.jp> wrote:

> Hi Gintautas,
>
> I'm sorry for replying late.
>
> Recovery scripts are executed by PostgreSQL, so clues would be in the log
> output of the backend server. Do the scripts have permission to be executed
> by the postgres user?
>
> On Sat, 29 Mar 2014 19:35:32 +0000
> Gintautas Sulskus <gingaz at gmail.com> wrote:
>
> > Hi Yugo,
> >
> > thanks for clarifying! The setup is working.
> >
> > > I guess that an environment variable defined in /etc/environment is
> > > referenced in pg_ni_up.sh but this doesn't work well, right?
> >
> >
> > > How about trying 'echo $PATH > /tmp/test' in pg_ni_up.sh?
> > > Is the $PATH (or another value) defined in /etc/environment written to
> > > /tmp/test or not?
> >
> >
> > Even $PATH was not displayed correctly. I presume it was a permissions
> > issue, although it is not entirely clear to me what caused it.
> >
> >
> > I am still struggling with the pgpool configuration though. Hopefully this
> > is the last question. Could anyone please give me any clues on this problem:
> >
> > I have tested my online recovery steps (1st and 2nd recovery) by manually
> > running the scripts. They work just fine. Everything gets logged properly.
> >
> > However, when I try to run pcp_recovery_node -d I get:
> > DEBUG: send: tos="R", len=46
> > DEBUG: recv: tos="r", len=21, data=AuthenticationOK
> > DEBUG: send: tos="D", len=6
> > DEBUG: recv: tos="e", len=20, data=recovery failed
> > DEBUG: command failed. reason=recovery failed
> > BackendError
> > DEBUG: send: tos="X", len=4
> >
> > *pgpool log output:*
> > (pgpool started in debug mode with debug_level=10)
> > CHECKPOINT in the 1st stage done
> > starting recovery command: "SELECT pgpool_recovery('pg_1st_recovery',
> > 'failed_node_ip_address', '/data/postgres/main/')"
> > exec_recovery: pg_1st_recovery command failed at 1st stage
> >
> > *pg_1st_recovery logs:*
> > none
> >
> > *my pgpool configuration:*
> > recovery_1st_stage_command=pg_1st_recovery
> > (I expect the $1 $2 $3 parameters from pgpool, as in the examples)
> >
> > Could you please give me any hints as to what could be wrong?
> > Even a rough direction instead of "BackendError" would be extremely
> > valuable.
> >
> > Thanks,
> > Gintas
> >
> >
> > On Wed, Mar 26, 2014 at 7:02 AM, Yugo Nagata <nagata at sraoss.co.jp> wrote:
> >
> > > Hi,
> > >
> > > On Fri, 21 Mar 2014 00:55:51 +0000
> > > Gintautas Sulskus <gingaz at gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > more problems regarding watchdog:
> > > > On one of the servers I see a log entry: "wd_create_hb_send_socket:
> > > > setsockopt(SO_BINDTODEVICE) requies root privilege".
> > > > Any clues as to what this might be related to? I assume it is a
> > > > permission problem.
> > >
> > > You can ignore this message because SO_BINDTODEVICE is not necessary.
> > >
> > > >
> > > > Much appreciated!
> > > >
> > > > Gintautas
> > > >
> > > >
> > > > On Fri, Mar 21, 2014 at 12:48 AM, Gintautas Sulskus <gingaz at gmail.com> wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > ifconfig_path = '/home/ubuntu/apps/scripts'
> > > > > The PgpoolAdmin description of ifconfig_path is: "The path of a
> > > > > command to switch the IP address." I understand it as the path for
> > > > > the if_up_cmd and if_down_cmd commands.
> > > > >
> > > > > if_up_cmd = 'pg_ni_up.sh up eth0:1 10.0.1.244 255.255.255.0'
> > > > > if_down_cmd = 'pg_ni_up.sh down eth0:1'
> > >
> > > I guess that an environment variable defined in /etc/environment is
> > > referenced in pg_ni_up.sh but this doesn't work well, right?
> > >
> > > How about trying 'echo $PATH > /tmp/test' in pg_ni_up.sh?
> > > Is the $PATH (or another value) defined in /etc/environment written to
> > > /tmp/test or not?
> > >
> > > > >
> > > > >
> > > > > PS. Is this mailing list the right place to discuss PgpoolAdmin?
> > > > > In the latest PgpoolAdmin version, "if_*up*_cmd" is described as "The
> > > > > command to bring *down* the virtual IP" and "if_*down*_cmd" as "The
> > > > > command to bring *up* the virtual IP". The descriptions are clearly
> > > > > mixed up.
> > > > >
> > > > > Gintautas
> > > > >
> > > > >
> > > > > On Mon, Mar 10, 2014 at 2:46 AM, Yugo Nagata <nagata at sraoss.co.jp> wrote:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> On Sun, 9 Mar 2014 03:00:52 +0000
> > > > >> Gintautas Sulskus <gingaz at gmail.com> wrote:
> > > > >>
> > > > >> > Hello,
> > > > >> >
> > > > >> > I am trying to set up the pgpool watchdog. For virtual IP control
> > > > >> > my plan is to use bash scripts (if_up_cmd/if_down_cmd). In my
> > > > >> > script I use some environment variables.
> > > > >> >
> > > > >> > A strange thing occurs here. No matter which user I run pgpool
> > > > >> > under, the script can't pick up my custom environment variables
> > > > >> > from /etc/environment (including the customised PATH). It still
> > > > >> > sees standard binaries like ifconfig, though.
> > > > >>
> > > > >> How do you configure if_up_cmd, if_down_cmd and ifconfig_path in
> > > > >> pgpool.conf? pgpool looks for the ifconfig commands on the path
> > > > >> specified by these options.
> > > > >>
> > > > >> >
> > > > >> > The same script, when run manually by me, works under all users.
> > > > >> > Any ideas what could be wrong?
> > > > >> >
> > > > >> > The only solution I have come up with is to redefine the env
> > > > >> > variables in the script.
> > > > >> >
> > > > >> > Thanks.
> > > > >> >
> > > > >> > Best Regards,
> > > > >> > Gintas
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Yugo Nagata <nagata at sraoss.co.jp>
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best Regards,
> > > > > Gintautas Sulskus
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Best Regards,
> > > > Gintautas Sulskus
> > >
> > >
> > > --
> > > Yugo Nagata <nagata at sraoss.co.jp>
> > >
> >
> >
> >
> > --
> > Best Regards,
> > Gintautas Sulskus
>
>
> --
> Yugo Nagata <nagata at sraoss.co.jp>
>



-- 
Best Regards,
Gintautas Sulskus