[Pgpool-general] Replication changes for support of multi-mastering (ignore tuple mismatch and zero config)

Day, David dday at redcom.com
Fri Nov 4 17:28:57 UTC 2011


Hi,

I am a bit new to Postgres (9.0.3) and PGPOOL (3.1). After a bit of experimentation with these components,
I am considering making some changes to PGPOOL and would appreciate any comments from the pgpool
community.

Given:
A node could consist of application software and, optionally, PGPOOL and/or Postgres. The application on startup
opens a "permanent" database session. The database has decision-making tables where consistency is desired but
not necessarily vital. The other area of the database is for logging, where consistency would not be important at all;
e.g., a report could be created from the sum of the log tables of all nodes.
Eventual consistency of the important tables would be achieved through audit methods
outside of PGPOOL, though perhaps triggered by PGPOOL reporting tuple inconsistency.


Therefore, the two changes I am considering:

First: Change node degeneration on tuple mismatch.
I want a new flag and/or per-command override that commits write updates despite finding tuple
differences between the nodes, and without degenerating the node status. Degeneration in replication mode
would then occur only if the connection to the backend becomes unavailable.
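
A minimal sketch of how the global side of this might look in pgpool.conf; the parameter name below is hypothetical and only illustrates the intended behavior (the per-command override would be layered on top of it):

    # hypothetical: commit write updates even when the backends report
    # different affected-tuple counts, and never degenerate a node for it;
    # degeneration would then be driven only by losing the backend connection
    replication_ignore_tuple_mismatch = on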

Second: Determine backend nodes dynamically through zero configuration (Avahi) rather than pgpool.conf.
This assumes Postgres is installed with a zeroconf Avahi patch which announces itself.

pgpool.conf would have new variables added:
zeroconf_browse_service_type = _postgresql._tcp
zeroconf_browse_service_name = some qualifying name from the real Postgres server(s).
zeroconf_publish_service_type = _postgresql._tcp
zeroconf_publish_service_name = some name that is important to the application software's zeroconf browser.
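
On the announcement side, the Avahi patch itself is the assumption here; the same announcement could also be made today by dropping a static Avahi service file on each Postgres host, e.g. /etc/avahi/services/postgresql.service (the service name below is just a placeholder):

    <?xml version="1.0" standalone='no'?>
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <!-- %h expands to the host name and becomes the browsable service name -->
      <name replace-wildcards="yes">PostgreSQL on %h</name>
      <service>
        <type>_postgresql._tcp</type>
        <port>5432</port>
      </service>
    </service-group>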

Pgpool, on starting up with the zeroconf service enabled, would immediately publish itself while awaiting client connections.
It would simultaneously browse for the declared service type and service name (the real Postgres server(s)). On discovering
candidates, it would add them to a dynamically growing and shrinking backend list. pcp_attach_node and pcp_detach_node
would be used to grow/shrink the backend pool upon discovery/loss of the browsed service type.
When an actual connection request comes to pgpool, it would connect that request to all current backends and
to any backends that are subsequently discovered with matching browse criteria.
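
To make the intended behavior concrete, here is a rough external sketch of the browse/attach loop (not pgpool internals), assuming avahi-browse from avahi-utils and the positional syntax of the pgpool-II 3.1 pcp tools; the (address, port) -> node id bookkeeping is the hypothetical part, since pgpool today only knows backends already declared in pgpool.conf:

    import subprocess
    import time

    PCP_HOST, PCP_PORT = "localhost", "9898"      # pgpool's pcp port
    PCP_USER, PCP_PASS = "pgpool", "secret"       # placeholders
    SERVICE_TYPE = "_postgresql._tcp"             # zeroconf_browse_service_type

    # Hypothetical bookkeeping: assigning node ids to discovered services is
    # the part that would need the proposed internal change.
    node_ids = {}                                 # (address, port) -> node id

    def browse():
        """Return the set of (address, port) pairs currently announcing SERVICE_TYPE."""
        out = subprocess.run(
            ["avahi-browse", "--resolve", "--parsable", "--terminate", SERVICE_TYPE],
            capture_output=True, text=True, check=True).stdout
        found = set()
        for line in out.splitlines():
            fields = line.split(";")
            # '=' lines are resolved entries; field 3 is the service name,
            # which could be matched against zeroconf_browse_service_name.
            if fields[0] == "=":
                found.add((fields[7], fields[8]))   # resolved address and port
        return found

    def pcp(command, node_id):
        # pcp tools (3.1 syntax): <timeout> <host> <port> <user> <passwd> <node_id>
        subprocess.run([command, "10", PCP_HOST, PCP_PORT, PCP_USER, PCP_PASS,
                        str(node_id)], check=True)

    known = set()
    while True:
        current = browse()
        for backend in current - known:           # newly announced backend
            node_ids.setdefault(backend, len(node_ids))
            pcp("pcp_attach_node", node_ids[backend])
        for backend in known - current:           # announcement disappeared
            pcp("pcp_detach_node", node_ids[backend])
        known = current
        time.sleep(30)

An implementation inside pgpool itself would presumably use the Avahi client library directly rather than shelling out, and would match the resolved service name against zeroconf_browse_service_name before attaching a backend.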


Those are my thoughts in a nutshell.
It certainly stretches the notion of multi-mastering, as consistency requirements are relaxed.
Hopefully someone more familiar with the current internals could comment on the potential scope of change
and the general usefulness, suggest other tools that might be more easily adapted,
and/or point out where I have not yet fully understood how PGPOOL works :+)



Thanks


Dave







