Building and Configuring Slony
Once the nodes can listen to each other for events, start the slon daemons. Each database participating in the replication needs its own slon process; after starting each one, give slon a chance to start itself and its threads. The output in our example goes to two logs, which you can tail to watch the activity and look for errors.
slon is essentially an event and messaging system. Its events carry both the replicated data and the administrative information needed to coordinate that replication.
#!/bin/sh
#
# 02: Start up Slon processes
#
# Start Slony for each node
#
slon gbcluster "dbname=gb user=postgres port=5434 host=localhost" > slon_gb_74.out 2>&1 &
slon gbcluster "dbname=gb user=postgres port=5430 host=localhost" > slon_gb_80.out 2>&1 &
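As mentioned above, you can follow the logs to watch the activity. A minimal sketch, using the log file names from the startup script:

```
# Follow both slon logs, flagging lines that look like errors
tail -f slon_gb_74.out slon_gb_80.out | grep --line-buffered -Ei 'error|fatal'
```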
Replication in Slony-I works by subscribing to sets of tables. The set usually should comprise the group of related tables for an application or an entire schema.
To make this work, first define a set and designate its origin. Then add the tables by naming the set ID, the origin of the set (repeated, redundantly, exactly as given when the set was created), a table ID, the fully qualified table name, and an optional alternate key. All of the tables participating in the replication must have a primary key. If a table does not have one, you can have Slony-I add one for replication purposes only.
Be careful when setting the ID number of a table; it also designates the order in which Slony-I will lock the tables. This means that master tables should have IDs lower than those of their detail tables. The relationship hierarchy of your schema should help you determine the order of the numbers. If the table IDs are ordered backward or incorrectly, the slon process or PostgreSQL may deadlock.
In our example, the issues table is the topmost master, followed by articles. Each of the other tables is a lookup table for those two, so their numbers are higher, accordingly.
Figure 3. General Bits schema
You can create a set only once, and only while it has no active subscribers. To add tables to an existing replication set, create a new set; you can later combine the two sets with the MERGE SET command.
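As a sketch of that workflow (the set ID 2 and the table public.newtab are hypothetical; the conninfo strings match our cluster), a second set is created, subscribed, and then merged into set 1:

```
#!/bin/sh
# Hypothetical example: add a new table by creating a second set,
# subscribing it to the same nodes, and merging it into set 1.
slonik << _END_
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';

create set (id=2, origin=1, comment='new gb tables');
set add table (set id=2, origin=1, id=9, full qualified name = 'public.newtab', comment='New table');
subscribe set (id=2, provider=1, receiver=2, forward=yes);

# Both sets must share the same origin, and the subscription
# must be active before the merge can succeed.
merge set (id=1, add id=2, origin=1);
echo 'set 2 merged into set 1';
_END_
```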
#!/bin/sh
#
# 03: Create Set
#
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';

create set (id=1, origin=1, comment='gb tables');
echo 'Set created';

set add table (set id=1, origin=1, id=1, full qualified name = 'public.issues', comment='Issues table');
set add table (set id=1, origin=1, id=2, full qualified name = 'public.articles', comment='Articles table');
set add table (set id=1, origin=1, id=3, full qualified name = 'public.arttext', comment='Article Text table');
set add table (set id=1, origin=1, id=4, full qualified name = 'public.sqlbase', comment='Full SQL keywords');
set add table (set id=1, origin=1, id=5, full qualified name = 'public.whoall', comment='All contributors');
set add table (set id=1, origin=1, id=6, full qualified name = 'public.contrib', comment='Contributors by Article');
set add table (set id=1, origin=1, id=7, full qualified name = 'public.keywords', comment='Keywords by Article');
set add table (set id=1, origin=1, id=8, full qualified name = 'public.sqlkw', comment='Subset of SQL keywords');
echo 'set 1 of gb tables created';
_END_
Subscribing to sets
The nodes can now subscribe to the newly created sets. To subscribe to a set, identify the set, the node that can provide the set, the receiver of the set, and whether the receiver of this set should be able to forward the set to another node. In our case, the origin node of the set is the same as the provider of the set, but for cascading subscriptions that is not necessarily the case.
Even though this replication system has only two nodes, we are saying that the receiving node may forward the set. This allows for the case in which we may want to switch masters or add other nodes to the cluster. Here, node 2 subscribes to set 1, which originates on node 1 and is provided by node 1.
#!/bin/sh
#
# gb_subscribeset.sh
#
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';

subscribe set (id=1, provider=1, receiver=2, forward=yes);
echo 'set 1 of gb tables subscribed by node 2';
_END_
Of course, you should assume that these scripts have no typos and that you've run them exactly as intended. Yeah, right. Fortunately, you can recover from mistakes.
By this time, you probably have made a typo or two and need to know how to start over. The simplest way of undoing is to start fresh. There are subtler ways of correcting mistakes by updating the underlying tables. However, I don't recommend those unless you have intimate knowledge of the underlying tables.
- To terminate the slon processes, list their process IDs and use kill -TERM to terminate the oldest of the processes for each node.
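A minimal sketch of that step, assuming the slon daemons were started as in the scripts above (the PIDs shown are placeholders):

```
# List the slon processes with their PIDs; the bracket trick
# keeps grep from matching its own command line.
ps -ef | grep '[s]lon gbcluster'

# Send SIGTERM to the oldest (parent) slon process for each node.
kill -TERM 12345 67890
```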
To completely remove all Slony-I definitions from your database, uninstall each node:
#!/bin/sh
#
# gb_uninstallnode.sh
#
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
echo 'Cluster defined, nodes identified';
#
# UnInstall both nodes
#
uninstall node (id=1);
uninstall node (id=2);
echo 'Nodes 1 and 2 Removed';
_END_
UNINSTALL NODE removes all definitions, and you
must start cleanly after that.
The underlying tables for Slony-I are fairly straightforward. The cluster name, prefixed with an underscore, is the name of the schema in which the Slony-I tables are stored. (To see them in psql, add that schema to your search_path.) You can verify your commands to add nodes, listens, paths, and so on by examining these tables. It may also look tempting to "fix" things by just changing the underlying tables. Resist doing so, however. Use Slonik instead, so that it can trigger the appropriate events to perform the updates in an orderly fashion across all nodes.
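For instance, assuming our cluster name gbcluster (so the schema is _gbcluster), you can inspect the configuration read-only from psql:

```
# Examine Slony-I's configuration tables; look, but do not modify.
psql -d gb -p 5434 -U postgres << _END_
set search_path to _gbcluster;
select no_id, no_comment from sl_node;                         -- nodes we defined
select pa_server, pa_client, pa_conninfo from sl_path;         -- paths between them
select sub_set, sub_provider, sub_receiver from sl_subscribe;  -- subscriptions
_END_
```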
Figure 4. Slony schema
- General Bits Slony Articles on Tidbits
- The Slony-I Project documentation on GBorg
- Jan Wieck's Original Slony-I Talk and Scripts July 2004 in Portland, Oregon, sponsored by Afilias Global Registry Services
- Information from IRC #slony on freenode.net
- Mailing List: Slony1firstname.lastname@example.org
A. Elein Mustain has more than 15 years of experience working with databases, 10 of those working exclusively with object relational database systems.