Modifying Slony Clusters
by A. Elein Mustain
Replication clusters have much forethought applied to their creation. However, in the course of systems development, some changes are always necessary sooner or later. This article walks through the steps required to:
- Add a node to the cluster.
- Switch data providers of a table set.
- Promote a replication to master.
- Apply schema changes to the replication cluster.
These examples will use the replication scheme originally set up in "Introducing Slony."
Adding a Node
Suppose that in addition to the replica you created for the gb database, you want another replica of the same database for reporting. Here's how to add a replica of the gb table set on a second database in the 8.0 installation. The 7.4 Node 1, database gb, will originate the data set and replicate it directly to Node 2, also database gb, and Node 3, database gb2.
Before starting, be sure to create gb2 in the 8.0 installation, seeding it with the same empty schema as the other two databases in this cluster. You do not want a dump of the schema of gb as it is now, but rather as it was before you defined the Slony-I cluster.
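One way to do that seeding is sketched below, under the assumption that a schema-only dump named gb_schema.sql was saved before the cluster was defined (the file name is an assumption; port and user match the node conninfo used in this article):

```shell
# Create the new database on the 8.0 installation (port 5430, matching
# the Node 3 conninfo used later in this article).
createdb -p 5430 -U postgres gb2

# Load the pre-Slony schema. gb_schema.sql is an assumed file name for a
# schema-only dump (pg_dump -s gb) taken before the cluster was defined;
# dumping gb now would drag in the Slony-I cluster schema as well.
psql -p 5430 -U postgres -d gb2 -f gb_schema.sql
```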
Next, define Node 3 and ensure there are paths from Node 3 to and from Nodes 1 and 2. Then enable listening along each path, mirroring the expected table set replication: Nodes 2 and 3 listen to each other via Node 1, reflecting that topology, rather than through a direct listen path between them.
This is a really good time to remember that the connection information in the store path command pertains to the server node as seen from the client node: each path's conninfo is what the client will use to reach the server.
This is also a pretty good time to look up drop path and drop listen, two more slonik commands, in case you need to undo a misconfigured path or listen.
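The drop commands take the same identifying arguments as their store counterparts. A minimal sketch (the node numbers here are illustrative, not a recipe for this cluster):

```shell
#!/bin/sh
slonik << _END_
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 3 admin conninfo = 'dbname=gb2 host=localhost port=5430 user=postgres';
# Drop the listen first, then the path beneath it
drop listen (origin=1, receiver=3, provider=1);
drop path (server=1, client=3);
_END_
```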
#!/bin/sh
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb2 host=localhost port=5430 user=postgres';
echo 'Cluster defined, nodes identified';

#
# Initialize the cluster and create the third node
#
store node (id=3, comment='gb2 8.0 5430');

#
# create paths
#
store path (server=1, client=3, conninfo='dbname=gb host=localhost port=5434 user=postgres');
store path (server=2, client=3, conninfo='dbname=gb host=localhost port=5430 user=postgres');
store path (server=3, client=1, conninfo='dbname=gb2 host=localhost port=5430 user=postgres');
store path (server=3, client=2, conninfo='dbname=gb2 host=localhost port=5430 user=postgres');

#
# Enable listening along each path
#
store listen (origin=1, receiver=3, provider=1);
store listen (origin=3, receiver=1, provider=3);
store listen (origin=2, receiver=3, provider=1);
store listen (origin=3, receiver=2, provider=1);
_END_
Now you are ready to start a new slon process for Node 3, the 8.0 installation with the gb2 database. Because there are two replica databases on the 8.0 installation, be sure this slon's conninfo names gb2:
#!/bin/bash
#
# varlena Slony Start Up
# ==============================
slon gbcluster "dbname=gb2 host=localhost port=5430 user=postgres" > slon_gb_3.out 2>&1 &
Once the new slon process is up and running, you can subscribe Table Set 1, originating at Node 1, to Node 3. At this point it is worth running tail -f on the log files to watch for progress and errors. The log files in this case are slon_gb_1.out, slon_gb_2.out, and slon_gb_3.out. If you see any problems, you may have to remove the paths and/or listens, replacing them with corrected ones.
The log file slon_gb_1.out looks like:
CONFIG main: slon version 1.0.5 starting up
CONFIG main: local node id = 1
CONFIG main: loading current cluster configuration
CONFIG storeNode: no_id=2 no_comment='Node 2 dbname=gb host=localhost port=5430 user=postgres'
CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="dbname=gb host=localhost port=5430 user=postgres" pa_connretry=10
CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
CONFIG main: configuration complete - starting threads
CONFIG enableNode: no_id=2
CONFIG storeSet: set_id=1 set_origin=1 set_comment=''
CONFIG storeNode: no_id=3 no_comment='gb2 8.0 5430'
CONFIG enableNode: no_id=3
CONFIG storePath: pa_server=3 pa_client=1 pa_conninfo="dbname=gb2 host=localhost port=5430 user=postgres" pa_connretry=10
CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
The one for Node 2, slon_gb_2.out, looks very similar except the paths and listens are appropriate for Node 2.
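Beyond the logs, each node's view of the configuration can be inspected directly: Slony-I keeps its catalog in a schema named after the cluster, here _gbcluster. A sketch of checking Node 1's view (queries against the sl_path and sl_listen tables):

```shell
# Paths and listens as Node 1 currently knows them
psql -p 5434 -U postgres -d gb -c 'SELECT * FROM _gbcluster.sl_path;'
psql -p 5434 -U postgres -d gb -c 'SELECT * FROM _gbcluster.sl_listen;'
```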
Now subscribe Node 3 to the table set.
#!/bin/sh
slonik << _END_
#
# Define cluster namespace and node connection information
#
cluster name = gbcluster;
node 1 admin conninfo = 'dbname=gb host=localhost port=5434 user=postgres';
node 2 admin conninfo = 'dbname=gb host=localhost port=5430 user=postgres';
node 3 admin conninfo = 'dbname=gb2 host=localhost port=5430 user=postgres';

subscribe set (id=1, provider=1, receiver=3, forward=yes);
echo 'set 1 of gb tables subscribed by node 3';
_END_
Verification of data is reassuring at this point.
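A quick sanity check is to compare row counts between the origin and the new replica once the subscription has caught up. The table name sales below is a placeholder; substitute any table in Set 1:

```shell
# The two counts should match once Node 3 has caught up.
psql -p 5434 -U postgres -d gb  -t -c 'SELECT count(*) FROM sales;'
psql -p 5430 -U postgres -d gb2 -t -c 'SELECT count(*) FROM sales;'
```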