Using NFS for Networked Backups
by Glenn Graham
Arguments abound among system administrators as to the correct way to back up a network of Unix hosts. Some argue that tapes are the answer, while others lean toward more modern means, such as rewritable CD-ROMs. No matter the method, the end goal remains the same: to back up hosts over a network, in a manner that is conveniently indexed and easily retrieved.
During the late 80s, I decided to cut costs by using a spare server running NFS along with a single 500MB tape drive loaded with ¼" DC6525 tape cartridges. The idea was simple and performed well. I dedicated three tapes per host and performed a revolving backup of each host three times a week. The only problem was that the media was cumbersome and, in many cases, unreliable. Gone are the days of DC6525s; they've been replaced by more expensive, complex devices.
Over the past year, I've been revamping my network of 24 hosts. Before I began, I considered writable CD-ROMs and other commercial products. After comparing the cost of commercial products to mass storage drives, I concluded it would be more economical to construct a central NFS server running Linux. Based on my original model, I replaced the tape drive with two hot-swappable SCSI drives running RAID. This allows for real-time backup, removable data, and future expansion.
Throughout this article I will discuss some very basic methods used for NFS. Most are common knowledge, but are frequently put aside as being too complicated, antiquated, or difficult to implement. We will touch only the tip of the iceberg. These examples are only common guidelines, as NFS is infinitely customizable.
Generally speaking, performing any type of backup requires some thought, planning, and in-depth knowledge of the files, peripherals, and role of each host. Within a network, no two backups are alike. The core backup consists of configuration files found under /etc, cron files, and system-specific binaries. In the case of a server, we would also back up /var/mail, web directories, or /var/db, depending on the server's role.
The first step in planning your backup is to carefully examine the target system(s) by mapping system-dependent files and including them in your scripts.
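One hedged way to do that mapping is to keep each host's file set in a plain list and feed it to tar with the -T option (a GNU tar feature). The list contents and paths below are illustrative, not the article's:

```shell
#!/bin/sh
# Sketch: keep each host's backup set in a plain file list, then feed it
# to tar with -T (GNU tar). All paths here are illustrative examples.
LIST=/tmp/backup.list
cat > "$LIST" <<'EOF'
/etc/hosts
/etc/passwd
EOF
# Create the archive from the list; errors are tolerated here so the
# sketch runs on any host.
tar -cf /tmp/core.tar -T "$LIST" 2>/dev/null || true
echo "backup set written to /tmp/core.tar"
```

Keeping the list in its own file means the backup script itself never changes as a host's role evolves; only the list does.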
Before we begin, let's introduce some basic NFS concepts. NFS runs over UDP and was introduced by Sun Microsystems in the mid-1980s; the protocol was later documented in RFC 1094 (March 1989). Many consider NFS insecure and vulnerable to a variety of exploits, especially through the RPC portmapper. Despite its insecurities, NFS is still an accepted method of mounting and backing up remote file systems, especially across local networks.
That said, when using NFS on an open network it is important to understand firewalling techniques. At a minimum, use TCP wrappers or the whitelists provided by /etc/hosts.allow. Never run NFS unprotected, and always kill the NFS daemons while they are not in use.
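As a hedged illustration, a deny-by-default TCP wrappers setup might look like the following (the daemon names depend on how your services were built against libwrap, and the address range is an example, not a recommendation):

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- permit portmap and mountd only from the local net
# (192.168.1.0/255.255.255.0 is an example range; substitute your own)
portmap: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
```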
NFS operates on the principle of a remote mount and runs the mountd daemon (the server that handles NFS mount requests) on each serving host. The /etc/exports file controls access to the server; it contains a list of authorized hosts, along with user permissions and allowable directories.
When mountd is started, it loads the export host addresses and options into the kernel using the mount system call. After changing the exports file, a hangup signal should be sent to the mountd daemon to get it to reload the export information. After sending the SIGHUP, check the syslog output to see if mountd logged any parsing errors in the exports file. If mountd detects that the running kernel does not include NFS support, it will attempt to load a loadable kernel module containing NFS code.
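A minimal sketch of that reload step follows. The daemon name (rpc.mountd) and syslog location vary by distribution, so both are assumptions here:

```shell
#!/bin/sh
# Sketch: ask mountd to re-read /etc/exports after editing it.
# rpc.mountd is the usual Linux daemon name; adjust for your system.
PID=$(pidof rpc.mountd 2>/dev/null)
if [ -n "$PID" ]; then
    kill -HUP $PID
    MSG="sent SIGHUP to rpc.mountd (pid $PID)"
else
    MSG="rpc.mountd is not running on this host"
fi
echo "$MSG"
# After the reload, check syslog for parsing errors, for example:
#   grep mountd /var/log/messages
```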
By default, most BSD kernels have NFS built in and configurable under /etc/rc.conf, whereas Linux requires either a loadable NFS module or a kernel compiled with NFS support.
NFS Backup Principles
The theory of NFS backup is relatively simple: mount the NFS server's export on each host, write a tar backup script, and check that the following permission sets conform on both machines:
- Read/write access.
- Mount permissions.
Exports and Permissions
The /etc/exports file controls the various options allowed by the NFS server. In this example, jack.foo.com would be allowed to mount the server's /jack directory, with permission to read and write to it:
# File /etc/exports on NFS Server
#
# When backing up an entire system, including root files (UID 0),
# you would use the no_root_squash option as outlined below.
# For files belonging to jack, the only option required is 'rw'.
/jack jack.foo.com(rw,no_root_squash)
The UID/GID set can be somewhat confusing, especially if the NFS server and hosts are different Unix flavors, or if they use different password schemas. The owner and group are UID/GID dependent, and must be identical on the client and server. If user jack has UID 2020 on the client, he must also have UID 2020 on the NFS server. If UID 2020 already exists on the NFS server as, say, user sam, files written from the client under UID 2020 would appear on the server as owned by sam. And if UID 2020 doesn't exist on the server at all, the files would simply appear as owned by the numeric UID 2020, without user identification. (We often see this scenario when untarring a file from a remote FTP download, where user joe, UID 1010, may inadvertently map to another user on our system, so the file untars and appears to be owned by that user.)
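Before exporting, you can spot such collisions by asking which local account, if any, already owns a given UID. A minimal sketch using awk against /etc/passwd (UID 2020 is the article's example value):

```shell
#!/bin/sh
# Sketch: report which local account (if any) owns a given numeric UID,
# so client/server collisions can be caught before the first backup.
uid=2020
owner=$(awk -F: -v u="$uid" '$3 == u { print $1 }' /etc/passwd)
if [ -n "$owner" ]; then
    echo "UID $uid is already taken by user: $owner"
else
    echo "UID $uid is unassigned on this host"
fi
```

Run it on both the client and the server; the outputs should name the same user, or both report the UID unassigned.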
When restoring, it's important to ensure the UID/GID mappings match on the target host, for the same reason stated above.
Very often, it is not desirable that the root user on a client machine be treated as root when accessing files on the NFS server. To this end, uid 0 is normally mapped to a different id: the so-called anonymous or nobody uid. This mode of operation (called "root squashing") is the default, and can be turned off with no_root_squash. Use caution when configuring this option.
Be sure to back up password and group files as a standard procedure. Any time you add or delete a user from the client, update the group and password files.
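One hedged way to fold that into the routine is a short snapshot script; the backup directory and naming scheme below are assumptions, not the article's:

```shell
#!/bin/sh
# Sketch: snapshot the account files alongside each backup run so the
# UID/GID mappings can be reconstructed at restore time.
# BACKUP_DIR is an illustrative default; override it in the environment.
BACKUP_DIR=${BACKUP_DIR:-/tmp/acct-backup}
STAMP=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
cp /etc/passwd "$BACKUP_DIR/passwd.$STAMP"
cp /etc/group "$BACKUP_DIR/group.$STAMP"
echo "account files archived in $BACKUP_DIR"
```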
Two Simple NFS Server Scripts
In this example, we start rpc.portmap, rpc.mountd, and rpc.nfsd, and write out the PID of each process to /var/run/nfs.pid:
#!/bin/bash
# Script to start NFS
/usr/sbin/rpc.portmap
sleep 1
/usr/sbin/rpc.mountd
sleep 1
/usr/sbin/rpc.nfsd
/sbin/pidof /usr/sbin/rpc.portmap > /var/run/nfs.pid
/sbin/pidof /usr/sbin/rpc.nfsd >> /var/run/nfs.pid
/sbin/pidof /usr/sbin/rpc.mountd >> /var/run/nfs.pid
echo "NFS READY"
# EOF
To stop NFS, we use the following script to kill everything listed in
nfs.pid, killing all three processes:
#!/bin/bash
# Script to stop NFS
kill `cat /var/run/nfs.pid`
/bin/rm /var/run/nfs.pid
echo "NFS HALTED"
The Synchronized Backup
We can synchronize our backups by setting a
cron job on each host and
server to start, stop, and use NFS at a specific time. After a backup
window, each host shuts down its NFS services. In order to synchronize
properly, each host should be synced to a central NTP server for accuracy.
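The schedule might be sketched in cron as follows; the times, script names, and paths are all illustrative:

```
# On the NFS server: open the backup window at 02:00, close it at 02:45
0  2 * * *  /usr/local/sbin/nfs-start
45 2 * * *  /usr/local/sbin/nfs-stop

# On each client: run the backup inside the window
5  2 * * *  /usr/local/sbin/nfsbackup
```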
The NFS server is initiated from
cron by a script similar to
the one shown above, and remains active for a specified period of time, depending
on the period set for a full backup. For a large backup, 30 minutes is usually
fine. Next, each client mounts a drive on the server, writes its files, then
umounts. This example uses tar; other tools, such as dump, can be
substituted.
#!/bin/sh
# File NFSBackup Script for jack.foo.com
echo 'Initialize NFS Backup'
mount nfs.foo.com:/jack /mnt/jack
df
echo "Is nfs.foo.com mounted?"
read ans;
echo "Full Backup of Jack to follow"
sleep 4
cd /mnt/jack
tar -cvvf jack.tar /vmlinuz /System.map /root /mydocs /etc /bin /sbin /var/local /var/spool /usr/sbin /usr/bin home /usr/home/glenn /usr/home/ifconfig
sleep 5
umount /mnt/jack
echo "finished"
In almost every instance, restoration is a manual process, and even today, a full restore can be confusing, especially where the target system is used as a multipurpose host. Under typical circumstances, I restore a system by mounting the backup directory from the NFS server and reading from the remote filesystem. For example, mount the archive from nfs.foo.com using:
mount nfs.foo.com:/jack /mnt/jack
To restore, simply run:
tar -xvf jack.tar
Then copy the necessary files from /mnt/jack to the appropriate locations on the target host. This is only one of many available options.
Here are some basic rules to follow:
- In the case of a complete re-installation, make sure the freshly-installed OS version is the same as the original. This is especially important to ensure that all configuration files match.
- Ensure all users have the same UID/GID as before. Check these IDs against the original password and group files. It is important to note each UID/GID corresponding to the individual remote system(s). It's not enough to simply restore everything as root.
- Remember to restore the original kernel during a restoration. In the case of a Linux system, for example, run grub to reinstall the boot loader so it can load the kernel.
- Remember to archive the backups and store them in a safe place, preferably off-site. Redundancy can save your data in case of a physical disaster.
Throughout my career working with Unix, I've seen fifty methods of backup
that include sophisticated hardware, media, and variations of GUIs and Web-based
interfaces. While some still exist, many have faded into the background. In
my opinion, combining NFS with tar (or dump) is still
the best option. It takes some knowledge to configure an NFS backup properly,
but in the end, it is definitely one of the most reliable means of protecting
your data.
Glenn Graham has been working with telecommunications since 1977.