

Building a BSD Netboot Server

TFTP Server

Clients need three pieces of information to download the appropriate boot code from the TFTP server:

  • option root-path: The location of the client's root file system on the remote NFS server.
  • next-server: The address of the TFTP server from which clients receive files to continue booting.
  • filename: The name of the file to fetch from the TFTP server.
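These parameters come from the DHCP server set up earlier in the article. As a sketch, assuming ISC dhcpd and a placeholder 192.168.0.0/24 network with the server at 192.168.0.1, the relevant part of dhcpd.conf might look like:

```conf
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.10 192.168.0.254;
    next-server 192.168.0.1;                       # the TFTP server
    filename "pxeboot";                            # file to fetch via TFTP
    option root-path "192.168.0.1:/diskless_ro";   # NFS root file system
}
```

Substitute your own addresses and subnet; only the three option names are fixed.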

The client will need the root-path option a little later, when it loads the kernel and mounts the root file system. For now, the station's PXE LAN adapter automatically uses the other two parameters to continue booting. The pxeboot(8) file is a modified version of loader(8), which runs as the third stage of the FreeBSD boot process (see the Handbook for more details about the boot procedure). Because the diskless client expects to fetch this program via TFTP, it must exist on the server.

The FreeBSD distribution includes a TFTP server, so you do not need to install additional software. The daemon is tftpd, and it usually starts from inetd. To set up the TFTP server, create the directory /tftpboot and copy pxeboot there:

server# mkdir /tftpboot
server# cp /boot/pxeboot /tftpboot

Add the following line into /etc/inetd.conf:

tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /tftpboot

The -l switch turns on logging of TFTP operations. The -s switch specifies the root directory for tftpd after it calls chroot(). For more details, see the tftpd(8) and chroot(2) manual pages.

The server is ready to run after you restart inetd:

server# killall -HUP inetd 

If everything is okay, the command

server# sockstat -4l | grep 69 

will return results similar to this:

root inetd 556 5 udp4 *:69 *:* 
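You can also check the TFTP path end to end from another machine with the stock tftp(1) client (the server address here is a placeholder for your own):

```
client# tftp 192.168.0.1
tftp> get pxeboot
tftp> quit
```

If the transfer succeeds, a copy of pxeboot appears in the current directory.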

NFS Server

After the client successfully downloads the pxeboot file, it will, according to the root-path option, try to mount the server's /diskless_ro directory over NFS, looking for the root file system and an appropriate kernel there.

You can also configure pxeboot to upload a kernel with TFTP. This will allow you to boot different diskless stations with different kernels. In that case, you need to recompile pxeboot with the option LOADER_TFTP_SUPPORT=YES in /etc/make.conf. See also the handbook and /usr/share/examples/etc/make.conf.
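As a sketch, the make.conf entry is a single line:

```conf
# /etc/make.conf
LOADER_TFTP_SUPPORT=YES
```

After adding it, rebuild the boot code (on a stock source tree of this era, from /usr/src/sys/boot) and copy the resulting pxeboot into /tftpboot.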

I confined the example network to one kernel for all the workstations. I then set up the NFS server to export the proper directories. As the name implies, the directory /diskless_ro should be exported read-only. The /diskless_rw directory contains a subdirectory specific to each client, where that client can write. Each subdirectory must in turn contain the special etc and var directories.
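A minimal sketch of creating these per-client trees, using hypothetical client names (ws10, ws11) and a scratch prefix so you can try it safely; on the real server the prefix would be /diskless_rw:

```shell
#!/bin/sh
# Create a writable etc/ and var/ tree for each diskless client.
# BASE is a scratch prefix for testing; use /diskless_rw on the real server.
BASE=/tmp/diskless_rw
for host in ws10 ws11; do            # hypothetical client names
    mkdir -p "$BASE/$host/etc" "$BASE/$host/var"
done
ls "$BASE"
```

On the real server, populate each etc and var with the client-specific files before booting that client.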

The directory /diskless_ro should be empty at this point, while /diskless_rw contains a subdirectory, with its own etc and var, for each client.


Besides these two exports, the diskless station will use /usr from the server in read-only mode.

In order to let the diskless station use all of these directories, you must configure the NFS server accordingly. Add the following lines to the file /etc/exports on the server:

# file systems accessible only for reading:
/usr -ro -maproot=0 -network -mask
/diskless_ro -ro -maproot=0 -network 

# file systems accessible for writing. All the resources
# given to every diskless station are specified by one line:
# Diskless-10
/diskless_rw/ /diskless_rw/ \

# ...

# Diskless-101
/diskless_rw/ /diskless_rw/ \

# ...

# Diskless-254
/diskless_rw/ /diskless_rw/ \
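Filled in with hypothetical host names and a placeholder 192.168.0.0/24 network, a complete version of this file might look like the following sketch:

```conf
# file systems accessible only for reading:
/usr         -ro -maproot=0 -network 192.168.0 -mask 255.255.255.0
/diskless_ro -ro -maproot=0 -network 192.168.0 -mask 255.255.255.0

# file systems accessible for writing; one line per diskless station:
/diskless_rw/ws10/etc /diskless_rw/ws10/var \
    -maproot=0 ws10
/diskless_rw/ws11/etc /diskless_rw/ws11/var \
    -maproot=0 ws11
```

Replace the network, mask, and host names with your own values.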

Then change /etc/rc.conf to start the NFS server while the system boots; the standard knobs are:

nfs_server_enable="YES"
rpcbind_enable="YES"

You may also need to change the nfs_server_flags variable:

nfs_server_flags="-u -t -n 48 -h" 

The -n switch is very important here: it specifies the number of nfsd servers to start, which limits how many NFS clients can be served simultaneously. Tune this parameter according to the number of clients. The -u and -t switches enable serving UDP and TCP requests, and -h binds the daemon to a specific address.

Now, start the NFS server by hand (so as not to have to reboot the server):

server# rpcbind
server# nfsd -u -t -n 48 -h
server# mountd -r

After the NFS server starts correctly, check the exported file systems:

server# showmount -e
Exports list on localhost:

Notes on Mounting

It is not a good idea to place diskless_rw and diskless_ro within the same physical file system, because NFS exports whole file systems, not individual directories. In /etc/exports, every line represents the export of one server file system to one or several clients, and for each exported file system, you can specify a given client only once.

For example, if diskless_rw and diskless_ro occupy different file systems, then this /etc/exports will be correct:

/diskless_ro -ro

A mistaken /etc/exports might be:

/usr/diskless_ro -ro

If diskless_rw and diskless_ro are directories of the same file system, /usr, an error will occur while exporting them to the same clients, preventing those clients from mounting diskless_ro. The rules demand that you specify both resources, /usr/diskless_rw and /usr/diskless_ro, on one line, so you have to decide whether to make them both read-only or both read-write.

Nevertheless, you can deceive the mountd daemon by using IP address ranges instead of hostnames. For example, here's another mistaken version of /etc/exports that will execute successfully:

/usr/diskless_rw -network -mask
/usr/diskless_ro -ro -network -mask

In this case, the server will successfully export /usr/diskless_rw and /usr/diskless_ro. As this configuration handles the whole file system and not only the directories, the subnet will be able to mount both /usr/diskless_rw and /usr/diskless_ro in read-write mode, so there are security risks.

Mikhail Zakharov is presently a senior UNIX administrator at a Moscow bank, where he administers a wide spectrum of servers running various UNIX-like operating systems.
