
Building a Web Cluster with FreeSBIE

by Alexander Prohorenko
07/01/2004

What is FreeSBIE? Quoting from its developers:

What is FreeSBIE? Simply: It's a live system on compact disk (CD), or an operating system that is able to load directly from a bootable CD, without any installation process, without any hard disk. It's based on the FreeBSD operating system.

Great news! There were a lot of different so-called LiveCD projects based on FreeBSD, but as far as I know, none has released a stable and public version. FreeSBIE 1.0 was the first one, so it sounds like a good choice.

Is there much need for FreeSBIE? Hard disk drives are cheap these days, so it's easy to equip all our servers and workstations with them. The answer is lifecycle. A hard drive, which performs enormous numbers of read/write operations under heavy load, will eventually fail after an indefinite period of time; this is unavoidable. A CD's lifecycle is much longer: it normally handles only read operations, it was designed as durable storage, and a blank compact disc costs a hundred times less than a hard disk drive.

There are a lot of ways that running a UNIX operating system from a standalone CD can save you time and money. I'd like to describe one situation where the use of LiveCD is a very easy and cost-effective solution — a clustering solution for diskless stations.

A Clustering Example

Suppose that we need to build a cluster of web servers to serve HTTP and HTTPS connections. Why do we need a cluster? First of all, our web services are heavily loaded; having one or two CPU systems and a lot of RAM is not enough anymore. Secondly, our services need 24x7 availability and this requires an excellent fail-over backup system, which should be completely transparent for customers and normal web surfers. Nobody cares how many servers and sites we have; everyone only wants to see the requested web page.

It's also much easier to take down a single server with a Denial of Service (DoS) attack; multiple servers will keep us running longer. More than that, we cannot rely on only one hardware server. Hardware faults can happen at any time, and we should not risk our business on a single machine. In short, there are plenty of good arguments for building a cluster.


Our clustering solution includes:

  • Load-balancing hardware or software. For hardware, we can use an F5 BIG-IP or anything of this kind. This device handles HTTP connections and routes them to the correct server. It also knows many tricks, such as caching HTTPS. I also like it because it runs BSD/OS as its management OS. I'll call it lb01.

    There are many different load-balancing systems. As a rule they are highly configurable, and it's usually easy to understand their syntax and write your own configuration. Our example needs to cache SSL and forward plain HTTP to our servers. The balancer should choose the server with the least load and should remove broken servers from the list. All modern load balancers handle this well.

  • Data storage. Personally, I like the NetApp NFS server from Network Appliance. It runs BSD, supports the NFS protocol, and works very well. I'll call it nfs01.
  • Diskless workstations, equipped only with network adapters and CD-ROM drives. Since we'll only use them for web services, I'll call them web01 through web10. They will run the Apache web server and whatever else is needed.
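
To make the load-balancing policy from the first item concrete, here is a purely illustrative sketch in the syntax of a software balancer (nginx shown; the article's setup uses an F5 BIG-IP appliance instead, and the server names, ports, and certificate paths here are assumptions):

```nginx
# Terminate (cache) SSL at the balancer; forward plain HTTP to the farm.
upstream webfarm {
    least_conn;                                    # pick the least-loaded server
    server web01:80 max_fails=3 fail_timeout=30s;  # remove broken servers
    server web02:80 max_fails=3 fail_timeout=30s;
    # ... web03 through web10 ...
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/cluster.crt;
    ssl_certificate_key /etc/ssl/cluster.key;

    location / {
        proxy_pass http://webfarm;                 # plain HTTP to the back ends
    }
}
```

The exact directives differ between products, but every serious balancer expresses the same three ideas: a pool of back ends, a selection policy, and a health check that ejects dead servers.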

Before I start exploring the build process, I'd like to explain why I chose this cluster configuration. There are many possible configurations. The most popular one equips the web servers with only network adapters and boots them through PXE, which requires an additional DHCP server. Personally, I think it's a good approach, but it has some weaknesses. For starters, it can be somewhat expensive. It also has a potential quality-of-service drawback: all cluster machines depend on the DHCP server, which may itself be heavily loaded and whose hardware failure could take the whole network down. On the other hand, that configuration requires less management time.
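
For comparison, the PXE approach would pair a DHCP server with a TFTP server holding FreeBSD's pxeboot loader. A minimal, hypothetical ISC dhcpd.conf fragment (all addresses here are assumptions) might look like:

```conf
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.101 10.0.0.110;   # addresses for web01 through web10
    next-server 10.0.0.1;          # TFTP server holding the boot loader
    filename "pxeboot";            # FreeBSD's PXE boot loader
}
```

With the LiveCD approach, none of this infrastructure is needed; each machine boots entirely on its own.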

Building the FreeSBIE CD

Let's start with the FreeSBIE build process. To build a FreeSBIE LiveCD, we need a FreeBSD station. Preferably, it should have the same hardware configuration as all our cluster machines. In my case, this is an AMD Duron 1200 CPU with 128 MB of RAM. The build station also has a Maxtor 6Y120P0 UDMA 100 hard drive; our cluster machines do not need one.

Finally, the build station also needs a CD-RW or DVD-RW drive to burn our CD ISO image once we've built the system, plus spare blank CD-R discs. (CD-RW discs may be better until you're familiar with the building and burning processes.)
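
Once an ISO exists, burning it on FreeBSD can be done with burncd(8) for ATAPI drives. A guarded sketch (the device name and ISO path are assumptions for illustration):

```shell
# Burn the built ISO to a blank disc; burncd(8) drives ATAPI burners
# on FreeBSD. Device and path are assumptions, not FreeSBIE defaults.
ISO=/usr/local/share/freesbie/FreeSBIE.iso
DEV=/dev/acd0

if [ -f "$ISO" ]; then
    burncd -f "$DEV" data "$ISO" fixate
else
    echo "ISO not built yet: $ISO"
fi
```

For SCSI (or ATAPI/CAM) burners, cdrecord from the cdrtools package listed below is the usual alternative.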

FreeSBIE has a FreeBSD port at /usr/ports/sysutils/freesbie. I used the version from 7 February 2004, freesbie-20040207.tar.bz2, with a size of 151,799 bytes. The FreeSBIE port is "a collection of scripts that help a user create CDs/DVDs containing a complete operating system based on FreeBSD. It is used as 'live-CD' and boots straight from CD." FreeSBIE uses compression to store a lot of software on the LiveFS file system.

The FreeSBIE site has pre-built LiveCD ISO images. When I wrote this article they had the following:

Latest ISO: FreeSBIE-1.0-i386

Filename                         Type                 Size        Date
FreeSBIE-1.0-i386.iso            ISO image            565,504 KB  02/27/2004 12:22:00
FreeSBIE-1.0-i386.iso.md5        MD5 signature        1 KB        02/27/2004 12:28:00
FreeSBIE-1.0-i386.pkg_info.txt   Package description  20 KB       02/27/2004 12:21:00
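
Whichever image you download, it's worth checking it against the published MD5 signature before burning. A small portable sketch (the helper name check_md5 is mine, not part of FreeSBIE):

```shell
# Compare a file's MD5 digest against an expected checksum.
# Uses FreeBSD's md5(1) when available, falling back to GNU md5sum.
check_md5() {
    file=$1
    expected=$2
    actual=$(md5 -q "$file" 2>/dev/null || md5sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file"
    else
        echo "MISMATCH: $file"
    fi
}

# Example: verify the image against the checksum published in the
# .md5 file distributed alongside it (last field of its only line):
# check_md5 FreeSBIE-1.0-i386.iso \
#     "$(awk '{print $NF}' FreeSBIE-1.0-i386.iso.md5)"
```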

If the default features and configuration are OK for you, you can use these images. For our configuration, we'll need to customize the defaults.

The first step is to build an ISO image to burn to a blank CD-R or CD-RW disk. The install process requires the following files:

Filename                    Size             Description
freesbie-20040207.tar.bz2   151,799 bytes    The collection of scripts.
cdrtools-2.00.3.tar.gz      1,638,876 bytes  Software needed to build an ISO image.
cloop_2.01-1.tar.gz         21,862 bytes     Compressed file-system support.

Installation is as simple as:

$ cd /usr/ports/sysutils/freesbie
$ make
$ su
# make install

The main part of this package is the collection of scripts that lives under /usr/local/share/freesbie. The README suggests running the user interface for the scripts, built with Savio Lam's dialog program. Let's run ./freesbie. Start it from /usr/local/share/freesbie, since all the scripts use relative, not absolute, paths.

The first run of the script, shown in Figure 1, proposes a startup configuration.

FreeSBIE startup configuration
Figure 1: FreeSBIE startup configuration.

We need to set the path and file names for the LiveFS file system to create. Let's put it in /usr/local/livefs. If the directory doesn't exist, the installer will prompt you to create it:

ATTENTION PLEASE!

The path you have entered does not seems a valid path or the directory does not
exist.

Do you wish to create it?   [ Yes ] No

We also need to set FreeSBIE's home directory, where the installer will create the kernel configuration file used to build a system. Use the current directory, /usr/local/share/freesbie.

Next, set the path to the ISO image to create. The default is /usr/local/share/freesbie/FreeSBIE.iso. I personally would rather not build a custom system in the system-wide directories, though. This will lead you to the main menu, shown in Figure 2.

the installer main menu
Figure 2: The installer main menu.

We already completed the first section, Configure, in the previous step, so we can move forward. Unfortunately, if you need the help system and press F1 as the bottom of the screen advises, it'll drop you to a shell instead. You probably don't need the help, but beware.

Let's tackle the menu options in order.

Rmdir - Clean the FreeSBIE FS

This option runs the 0.rmdir.sh script to clean the LiveFS file system directory.

Mkdir - Create a New FreeSBIE FS

This option starts the 1.mkdir.sh script, which tries to create a directory structure for the new file system. After this happens, the /usr/local/livefs directory will resemble:

drwxr-xr-x   2 root  wheel   512 23 Feb 17:44 cdrom
drwxr-xr-x   2 root  wheel   512 23 Feb 17:44 home
drwxr-xr-x   2 root  wheel   512 23 Feb 17:44 mfs
drwxr-xr-x  37 root  wheel  1024 23 Feb 17:44 mnt
drwxr-xr-x   2 root  wheel   512 23 Feb 17:44 stand
drwxr-xr-x   3 root  wheel   512 23 Feb 17:44 usr
drwxr-xr-x   3 root  wheel   512 23 Feb 17:44 var

./mnt contains the directory structure for the various supported file systems:

drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.ext.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.ext.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.ext.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 dos.ext.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.ext.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.ext.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.ext.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ext2fs.ext.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 floppy
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.ext.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.ext.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.ext.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ntfs.ext.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 temp
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 tmp
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.1
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.2
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.3
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.4
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.5
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.6
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.7
drwxr-xr-x  2 root wheel  512 23 Feb 17:44 ufs.8

The whole directory structure takes about 94 Kbytes now.
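
You can confirm the skeleton's size with du(1); a guarded one-liner (path from the walkthrough above):

```shell
# Report the LiveFS skeleton's size in kilobytes, or note its absence.
du -sk /usr/local/livefs 2>/dev/null || echo "livefs skeleton not created yet"
```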
