Xen is the new virtualization kid on the block. It's gaining visibility and importance at a speed only projects such as Linux and Apache have seen before. Xen has been around for a couple of years: it was originally part of the Xenoserver platform, which aimed to build a public infrastructure for wide-area distributed computing. Ian Pratt, the principal investigator of the Xenoserver project at the University of Cambridge Computer Laboratory, still leads the development team.
Xen ended up being much more than a part of this project. Now many Linux distributions and some hardware vendors are picking it up. As with many important open source projects these days, it even has a company--Xensource--backing commercial versions and providing support for corporate customers. Xensource also employs several industry veterans. In short, Xen(source) has everything a good open source platform needs to become an extremely important player in the industry.
At X-Tend, one of our main problems was that we didn't have enough machines to test all new distributions and applications. Basically there was no financially realistic way to provide our users with a quick test environment. My guess is that half of the planet has similar problems.
Ages ago, we used UserModeLinux, but most new users found it too complex to use. Later, we bootstrapped Qemu instances from our central imaging server. That worked. If we wanted to do some tests on an isolated environment, we quickly started Qemu with the distribution we needed. The only annoyance was that we actually wanted an environment that was constantly online and, in the event of a power outage (this is a test environment, not production), we didn't have to spend too much time getting it back. It had to be scriptable and automatable, and preferably would not require X.
With the arrival of Xen last year, all of that changed. This article describes how we tackled our problem and how we actually now have a stable and performant environment to test everything we want. It's so stable, we now use Xen for production environments!
Xen is a virtual machine monitor for x86 that supports the execution of multiple guest operating systems with unprecedented levels of performance and resource isolation. Xen is open source software, released under the terms of the GNU General Public License.
Xen has become one of the most popular virtualization platforms during the last six months. Although it's not such a young project, it is now gaining acceptance in the corporate world as a valuable alternative to VMWare.
Adding Xen to your machine changes it from an ordinary x86 machine to a totally new platform. It's not an x86 anymore. It's a Xen machine. All the operating systems that you want to run on your machine won't work anymore if they know only about x86; they need to know about Xen. Of course, the Xen and x86 architectures are really similar, so for the end user and the applications that run on a platform ported to Xen, there is almost no difference.
When Xen is activated, it will also need to boot its first virtual machine, called Domain0. Domain0 has more privileges than the other virtual machines and typically is used only for managing the other (less privileged) virtual machines. Domain0 is also responsible for managing the hardware. Porting a platform to Xen changes almost nothing to the drivers, which means that most drivers supported in traditional Linux kernels also work in Xen.
Within Domain0, the xend daemon handles the management of the virtual machines. Control it via the xm command-line utility. From there, you can create other virtual machines, or domains.
We've been running Xen on different platforms, ranging from an "antique" Suse 8.2 with a 2.4 series kernel, through a Debian box, to Fedora Core 4 with a fresh 2.6 kernel. Unlike some other projects, Xen currently doesn't care whether you use 2.4 or 2.6, so people who are comfortable with a 2.4 kernel can still benefit from the Xen features. However, future releases probably won't have 2.4 support. People claim that installing Xen is difficult, but it's not, certainly if you compare it with other similar tools. GetXen.org is the place to start; it contains a tarball with most of the required binaries and tools, a demo CD, and pointers to the source code. Some distributions such as Fedora include prebuilt packages. As of this writing, the official stable Xen release is 2.0.7, but most people are already working with the 3.0 betas. 3.0 might be out by the time you actually read this.
It's really easy to start. Here's how we deployed a Debian virtual machine on a Fedora Core 4 install. We opted for a minimal FC4 install. After the installation, we updated, upgraded, and installed Xen with a couple of small commands:
$ yum update
$ yum install xen
$ yum install kernel-xen0
$ yum install kernel-xenU
Can it get easier? You should now carefully inspect your grub.conf file and find a part similar to:
title Xen 2.0 / XenLinux 2.6.9
    kernel /boot/xen.gz dom0_mem=131072
    module /boot/vmlinuz-2.6.9-xen0 root=/dev/hda1 ro console=tty0
Your version numbers may vary. If that's there, then it's time to reboot into that new entry. Voilà, you now have your first virtual machine up and running. Yes, at first sight nothing has changed, but the regular Linux version you have just booted into isn't running on a regular x86 anymore; it's running on Xen.
If you already started xend at boot time, run xm list to see output similar to:

HOSTA:/etc/xen/scripts # xm list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0      123    0  r----     41.2
Your next step is to create another virtual machine. The easiest way to do this is either to download an existing chroot image of the distribution you like or to build one yourself. Xen can use file-backed virtual block devices (dd if=/dev/zero of=vmdisk bs=1k seek=2048k count=1), physical devices (the actual /dev/hda9), LVM volumes (phy:VolumeGroup/root_volume), or an NFS root for your virtual machines. I prefer to use logical volumes on my machines, as they are really flexible to work with. With an existing disk /dev/sda5 available, I created logical volumes to use in my virtual machine:
$ pvcreate /dev/sda5
$ vgcreate vm_volumes /dev/sda5
$ vgchange -a y vm_volumes
$ lvcreate -L4096 -nroot.dokeos vm_volumes
$ lvcreate -L2048 -nvar.dokeos vm_volumes
$ lvcreate -L256 -nswap.dokeos vm_volumes
$ lvcreate -L1024 -nwww.dokeos vm_volumes
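If you'd rather skip LVM, the file-backed route mentioned above is just as simple. A minimal sketch (the filename and path are my own choices, not from the article):

```shell
# Create a sparse 2GB disk image: seek past 2048k one-kilobyte blocks
# and write a single block, so almost no real disk space is used yet.
dd if=/dev/zero of=vmdisk.img bs=1k seek=2048k count=1

# You would then put a filesystem on it (e.g. mkfs.ext3 -F vmdisk.img)
# and reference it in the domain config with the file: prefix, e.g.:
#   disk = ['file:/path/to/vmdisk.img,sda1,w']
```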
I usually create a directory /vhosts on my dom0 host where I mount my partitions. From there, I install the first FC4 base packages in a chroot on the actual future root device.
$ yum --installroot=/vhosts/root.dokeos/ -y groupinstall Base
You need to make a couple of quick fixes to make sure that you can open your initial console and so forth:
$ MAKEDEV -d /path/dev -x console
$ MAKEDEV -d /path/dev -x null
$ MAKEDEV -d /path/dev -x zero
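Another thing the chroot install won't write for you is an /etc/fstab inside the guest. Assuming the device layout used in the config file (sda1 for root, sda2 for swap, sda3 for /var, sda4 for the web data; the mount points here are my guesses, not from the article), it could look like:

```
/dev/sda1   /          ext3   defaults   1 1
/dev/sda3   /var       ext3   defaults   1 2
/dev/sda4   /var/www   ext3   defaults   1 2
/dev/sda2   none       swap   sw         0 0
proc        /proc      proc   defaults   0 0
```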
It's almost ready. Now you need the configuration file for this virtual machine. Most of Xen's config files live in /etc/xen. You need a separate config file for each virtual machine you want to deploy on your host. They look like:
[root@xen xen]# cat dokeos.x-tend.be
kernel = "/boot/vmlinuz-2.6.11-1.1366_FC4xenU"
memory = 128
name = "dokeos.x-tend.be"
nics = 1
extra = "selinux=0 3"
vif = ['ip = "10.0.11.13", bridge=xen-br0']
disk = ['phy:vm_volumes/root.dokeos,sda1,w',
        'phy:vm_volumes/var.dokeos,sda3,w',
        'phy:vm_volumes/www.dokeos,sda4,w',
        'phy:vm_volumes/swap.dokeos,sda2,w']
root = "/dev/sda1 ro"
The config file is rather straightforward, and the Xen packages include examples. Now start your virtual machine with the command xm create, passing it the name of your config file. Add a -c to that command to see the machine booting. You should get a login prompt within seconds. That's how fast a physical machine should also boot, but I'll keep on dreaming for a couple of years.
If you create a symlink to your config file in the /etc/xen/auto directory and enable the xendomains script at boot time, your virtual machines will start automatically at boot.
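Concretely, that comes down to something like the following. The sketch below works against a scratch directory so it's safe to try anywhere; on a real dom0 you would work in /etc/xen directly and enable the init script with something like chkconfig xendomains on:

```shell
# Simulate /etc/xen in a scratch directory; on a real dom0, drop the
# XEN_ETC indirection and use /etc/xen itself.
XEN_ETC=$(mktemp -d)
mkdir -p "$XEN_ETC/auto"

# The domain config file you created earlier...
touch "$XEN_ETC/dokeos.x-tend.be"

# ...gets a symlink in auto/, so the xendomains script starts it at boot.
ln -s "$XEN_ETC/dokeos.x-tend.be" "$XEN_ETC/auto/dokeos.x-tend.be"

ls -l "$XEN_ETC/auto"
```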
This is Fedora on Fedora, but I promised to give you a Debian on a Fedora. This is where you need debootstrap. It comes in an RPM, but if you want a correct installation, you need to find an up-to-date config script for Sarge, which you can easily find on any Debian box you have around. From there, again it's a matter of creating a new LVM volume, mounting it, and using debootstrap to populate the Debian instance:
$ debootstrap --arch i386 sarge root.hope/ http://ftp.be.debian.org/debian
The Debian box has a similar config file:
kernel = "/boot/vmlinuz-2.6.11-1.1366_FC4xenU"
memory = 128
name = "newhope.x-tend.be"
nics = 1
vif = ['ip = "10.0.11.14", bridge=xen-br0']
disk = ['phy:vm_volumes/root.hope,sda1,w',
        'phy:vm_volumes/var.hope,sda3,w',
        'phy:vm_volumes/cvsroot.hope,sda4,w',
        'phy:vm_volumes/swap.hope,sda2,w',
        'phy:vm_volumes/home.hope,sda5,w',
        'phy:vm_volumes/svnroot.hope,sda6,w']
root = "/dev/sda1 ro"
After you run xm create newhope, you should get a listing like:
[root@xen xen]# xm list
Name               Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0            0      891    0  r----     62.3
dokeos.x-tend.be    1      127    1  -b---     24.6     9601
newhope.x-tend.be   2      127    1  -b---    177.2     9602
Basic Xen virtual machine management is simple. Use xm shutdown and xm destroy, respectively, to do a clean shutdown or an immediate domain kill. For console access:

$ xm console $id

and

$ xencons localhost $port

are similar and give you a better console than just a telnet localhost $port. (The port is 9600 plus the $id of the particular virtual machine.)
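That port arithmetic is trivial to script; a tiny helper (the function name is mine, not part of Xen) saves you from remembering it:

```shell
# Map a Xen domain id to its default console port (9600 + id).
console_port() {
    echo $((9600 + $1))
}

console_port 2    # prints 9602, so: xencons localhost 9602
```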
Of course, you probably want to connect your virtual machines to the network. You first need to understand the bridging tool brctl. Xen provides one or more virtual network interfaces to your guest hosts, but in your Domain0 you will also see some changes to your default network config. For each interface you define in a virtual machine, Xen will create a vifx.y interface, where x is the domain ID and y is the number of the interface in your virtual machine. For example, vif1.1 refers to eth1 in the domain with ID 1.
There are different ways of getting networking active, but I will show only one. Suppose that your network is 192.168.11.0 and your physical machine usually is at 192.168.11.2. You want to add your first virtual machine on 192.168.11.3. Log in to your virtual machine and configure the eth0 in the virtual machine to have the appropriate IP address, just as you would do if it were a physical machine.
Now you want to have the physical network interface and the interface of your virtual machine (vifx.y) in a bridge. If you haven't already created a bridge, run brctl addbr xen-br0 to do so, and then add both interfaces to it. Where vif1.0 is the first Ethernet device in your first domain, the commands are:

$ brctl addif xen-br0 eth0
$ brctl addif xen-br0 vif1.0
HOSTA:~ # brctl show
bridge name     bridge id           STP enabled     interfaces
xen-br0         8000.000bdb90c517   no              eth0
                                                    vif1.0
eth0 and vif1.0 don't need an IP address; you can put your 192.168.11.2 on the xen-br0 interface and route all the traffic for the 192.168.11.0 network to that xen-br0 interface. With this done and IP forwarding enabled, you should be able to go from your virtual machine to other machines on the network and vice versa.
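To recap, the whole bridging sequence fits in a short script. This is only a sketch of the manual approach described above; it assumes root privileges, bridge-utils, and the example addresses, and it deliberately does nothing on a machine where domain 1's vif doesn't exist:

```shell
setup_bridge() {
    brctl addbr xen-br0 2>/dev/null || true   # create the bridge if needed
    brctl addif xen-br0 eth0                  # physical NIC into the bridge
    brctl addif xen-br0 vif1.0                # guest eth0 of domain 1
    ifconfig eth0 0.0.0.0 up                  # eth0 itself carries no IP
    ifconfig xen-br0 192.168.11.2 netmask 255.255.255.0 up
    echo 1 > /proc/sys/net/ipv4/ip_forward    # let dom0 route for the guests
}

# Only attempt this when domain 1's vif is actually present and we are root.
if [ "$(id -u)" -eq 0 ] && [ -d /sys/class/net/vif1.0 ]; then
    setup_bridge
else
    echo "not a dom0 host with a running domain 1; skipping bridge setup"
fi
```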
With Intel's announcement that it will ship its VT CPUs soon, operating systems without freely available source will soon run in a Xen environment too. The 3.0 release happened in early December 2005, and every major Linux distribution is planning on using Xen somehow. Red Hat wants it in the kernel, and Suse has already added options in YaST for it.
The virtualization and consolidation market is fast changing, and Xen is playing an important role there!
Kris Buytaert is a Linux and open source consultant operating in the Benelux. He currently maintains the openMosix HOWTO.
Copyright © 2009 O'Reilly Media, Inc.