Linux DevCenter    

Linux Network Administration

Building High Performance Linux Routers


I recently attended an interesting and thought-provoking short course on IP router architecture led by Gísli Hjálmtýsson. Gísli is engaged in research in the field of active networks and has developed a Linux-based prototype active router called "Pronto." In describing this and other of his work, Gísli offered some insight into the issues impacting router performance, especially in a Linux environment. In this article I thought I'd take a couple of Gísli's key observations and translate them into some practical guidelines to assist in the construction of Linux-based routers with a focus on performance rather than functionality.

The fast path

The basic process of routing the IP protocol is deliberately simple. For high performance routing, you want the datagram to be passed as quickly as possible from one interface to another. The process that does this forwarding for the vast majority of datagrams is sometimes called the "fast path."

This process occurs for each and every datagram forwarded by an IP router, so the time it takes to complete is critical to the router's overall performance. It is called the fast path because slower paths exist: datagrams that need special handling are diverted onto them. In practice, modern routers perform a number of additional tests within the fast path; features like firewalling and network address translation each have tests associated with them.
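To make the fast path concrete, here is a deliberately simplified sketch of the per-datagram decisions described above. All of the names and data structures are illustrative inventions for this article, not kernel APIs, and the route lookup is reduced to a simple prefix match.

```python
# Simplified model of a router's per-datagram "fast path" decisions.
# All names here are illustrative, not actual kernel interfaces.

def fast_path(datagram, local_addresses, routing_table):
    """Return the outgoing interface for a datagram, or None if it
    leaves the fast path (local delivery, expired TTL, or no route)."""
    if datagram["dst"] in local_addresses:
        return None  # addressed to this host: deliver locally instead
    if datagram["ttl"] <= 1:
        return None  # TTL expired: slow path (ICMP time exceeded)
    # Forwarding proper: decrement the TTL (a real router would also
    # adjust the IP header checksum here), then find the next hop.
    datagram["ttl"] -= 1
    for prefix, iface in routing_table:  # longest prefixes listed first
        if datagram["dst"].startswith(prefix):
            return iface
    return None  # no route: slow path


# Example: 10.1.x traffic goes out eth1, other 10.x traffic out eth0.
table = [("10.1.", "eth1"), ("10.", "eth0")]
d = {"dst": "10.1.2.3", "ttl": 64}
print(fast_path(d, {"192.168.0.1"}, table))  # → eth1
```

Every branch that returns `None` is a detour off the fast path; the performance discussion that follows is largely about how often those detours, and extra tests like them, get taken.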

Speed bumps

The fast path looks fairly straightforward, but already there are hints that there are speed bumps, at least in some lanes. I've already mentioned that performance is about time: time-in to time-out, how long a datagram takes to traverse a router. There are a number of factors that can influence the performance of an IP router, some of them obvious, some not.

A number of tests are performed in the fast path: tests for whether the datagram is for the local host or to be routed, tests for IP options, tests for firewall, tests for network address translation. Each test incurs a performance cost, but any actions that might have to be taken as a result incur an additional cost. If you want high performance, functionality costs. The less you mess with the datagram on the way through, the faster it will travel.
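The cumulative cost of those tests is easy to model. The figures below are hypothetical assumptions chosen purely for illustration, not measurements, but they show how per-datagram test costs translate directly into a lower forwarding rate.

```python
# Illustrative model: each extra fast-path test adds a fixed cost per
# datagram, so the forwarding rate falls as features pile up.
# These microsecond figures are assumptions, not measured values.

BASE_US = 5.0  # hypothetical base forwarding cost per datagram (µs)
FEATURE_US = {"firewall": 2.0, "nat": 3.0, "ip_options": 4.0}

def datagrams_per_second(enabled_features):
    per_datagram_us = BASE_US + sum(FEATURE_US[f] for f in enabled_features)
    return 1_000_000 / per_datagram_us

print(round(datagrams_per_second([])))                   # → 200000
print(round(datagrams_per_second(["firewall", "nat"])))  # → 100000
```

In this toy model, enabling firewalling and NAT together halves the maximum forwarding rate; the real lesson is the shape of the relationship, not the specific numbers.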

A number of factors relate to the hardware. The four most important of these relate to the use of interrupts, the bus bandwidth, the speed of the CPU, and the bandwidth of the memory.

The time taken to process interrupts can be quite significant for high performance routing. Any time that elapses between the hardware generating an interrupt and the relevant data being read is a direct contributor to latency within the router. Additionally, especially in the case of serial devices, the rate at which interrupts are serviced plays a part in determining the upper limit of the speed of the network connection. If it takes 1 ms to service an interrupt and your serial hardware generates one interrupt per received character, you'll not be able to handle more than 1000 characters per second — at roughly 10 bits per character on an asynchronous line, about 10 kbps. In the same fashion, even ignoring all other factors, if that same 1 ms interrupt latency were applied a datagram at a time, you'd be limited to 1000 datagrams per second for that interface. To a large extent, interrupt latency is in the hands of the kernel hackers, but the type of hardware plays some part too.
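The arithmetic above is worth working through explicitly, since the same service-time cap applies whatever the unit of work per interrupt is:

```python
# The arithmetic from the text: with a 1 ms interrupt service time and
# one interrupt per unit of work, you cap out at 1000 units per second.

SERVICE_TIME_S = 0.001  # 1 ms per interrupt

max_events_per_sec = 1 / SERVICE_TIME_S
print(max_events_per_sec)  # → 1000.0

# One interrupt per character on an async serial line (~10 bits per
# character, counting start and stop bits) limits the line rate:
BITS_PER_CHAR = 10
print(max_events_per_sec * BITS_PER_CHAR / 1000, "kbps")  # → 10.0 kbps

# One interrupt per datagram caps the forwarding rate the same way:
print(max_events_per_sec, "datagrams/sec")  # → 1000.0 datagrams/sec
```

This is why hardware that batches work per interrupt (for example, a serial card with a deep FIFO) can outperform nominally faster hardware that interrupts on every character.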

The bandwidth of the bus in the router host is very important. Just about everything that happens in the hardware happens across the bus, whether it be data being read from an Ethernet card, written to an FDDI card, written to a serial device, or read from an IrDA device. The machine bus is the common communications channel that nearly all hardware uses to communicate. While the PCI bus now dominates the industry, there are still a lot of alternatives out there, ISA, EISA, and MCA being three. Non-Intel architectures had their own bus standards. A bus is controlled by a clock, and the overall aggregate bus bandwidth is the product of the bus clock and the data width, the number of bits that may be read or written in a single cycle. When you're attempting to route between a number of separate devices, it's possible for the bus to become a bottleneck in performance.
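Since aggregate bus bandwidth is the product of clock and data width, it is straightforward to compare bus types. The figures below are the standard theoretical peaks for classic 33 MHz/32-bit PCI and roughly 8 MHz/16-bit ISA; real sustained throughput is lower, since transfers take multiple cycles.

```python
# Aggregate bus bandwidth = bus clock × data width.
# These are theoretical peaks; real buses sustain less.

def bus_bandwidth_mbytes(clock_mhz, width_bits):
    """Peak bus bandwidth in MB/s."""
    return clock_mhz * 1e6 * (width_bits / 8) / 1e6

print(bus_bandwidth_mbytes(33, 32))  # → 132.0  (classic PCI)
print(bus_bandwidth_mbytes(8, 16))   # → 16.0   (ISA, approximately)
```

Remember that a routed datagram typically crosses the bus at least twice, once in from the receiving card and once out to the transmitting card, so the usable routing throughput is at best half these figures.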

Something in the router has to do the work of shuffling all the bits around. The CPU doesn't really do all that much in the fast path: a bit of bit twiddling, some reads and writes, and a couple of calculations of the IP header checksum. The moment you start deviating from the fast path, though, the CPU begins to work harder. For example, if you have lots of firewall or NAT rules, or you do lots of IP option processing, the CPU will be used more for each datagram and will play a larger role in router performance. In the bigger picture, the CPU plays a more significant role when your average datagram size is small. So if you have lots of data in small datagrams, your CPU will work harder per Mbps than if the datagrams are large. This is because the majority of CPU work is done on a per-packet basis, rather than a per-byte basis.
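The small-datagram effect is easy to quantify: at a fixed link rate, the packet rate, and therefore the per-packet CPU work, scales inversely with datagram size. The example below compares full-size and minimum-size Ethernet payloads on a 100 Mbps link; it is a worked example, not a measurement.

```python
# Why small datagrams hurt: most CPU work is per packet, so at a fixed
# link rate the packet rate scales inversely with datagram size.

def packets_per_second(link_mbps, datagram_bytes):
    """Packets needed per second to fill the link (ignores framing overhead)."""
    return link_mbps * 1e6 / (datagram_bytes * 8)

print(int(packets_per_second(100, 1500)))  # → 8333    full-size frames
print(int(packets_per_second(100, 64)))    # → 195312  minimum-size datagrams
```

At the same 100 Mbps, the router must make roughly 23 times as many fast-path trips for 64-byte datagrams as for 1500-byte ones, which is why small-packet workloads are the standard stress test for router CPUs.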

The one hardware related factor that is most heavily influenced by the total volume of data is the memory bandwidth. Every read from memory, every write to memory takes time. Any operation on a datagram that requires data to be copied in memory takes considerably more time. Care has been taken in the design of the Linux kernel to ensure that data copies are kept to a minimum, but some operations, such as IP datagram fragmentation or reading from or writing to some device drivers, require the datagram, or portions of it, to be copied in memory. While this may seem trivial, in practice it becomes an issue when processing high volumes of large datagrams.
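A rough model makes the cost of copies visible: each pass a datagram makes through memory costs a read plus a write of its full length, so extra copies multiply the memory traffic. The rates below are assumptions chosen for illustration, not benchmarks.

```python
# Illustrative model: each pass of a datagram through memory costs one
# read plus one write of its full length, so extra in-memory copies
# multiply memory traffic. Numbers are assumptions, not benchmarks.

def memory_traffic_mbytes_per_sec(datagrams_per_sec, datagram_bytes, passes):
    # Each forwarded datagram makes at least one pass through memory
    # (in from one device, out to another); fragmentation or copying
    # device drivers add further passes.
    return datagrams_per_sec * datagram_bytes * 2 * passes / 1e6

# 10,000 datagrams/sec of 1500 bytes: minimal path vs one extra copy.
print(memory_traffic_mbytes_per_sec(10_000, 1500, 1))  # → 30.0 MB/s
print(memory_traffic_mbytes_per_sec(10_000, 1500, 2))  # → 60.0 MB/s
```

Note that the penalty grows with datagram size, the opposite of the per-packet CPU cost: copies are per-byte work, which is why they matter most for high volumes of large datagrams.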


If you're building a high performance Linux-based router, there are some choices you can make that will help ensure you're not disappointed. Inevitably, you'll make compromises somewhere, but you'll at least be doing it with some knowledge of the potential impact.

So what then are the rules of thumb that I promised? Here they are, split into two categories:

Hardware

Choose network hardware that generates as few interrupts as possible per datagram, and a platform with low interrupt service latency. Use a host with a fast, wide bus, such as PCI, so the bus doesn't become the bottleneck between interfaces, and remember a routed datagram crosses it at least twice. A faster CPU matters most when your average datagram size is small, since most CPU work is per packet. Fast memory matters most when you're moving high volumes of large datagrams.

Configuration

Keep the fast path fast: minimize the number of firewall and NAT rules each datagram must be tested against, and avoid features that force datagrams to be copied in memory, such as fragmentation, wherever you can. The less you mess with the datagram on the way through, the faster it will travel.
More reading

There is lots of reading to be done on this subject. I've provided a few references to related information. The Linux Router Project mailing list occasionally has discussions relating to Linux router performance and might be a good place to ask specific questions if you have any. If you have the opportunity to attend Gísli's talk at any time, I recommend it. In the meantime, happy routing.

Linux Router Project

A purpose-built Linux distribution aimed at router construction.

Pronto Router

The "PROgrammable Network of TOmorrow," a Linux-based active router.

DARPA Active Networking

DARPA sponsors a number of active network projects; you might be interested in reading about some of them.


Provides practical advice on selecting Ethernet cards.


Provides pointers and advice on selecting Linux-compatible hardware.

Emerging Technologies Inc.

A well-known vendor of high performance, Linux-supported serial hardware.

Sangoma Technologies

A well-known vendor of high performance Linux-supported serial hardware.

Terry Dawson is the author of a number of network-related HOWTO documents for the Linux Documentation Project, a co-author of the 2nd edition of O'Reilly's Linux Network Administrators Guide, and is an active participant in a number of other Linux projects.



Copyright © 2009 O'Reilly Media, Inc.