Published on ONLamp.com (http://www.onlamp.com/)


Automating Network Administration, Part Two

by Luke A. Kanies
01/03/2002

We've already discussed the merits of automation and how you should always plan your automation. Now we're going to discuss where it all starts: server install time.

Start At the Beginning

Remember how planning and automation both feed on themselves, so that the more of them you do, the easier they become? The opposite is also true: the longer you neglect them, the harder they become. That in itself is a problem, but when you don't plan or automate the installation of the servers themselves, you put yourself in a catch-22: right out of the gate you lack the consistency, understanding, documentation, and everything else that planning and automation provide. The minute you first deploy your servers you're already behind in automating them. They were built without a plan, which almost by definition guarantees that they were not built in the best manner, and they were built with all the same potential for human error and inconsistency that a lack of automation creates in an existing network.

The biggest problem with not planning and automating your system installs, though, is that you usually have no excuse not to do so. For most systems you get adequate warning, so you can spend some time planning the install, and Unix is scriptable enough that you should be able to automate a significant amount of any server installation. The usual barriers don't really apply either: you aren't tinkering with production systems, and while time is always short, system administration is full of lulls and crunch times; automate during the lulls and you will significantly reduce the crunch times.

Planning and automating your server installs reward you in ways that other parts of your network usually don't, though. I can't count the number of times I've seen companies do a vanilla install of everything that comes with their OS because, when they tried a minimal install, the compiler didn't work and they didn't want to take the time to figure out which little OS package would fix the problem. A full install takes up more hard-drive space, leaves more software to be patched, and often produces a less secure system. Not having an automated server install also makes you less likely to install appropriate tools onto the server, because of the added effort. Basically, a lack of automation in the install process means you are likely to spend the same amount of time on each server and still end up with a machine that doesn't fit your needs. Automation requires significant time up front, but it results in a shorter turnaround for each individual build and a server that exactly meets your needs: everything you want and nothing you don't.

Getting To the Point

Now that I've convinced you that automation and planning are the key to maintaining your network, I'll get to the real point of this article. If you research and plan your server and then build an automated system to execute that plan, you will end up with a more secure, more stable server that you understand more thoroughly and on which ongoing maintenance is much easier to automate. And remember that our whole goal is to automate the network, so ease of automation should be one of its defining characteristics.

What is there to plan about building a server? Plenty. In addition to deciding exactly what OS software to install (you really don't want to install the entire OS on a server), you also need to plan all of the configuration details (what is your patching scheme? naming scheme? IP addressing scheme?), your filesystem layout, and then you need to plan exactly what non-OS software you want to install. Do you need ssh? (Yes, you do.) Tripwire? TCP-wrappers? What management tools will you need to do your job? What about security auditing, performance monitoring, scripting, and reporting? Are you going to install a compiler, or are you going to have a central compiling and packaging server?


These are all questions you should ask yourself anyway; I'm merely saying that you should ask them before you build any servers, so that all of the servers you deploy conform to your plan. Of course, most of us started work at companies that already had servers, and we won't get the opportunity to rebuild them any time soon. The plan is still important, though, because it gives us an ideal to shoot for, and we can slowly massage existing servers toward that ideal. Hopefully, the methods we use to do this massaging will be the same methods we use to install servers when we finally get the opportunity. Planning what you want your servers to look like is never a bad idea, even if you can't act on it right now. And once you have an installation plan and the tools to implement it quickly, you will be much more likely to be given permission to rebuild a server to fit into your planned network.

Installing

OS

The first thing you have to do with your server is install the OS. Because I have more experience with Solaris than with anything else, and because I really love its install process, I'll use that OS whenever I need an example. All of the principles in this article still apply to other Unixes, but your final solution will probably differ: most free OSes come with a lot more software than Solaris does, and while it's often easier to add missing software, you may find it more difficult to remove installed software, and it may be nearly impossible to find a Jumpstart equivalent to automate the entire process. You should still be able to automate a significant amount of the install, but you may not be able to achieve full automation.

Most OSes give you multiple configuration options during the installation process, including how to partition the filesystems, what software to install (usually in the form of software clusters and smaller software packages), and basic system configuration. How you are going to address all of these questions is part of your installation plan, but there are at least two other aspects of OS installation: what parts of the OS do you have to remove, and what configurations do you have to fix? I haven't yet seen an OS that allows you to install exactly what you need and configure it exactly as you like on the first pass, so I expect that you'll have to modify the OS once it's installed; I know you will for Solaris.

In the case of a Solaris install, we have minor configuration details like host name and IP address, but we also have the filesystem layout, which software cluster to install, and which additional software to install. As with most OSes, it's easier on Solaris to pick a software group with less software and add the extras later than to install a bunch of software you know you won't need and delete it afterward. For servers running Solaris, I always choose the smallest software cluster and install everything else I need individually. Because that usually comes to around 40 packages, it's a serious pain unless you automate it.
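
To give a rough idea of what that looks like in practice, here is a sketch of a Jumpstart profile that selects the Core cluster and adds packages on top of it. The disk device, slice sizes, and package names are only placeholders; your own plan supplies the real values.

    # webserver.profile: a sketch only; sizes and package names are
    # placeholders for whatever your plan specifies
    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         c0t0d0s0    1024    /
    filesys         c0t0d0s1    1024    swap
    filesys         c0t0d0s4    2048    /var
    cluster         SUNWCreq
    package         SUNWbash    add
    package         SUNWless    add
    package         SUNWzlib    add

Each package line is one more piece of your plan written down where Jumpstart can act on it, which beats typing forty pkgadd commands by hand on every new box.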

Our goal here is to end up with not a single piece of software that doesn't fit into our plan -- a process called OS minimization -- because a minimized OS has fewer potential security holes and less software to patch, and it takes up less hard-drive space. To reach this goal, you will probably have to remove software installed even in that basic set. I haven't yet used a server with PCMCIA slots, but PCMCIA support is part of the Core cluster on Solaris, it gets started by default, and it creates a world-writable directory in /tmp owned by root. Yep, a prime candidate for removal from your server.
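
You can delete packages directly in the profile, but it is often simpler to strip them in a finish script once the base install is done. The sketch below assumes the usual Jumpstart convention of the new system being mounted at /a during the install, and the package names are examples only; check pkginfo output on your own systems before deciding what goes.

    #!/bin/sh
    # Sketch of a finish script that removes packages which don't fit
    # the plan.  Package names are examples; adjust to your own list.
    BASE=/a    # during a Jumpstart install the new system lives here

    # pkgrm normally prompts for confirmation; an admin file that
    # answers "nocheck" to everything lets it run unattended.
    cat > /tmp/admin <<EOF
    mail=
    instance=overwrite
    partial=nocheck
    runlevel=nocheck
    idepend=nocheck
    rdepend=nocheck
    space=nocheck
    setuid=nocheck
    conflict=nocheck
    action=nocheck
    basedir=default
    EOF

    for pkg in SUNWpcmci SUNWpcmcu; do
        pkginfo -q -R $BASE $pkg && pkgrm -n -a /tmp/admin -R $BASE $pkg
    done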

Once we have a plan for how we'll lay out the filesystems (see my previous article for some pointers on filesystem layout), what OS group we'll install, what we'll remove from it, what other OS pieces we'll add, and how we'll configure the OS itself (host name, IP address, etc.), it's time to automate all of it.
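
Jumpstart answers the identity questions (host name, IP address, time zone, root password, and so on) from a sysidcfg file, so even those details live in a file you can keep under version control. A minimal sketch, with made-up values throughout; note that root_password expects the encrypted hash, not cleartext:

    system_locale=C
    timezone=US/Central
    terminal=vt100
    name_service=NONE
    security_policy=NONE
    root_password=PutAnEncryptedHashHere
    network_interface=primary {hostname=web01
                               ip_address=192.168.1.10
                               netmask=255.255.255.0
                               protocol_ipv6=no}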

Fortunately, the automation portion of a Solaris install is relatively easy: Sun ships its Jumpstart tool as part of Solaris, now (finally) provides the JASS toolkit to give you a leg up on securing your Solaris install, and even has a new book on Jumpstart. JASS provides just about all of the functionality of tools like Titan, and it builds a decent foundation for all of the extra configuration you want to do with Jumpstart. I find it more difficult to use than it could be, and it doesn't ship with much help outside of security, but it's a good tool and will save you lots of time.
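
The piece that ties all of this together on the install server is the Jumpstart rules file, which matches each installing client to a begin script, a profile, and a finish script; the check script that ships with Jumpstart validates it into the rules.ok file the installer actually reads. A sketch, with invented host names, paths, and network number:

    # rules: match clients to a begin script, a profile, and a finish
    # script ("-" means no script for that slot)
    hostname   web01          -   Profiles/webserver.profile   Finish/webserver.fin
    network    192.168.1.0    -   Profiles/standard.profile    Finish/standard.fin
    any        -              -   Profiles/default.profile     Finish/default.fin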

The combination of Jumpstart and JASS will get your OS installed, configured, patched, and secured; they provide an example minimization script, but you'll be largely on your own for minimizing most of your services. There are still some things you'll have to do yourself, such as installing DiskSuite and mirroring your root disk (see my Jumpstart page for a script that does this for you, and for some example minimization scripts), but mostly what's left now is to install the extra software and configure it.
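
Mirroring the root disk with DiskSuite is itself very scriptable. The sketch below assumes two identically partitioned disks, c0t0d0 and c0t1d0, with a small slice 7 on each reserved for state database replicas; the metadevice names and device paths are assumptions, not a prescription.

    #!/bin/sh
    # Sketch: mirror the root slice with DiskSuite.
    # Both disks must already be partitioned identically.

    # State database replicas, three per disk, on the reserved slice.
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7

    # Submirrors for root, then a one-way mirror on the live side.
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10

    # metaroot edits /etc/vfstab and /etc/system so the box boots
    # from the mirror; lockfs flushes the logs before the reboot.
    metaroot d0
    lockfs -fa

    # After the reboot, attach the second half to start the resync:
    #   metattach d0 d20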

Everything Else

If you're building a Web server, it's really tempting to install a compiler on it and just compile Apache and whatever else you need. I used to work with a guy who compiled Apache about twice a week, because he always found it easier to compile it on every machine than to build a package out of it and install just that package. This is what the Programming Perl book calls "false laziness": you are being lazy today, but it ends up being more work in the end. You've ruined any possibility of truly automating your install, and you have to do more work per server than if you had packaged the software once and installed the package everywhere. Furthermore, you've installed software you don't really need on the server (the compiler), that compiler will foster more false laziness by letting you build other software directly on the box, and having it handy makes your server a bit more vulnerable, because crackers can compile the tools they need locally rather than having to download binaries.

It's probably enough to precompile everything, toss it into a tarball, and use a script to install it, but to make your systems truly manageable, you should turn all of your software into packages. It's not always a trivial process, but sunfreeware.com has some handy scripts and tutorials to help you turn your software into Solaris packages. If you just use tar and gzip to install your software, you can't query the OS to find out what is installed or at what version. If all of your software is installed as packages, though, the OS keeps a record of exactly what is installed and at what version, which makes it significantly easier to automate upgrades and maintenance.
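
As a rough sketch of the process (the package name MYapache, the version, and all of the paths below are invented), you describe the contents in a prototype file, build the package with pkgmk, and wrap it into a single datastream file with pkgtrans; from then on pkgadd installs it and pkginfo can answer questions about it:

    #!/bin/sh
    # Sketch: wrap a precompiled tree, staged under /tmp/staging as
    # usr/local/apache/..., into a Solaris package.  Names are made up.
    cd /tmp/staging

    # pkginfo file: the identity the OS will record for this package.
    cat > pkginfo <<EOF
    PKG=MYapache
    NAME=Apache httpd, built to our plan
    VERSION=1.3.26
    ARCH=sparc
    CATEGORY=application
    BASEDIR=/
    EOF

    # prototype file: one line per file and directory in the package.
    echo 'i pkginfo' > prototype
    find usr -print | pkgproto >> prototype

    # Build the package (it lands in /var/spool/pkg), then bundle it
    # into one file you can drop on every server.
    pkgmk -o -r /tmp/staging -f prototype
    pkgtrans -s /var/spool/pkg /tmp/MYapache.pkg MYapache

    # On each server:
    #   pkgadd -d /tmp/MYapache.pkg MYapache
    #   pkginfo -l MYapache    # is it installed, and at what version?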

Even better, JASS comes with a script which already automates installing packages, so if you turn all of your software into packages, then you can just use that script to install your software and you're done.

Conclusion

This article is specifically targeted towards servers, but most of the points still apply to workstations; in fact, workstations should be much easier than servers, and you'll likely find that your workstations easily fit within any system for building servers.

Keep an eye on my Jumpstart page for tools, tips, and tutorials to help you build your Solaris box right the first time.

If you've followed along and built a plan for installing all of your servers according to how this article lays it out, you now have an automated system that installs a minimized OS containing only the software you planned for, configures it to your standards, patches and secures it, and installs all of your additional software as packages.

I have had such a system in various states of functionality for about the last four years, and it has saved me tons of time, not just when building a system (I can usually build a server running any configured service in about two hours of clock time, but only about five minutes of my own time), but also when maintaining it. It's an honest pleasure to explore a server and find nothing you don't expect, nothing you didn't plan, and nothing you don't understand, and it also makes firefighting, upgrading, patching, and daily maintenance easier.

Hopefully, this article will get you thinking about how you build your servers, and how you can automate and simplify that process while at the same time ending up with a more maintainable network. After all, that's what gets you home on time at the end of every day.

Luke A. Kanies is an independent consultant and researcher specializing in Unix automation and configuration management.


Previously in this series:

Automating Network Administration, Part One -- Any task that is carried out by a system administrator more than once is a candidate for being automated. Luke A. Kanies explains how important automation and planning are to a sysadmin.


Copyright © 2009 O'Reilly Media, Inc.