BSD DevCenter
Big Scary Daemons

Dealing with Full Disks

09/27/2001

So, your daily message shows that your partitions are getting full. (You do read your daily status mail, right? Of course you do.) While various desktop environments have nifty point-and-click interfaces that show you exactly where your disk space went, they don't help much when your GUI-less server starts having trouble. We're going to look at some basic disk measuring tools, with the goal of finding that missing few gigabytes of space.

First off, you need an overview of how much space each partition has left. df(1) is our best tool for that. The output from a vanilla df command isn't that easy to read, however. When hard disks peaked out at 10 MB or 40 MB, it wasn't so bad. But when a disk can easily hit 100 GB, you can go cross-eyed shifting decimal points. The -h and -H flags both tell df to generate human-readable output. The lowercase h uses base 2, where a kilobyte is 1,024 bytes; the uppercase H uses base 10, where a kilobyte is 1,000 bytes. Most FreeBSD tools do not give you the option to use base 10; base 2 is undoubtedly more correct in the computer world, so we'll use it for our examples.
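To see how far apart the two conventions drift, run the same byte count through each. This is plain sh arithmetic, not df output:

```shell
#!/bin/sh
# One vendor-labeled "5 GB" disk under df's two human-readable conventions.
bytes=5000000000

# df -H's view: base 10, 1 GB = 1,000,000,000 bytes
echo "base 10: $((bytes / 1000000000)) GB"

# df -h's view: base 2, 1 GB = 1,073,741,824 bytes
echo "base 2:  $((bytes / 1073741824)) GB"
```

The base-2 figure comes out smaller, which is why a disk sold as 5 GB never quite looks like 5 GB under df -h.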

We should also check the available inodes on a partition. Having lots of disk space is utterly moot if you run out of inodes and cannot create any more files! The -i option gives us that information.
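If your daily mail doesn't flag full disks loudly enough, a threshold check is easy to home-brew. Here's a minimal sketch, suitable for cron; it assumes df -k's usual column layout (capacity in column 5, mount point in column 6), and the 90 percent cutoff is just an example:

```shell
#!/bin/sh
# check_full: read "df -k"-style output on stdin and print any
# filesystem at or above the capacity percentage given as $1.
check_full() {
    awk -v limit="$1" 'NR > 1 {
        sub(/%/, "", $5)        # strip the % from the Capacity column
        if ($5 + 0 >= limit)
            printf "%s is %s%% full on %s\n", $1, $5, $6
    }'
}

# Typical use, perhaps piped into mail(1) from a nightly cron job:
df -k | check_full 90
```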

So, the current disk usage is:

# df -hi
Filesystem  Size Used Avail Capacity iused   ifree %iused Mounted on
/dev/ad0s1a  97M  43M   46M    48%    1368   23718    5%   /
/dev/ad0s1f 4.9G 2.7G  1.8G    60%  184468 1117034   14%   /usr
/dev/ad0s1e 194M  12M  166M     7%     794   49380    2%   /var
procfs      4.0K 4.0K    0B   100%      41    1003    4%   /proc
#

This would be plenty, if I didn't need to copy a 2-GB file onto the laptop. Not long ago, a 2-GB hard drive was more than adequate. Today, some large commercial software packages come as 2-GB tarballs. I have almost enough space.

The biggest problem is discovering where bloat lives. If your systems are like mine, disk usage somehow keeps growing for no apparent reason. You can use ls -l to identify individual large files on a directory-by-directory basis, but doing this on every directory in the system is impractical. The actual decision on what to keep and what to delete is highly personal, but there are more sophisticated tools to help you identify your biggest programs and directories.
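For a single directory, you can at least make ls do the heavy lifting. The -S flag (sort by size, largest first) is available in both FreeBSD and GNU ls:

```shell
#!/bin/sh
# The five largest files in the current directory, biggest first.
# -l long format, -h human-readable sizes, -S sort by size, descending.
# head -6 allows for the "total" line that ls -l prints first.
ls -lhS | head -6
```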

Your first tool is du(1), which displays disk usage. Its initial output is intimidating, however, and can scare off a new system administrator.

# cd $HOME
# du
1       ./bin/RCS
21459   ./bin/wp/shbin10
5786    ./bin/wp/shlib10/fonts
13011   ./bin/wp/shlib10
19      ./bin/wp/wpbin/.wprc
7922    ./bin/wp/wpbin
2       ./bin/wp/wpexpdocs
1       ./bin/wp/wpgraphics
2       ./bin/wp/wplearn
10123   ./bin/wp/wplib
673     ./bin/wp/wpmacros/us
681     ./bin/wp/wpmacros
53202   ./bin/wp
53336   ./bin
5       ./.kde/share/applnk/staroffice_52
6       ./.kde/share/applnk
...

This goes on and on, displaying every subdirectory and giving its size in blocks. On my system, $BLOCKSIZE is set to k, so these are 1 KB blocks. The total of each subdirectory is given -- for example, the contents of $HOME/bin total 53,336 KB, or roughly 52 MB, and the $HOME/bin/wp directory accounts for 53,202 KB of that. I could sit and let du list every directory and subdirectory, but then I'd have to dig through far more information than I really want. And blocks aren't that convenient a measurement, especially not when they're printed left-justified. Let's clean this up. First, du supports a -h flag much like df.

# du -h
1.0K    ./bin/RCS
 21M    ./bin/wp/shbin10
5.7M    ./bin/wp/shlib10/fonts
 13M    ./bin/wp/shlib10
 19K    ./bin/wp/wpbin/.wprc
7.7M    ./bin/wp/wpbin
2.0K    ./bin/wp/wpexpdocs
1.0K    ./bin/wp/wpgraphics
2.0K    ./bin/wp/wplearn
9.9M    ./bin/wp/wplib
673K    ./bin/wp/wpmacros/us
681K    ./bin/wp/wpmacros
 52M    ./bin/wp
 52M    ./bin
5.0K    ./.kde/share/applnk/staroffice_52
...


This is a little better, but I don't need to see the contents of each subdirectory. A total size of everything in the current directory would be nice. We can control how many levels of directories du displays with its -d flag. -d takes one argument: the number of levels deep you want to show. A depth of 0 (-d 0) gives you just a subtotal of the files in a directory.

# du -h -d 0 $HOME
1.0G    /home/mwlucas
#

I have a GB in my home directory? Let's look a layer deeper and see where the heck it is.

# du -h -d 1
 52M    ./bin
1.4M    ./.kde
 24K    ./pr
 40K    ./.ssh
2.0K    ./.cvsup
812M    ./mp3
1.0K    ./.links
5.0K    ./.moonshine
...

The big offender here is the mp3 directory. Oh. Ahem, well, that can be copied to another machine if I must. This is a good opportunity to clean up my home directory anyway. I tried KDE for a week, and still hated it, so .kde can go. So can .moonshine and related stuff. When I'm done, the home directory is down to about 200 MB. Much better.
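Once du has pointed at the fat directories, two more sweeps are worth keeping in the toolbox. One caveat: the M size suffix to find works on modern FreeBSD and GNU find, but POSIX only promises c (bytes), so treat that part as a sketch:

```shell
#!/bin/sh
# Follow-up sweeps for a tight partition.

# Raw block counts sort, unlike -h output (sort -rn reads only the
# leading number, so it would rank 1.0G below 52M): the ten biggest
# directories under the current one, largest first.
du -k | sort -rn | head -10

# Individual large files, wherever they hide: everything over 100 MB
# under the given directory (default: here), staying on one filesystem.
# -xdev keeps find from descending into other mounted partitions.
find "${1:-.}" -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null
```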
