Network Forensics: Tapping the Internet

Analyzing the Data

After you've taken measures to collect the information, your next big decision is which analysis tools to bring to the table. If you have built your own system, your primary analysis tools will be tcpdump and the strings command. You can use tcpdump to display individual packets or to filter a few packets out of a large data set. The strings command, meanwhile, will give you a rough transcript of the information that passed over the network. Snort lets you define particular conditions that generate alarms or traps. If you purchase a commercial system, your analysis will be pretty much limited to the capabilities the system provides. That's OK, though, because analysis is really the strength of the commercial offerings.
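
For anyone building their own system, the tcpdump/strings combination is worth seeing in concrete terms. The sketch below is a minimal Python approximation of that workflow: it reads a capture written with something like tcpdump -w capture.pcap and prints the printable ASCII runs from each packet, the same rough transcript that strings would give you. The file name and the four-character minimum are assumptions, and the sketch only understands the classic little-endian libpcap file format.

```python
#!/usr/bin/env python
# Pull the printable ASCII runs out of every packet in a tcpdump capture,
# much as piping the raw data through strings(1) would, to get a rough
# transcript of what crossed the wire. Assumes a classic little-endian
# libpcap file; "capture.pcap" is a placeholder name.

import re
import struct

printable_run = re.compile(rb"[ -~]{4,}")        # ASCII runs of 4+ characters

with open("capture.pcap", "rb") as f:
    f.read(24)                                   # skip the libpcap file header
    while True:
        hdr = f.read(16)                         # per-packet record header
        if len(hdr) < 16:
            break
        _ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack("<IIII", hdr)
        packet = f.read(incl_len)
        for run in printable_run.findall(packet):
            print(run.decode("ascii"))
```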



In a world in which strong encryption was ubiquitous, the monitoring performed by these network forensics systems would be restricted to what's called "traffic analysis." Every IP packet carries the addresses of its sender and its destination, and those headers remain visible even when the payload is encrypted. By examining the flow of packets over time, it's possible to infer when a person is working, who they are communicating with, which Web sites they are visiting, and other sorts of tantalizingly vague information. Traffic analysis is the stuff that a lot of military intelligence is built upon, and it can be very powerful.
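
To make the idea concrete, here is a small Python sketch of header-only traffic analysis under the same assumptions as before (a little-endian libpcap file with a placeholder name, Ethernet framing, plain IPv4): it tallies which pairs of addresses exchanged packets and during which hours of the day, without ever reading a payload.

```python
#!/usr/bin/env python
# Header-only traffic analysis: who talked to whom, and when.
# Reads a little-endian libpcap capture ("capture.pcap" is a placeholder),
# assumes Ethernet framing and IPv4, and never looks at packet payloads.

import socket
import struct
import time
from collections import Counter

pairs = Counter()   # (source IP, destination IP) -> packet count
hours = Counter()   # hour of day -> packet count

with open("capture.pcap", "rb") as f:
    f.read(24)                                   # skip the libpcap file header
    while True:
        hdr = f.read(16)                         # per-packet record header
        if len(hdr) < 16:
            break
        ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack("<IIII", hdr)
        frame = f.read(incl_len)
        if len(frame) < 34:                      # Ethernet (14) + IPv4 (20)
            continue
        if struct.unpack("!H", frame[12:14])[0] != 0x0800:
            continue                             # not an IPv4 frame
        src = socket.inet_ntoa(frame[26:30])     # IPv4 source address
        dst = socket.inet_ntoa(frame[30:34])     # IPv4 destination address
        pairs[(src, dst)] += 1
        hours[time.localtime(ts_sec).tm_hour] += 1

for (src, dst), count in pairs.most_common(10):
    print("%-15s -> %-15s %6d packets" % (src, dst, count))
for hour in sorted(hours):
    print("%02d:00  %6d packets" % (hour, hours[hour]))
```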

Unfortunately, we do not live in a world in which strong encryption is ubiquitous. Largely as a result of the U.S. government's war on encryption in the 1980s and 1990s, the vast majority of personal, sensitive, and confidential information sent over the Internet today is sent without encryption, open to eavesdropping, analysis, and misuse.

Using a network forensics tool, you can spy on people's email, learn their passwords, determine which Web pages they have viewed, even peer into the contents of their shopping cart at Amazon.com. The tremendous power these systems have over today's networks leaves them open to abuse.

If you install a monitoring system, you should have a policy regarding who is allowed to use the system, under what circumstances it may be used, and what can be done with the information collected. In fact, you should have such policies even if you do not install an NFAT, since every UNIX workstation is a potential network wiretapping tool.

Indeed, none of these network forensics tools -- not even the FBI's Carnivore -- provide capabilities that are fundamentally new. Back in the 1980s, packet capture programs were available for DOS and UNIX. Using these programs, it was possible to eavesdrop on people's email, learn passwords sent without encryption, and otherwise covertly monitor information sent over networks. This vulnerability to covert monitoring is a fundamental property of most communications systems, including telegraph wires, long-range microwave links, and even semaphore.

But while monitoring has always been possible in a networked environment, NFAT tools make it considerably easier than it has ever been. On a gigabit network, it is simply not possible for a human to examine each passing packet to see if it contains useful information. The power of these tools is their ability to rapidly distill a large data set into manageable chunks.

As such, these systems are a double-edged sword for security and privacy. On the one hand, a powerful NFAT makes it possible to put a spotlight on a particular subject: you can, for example, covertly monitor all of the email messages sent between a pair of users. On the other hand, these systems also make it possible to conduct surveillance of a network used by thousands of people while limiting the information captured and disclosed to external intrusions, system glitches, or the one or two individuals actually under surveillance. Of course, that very selectivity makes it far more likely that these surveillance capabilities will actually be used.

For example, in 1996 the FBI obtained its first Internet search warrant for the Internet backbone at Harvard University. The FBI was investigating a series of computer break-ins all over the world, all of which originated at Harvard from a variety of different machines belonging to the Faculty of Arts and Sciences. But rather than record the contents of every TCP/IP connection, which would have subjected Harvard's entire community to unacceptable monitoring, the FBI used a program called I-Watch (developed by the Automated Systems Security Incident Support Team at the Defense Information Systems Agency in Washington, D.C.) that could be programmed to capture only those TCP/IP connections that contained a particular keyword.

It turned out that the attacker was breaking into other computers and setting up a program called "sni256." By recording only those TCP/IP connections that contained the letters "sni256," the FBI was able to restrict its data collection to the connections made by the attacker. (As it turned out, two other TCP/IP connections belonging to legitimate users contained the same keyword during the monitoring period and were inadvertently captured.)
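
A present-day NFAT, or a Snort rule with a content match, can perform the same kind of keyword-triggered collection. As a very rough sketch of the idea (filtering at the packet level, whereas I-Watch worked on whole TCP/IP connections), the following Python fragment copies into a second capture file only those packets whose bytes contain the keyword; the file names are placeholders.

```python
#!/usr/bin/env python
# Keyword-triggered capture, loosely in the spirit of the I-Watch example:
# copy into a new libpcap file only the packets whose bytes contain KEYWORD.
# A real tool would reassemble and match whole TCP connections, not single
# packets; the file names here are placeholders.

import struct

KEYWORD = b"sni256"

with open("capture.pcap", "rb") as src, open("hits.pcap", "wb") as dst:
    dst.write(src.read(24))                     # copy the libpcap file header
    while True:
        hdr = src.read(16)                      # per-packet record header
        if len(hdr) < 16:
            break
        incl_len = struct.unpack("<IIII", hdr)[2]
        packet = src.read(incl_len)
        if KEYWORD in packet:
            dst.write(hdr)
            dst.write(packet)
```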

Ultimately, the monitoring capabilities made possible by an NFAT are not a tremendously big deal to anyone who has spent time working as a system administrator, since these are exactly the same sort of capabilities granted to a person with UNIX "root" or Windows System Administrator privileges. Most system administrators regard being able to read people's email and look into their files more as an unwanted responsibility than a right. It is a necessary capability that occasionally needs to be used, but generally administrators have better things to do than to nose around through other people's business. And while there are exceptions, generally people who abuse positions of trust do not retain those positions.

From a legal point of view, your right to monitor (or to be free from monitoring) depends on who you are, where you are working, and who is doing the monitoring. Corporations generally have free rein to monitor their own networks, provided that employees and network users are told in advance that the monitoring may be taking place. (It is not necessary to inform the employees before each specific instance of monitoring, however, so most corporations generally inform their employees with a posted policy and leave it at that.)

ISPs are required under the Electronic Communications Privacy Act (ECPA) to protect the privacy of their customers' electronic communications -- they can't eavesdrop on communications or disclose intercepted contents -- unless one of the parties to the communication has given consent, or if the monitoring is needed to maintain system operations, or in cases of a court-authorized intercept.

Generally speaking, most ISPs require their users to give implicit consent to any and all monitoring as part of their "terms of service" agreement, so for most practical purposes the ECPA doesn't give ISP users any privacy at all. Law enforcement agencies have the right to monitor without the consent or the knowledge of the individuals being monitored, provided they can obtain authorization from a court. However, they have the added restriction of minimization -- they can only capture and record information specified in their warrant.

Today there is a gaping disconnect between the level of privacy that most users expect and what is both technically possible and legal. That is, most users expect that their computer use is largely anonymous and untracked. At the same time, computers are getting better at monitoring, more products are being introduced specifically for the purpose of monitoring, and legislation such as the USA PATRIOT Act is making monitoring even easier than it was in the past.

Conclusions

Full-content network monitoring is no longer the province of spooks and spies -- it's increasingly a practice that serves a variety of goals for both computer security and overall network policy. These days the underlying hardware is certainly up to the task, and some of the software that's out there, both commercial and free, is exceedingly good.

What hasn't caught up is our understanding of what to do with this technology -- what it is good for, and what uses should be declared out of bounds. In particular, few of the commercial or free offerings have facilities for watching the watchers -- that is, for logging the ways the systems have been used in an attempt to prevent misuse. Likewise, few organizations have developed policies for the appropriate use of this technology, other than catch-all policies that simply allow the organization to monitor anything for any purpose whatsoever.

Although there has been a lot of public opposition to monitoring technology in general, and to the FBI's Carnivore project in particular, ultimately these systems will be used by organizations because organizations need to understand what information is moving over their network connections. As such, it behooves us as technologists to understand how these systems work, to publicize their capabilities and their limitations, and to develop standards for their ethical use.

Editor's note: Simson L. Garfinkel is Chief Technology Officer of Sandstorm Enterprises, which develops and markets the NetIntercept network monitoring tool. Garfinkel is also the author or coauthor of numerous books on computer security, most recently Web Security, Privacy & Commerce (O'Reilly & Associates, 2001).

