Editor's Note: This is a follow-up to Andy Oram's interview, Symbiot on the Rules of Engagement. Paco Nathan is the chief scientist and vice president of research and development for Symbiot.
As the number and range of attacks on computer systems have grown exponentially and conventional firewalls and intrusion detection systems have proven inadequate for the task, security researchers have started to talk about employing "countermeasures" to preserve security. Symbiot in particular places countermeasures at the center of its product line. It's seductive for observers to talk of using countermeasures to "strike back" at attackers. Whether cited with approval or disdain, this terminology polarizes the online community, prompting an array of Wild West analogies and the labeling of those who discuss the engineering of countermeasures as vigilantes. But the reality of countering network-based attacks is that many factors must be considered before a response is taken. Ultimately, the risks involved in any counterstrike must be assessed, and that assessment falls to decision-makers whose jobs already include evaluating and mitigating such risks.
When computer security professionals speak about countermeasures, the implications are more subtle than the general public might imagine. Consider that hundreds of machines (routers, switches, firewalls) had to negotiate with each other just for you to read this web page: access had to be authorized, rates had to be established, data had to be retrieved. At any step, those machines might have decided to regard you as some kind of problem, or even a threat. Why? Because someone had to pay money for those machines and for the decision-making at each step along the way. Real countermeasures involve people (system administrators, corporate executives, attorneys, judges, senators, military officers, diplomats, journalists, accountants, insurance agents) along with those machines, to make effective decisions. Who are the legitimate users? Who are the probable abusers? What costs are unreasonable? What responses are appropriate?
Definitions of "reasonable" costs change for different types of networks. For example, a nonprofit charity might appreciate having its entire web site content scanned daily by a remote spider, since that could help boost its page rank in a search engine. A firm selling subscription-based market research online might define that same behavior as an intrusion, since it fits the profile of a competitor trying to plagiarize content. Then again, the charity probably relies on a relatively low-bandwidth connection. Traffic that would amount to a denial-of-service attack from its perspective might not even be noticed in the market research publisher's logs.
The definitions of "appropriate" responses are even more complex. A system administrator at the charity might reconfigure its firewall to block or rate-limit the remote addresses that seem to launch denial of service attacks. However, at the market research publisher, more people than just the system administrator will have a say in any firewall policy changes that affect customer agreements, notably the vice president of sales and the legal department. The publisher might decide to serve legal notice against the Internet service providers upstream from a source of intrusions. Those are two simple, effective responses, driven by policy. However, when we start to consider government agencies, universities, banks, and so on, the policy issues span a much broader spectrum. For example, on some federal government networks, the firewall policies for public access can be subject to constraints based on constitutional law. In contrast, some networks used for defense or intelligence work may have the legal authority to retaliate directly against probable attacks, at the discretion of their security officers.
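The rate-limiting response available to the charity's system administrator is essentially a token-bucket decision made per remote address. Here is a minimal sketch in Python; the refill rate and capacity are invented for illustration and are not drawn from any Symbiot product:

```python
import time

class TokenBucket:
    """Per-address rate limiter: each request spends one token;
    tokens refill at `rate` per second, up to `capacity`."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if this request is within the rate limit."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per remote address; sustained floods exhaust their bucket
# and get dropped, while legitimate low-volume users pass unhindered.
buckets = {}

def admit(addr):
    return buckets.setdefault(addr, TokenBucket()).allow()
```

The point of the sketch is the policy shape, not the mechanism: the same traffic is neither wholly trusted nor wholly blocked, but throttled in proportion to its burst behavior, which is exactly the graduated stance the rest of this article argues for.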
When all factors surrounding a network intrusion are considered, a "counterstrike" is but one form of countermeasure that can be taken in response. Usually, it is a last resort in a wide range of options that involve far more than technology.
Countering network attacks is not entirely a technical issue. For technically oriented people, it may seem appealing to imagine aggressive technical solutions, and much of the press coverage tries to magnify this facet of the problem, but the technical issues hardly constitute the complete picture.
Countering network attacks is not entirely a legal issue either. For legal processes to work, there must first be sufficient, tangible evidence of wrongdoing. In the context of network security, that is rarely the case. Also, a purely legal response would make the Internet too expensive for almost all consumers and would most certainly limit our individual freedoms online.
Therefore, countermeasures present a social problem, for which there exists a social solution–one that computers can help make less costly and more effective. Specifically, it is a risk-management problem. Examples of how people and organizations handle risk-management problems every day can be found in the standard practices of insurance underwriting, hospital administration, investment banking, military planning, chemical plant operations, and fire departments, to name a few.
Perhaps the best analogy is found in consumer credit. Consider this: If you are late in making a payment on your credit card, will the lender immediately apply countermeasures that cause you to be evicted from your home or placed in jail? No, of course not. Will it apply countermeasures that place a note into your credit report that can make your next home mortgage more expensive? Quite probably. The practice is certainly frustrating to the consumer, but it is necessary, and the banks and credit companies have a legal right to do it. It contributes to the ultimate goal of reducing credit card abuse.
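The credit analogy above can be sketched in code: abuse evidence accumulates into a score, and the score maps to a proportionate response rather than jumping straight to the harshest option. The event weights and score bands below are hypothetical illustrations, not a Symbiot algorithm:

```python
def risk_score(events):
    """Sum weighted evidence of abuse from observed network events,
    much like point deductions accumulating on a credit report.
    Weights are illustrative only."""
    weights = {
        "failed_login": 2,
        "malformed_packet": 5,
        "port_scan": 10,
        "dos_traffic": 40,
    }
    return sum(weights.get(event, 0) for event in events)

def response(score):
    """Map a risk score to a graduated countermeasure; the harshest
    responses are reserved for the strongest evidence."""
    if score < 10:
        return "log only"
    if score < 40:
        return "rate-limit the source"
    if score < 80:
        return "block at the firewall and notify the upstream ISP"
    return "escalate to legal counsel and law enforcement"
```

A few stray failed logins produce nothing more than a log entry, while sustained denial-of-service traffic climbs the ladder toward legal escalation, mirroring how a late credit card payment yields a credit-report note rather than an eviction.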
How can these same real-world risk-mitigation techniques be applied to reduce computer network abuse? At Symbiot, our strategists, researchers, and engineers have been working on exactly that problem for years. We have established ties with other firms and government agencies involved in risk management in order to share ideas and to work together to build solutions. The details of how those solutions operate are complex but highly accountable. Some aspects of the solutions, such as portions that operate on public networks, are being provided to the public through an open source project and efforts to promote open standards. Other aspects, such as the parts that resemble confidential credit scoring, are kept private but are subject to review by federal government officials and commercial organizations responsible for managing critical infrastructure.
The open source project is called OpenSIMS, and it provides a way to tie together the open source tools used for security management into a common infrastructure. The reference implementation for this project is based on the Tomcat application server and other projects from the Apache Software Foundation. There is also a cross-platform AgentSDK that interfaces with other security components, and a real-time animated Flash GUI. So far, that code has been tested on Gentoo, RedHat, Fedora, and Debian, with Mac OS X and FreeBSD currently in testing. The web site provides source, documentation, and developer discussions, and soon there will be example live feeds from networks running the software.
One dirty little secret of information security is that corporations have been using "tiger teams" for years in order to launch highly aggressive counterstrikes against attackers. Why? Because many more corporations get attacked and extorted through computer intrusions than the popular press will ever report. The counterstrike capabilities of the U.S. Defense Department are even more advanced than corporate practices, and it is envisioned that the next large-scale military conflict will involve substantial exchanges of network-based attacks. It is also feared that terrorist strikes may begin to target computer networks and affect the services we depend on every day. The time to redevelop our critical infrastructure to become proactive against attacks is now, not after major attacks have already occurred.
What does this imply for a consumer? Does it mean that if your grandmother's PC gets a virus, it could be accidentally "neutralized" and all her special cookie recipes obliterated? No. It does mean that if she neglects to clean up a bunch of viruses on her hard drive, she might encounter difficulties shopping online. Furthermore, if your grandmother chooses to go online through a cut-rate ISP with a history of sheltering attackers, she will probably have her bandwidth limited by web sites that take security seriously. At the same time, it also means that new consumer protection features will be able to help her select among online merchants by comparing risk metrics.
Over the course of the next few years there will be a major shift in the thinking around how the Internet is managed. The engineering details of that shift have far more to do with the advanced mathematics used by financial analysts and the protocols used by military operations than most of today's computer programmers and network administrators may imagine. The end result, however, will be readily familiar to the consumer: It will be an online world where privacy and freedoms are protected, and risk is managed by the rules of credit cards and shopping.
Far from the recklessness of vigilantism, the point of countermeasures is to promote collaboration and open standards that emphasize careful evaluation, balanced decision-making, and more complete accountability.
Paco Nathan is the Chief Scientist and Vice President of Research and Development for Symbiot.
Copyright © 2009 O'Reilly Media, Inc.