Open Source Security: Still a Myth

by John Viega, coauthor of Secure Programming Cookbook for C and C++
09/16/2004

Open source may have many benefits over closed systems, but don't count security among them--yet. This article looks at why open source software may currently be less secure than its commercial counterparts.

Several years ago I wrote a few articles on whether open source software has security benefits, including The Myth of Open Source Security. Since then, open source developers have told me over and over again that there is no longer any myth of open source security. Some of those people will say, "Building secure software is difficult, open source or not; neither open source nor proprietary code should be considered a crutch." Others will say, "Open source developers are more clued in to security issues due to a better sense of community, and their software is more secure as a result."

Occasionally, people in the first camp will insist that there's no widespread belief that open source software is more secure. That's clearly wrong. For instance, David Wheeler, who publishes an online book offering guidance on secure programming, lists security as a technical advantage of open source software in his essay Why Open Source Software/Free Software? Look at the Numbers!

In the meantime, plenty of commercial and governmental organizations are still concerned that open source software is usually less secure than proprietary software. They are worried that open source developers are too much "hacker" and too little "engineer," cobbling together solutions without going through a structured software engineering process (such as requirements, specification, and analysis).

This article briefly examines both sides of the argument, focusing particularly on how the open source development mind-set is out of alignment with the general software market. While I really believe it's possible to judge relative security only on a case-by-case basis, we will talk about ways in which the open source community as a whole could stand some serious improvement. That's not to say that similar problems don't plague commercial software; the open source community has a different set of challenges, one that so far it has largely failed to recognize.

The Argument for Open Source Security

The original argument for open source security is Eric Raymond's "many eyeballs" maxim: given enough eyeballs, all bugs are shallow. The gist of Raymond's argument is that the number of bugs found in a piece of software correlates with the number of people looking at the code. The hope is that more people looking at the code will unearth more of the lingering issues.

Additionally, I commonly hear people insist that open source developers tend to be better at security because they have a better, more supportive community. Part of this may be a reinforcement of the "many eyeballs" argument, but part of it is a cultural elitism, in which the open source community believes it is better at writing code in general, and secure code in particular.

Don't Believe the Hype

Outside the open source community, the notion that open source is good for security--particularly when developers are diligent--often meets with extreme skepticism. In fact, many people worry that exactly the opposite may be true: that open source software may tend to be less secure. One can certainly make a reasonable case for that position.

The "many eyeballs" argument can look good on first glance. Let's give it closer scrutiny.

For most applications it does seem reasonable to expect that proprietary software will generally have fewer eyeballs trained on the source code. However, can the average developer who looks at open source software do a good job of finding security vulnerabilities? While I do believe the answer to this could someday be yes, the answer is not at all clear-cut right now.

Most people who look at the source code for open source software don't explicitly look for security bugs. Instead they likely have in mind a particular piece of functionality that they want to augment, and they'll look at a subset of the code to scope it out and maybe write a fix. Generally this does not require a thorough audit of the entire program. This kind of person might casually notice a security bug in the software, but it's probably best to assume it's a long shot.

There are people who really will look at code for security problems. There are altruistic types who simply want to see a safer world, but most people who do this are trying to promote themselves or their company. Either way, they want to make the biggest impact possible, and as a result, what tends to attract the eyeballs in the open source world is the popular, widely adopted software.

Most of the people who look for security problems start with the low-hanging fruit, focusing on the potential problems that could have monumental impact. In practice, this means looking for straightforward instances of common problems such as buffer overflows, format string problems, and SQL injection.
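
To make this concrete, here is a minimal sketch (hypothetical code, not drawn from any particular project) showing two of those shallow bug classes side by side:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example of the "low-hanging fruit" auditors hunt for. */
    void log_message(const char *user_input)
    {
        char buf[64];

        /* Buffer overflow: no bounds check on attacker-controlled input.
         * snprintf(buf, sizeof(buf), "%s", user_input) would be safe. */
        strcpy(buf, user_input);

        /* Format string bug: user input interpreted as the format string,
         * so %n and %x directives let an attacker read or write memory.
         * The safe form is printf("%s", user_input). */
        printf(user_input);
    }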

Less sexy risks tend to get ignored completely. For instance, plenty of open source programs use SSL improperly and are subject to network-based eavesdropping and tampering attacks (a problem I'll explore in more detail soon). People who publish security advisories aren't really publishing risks like this. In my experience, that's largely because people are far more interested in finding more immediately demonstrable problems.
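
As an illustration of that quieter risk, here is a sketch using the OpenSSL client API (the OpenSSL calls are real; the surrounding code is hypothetical):

    #include <openssl/ssl.h>

    /* Sketch of a common SSL misuse: the handshake succeeds and traffic
     * is encrypted, but the peer is never authenticated, so an active
     * attacker in the middle can present any certificate at all. */
    int connect_insecurely(int sockfd)
    {
        SSL_CTX *ctx;
        SSL *ssl;

        SSL_library_init();
        ctx = SSL_CTX_new(SSLv23_client_method());
        ssl = SSL_new(ctx);
        SSL_set_fd(ssl, sockfd);

        if (SSL_connect(ssl) != 1)
            return -1;

        /* Missing: check that SSL_get_verify_result(ssl) == X509_V_OK
         * (with trusted CA certs loaded via SSL_CTX_load_verify_locations),
         * that SSL_get_peer_certificate(ssl) returns a certificate, and
         * that the certificate's subject matches the host we meant to
         * reach. Without those checks, the encryption protects nothing. */
        return 0;  /* "secure" connection, but to whom? */
    }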

Just looking for the common problems can be incredibly difficult and time-consuming. Even though buffer overflows are a well-understood, straightforward problem, in plenty of instances they've remained in heavily audited code for years. For example, WU-FTPD had several complex buffer overflows that survived for more than a decade, despite the code base being tiny (around 10,000 lines). The code was widely scrutinized and was popular as a test bed for early static vulnerability analysis tools, due both to its small size and to its history of security issues.
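
The WU-FTPD overflows were more intricate than this, but even a hypothetical snippet shows how an overflow can hide in code that looks carefully bounded:

    #include <string.h>

    /* Hypothetical, not from WU-FTPD: an off-by-one that survives
     * casual review because the code appears to check its bounds. */
    void build_path(const char *cmd_arg)
    {
        char path[128] = "/home/ftp/";

        /* The third argument to strncat() limits the bytes APPENDED and
         * does not count the terminating NUL, so this can write one byte
         * past the end of path. Correct:
         *     strncat(path, cmd_arg, sizeof(path) - strlen(path) - 1); */
        strncat(path, cmd_arg, sizeof(path) - strlen(path));
    }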

In the real world, it's rare for someone reviewing code for security to perform a thorough audit. Line-by-line review is often not feasible, simply because the human mind can't retain a detailed understanding of a large code base. Generally, people have tools to support them. Those tools provide a starting point for manual inspection, which focuses on the tool's findings and determines whether each reported problem is real.

"Real" analysis tools are just starting to hit the market. The tools people use tend to be simple ones that don't do sophisticated analysis--grep-like tools such as RATS and flawfinder. A few commercial companies offer "web scanners" that look for common vulnerabilities in an application using a fuzz-like approach (you pick the inputs you think might exercise a common problem, give it a go, and see what happens). The problem with black-box testing for security is that most programs are complex and have states that an automated crawler isn't likely to find. Security problems are often buried in complex systems. Finding them with such an approach would require heavy user interaction to put the system into a large number of different states.

With both the grep-like tools and the black-box testing tools, you will almost always have a large number of false positives to sift through. Most potential auditors throw up their hands in frustration pretty quickly. Those who don't will usually focus on only a few of the reported issues. Even research tools such as BOON tend to have incredibly high false-positive rates.
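
To see why the output is so noisy, consider a sketch of the kind of report a grep-like tool produces. The call below gets flagged even though it can never overflow, because such tools match on function names without understanding context:

    #include <string.h>

    #define GREETING "220 FTP server ready.\r\n"

    void copy_greeting(void)
    {
        /* The destination is sized from the source string itself, so
         * this strcpy() is provably safe--yet grep-like tools such as
         * RATS flag strcpy() on its name alone, and a human auditor
         * must rule each such report out by hand. */
        char banner[sizeof(GREETING)];
        strcpy(banner, GREETING);
    }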

Much better analysis tools exist in the commercial world, but to date they have not been available outside of consulting engagements. Open source software certainly hasn't been a big target of these tools.

Soon, similar tools will become available commercially. As open source developers get access to them, that access will hopefully turn more people into reasonably competent auditors and have a big positive impact, because the right eyeballs will finally be looking in force. For now, very few people can do even an adequate job of software security review. As you might suspect, those people are in high demand. They're far more likely to work on auditing commercial software than open source software, simply because commercial clients pay.

Besides that, looking at code all day is pretty tedious. Few software security auditors want to take their job home, and not many people who aren't professional auditors want to complete an audit once they realize how big a task even a cursory audit can be. For example, Sardonix is a site dedicated to security auditing, with big DARPA money behind it. The idea is to give people incentives to audit open source code in detail and publish the results, even if those results are negative. The incentives are similar to the ones that drive the rest of the open source world: namely, respect among peers and the satisfaction of having an impact on real projects. Sadly, despite publicity in the geek press, Sardonix has gained no traction. There was activity on the mailing list, as apparently people like to talk about audits, but almost no one performed any. The exceptions were primarily UC Berkeley students who audited code as a class assignment.

If you look at the Sardonix database for the few applications that have actually been audited, you'll see that most of the reports will say something like "I ran RATS and found some potential issues" without providing definitive answers or any guarantee of completeness. While this may seem lame, all of those people probably spent quite a lot of time investigating tool output. It just isn't easy to give good answers.

All in all, in some cases open source may have more eyeballs on it. Are those eyeballs looking for security problems, though? Are they doing it in a structured way? Do they have any compelling incentive? Do they have a reason to focus dozens or hundreds of hours on the problem to approach the level of effort generally given to a commercial audit? The answer to all of these questions is usually no. A good deal of software doesn't get examined for security at all, open source or not. When it does, commercial software tends to receive much more qualified attention.
