Open Source Security: Still a Myth

by John Viega, coauthor of Secure Programming Cookbook for C and C++
09/16/2004

Open source may have many benefits over closed systems, but don't count security among them--yet. This article looks at why open source software may currently be less secure than its commercial counterparts.

Several years ago I wrote a few articles on whether open source software has security benefits, including The Myth of Open Source Security. Since then, open source developers have told me over and over again that there is no longer any myth of open source security. Some of those people will say, "Building secure software is difficult, open source or not; neither open source nor proprietary code should be considered a crutch." Others will say, "Open source developers are more clued in to security issues due to a better sense of community, and their software is more secure as a result."

Occasionally, people in the first camp will insist that there's no widespread belief that open source software is more secure. That's clearly wrong. For instance, David Wheeler, who publishes an online book offering guidance on secure programming, lists security as a technical advantage of open source software in his essay Why Open Source Software/Free Software? Look at the Numbers!

In the meantime, plenty of commercial and governmental organizations are still concerned that open source software is usually less secure than proprietary software. They are worried that open source developers are too much "hacker" and too little "engineer," cobbling together solutions without going through a structured software engineering process (such as requirements, specification, and analysis).

This article briefly examines both sides of the argument, focusing particularly on how the open source development mind-set is out of alignment with the general software market. While I really believe it's possible to judge relative security only on a case-by-case basis, we will talk about ways in which the open source community as a whole could stand some serious improvement. That's not to say that similar problems don't plague commercial software; the open source community simply faces a different set of challenges, one that so far it has largely failed to recognize.

The Argument for Open Source Security

The original argument for open source security is Eric Raymond's "many eyeballs" maxim: given enough eyeballs, all bugs are shallow. The gist of Raymond's argument is that the number of bugs found in a piece of software correlates with the number of people looking at the code. The hope is that more people looking at the code will unearth more of the lingering issues.

Additionally, I commonly hear people insist that open source developers tend to be better at security because they have a better, more supportive community. Part of this may be a reinforcement of the "many eyeballs" argument, but part of it is a cultural elitism, in which the open source community believes it is better at writing code in general, and secure code in particular.

Don't Believe the Hype

Outside the open source community, the notion that open source is good for security--particularly when developers are diligent--often meets with extreme skepticism. In fact, many people worry that exactly the opposite may be true: that open source software may tend to be less secure. One can certainly make a reasonable case for that position.

The "many eyeballs" argument can look good on first glance. Let's give it closer scrutiny.

For most applications it does seem reasonable to expect that proprietary software will generally have fewer eyeballs trained on the source code. However, can the average developer who looks at open source software do a good job of finding security vulnerabilities? While I do believe the answer to this could someday be yes, the answer is not at all clear-cut right now.

Most people who look at the source code for open source software don't explicitly look for security bugs. Instead they likely have in mind a particular piece of functionality that they want to augment, and they'll look at a subset of the code to scope it out and maybe write a fix. Generally this does not require a thorough audit of the entire program. This kind of person might casually notice a security bug in the software, but it's probably best to assume it's a long shot.

There are people who really will look at code for security problems. There are altruistic types who simply want to see a safer world, but most people who do this are trying to promote themselves or their company. Either way, they want to make the biggest impact possible, and as a result, what tends to attract the eyeballs in the open source world is the popular, widely adopted software.

Most of these people who look for security problems will start by looking for the low-hanging fruit, focusing on the potential problems that could have monumental impact. In practice, this means that people tend to look for straightforward instances of common problems such as buffer overflows, format string problems, and SQL injection.
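To make the pattern concrete, here is a contrived C sketch (not drawn from any real project) showing the first two of those bug classes side by side: an unbounded copy into a fixed-size buffer, and user input passed directly as a printf format string.

    #include <stdio.h>
    #include <string.h>

    /* Contrived illustration of two classic bug patterns auditors look for first. */
    void greet(const char *name)
    {
        char buf[64];

        /* Buffer overflow: strcpy() copies without checking the destination size,
         * so a name longer than 63 bytes writes past the end of buf. */
        strcpy(buf, name);

        /* Format string bug: user data is used as the format argument, so any
         * %s, %x, or %n sequences in the input are interpreted by printf(). */
        printf(buf);

        /* Safer equivalents: bound the copy, and never let user data be a format. */
        snprintf(buf, sizeof(buf), "%s", name);
        printf("%s\n", buf);
    }

    int main(int argc, char *argv[])
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }

Both bugs are easy to spot in ten lines of code; in real programs they hide behind layers of wrappers and indirection, which is why even the "easy" classes survive audits.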

Less sexy risks tend to get ignored completely. For instance, plenty of open source programs use SSL improperly and are subject to network-based eavesdropping and tampering attacks (a problem I'll explore in more detail soon). People who publish security advisories rarely publish risks like this. In my experience, that's largely because people are far more interested in finding more immediately demonstrable problems.

Just looking for the common problems can be incredibly difficult and time consuming. For instance, even though buffer overflows are a well-understood, straightforward problem, in plenty of instances they've remained in heavily audited code for years. For example, WU-FTPD had several complex buffer overflows that survived for more than a decade, despite the code base being tiny (around 10,000 lines). This code was heavily audited and was popular as a test bed for early static vulnerability analysis tools due to its size and its history of security issues.

In the real world, it's rare that someone reviewing code for security will perform a thorough audit. Line-by-line review is often not feasible, simply because the human mind can't retain a detailed understanding of a large code base. Generally, people have tools to support them. Those tools are a starting point for manual inspection, which focuses on the tool's findings and tries to determine whether each reported problem is real.

"Real" analysis tools are just starting to hit the market. The tools people use tend to be simple ones that don't do sophisticated analysis--grep-like tools such as RATS and flawfinder. A few commercial companies offer "web scanners" that look for common vulnerabilities in an application using a fuzz-like approach (you pick the inputs you think might exercise a common problem, give it a go, and see what happens). The problem with black-box testing for security is that most programs are complex and have states that an automated crawler isn't likely to find. Security problems are often buried in complex systems. Finding them with such an approach would require heavy user interaction to put the system into a large number of different states.

With both the grep-like tools and the black-box testing tools, you will almost always have a large number of false positives to sift through. Most potential auditors throw up their hands in frustration pretty quickly. Those who don't will usually focus on only a few of the reported issues. Even research tools such as BOON tend to have incredibly high false-positive rates.

The commercial world has much better analysis tools available. To date, such tools have not appeared outside of consulting engagements. Open source software certainly hasn't been the big target of these tools.

Soon, similar tools will become available commercially. As open source developers get access to these tools, the hope is that more people will become reasonably competent auditors and have a big positive impact, because the right eyeballs will be looking in force. For now, only a very few people can do even an adequate job of software security review. As you might suspect, those people are in high demand. They're far more likely to work on auditing commercial software than open source software, simply because commercial clients pay.

Besides that, looking at code all day is pretty tedious. Few software security auditors want to take their job home, and not many people who aren't professional auditors want to complete an audit once they realize how big a task even a cursory audit can be. For example, Sardonix is a site dedicated to security auditing, with big DARPA money behind it. The idea is to give people incentives to audit open source code in detail and publish results, even if the results are negative. Those incentives are similar to the ones that exist elsewhere in the open source world: namely, respect among peers and the satisfaction of having an impact on real projects. Sadly, despite publicity in the geek press, Sardonix has gained no traction. There was activity on the mailing list, as apparently people like to talk about audits, but almost no one performed any. The exceptions were primarily UC Berkeley students who did so as a class assignment.

If you look at the Sardonix database for the few applications that have actually been audited, you'll see that most of the reports will say something like "I ran RATS and found some potential issues" without providing definitive answers or any guarantee of completeness. While this may seem lame, all of those people probably spent quite a lot of time investigating tool output. It just isn't easy to give good answers.

All in all, in some cases open source may have more eyeballs on it. Are those eyeballs looking for security problems, though? Are they doing it in a structured way? Do they have any compelling incentive? Do they have a reason to focus dozens or hundreds of hours on the problem to approach the level of effort generally given to a commercial audit? The answer to all of these questions is usually no. A good deal of software doesn't get examined for security at all, open source or not. When it does, commercial software tends to receive much more qualified attention.

Bonfire of the Vanities

The people you want looking at your free software are for the most part doing other things with their time, such as auditing commercial software. Still, those in the open source community never seem too worried about security in their own software. It seems like every open source developer under the sun thinks he understands security well enough to avoid security issues, even though he isn't too surprised when other developers turn out to be not quite as competent; plenty of security advisories for open source applications come out every week.

A disturbing trend I've noted is that many developers (open source or not) will pick up a little about security and think they know a lot. For example, when developers learn about buffer overflows, they generally wonder how an overflow can allow attackers to execute arbitrary code. Usually, in looking for an answer they'll find out all about stack overflows: how easy it is to overwrite an address that the program will soon jump to, and to place attack code at that address. A few times, as a result, I've seen those developers take all their stack-allocated buffers and turn them into heap-allocated ones (that is, allocated by malloc). They tend to think there's nothing critically exploitable for an attacker to overwrite on the heap.

Usually they're wrong. Even if there isn't security-sensitive, application-specific data, and even if there aren't function pointers stored in memory that will later be dereferenced, there is usually something called the global offset table (GOT), which helps position-independent code access global data. The GOT contains function pointers that real programs dereference.
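As an illustration of why that assumption fails, here is a minimal sketch (again, not from any real program). Moving the buffer to the heap leaves the overflow itself untouched; it only changes what happens to sit next to the buffer in memory.

    #include <stdlib.h>
    #include <string.h>

    /* Moving a buffer from the stack to the heap does not remove the overflow. */
    void copy_name(const char *name)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return;

        /* Still an unbounded copy: anything past 63 bytes overwrites adjacent
         * heap memory -- allocator metadata, application data and pointers, or,
         * through a corrupted pointer write, targets such as GOT entries. */
        strcpy(buf, name);

        /* The fix is the same as on the stack: bound the copy, for example
         * snprintf(buf, 64, "%s", name); */

        free(buf);
    }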

Then there's the case of SSL. In my years of doing security audits, I've learned that few developers understand how to use SSL properly. They tend to view it as a drop-in replacement for traditional sockets--just rewrite your Berkeley calls with the corresponding SSL API calls!

Unfortunately that's not how SSL works. In order for an SSL connection to provide confidentiality, the client and server must authenticate each other. Most SSL APIs do not perform authentication by default, due to the wide variety of authentication requirements. In fact, proper authentication often requires pages and pages of code (see the Secure Programming Cookbook for details).
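To make the point concrete, here is a rough sketch of what a client has to do beyond calling SSL_connect(), using the OpenSSL API of that era. The socket, the CA file name, and the simple common-name comparison are assumptions made for illustration only; real code should also check subjectAltName entries and clean up properly on error, as the Cookbook's recipes do.

    #include <string.h>
    #include <strings.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    /* Sketch of the verification steps a "drop-in" SSL_connect() call skips.
     * Assumes sockfd is an already-connected TCP socket and "trusted-cas.pem"
     * holds the CAs we trust; error-path cleanup is omitted for brevity. */
    int ssl_client_handshake(int sockfd, const char *expected_host)
    {
        SSL_CTX *ctx;
        SSL     *ssl;
        X509    *peer;
        char     peer_cn[256];

        SSL_library_init();

        ctx = SSL_CTX_new(SSLv23_client_method());
        if (ctx == NULL)
            return -1;

        /* 1. Say which CAs we trust, and require a verified certificate chain. */
        if (!SSL_CTX_load_verify_locations(ctx, "trusted-cas.pem", NULL))
            return -1;
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

        ssl = SSL_new(ctx);
        SSL_set_fd(ssl, sockfd);
        if (SSL_connect(ssl) != 1)
            return -1;

        /* 2. Confirm that chain verification actually succeeded. */
        if (SSL_get_verify_result(ssl) != X509_V_OK)
            return -1;

        /* 3. Make sure a certificate was presented and that its identity matches
         * the host we meant to reach; otherwise any valid certificate from any
         * server would be accepted. */
        peer = SSL_get_peer_certificate(ssl);
        if (peer == NULL)
            return -1;
        if (X509_NAME_get_text_by_NID(X509_get_subject_name(peer),
                                      NID_commonName, peer_cn,
                                      sizeof(peer_cn)) < 0) {
            X509_free(peer);
            return -1;
        }
        X509_free(peer);
        if (strcasecmp(peer_cn, expected_host) != 0)
            return -1;

        return 0;   /* encrypted and authenticated */
    }

Without steps 1 through 3, the connection is encrypted but effectively anonymous, which is exactly what leaves these programs open to man-in-the-middle eavesdropping and tampering.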

Developers usually haven't heard that; when they do, they tend to become defensive. Sometimes they'll insist that there's no problem until someone can actually demonstrate a working attack. That is, instead of fixing potential problems and moving on, they'll try to force security auditors to spend hours of precious time demonstrating exploitability. This actually tends to be more of a problem in the open source world than in the commercial world, because commercial projects typically are driven more by schedules. Managers are often already worried about sticking to the schedule and will push developers to just fix the issue and move on, even if the developers question the audit results. In the open source world, developers tend to be quite independent, even when people are managing a project. With such projects, it's rare that anyone worries about a negative impact should a release take a bit longer to ship.

The Market for Secure Software

Ultimately, building secure software well is incredibly challenging for the average developer. There is simply too much to know in order to master all the issues that might lead to security risks, particularly considering the obscure libraries that might be misused and the arcane things that can go wrong when adding cryptography to an application. Developers shouldn't be quick to assume that they have security issues well in hand.

Certainly, commercial software organizations can fall into similar traps by assuming that they have security under control. However, commercial organizations are more likely to take security seriously, simply because they are more likely to have paying customers demanding some level of security assurance.

Many market segments have not only identified security as a big concern but also realized that the root causes are better confronted by the development team, not network engineers. In particular, financial companies and government organizations are looking for assurance that the software they acquire is likely to jump some basic security bar.

People who want to sell software to organizations in these markets have to answer tough questions about the security properties of their software. Many times, vendors must fill out extensive documentation about their products and the processes and technologies used to build them. Sometimes, they must even submit their software to independent third-party source code auditing before a purchase goes through. For instance, the U.S. Navy is currently working on the prototype of a process that all vendors will go through before they may sell software for use on the Navy's intranet.

Of course, the customer won't pay for software security assurance; customer demand will simply force software suppliers to address the issue at their own expense. Even the Navy's prototype requires that vendors pay for their assessments.

Who will pay for open source software assessments? Ultimately the highest-profile open source software may go through the wringer if behemoths like IBM think it's worth the cost. Smaller projects are unlikely to receive that kind of treatment.

Security assurance isn't just about assessments, either, and the security-aware customer knows it. It's a worn but true cliché that security can't be bolted onto an application after the fact. It needs to be considered from the beginning. A positive, cost-effective change in security requires a change in process.

Customer pressure is starting to have a big impact on the development processes of software organizations. For example, Microsoft may have traditionally had a bad reputation for security, and that reputation may even hang over it still. Nonetheless, for the past two years it has made a dramatic effort toward improving software security throughout the organization. It doesn't just conduct security assessments of software after it's written. It conducts extensive architectural security assessments of every piece of software; it has its testing teams perform thorough security testing; it puts all its developers through awareness programs; and it holds developers accountable for their security mistakes. Microsoft realizes that it will be years before these changes have a huge impact on deployed software, and perhaps years more before they change industry perceptions, yet it has made a tremendous long-term investment in order to reach that point eventually. Many other big software vendors, such as Oracle, are just as serious about this issue.

These organizations have spent a lot of money improving their security processes, knowing that if they address security problems earlier in the life cycle, reliability goes up and, in the long term, cost goes down. Because open source projects aren't driven by traditional financial concerns, they're less likely to meet the needs of the market. Open source projects tend to focus on what the development community finds interesting rather than on what potential users find important. The lack of financial incentive results in a relatively poor feedback mechanism.

That's one big reason why, even though some government and financial organizations have begun to adopt open source software, many remain skeptical or cautious about going down that path, adopting only the most prominent, well-vetted open source projects (such as Apache, a rare bird in the open source world in that it actually has an appointed security officer responsible for the security posture of the software). Whether true or not, there's a reasonably widespread perception that open source developers are either part-timers or kids who focus on providing great functionality but sacrifice reliability as a result.

What Should the Community Do?

I believe that in the long run, open source software does have the potential to be more secure than closed systems, since open source projects can do everything commercial projects can. When high-quality analysis tools become more common, hopefully the "many eyeballs" phenomenon will work. Still, when it comes to security, money looks like it will be a big catalyst for positive change--and the open source community is largely insulated from it.

If the open source world wants to make sure that security is not an impediment to adoption, then it needs a strong answer. First, open source projects need to migrate to software engineering processes that resonate with the industry. Most projects are devoid of structured process. The projects that do follow one tend to take the Extreme Programming approach, which may end up being too liberal for many buyers.

Of course, too much process can be worse than too little. Plenty of commercial software projects have failed because they spent too much time building models instead of software. Yet, particularly for the purposes of outside security review, documenting software architecture can help tremendously. Conservative customers will feel much better about work where they can see that security is an ongoing concern, not something handled in an ad hoc way.

A little bit of process can go a long way here. Produce a sketch of your system architecture, and label network connections with how you handle things like confidentiality, message and entity authentication, and input validation. Keep it updated as you build the system. Make it easy to see from the code that you actually implemented the design. Just showing that you thought about security issues from the beginning and followed through on them will make many people far more comfortable.

Second, open source projects need to improve security awareness across the board, integrating security into their development in much the same way that Microsoft is doing. Particularly, open source projects need to produce relevant artifacts such as security architecture documents that demonstrate a careful response to security risks. Additionally, the open source community will need to focus on other security-aware functionality, such as better audit trails for anything that might contain financial data, as Sarbanes-Oxley and other similar legislation is becoming a major factor in many acquisition processes.

Third, the open source world needs to recognize the importance of independent, third-party auditing. The community needs to develop security auditing expertise that it applies to projects, particularly as better automation and educational material become available. The community should be on the lookout for creative arrangements to persuade the industry to pay for independent security audits and certifications, as such matters are highly likely to grow in importance and are unlikely to be free.

Comparing all open source software with all commercial software is tough. Certainly, when it comes to security, there are good cases and disasters in each camp. I do believe that from a security point of view, Apache is probably better off than Microsoft's IIS and that djbdns is better off than almost anything competitive. While I do think the open source community has a long way to go in general, I don't think it is necessarily worse on the whole. I would evaluate it only on a case-by-case basis.

In the end it doesn't matter if open source systems tend to be more secure than proprietary systems, because on the whole they aren't yet coming close to being "secure enough."

John Viega is CTO of the SaaS Business Unit at McAfee and the author of many security books, including Building Secure Software (Addison-Wesley), Network Security with OpenSSL (O'Reilly), and the forthcoming Myths of Security (O'Reilly).

