Open Source Security: Still a Myth

Bonfire of the Vanities

The people you want looking at your free software are, for the most part, doing other things with their time, such as auditing commercial software. Still, those in the open source community never seem too worried about security in their own software. Every open source developer under the sun seems to think he understands security well enough to avoid security issues, even if he isn't too surprised when other developers prove less competent; plenty of security advisories for open source applications come out every week.

A disturbing trend I've noted is that many developers (open source or not) will pick up a little about security and think they know a lot. For example, when developers learn about buffer overflows, they generally wonder how an overflow can allow attackers to execute arbitrary code. In looking for an answer, they'll usually find out all about stack overflows: how easy it is to overwrite a stored address that the program will soon jump through, pointing it at attack code. A few times I've seen such developers respond by taking all their stack-allocated buffers and turning them into heap-allocated ones (that is, buffers allocated with malloc). They tend to think there's nothing critically exploitable for an attacker to overwrite on the heap.
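
In hypothetical code (not from any particular project), the "fix" amounts to this before-and-after:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical sketch: moving the buffer off the stack. */

    void handle_stack(const char *input)
    {
        char buf[64];               /* stack-allocated */
        strcpy(buf, input);         /* no bounds check: input longer than
                                       63 bytes overwrites adjacent stack
                                       data, including the saved return
                                       address */
    }

    void handle_heap(const char *input)
    {
        char *buf = malloc(64);     /* the supposed fix */
        if (buf == NULL)
            return;
        strcpy(buf, input);         /* the overflow is still here */
        free(buf);
    }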

Usually they're wrong. Even if no security-sensitive, application-specific data lives on the heap, and even if no function pointers are stored there that will later be dereferenced, there is usually something called the global offset table (GOT), which helps position-independent code access global data. The GOT contains function pointers that real programs dereference all the time.
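
Here's the same hypothetical overflow on the heap, with a comment marking where such a pointer gets dereferenced; the precise corruption path depends on the platform and allocator:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        char *buf = malloc(64);
        if (buf == NULL || argc < 2)
            return 1;

        strcpy(buf, argv[1]);   /* heap overflow under attacker control */

        /* In a typical position-independent binary, this printf() call is
           dispatched through a function-pointer slot in the GOT. Corrupt
           the right memory -- allocator metadata, adjacent objects, or,
           via classic unlink-style attacks, an arbitrary word such as a
           GOT entry -- and this "harmless" call jumps to attack code. */
        printf("%s\n", buf);
        free(buf);
        return 0;
    }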

Then there's the case of SSL. In my years of doing security audits, I've learned that few developers understand how to use SSL properly. They tend to view it as a drop-in replacement for traditional sockets: just replace your Berkeley socket calls with the corresponding SSL API calls!

Unfortunately that's not how SSL works. In order for an SSL connection to provide confidentiality, the client and server must authenticate each other. Most SSL APIs do not perform authentication by default, due to the wide variety of authentication requirements. In fact, proper authentication often requires pages and pages of code (see the Secure Programming Cookbook for details).
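
As a rough illustration of what even the bare minimum looks like, here is a client-side sketch using OpenSSL. It assumes OpenSSL 1.1.0 or later (where library initialization is automatic) and a trusted CA bundle at the hypothetical path "ca-bundle.pem"; real code needs considerably more error handling than this:

    #include <openssl/ssl.h>
    #include <openssl/x509v3.h>

    /* Sketch: a TLS client connection that actually authenticates the
       server. "sock" must already be a connected TCP socket. Returns the
       SSL handle on success, NULL on failure (for brevity, the error
       paths leak ctx and ssl). */
    SSL *authenticated_connect(int sock, const char *hostname)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL)
            return NULL;

        /* Refuse to finish the handshake unless the server presents a
           certificate chain that verifies against our trust anchors. */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
        if (SSL_CTX_load_verify_locations(ctx, "ca-bundle.pem", NULL) != 1)
            return NULL;

        SSL *ssl = SSL_new(ctx);
        if (ssl == NULL)
            return NULL;
        SSL_set_fd(ssl, sock);
        if (SSL_connect(ssl) != 1)
            return NULL;

        /* Chain verification alone is not enough: also check that the
           certificate was issued to the host we meant to talk to.
           Without this, any holder of any valid certificate can
           impersonate the server. */
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert == NULL)
            return NULL;
        int ok = X509_check_host(cert, hostname, 0, 0, NULL);
        X509_free(cert);
        return (ok == 1) ? ssl : NULL;
    }

Even this omits concerns such as certificate revocation checking, which is part of why so few applications get it right.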

Developers usually haven't heard this; when they do, they tend to become defensive. Sometimes they'll insist that there's no problem until someone can actually demonstrate a working attack. That is, instead of fixing potential problems and moving on, they'll force security auditors to spend hours of precious time demonstrating exploitability. This tends to be more of a problem in the open source world than in the commercial world, because commercial projects are typically driven by schedules. Managers who are already worried about shipping on time will usually railroad developers into fixing the findings and moving on, even if the developers question the audit results. In the open source world, developers tend to be quite independent, even on projects with active maintainers, and it's rare that anyone worries much if a release takes a bit longer to ship.

The Market for Secure Software

Ultimately, building secure software is incredibly challenging for the average developer. There is simply too much to know in order to master all the issues that might lead to security risks, particularly considering the obscure ways libraries can be misused and the arcane things that can go wrong when adding cryptography to an application. Developers shouldn't be quick to assume that they have security issues well in hand.

Certainly, commercial software organizations can fall into similar traps by assuming that they have security under control. However, commercial organizations are more likely to take security seriously, simply because they are more likely to have paying customers demanding some level of security assurance.

Many market segments have not only identified security as a big concern but have also realized that the root causes are better confronted by the development team than by network engineers. In particular, financial companies and government organizations are looking for assurance that the software they acquire clears some basic security bar.

People who want to sell software to organizations in these markets have to answer tough questions about the security properties of their software. Many times, vendors must fill out extensive documentation about their products and the processes and technologies used to build them. Sometimes they must even submit their software to independent third-party source code auditing before a purchase goes through. For instance, the U.S. Navy is currently prototyping a process that all vendors will go through before they may sell software for use on the Navy's intranet.

Of course, the customer won't pay for software security directly; customer demand will force software suppliers to absorb the cost of addressing the issue. Even the Navy's prototype requires that vendors pay for their own assessments.

Who will pay for open source software assessments? Ultimately the highest-profile open source software may go through the wringer if behemoths like IBM think it's worth the cost. Smaller projects are unlikely to receive that kind of treatment.

Security assurance isn't just about assessments, either, and the security-aware customer knows it. It's a worn but true cliché that security can't be bolted onto an application after the fact. It needs to be considered from the beginning. A positive, cost-effective change in security requires a change in process.

Customer pressure is starting to have a big impact on the processes of software development organizations. For example, Microsoft has traditionally had a bad reputation for security, and that reputation may still hang over it. Nonetheless, for the past two years it has made a dramatic effort to improve software security throughout the organization. It doesn't just conduct security assessments of software after it's written: it conducts extensive architectural security assessments of every piece of software, has its testing teams perform thorough security testing, puts all its developers through awareness programs, and holds developers accountable for their security mistakes. Microsoft realizes that it will be years before this effort has a huge impact on deployed software, and perhaps years more before it changes industry perceptions, yet it has made a tremendous long-term investment in order to get there eventually. Many other big software vendors, such as Oracle, are just as serious about this issue.

These organizations have spent a lot of money improving their security processes, knowing that if they address security problems earlier in the life cycle, reliability goes up and, in the long term, cost goes down. Because open source projects aren't driven by traditional financial concerns, they're less likely to meet the needs of the market. Open source projects tend to focus on what the development community finds interesting rather than on what potential users find interesting. The lack of financial incentive makes for a relatively poor feedback mechanism.

That's one big reason why, even though some government and financial organizations have begun to adopt open source software, many remain skeptical or cautious about going down that path, adopting only the most prominent, well-vetted open source projects. (Apache is a rare bird in the open source world in that it actually has an appointed security officer responsible for the security posture of the software.) Whether true or not, there's a reasonably widespread perception that open source developers are either part-timers or kids who focus on providing great functionality but sacrifice reliability as a result.

What Should the Community Do?

I believe that in the long run, open source software does have the potential to be more secure than closed systems, since open source projects can do everything commercial projects can. When high-quality analysis tools become more common, the "many eyeballs" phenomenon may finally start to work. Still, when it comes to security, money looks like it will be the big catalyst for positive change--and the open source community is largely insulated from it.

If the open source world wants to make sure that security is not an impediment to adoption, it needs a strong answer. First, open source projects need to migrate to software engineering processes that resonate with the industry. Most projects have no structured process at all; those that do tend to take the Extreme Programming approach, which may prove too liberal for many buyers.

Of course, too much process can be worse than too little. Plenty of commercial software projects have failed because their teams spent too much time building models instead of software. Yet, particularly for the purposes of outside security review, documenting software architecture can help tremendously. Conservative customers will feel much better about work where they can see that security was an ongoing concern, not something handled in an ad hoc way.

A little bit of process can go a long way here. Produce a sketch of your system architecture, and label network connections with how you handle things like confidentiality, message and entity authentication, and input validation. Keep it updated as you build the system. Make it easy to see from the code that you actually implemented the design. Just showing that you thought about security issues from the beginning and followed through on them will make many people far more comfortable.
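
For example, one labeled connection in such a sketch might look like this (a hypothetical entry; substitute whatever mechanisms your system actually uses):

    Connection: browser -> order server (TCP port 443)
      Confidentiality:         TLS, server authenticated against our CA
      Entity authentication:   per-user password, sent over TLS only
      Message authentication:  provided by the TLS record layer
      Input validation:        all fields checked server-side against
                               a whitelist before use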

Second, open source projects need to improve security awareness across the board, integrating security into their development in much the same way that Microsoft is doing. In particular, open source projects need to produce relevant artifacts, such as security architecture documents, that demonstrate a careful response to security risks. Additionally, the open source community will need to focus on other security-aware functionality, such as better audit trails for anything that might hold financial data, as Sarbanes-Oxley and similar legislation becomes a major factor in many acquisition processes.

Third, the open source world needs to recognize the importance of independent, third-party auditing. The community needs to develop security auditing expertise that it applies to projects, particularly as better automation and educational material become available. The community should be on the lookout for creative arrangements to persuade the industry to pay for independent security audits and certifications, as such matters are highly likely to grow in importance and are unlikely to be free.

Comparing all open source software with all commercial software is tough. Certainly, when it comes to security, there are good cases and disasters in each camp. I do believe that, from a security point of view, Apache is probably better off than Microsoft's IIS and that djbdns is better off than almost anything competing with it. While I think the open source community has a long way to go in general, I don't think it is necessarily worse on the whole; I would evaluate software only on a case-by-case basis.

In the end it doesn't matter if open source systems tend to be more secure than proprietary systems, because on the whole they aren't yet coming close to being "secure enough."

John Viega is CTO of the SaaS Business Unit at McAfee and the author of many security books, including Building Secure Software (Addison-Wesley), Network Security with OpenSSL (O'Reilly), and the forthcoming Myths of Security (O'Reilly).

