Seven Common SSL Pitfalls
5. Poor Certificate Validation.
CRLs aren't useful if the client software isn't first adequately validating the server certificate. Certainly, for SSL to work at all, the client must be able to extract the public key from a presented certificate, and the server must have a private key that corresponds with that public key. But, without certificate checking, there is nothing to verify that the client has the right public key, meaning that it could be talking to a "man-in-the-middle" instead of the server.
In order to perform adequate certificate validation, the client must have an implicit notion of the minimal set of certificates that should be considered valid, and must be able to check whether a presented certificate is in that set.
A common solution is to trust everything signed by a particular Certification Authority. However, anyone can get a certificate signed by a Certification Authority, sometimes under false pretenses. If you rely on the authority's signature alone for authentication, you're likely vulnerable to man-in-the-middle attacks. Another solution is to use a hard-coded list of valid certificates. Unfortunately, this solution doesn't scale well as a system grows.
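The hard-coded-list approach is usually implemented as certificate "pinning": keep a set of digests of known-good certificates and compare the presented certificate against it. Here is a minimal sketch in Python (the article's book works in C with OpenSSL; the function name and the digest set are illustrative, not from the original):

```python
import hashlib

def cert_is_pinned(der_cert, pinned_digests):
    """Return True iff the SHA-256 digest of the DER-encoded server
    certificate appears in a hard-coded set of known-good digests."""
    return hashlib.sha256(der_cert).hexdigest() in pinned_digests
```

Because the check is against exact certificates rather than a CA's signature, every certificate rotation requires shipping an updated list, which is exactly the scaling problem noted above.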
A more solid approach is to only accept a subset of certificates signed by a particular Certification Authority. Digital certificates have a wealth of information in them, such as the entity that is associated with the certificate, and a DNS name associated with that entity. The Certification Authority signs that information, so as long as the authority's signature is valid, the information should be a reasonable differentiator.
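As a rough modern illustration of this approach, Python's `ssl` module (a wrapper around OpenSSL) can be told to accept only certificates that chain to a chosen CA bundle and whose embedded DNS name matches the host being contacted. This is a sketch, not the book's C code; the function name and the optional bundle path are assumptions:

```python
import ssl

def make_verifying_context(ca_bundle_path=None):
    """Build a client-side TLS context that rejects any server
    certificate not chaining to the given CA bundle (or the system
    default store), and checks the certificate's DNS name against
    the host we connect to."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if ca_bundle_path is not None:
        ctx.load_verify_locations(cafile=ca_bundle_path)
    else:
        ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)
    ctx.check_hostname = True             # compare cert's DNS name to host
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject unverifiable certificates
    return ctx
```

The hostname check is what narrows "everything this CA signed" down to "the certificate issued for this particular server," which is the differentiator the paragraph above describes.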
Another approach is to run your own Certification Authority. That's a lot of work, and not recommended for most people. However, we discuss the tools you need in order to do this in the book Network Security with OpenSSL.
6. Poor Entropy.
In the SSL protocol, both the client and the server need to generate random data for keys and other secrets. The data must be generated in such a way that a knowledgeable attacker cannot guess anything about it. Numbers produced from traditional pseudo-random number generators are not sufficient for security. OpenSSL does have its own cryptographic pseudo-random number generator, but it only provides security if it has been "seeded" with enough random information.
A seed is a piece of data fed to the generator to get it going. Given a single, known seed at startup, the generator will produce a stream of predictable outputs. The seed itself must be a random number, but it needs to be a truly unguessable piece of data of sufficient length to thwart any possible guessing attacks. Usually, you'll need at least 128 bits of data, where each bit is just as likely a 0 as it is a 1.
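On a modern system, 128 bits of unguessable seed material can be drawn from the operating system's cryptographic random number generator; a one-line Python sketch (the variable name is illustrative):

```python
import secrets

# Draw 16 bytes (128 bits) of seed material from the OS's
# cryptographic RNG; each bit is unpredictable to an attacker.
seed = secrets.token_bytes(16)
```

In C, the equivalent is typically reading from the platform's random device or calling the OS's native RNG interface.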
If you try to use OpenSSL without seeding the random number generator, the library will complain. However, the library has no real way to know whether the seed you give it contains enough entropy. Therefore, you must have some idea of how to get entropy. There are hardware devices that do a good job of collecting it, including most of the cryptographic accelerator cards. However, in many cases hardware is impractical because your software will be deployed across a large number of clients, most of which will have no access to such devices.
Many Unix-based operating systems now come with a random device that provides entropy harvested by the operating system. On other Unix systems and on Windows NT systems, you can use tools such as EGADS, which is a portable entropy collection system.
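Python's `ssl` module exposes OpenSSL's seeding interface directly, so the "harvest from the OS, feed to OpenSSL" step can be sketched as follows (the 32-byte size is an arbitrary choice, not from the original):

```python
import os
import ssl

# os.urandom reads the OS entropy source (/dev/urandom on Unix,
# the system RNG on Windows). Mix 32 bytes into OpenSSL's PRNG
# pool, declaring 32 bytes' worth of entropy in the second argument.
seed = os.urandom(32)
ssl.RAND_add(seed, 32.0)
```

After seeding, `ssl.RAND_status()` reports whether OpenSSL considers its generator sufficiently seeded; as the text notes, the library cannot verify that your claimed entropy estimate is honest.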
7. Insecure Cryptography.
While version 3 of the SSL protocol and TLS are believed to be reasonably secure if used properly, SSLv2 (version 2) had fundamental design problems that led to wide-ranging changes in subsequent versions (version 1 was never publicly deployed). For this reason, you should not support version 2 of the protocol. Disabling it ensures that an attacker cannot launch a downgrade attack that causes the client and server to settle on the insecure version. Such an attack is easy to mount: the attacker intercepts the connection request and sends a response that makes it look like a v3 server does not exist, and the client then falls back to connecting with version 2 of the protocol.
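In today's terms, the advice translates to pinning a protocol floor on the TLS context, which rules out legacy versions entirely. A minimal sketch with Python's `ssl` module (modern OpenSSL builds no longer even compile in SSLv2, but an explicit floor makes the intent enforceable):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Refuse to negotiate anything below TLS 1.2; SSLv2 and SSLv3
# are thereby excluded, so a downgrade attack has nothing to
# downgrade to.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```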
Additionally, you should avoid small key lengths and, to a lesser degree, algorithms that aren't well regarded. Forty-bit keys are never secure and neither is 56-bit DES. Nonetheless, it's common to see servers that only support these weak keys due to old U.S. export regulations that no longer apply.
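Weak, export-grade suites are excluded with OpenSSL's cipher-list syntax; the exact string below is one plausible policy, not a prescription from the original article:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Start from the HIGH-strength group, then explicitly exclude
# unauthenticated (aNULL), unencrypted (eNULL), export-grade,
# low-strength, and single-DES suites.
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!LOW:!DES")
```

`set_ciphers` raises an error if the string leaves no usable suites, so a server misconfigured down to nothing fails loudly rather than silently accepting weak keys.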
As for an individual algorithm choice in SSL, RC4 and 3DES are both excellent solutions. RC4 is much faster, and 3DES is more conservative. Soon, TLS will be standardizing on AES, at which time TLS will be widely regarded as a good choice.
John Viega, Matt Messier, and Pravir Chandra are the authors of Network Security with OpenSSL (O'Reilly). John Viega is CTO of the SaaS Business Unit at McAfee and the author of many security books, including Building Secure Software (Addison-Wesley) and the forthcoming Myths of Security (O'Reilly).
O'Reilly & Associates released Network Security with OpenSSL in June 2002.
Sample Chapter 1, Introduction, is available free online in PDF format.