Using Penetration Testing to Identify Management Issues
04/08/2004
Note: Bob Ayers wrote a thought-provoking foreword for my book, Network Security Assessment, detailing network attack and penetration techniques in line with U.K. and U.S. government standards. I have slightly modified Bob's foreword for the book to present here in article form. -- Chris McNab
After managing the performance of over 20,000 infrastructure and application penetration tests and vulnerability assessment exercises, I have come to realize the importance of technical testing in providing information security assurance. The purpose of conducting those tens of thousands of penetration tests during my 20-plus years in information systems security was "to identify technical vulnerabilities in the tested system in order to correct the vulnerability or mitigate any risk posed by it." In my opinion, this is a clear, concise, and perfectly wrong reason to conduct penetration testing.
Vulnerabilities and exposures in most environments come about through poor systems management -- patches not installed in a timely fashion, weak password policy, poor access control, and so on. Therefore, the principal objective of penetration testing should be to identify and correct the underlying systems management process failures that produced the vulnerabilities detected by the test. The most common of these systems management process failures exist in the following areas:
- System software configuration
- Applications software configuration
- Software maintenance
- User management and administration
Unfortunately, many IT security consultants provide detailed lists of specific test findings but never attempt the higher-order analysis needed to answer the question of "why." This failure to identify and correct the underlying management causes of the test findings ensures that, when the consultant returns to test the client again in six months, a whole new set of findings will appear.
Several years ago, my company conducted a series of penetration tests for a very large international client. The client was organized regionally. IT security policy was issued centrally and implemented regionally. We mapped the technical results to the following management categories:
OS Configuration: Vulnerabilities due to improperly configured operating system software.
Software Maintenance: Vulnerabilities due to failure to apply patches to known vulnerabilities.
Password/Access Control: Failure to comply with password policy; improper access control settings.
Malicious Software: Existence of malicious software (Trojans, worms, and so on) or evidence of its use.
Dangerous Services: Existence of vulnerable or easily exploited services or processes.
Application Configuration: Vulnerabilities due to improperly configured applications.
We then computed the average number of security assessment findings per 100 systems tested for the total organization and produced the following chart, shown in Figure F-1.
Figure F-1. Average vulnerabilities by management category.
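The normalization described above is straightforward to reproduce. The following sketch, using entirely hypothetical findings and an assumed count of systems tested, shows one way to roll raw scanner findings up into per-100-systems rates by management category:

```python
from collections import Counter

# Hypothetical raw findings as (host, category) pairs; the category names
# follow the six management categories described above.
findings = [
    ("10.0.1.5", "Software Maintenance"),
    ("10.0.1.5", "OS Configuration"),
    ("10.0.2.9", "Password/Access Control"),
    ("10.0.3.12", "Software Maintenance"),
    ("10.0.3.12", "Dangerous Services"),
]

systems_tested = 250  # assumed total number of systems in the exercise

# Average findings per 100 systems tested, broken down by category.
counts = Counter(category for _, category in findings)
rates = {cat: n * 100 / systems_tested for cat, n in counts.items()}

for cat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {rate:.1f} findings per 100 systems")
```

Charted per category, these rates give exactly the kind of summary shown in Figure F-1.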
We then compared the performance of each region against the corporate average. The results were quite striking, as shown in Figure F-2 (bars above the average are bad, indicating more findings than the corporate average).
Figure F-2. Regional comparisons against the corporate average.
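The regional comparison is simply each region's rate minus the corporate average, category by category. A minimal sketch, again with hypothetical rates and region names, might look like this:

```python
# Hypothetical corporate-average rates (findings per 100 systems) by category.
corporate_avg = {"Software Maintenance": 12.0, "Dangerous Services": 8.0}

# Hypothetical per-region rates for the same categories.
regions = {
    "Region 1": {"Software Maintenance": 10.0, "Dangerous Services": 15.0},
    "Region 3": {"Software Maintenance": 22.0, "Dangerous Services": 6.0},
}

# Positive delta = more findings than the corporate average = worse performance.
deltas = {
    region: {cat: rate - corporate_avg[cat] for cat, rate in cats.items()}
    for region, cats in regions.items()
}

for region, cats in sorted(deltas.items()):
    for cat, delta in cats.items():
        flag = "WORSE" if delta > 0 else "better"
        print(f"{region} / {cat}: {delta:+.1f} vs. average ({flag})")
```

Plotting the deltas per region reproduces the above/below-average bars of Figure F-2, making it obvious at a glance which regional processes are failing.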
This chart shows clearly discernible and quantifiable differences in the effectiveness of security management in each of the regions. For example, the IT manager in Region 3 clearly was not performing software maintenance or password/access control management, and the IT manager in Region 1 was failing to remove unneeded services from his systems.
It is important that, as you perform technical security testing of networks and systems, you place vulnerabilities and exposures into categories and look at them in a new light. You can present a report to a client that fully documents the low-level technical issues at hand, but unless the underlying high-level mismanagement issues are tackled, network security will not improve, and different incarnations of the same vulnerabilities will be found later on. It is vital that you always ask the question -- "Why are these vulnerabilities present?" -- and improve the fundamental system and network management processes that are at fault, in order to ensure security into the future.
Bob Ayers is currently the Director for Critical Infrastructure Defense with a major IT company based in the U.K. Previously, Bob worked for 29 years with the U.S. Department of Defense (DoD). His principal IT security assignments were with the Defense Intelligence Agency (DIA), where he served as the Chief of the DoD Intelligence Information System (DoDIIS). During this assignment, Bob developed and implemented new methodologies to ensure the security of more than 40,000 computers processing highly classified intelligence information. Bob also founded the DoD computer emergency response capability, known as the Automated Systems Security Incident Support Team (ASSIST). Recognized for his work in DoDIIS, Bob was selected by the U.S. Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to create and manage a 155-person, $100M/year DoD-wide program to improve all aspects of DoD IT security. Prior to leaving government service, Bob was the Director of the U.S. DoD Defensive Information Warfare program.
O'Reilly & Associates published Network Security Assessment in January 2004.
Chapter 4, "IP Network Scanning," is available free online.