February 01, 2010, 6:19 PM — While I was reviewing a whitepaper titled Fuzzing Challenges: Metrics and Coverage, it struck me that the topic deserves a wider analysis from the perspective of penetration testing. All the same metrics seem to apply to a good technical pentest. After all, most penetration testers will pull a fuzzing tool of their choice from their toolkit when they come to do an audit.
My goal here is not to go through the whitepaper, but to look at it from an enterprise perspective. As a consumer of IT services and devices, you do not necessarily look at coverage from the same perspective as a network equipment manufacturer would. Your mindset is probably closer to that of a risk analyst. So let's look at coverage and metrics from a risk perspective.
Attack Surface is a term commonly used with two different meanings. For a software developer, attack surface can mean the actual lines of code that can be touched through a hostile network interface. But for system integrators or enterprise end-users, attack surface is often at a much higher level. When identifying the attack surface of an IT system, you look at devices and protocol interfaces. Attack surface is often limited to just identifying the critical network elements that can be attacked from untrusted sources by untrusted parties. A simple network scan is not enough to identify the high-level attack surface, as you also need to identify all client software that can be attacked. The risk analysis part of attack surface is focused on prioritizing the interfaces: some need better testing, others can do with less comprehensive tests.
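Prioritizing the interfaces can be as simple as scoring each one on exposure, complexity, and criticality, and spending your test budget from the top of the list down. Here is a minimal sketch of that idea; the interface list, field names, and scoring weights are my own illustrative assumptions, not anything from the whitepaper.

```python
# Crude attack-surface prioritization: score each enumerated interface and
# sort by score. The weights below are illustrative assumptions.

def priority(iface):
    """Exposure to untrusted parties weighs most, then protocol
    complexity, then whether the element is business-critical."""
    score = 0
    if iface["untrusted_source"]:
        score += 3
    score += iface["complexity"]  # 1 (simple) .. 3 (complex)
    if iface["critical"]:
        score += 2
    return score

# A hypothetical high-level attack surface: devices and protocol interfaces,
# including client software a plain network scan would miss.
interfaces = [
    {"name": "public HTTPS front end", "untrusted_source": True,  "complexity": 3, "critical": True},
    {"name": "email client (employee desktops)", "untrusted_source": True, "complexity": 2, "critical": False},
    {"name": "internal SNMP management", "untrusted_source": False, "complexity": 2, "critical": False},
    {"name": "printer admin page", "untrusted_source": False, "complexity": 1, "critical": False},
]

for iface in sorted(interfaces, key=priority, reverse=True):
    print(f"{priority(iface)}  {iface['name']}")
```

The point of even a toy model like this is that the ranking, not the absolute numbers, drives the decision of which interfaces get comprehensive fuzzing and which get a lighter pass.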
Specification Coverage can appear simple, but in real life it often isn't. Look, for example, inside your TLS VPN, and you might be surprised that it also supports legacy (broken) versions of SSL. The same applies to SSH servers, web servers, web browsers, email clients, and so on. Unless you have been diligent in minimizing the available features in all the devices you use, each interface in communication software often requires testing at a number of different protocol layers and protocol versions. From a risk analysis perspective, think about probabilities: which specifications are most complex, which have unused functionality, and where most of the vulnerabilities can be expected to hide. Surprisingly, it is often security protocols such as encryption where the complexity is overwhelming and mistakes are most probable. Many bugs also hide in features that are never used and never seen on the network.
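Checking this kind of version sprawl can be automated once you have enumerated what an interface actually accepts (for instance with a scanner such as openssl s_client run per protocol version). The sketch below assumes that enumeration has been done and simply diffs the result against a deny-list; the version groupings are my own illustrative assumptions.

```python
# Flag protocol versions that silently widen the attack surface of a
# "TLS" interface. Version lists are illustrative assumptions; collect
# the actual supported set with an external scanner first.

LEGACY = {"SSLv2", "SSLv3"}          # broken, should never be enabled
DEPRECATED = {"TLSv1.0", "TLSv1.1"}  # legacy versions worth questioning

def audit_versions(supported):
    """Return the supported versions that should trigger extra testing
    (or, better, removal from the configuration)."""
    supported = set(supported)
    return {
        "broken": sorted(supported & LEGACY),
        "deprecated": sorted(supported & DEPRECATED),
    }

# A hypothetical TLS VPN that still accepts SSLv3 alongside newer versions:
print(audit_versions(["SSLv3", "TLSv1.0", "TLSv1.2"]))
# → {'broken': ['SSLv3'], 'deprecated': ['TLSv1.0']}
```

Each version the audit flags is either a feature to strip, shrinking the specification coverage you need, or another protocol variant your fuzzing has to cover.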