Security Testing: It Is About Coverage

Bells and whistles do not find bugs

While reviewing a whitepaper titled Fuzzing Challenges: Metrics and Coverage, I thought the topic deserved a wider analysis from the perspective of penetration testing. The same metrics seem to apply to a good technical pentest. After all, most penetration testers will pull a fuzzing tool of their choice from their toolkit when conducting an audit.

My goal here is not to walk through the whitepaper, but to look at it from an enterprise perspective. As a consumer of IT services and devices, you do not necessarily look at coverage the same way a network equipment manufacturer would; your mindset is probably closer to a risk analyst's. So let's look at coverage and metrics from a risk perspective.

Attack surface is a term used with two different meanings. For a software developer, the attack surface can mean the actual lines of code that can be reached through a hostile network interface. For system integrators and enterprise end users, the attack surface sits at a much higher level: when identifying the attack surface of an IT system, you look at devices and protocol interfaces. Often the exercise is limited to identifying the critical network elements that can be attacked from untrusted sources by untrusted parties. A simple network scan is not enough to identify even this high-level attack surface, as you also need to identify all client software that can be attacked. The risk-analysis part of the exercise is prioritizing the interfaces: some need thorough testing, others can do with less comprehensive tests.
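That prioritization step can be sketched as a simple scoring exercise. The data model and ranking rule below are illustrative assumptions, not a standard methodology: interfaces facing untrusted parties rank first, then a rough complexity estimate breaks ties, and client software counts as part of the surface alongside server ports.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    """One element of the high-level attack surface (names are hypothetical)."""
    name: str
    kind: str           # "server" or "client" -- clients count too
    untrusted: bool     # reachable by (or talking to) untrusted parties
    complexity: int     # rough 1-5 estimate of protocol complexity

def prioritize(surface):
    """Rank interfaces for test effort: untrusted-facing first,
    then by descending protocol complexity (illustrative heuristic)."""
    return sorted(surface, key=lambda i: (not i.untrusted, -i.complexity))

surface = [
    Interface("intranet-ldap", "server", untrusted=False, complexity=3),
    Interface("email-client",  "client", untrusted=True,  complexity=4),
    Interface("public-https",  "server", untrusted=True,  complexity=5),
]
ranked = prioritize(surface)
# public-https and email-client rank above the intranet-only service
```

The point of the sketch is only that the ranking is explicit and repeatable, which is what makes it useful as a risk-analysis artifact.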

Specification coverage can look like a simple thing, but in real life it often isn't. Look inside your TLS VPN, for example, and you might be surprised to find it also supports legacy (broken) versions of SSL. The same applies to SSH servers, web servers, web browsers, email clients, and so on. Unless you have been diligent about minimizing the available features in every device you use, each interface in communication software typically requires testing at a number of different protocol layers and protocol versions. From a risk-analysis perspective, think in probabilities: which specifications are the most complex, which carry unused functionality, and where can most of the vulnerabilities be expected to hide? Surprisingly, it is often the security protocols such as encryption where the complexity is overwhelming and mistakes are most probable. Many bugs also hide in features that are never used and never seen on the network.
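As a concrete example of checking one slice of specification coverage, the sketch below probes which TLS versions a server will actually negotiate, using only Python's standard ssl module. It is an assumption-laden illustration, not a scanner: connection and handshake failures are both treated as "not supported", and the truly legacy SSLv2/SSLv3 cannot be probed at all from a modern Python build.

```python
import socket
import ssl

def supported_tls_versions(host, port=443, timeout=3.0):
    """Probe which TLS versions a server will negotiate.

    Returns {version name: bool}. Any connection or handshake failure
    counts as "not supported", so an unreachable host yields all-False
    rather than an error.
    """
    results = {}
    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        try:
            ctx = ssl.create_default_context()
            ctx.check_hostname = False       # only the handshake matters here
            ctx.verify_mode = ssl.CERT_NONE
            ctx.minimum_version = version    # pin the probe to one version
            ctx.maximum_version = version
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    results[version.name] = True
        except (OSError, ssl.SSLError, ValueError):
            results[version.name] = False
    return results
```

Run against a real endpoint, e.g. `supported_tls_versions("vpn.example.com")` (hypothetical host), this shows exactly which legacy versions are still enabled, which is precisely the kind of measurable finding a coverage discussion needs.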

Statefulness is another challenge in most penetration tests. Simple capture-replay tests around known vulnerabilities, plus basic use cases built around commonly used features, may give good coverage of the attack surface and some confidence against known issues in legacy specifications. But in real life, protocols such as SOAP/XML applications often come with complex state diagrams and deeply nested interdependencies between messages and sequences. A simple traffic-capture fuzzer may not go deep enough into the protocol message flows. A simple risk-analysis method for complex protocols is to look at them through a traffic analyzer: if the message flow exceeds just a few messages back and forth, you know the complexity of that protocol is probably beyond any manual analysis.
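The "few messages back and forth" rule of thumb can be made measurable in a few lines. The sketch below is an illustration of the heuristic, not a real analyzer: it takes a captured flow as (direction, payload) pairs, counts how many times the conversation turns around, and flags anything deeper than a simple request-response pattern as a candidate for stateful fuzzing rather than capture-replay. The threshold is an arbitrary assumption.

```python
def direction_changes(flow):
    """Count how many times the conversation turns around,
    e.g. client -> server -> client is two turns."""
    turns = 0
    prev = None
    for direction, _payload in flow:
        if prev is not None and direction != prev:
            turns += 1
        prev = direction
    return turns

def needs_stateful_fuzzing(flow, max_turns=3):
    """Crude triage rule: beyond a few turns, the protocol is
    too stateful for capture-replay tools (threshold assumed)."""
    return direction_changes(flow) > max_turns

# A simple request-response exchange versus a nested, session-oriented one
http_like = [("client", "GET /"), ("server", "200 OK")]
soap_like = [("client", "hello"), ("server", "challenge"),
             ("client", "response"), ("server", "session"),
             ("client", "query"), ("server", "partial"),
             ("client", "ack"), ("server", "result")]
# http_like -> 1 turn, replay is fine; soap_like -> 7 turns, stateful
```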

So next time someone claims they do fuzzing or any other form of security testing, ask them how they do it. Look at how they explain test coverage, and in particular require a measurable definition of what was tested and what was not. If someone claims 100% coverage, you will know they are lying.
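In its simplest form, such a measurable definition can be nothing more than the identified attack surface set against the interfaces actually exercised, with the untested remainder spelled out explicitly. The function below is a minimal sketch of that idea; the interface names are hypothetical.

```python
def coverage_report(attack_surface, tested):
    """Return (coverage ratio, sorted list of untested interfaces).

    The untested list is the honest part of the report: a tester who
    cannot produce it has no measurable definition of coverage.
    """
    surface = set(attack_surface)
    untested = sorted(surface - set(tested))
    ratio = (len(surface) - len(untested)) / len(surface)
    return ratio, untested

ratio, untested = coverage_report(
    attack_surface=["TLS", "HTTP", "SSH", "DNS"],
    tested=["TLS", "HTTP"],
)
# ratio == 0.5, untested == ["DNS", "SSH"]
```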
