July 23, 2009, 4:25 PM — Hackers (or security researchers) come in a range of rainbow-colored hats. Some are nice (the White Hats): they find and disclose problems in communication products using approved responsible-disclosure models. Others are in the business for money and are not satisfied by the fame that comes from disclosing problems. The process can easily edge toward what some would consider unethical, or even outright blackmail. There is a gray area between telling a vendor about a problem and disclosing the details to them, and withholding vulnerability details until payment. The entire business of hunting individual bugs in someone else's product has always been seen as unethical by some, and is even illegal in some countries.
From the disclosure point of view, the most aggressive model has traditionally been full public disclosure, without giving the vendor any time to correct the problems. Full means that the party behind the discovery provides all details, including the means of building exploits, if not complete exploit scripts, with the vulnerability report. Public means that the disclosure is made through open forums such as mailing lists or web portals.
The matrix of other disclosure models is easy to deduce from that example: full, partial, or limited disclosure describes how much detail is given, and internal, closed, or public refers to who is included in the disclosure. Any combination can also co-exist, with, for example, full details being kept internally, partial details being reported to a closed circle of parties, and extremely limited details being disclosed publicly.
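The matrix above can be enumerated mechanically. A minimal Python sketch (the axis names come straight from the text; the combined labels are my own shorthand, not standard terminology):

```python
from itertools import product

# The two axes of the disclosure matrix described above:
# how much detail is shared, and with whom it is shared.
DETAIL_LEVELS = ["full", "partial", "limited"]
AUDIENCES = ["internal", "closed", "public"]

# Every combination is a possible disclosure model; a single finding can
# even use several at once (full details internally, partial details to a
# closed circle, limited details publicly).
disclosure_models = [f"{detail} / {audience}"
                     for detail, audience in product(DETAIL_LEVELS, AUDIENCES)]

for model in disclosure_models:
    print(model)
```

The classic "full public disclosure" of the previous paragraph is simply one of the nine cells this produces.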
If you are interested in past discussions on vulnerability disclosure, I would urge you to have a look at the vulnerability disclosure publications and discussion tracking maintained by OUSPG (the Oulu University Secure Programming Group).
Business Models in Vulnerability Research
Still, Jolt and Butterfingers are required to keep even the most ethical security researchers in business, and therefore there is always money involved in security research. There are various ways of cashing out from security research.
For many, security research starts in adolescence, through a network of friends interested in "breaking stuff". Although this is perhaps just my individual view, I tend to believe most of the bugs seen on public disclosure mailing lists and forums today come from this crowd. Dozens of professional security people work almost full time to guide these kids toward ethical practices, just to save their future careers in or outside the industry. Let's not focus on these "script kiddies", even if some of them do cash out by selling their findings to interested parties.
The gray hats, blue hats, and red hats out there (and I do not refer to the Linux distribution) usually work outside the vendors, as third parties interested in security research. Many of them work in government-funded research projects or, as our own background has shown us, in industry-funded research. In such cases you do not necessarily need to focus on the vulnerabilities themselves, but on the means by which those vulnerabilities were discovered. Vulnerability data is a happy side product of such research, and therefore easily shared with the industry. In the end, the result of the research is a set of skills and tools that very few in the industry possess.
Eventually, people "graduate" from this lifetime of education in security research, and they start to think about how to cash out from their security know-how and vulnerability data. The most obvious, and easiest, career is in penetration testing. Most of the business in penetration testing comes from the end users of communication devices. Skill levels in the industry range from being able to run Nessus and understand the results to being able to reverse-engineer complex embedded devices and deduce problems in them. The biggest problem with this line of work is that you cannot really cash out big in it. The growth of a security consulting business is always resource-constrained, and even if you are one of the lucky few in the industry to gather enough know-how around you, you will not make a huge fortune out of it (although there are counterexamples).
The dream job for many a security expert (even though few would admit it) would be to work in a company that actually cared about product security. Even the most notorious security researchers of the past are rumored to have fallen for this utopia. And more than a few have fallen away, wings burned, back to finding more constructive means of improving the security of the modern communication society. The fact is that there is very little motivation for most vendors to allocate any resources or money to security research. Although by now most of us really hope the industry would have learned something, this skill is still quite scarce inside most vendors in the communication industry.
The last step in the career, then, is to end up working in a security company, trying to fix the actual problem itself. Three types of security companies exist:
- Proactive product security companies such as Codenomicon and Fortify: producers of software tools that developers and testers can use to find and fix problems themselves.
- Reactive security companies such as Qualys: producers of scanning tools to find and eliminate known vulnerabilities in deployed software.
- Vulnerability traders and databases (I do not want to name examples): companies that buy and sell vulnerability data, zero-day threats, exploits, and even automated hacking engines.
It is the last category that is now rearing its ugly head, and if you are interested, I can provide more links to those stories later. Whichever category a security researcher works in, eventually they will find new, previously unknown vulnerabilities. And they will start to think about the value of those findings. That is where the problems start building up.
No More Free Bugs Movement
There is a new movement among security people in all of the above categories to stop doing free security research for vendors. As long as product vendors get security details for free, they have less and less motivation to actually start improving their product security. Personally, I am completely behind this movement to stop free security research. It is a practice we have promoted at Codenomicon since our launch in 2001. When you promote tools that would eliminate such flaws to some vendors, you get funny responses like "Who knows about these flaws?" and "Can I get a demo so that I can find and fix my flaws?". All the past years have promoted the ideology of free security research, so why should vendors pay for vulnerability data? The worst example is all the hacker contests where globally known, skilled hackers do public stunts to win a lousy mobile phone or a laptop. How low can you go?
I do not personally promote the sale of vulnerability data, and I have no answer to how the value of vulnerability data should be decided, or how payment for vulnerabilities could be enforced on vendors. I personally think the best resolution is the legacy "Bug Bounty" programs that, e.g., Netscape (remember them?) used to have for vulnerabilities. On the other hand, every single bug a vendor has will in any case be extremely expensive for them, so do we need to hurt them more? In the end, we need to think about this from both perspectives. The last thing we want to see is people stopping security research. And we definitely do not want the skilled guys to go to "the dark side". So let's work together to find a solution, before the security researchers form a union to set the market value for their work.