Hackers (or security researchers) come in a range of rainbow-colored hats. Some guys'n'gals are nice (the White Hats): they find and disclose problems in communication products using approved responsible disclosure models. Others are in the business for money and are not satisfied by the fame they get for disclosing problems. The process can easily drift close to what some would consider unethical, or even outright blackmail. There is a gray area between telling a vendor that they have a problem and disclosing the details to them, and withholding vulnerability details until a payment is made. The entire business of hunting for individual flaws in someone else's product has always been seen as unethical by some, and it may already be illegal in some countries.
From the disclosure point of view, the most aggressive means of disclosure has traditionally been full public disclosure, without giving the vendor any time to correct the problems. Full means that the party behind the discovery provides all details, including the means of building exploits, if not complete exploit scripts, with the vulnerability report. Public stands for the disclosure being made through open forums such as mailing lists or web portals.
The matrix of other disclosure models is easy to deduce from that example: full, partial, or limited disclosure describes how much detail is given, and internal, closed, or public refers to who is actually included in the disclosure. Combinations can also coexist, with, for example, full details being kept internally, partial details being reported to a closed circle of parties, and finally extremely limited details being disclosed publicly.
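The matrix above is simply the cross-product of the detail levels and the audiences. A minimal sketch, purely for illustration (the category names come from the text; the data structure itself is my own assumption):

```python
from itertools import product

# How much detail is given in the disclosure.
detail_levels = ["full", "partial", "limited"]

# Who is actually included in the disclosure.
audiences = ["internal", "closed", "public"]

# Every pairing of detail level and audience is a possible disclosure model.
models = [f"{detail} disclosure to a(n) {audience} audience"
          for detail, audience in product(detail_levels, audiences)]

print(len(models))  # 9 combinations in total
```

The traditional "full public disclosure" described earlier is just one cell of this 3x3 matrix; a single vulnerability report can occupy several cells at once, as in the example above.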
If you are interested in knowing more about past discussions on vulnerability disclosure, I would urge you to have a look at the vulnerability disclosure publications and discussion tracking maintained by OUSPG (Oulu University Secure Programming Group).
Business Models in Vulnerability Research
Still, even the most ethical security researchers need Jolt and Butterfingers to stay in the business, and therefore there is always money involved in security research. There are various ways of cashing out from security research.
For many, security research starts in adolescence, through a network of friends interested in "breaking stuff." Although this is perhaps just my individual view, I tend to believe most of the bugs seen on the public disclosure mailing lists and forums today come from this crowd. Dozens of professional security people work almost full time to guide these kids toward correct ethical practices, just to save their future careers in or outside the industry. Let's not focus on these "script kiddies," even if some of them do cash out on their findings by selling them to interested parties.
The gray hats, blue hats, and red hats out there (and I do not refer to the Linux distribution) usually work outside the vendors, as third parties interested in security research. Many of them work for government-funded research projects or, as our own background has shown us, in industry-funded research. In such cases you do not necessarily need to focus on the vulnerabilities themselves, but on the means by which those vulnerabilities were discovered. Vulnerability data is a happy side product of such research, and therefore easily shared with the industry. In the end, the result of the research is a set of skills and tools that very few in the industry possess.