Security through obscurity

Unix Insider

Want to really irritate security architects? Just tell them that their design relies on security through obscurity (http://www.tuxedo.org/~esr/jargon/html/entry/security-through-obscurity.html).

That accusation was leveled at me. I'd recommended that a client have internal headers stripped out of email at the firewall before that mail was sent outside the company. I thought this was just good common sense. I even provided the technical solution to do it with the MTA the client was running (Sendmail). The admins balked and said, "No one does this." OK. So I asked the gods at Sendmail.org for guidance. To my surprise, they also felt it was unnecessary, even inadvisable. In fact, I was told that I was "paranoid" and relying on "security by obscurity."
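For what it's worth, the fix isn't hard. I won't reproduce my exact recipe here, but a minimal sketch along these lines shows the flavor, assuming the m4-based configuration of a modern Sendmail (all the domain names below are hypothetical):

    dnl Fragment of a gateway sendmail.mc -- domain names are hypothetical
    OSTYPE(`linux')dnl
    dnl Present one public identity to the outside world
    MASQUERADE_AS(`example.com')dnl
    dnl Rewrite every internal subdomain, not just the gateway's own name
    MASQUERADE_DOMAIN(`.corp.example.com')dnl
    dnl Masquerade envelope addresses as well as header addresses
    FEATURE(`masquerade_envelope')dnl
    dnl Masquerade recipient headers too, not just sender headers
    FEATURE(`allmasquerade')dnl
    MAILER(`smtp')dnl

Masquerading only hides internal host names in addresses; scrubbing the Received: lines accumulated from internal hops takes a custom ruleset or an external filter running on the gateway, but the principle is the same.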

Ouch. As a firm believer in full disclosure and open source systems, that really hurt.

Bruce Perens made some good points on this in his July 20, 1998, Slashdot article, "Why Security Through Obscurity Won't Work" (http://slashdot.org/features/980720/0819202.shtml).

Sometimes, however, a little obscurity is advisable. The technique becomes a problem only when it is relied upon as the sole line of defense; in the case of my client with the bleeding headers, the internal network was already protected from outside intrusion by firewalls and other security mechanisms.

I still think it's foolhardy to advertise internal information so promiscuously. The first step in attacking a site is to gather as much information about it as possible, as demonstrated in this July 1998 Phrack article by Brian Martin (aka Jericho) -- (http://www.attrition.org/~jericho/works/security/phrack-53.html).

Why make it easier than it has to be? Why advertise your internal mail routes and the internal IP addresses of the firewalls that accept SMTP traffic? Relying solely on security through obscurity is foolhardy; blatantly advertising internal information to the outside world is worse.
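If that sounds abstract, pull the full headers from any message a large company sends you. The trail below is invented, but it is typical of what an unscrubbed gateway hands out:

    Received: from mailgw.example.com (mailgw.example.com [192.0.2.25])
        by mx.partner-company.example (8.9.3/8.9.3) with ESMTP id SAA12345;
        Mon, 2 Oct 2000 14:01:22 -0400
    Received: from mailhub.corp.example.com (mailhub.corp.example.com [10.1.2.3])
        by mailgw.example.com (8.9.3/8.9.3) with ESMTP id SAA04321;
        Mon, 2 Oct 2000 14:01:10 -0400
    Received: from jdoe-pc.eng.corp.example.com ([10.1.7.42])
        by mailhub.corp.example.com (8.9.3/8.9.3) with SMTP id SAA09876;
        Mon, 2 Oct 2000 14:01:02 -0400

Three routine headers have just disclosed the internal mail route, the private RFC 1918 addressing plan, a user's workstation name, and the Sendmail version running on every hop.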

With software packages, it's a different matter entirely. End users are at the mercy of the software vendors and are forced to rely on them to properly test their products. I used to be in a system test group and, believe me, such groups have no status in software development departments. I tried going directly to developers before writing bug reports on their software, and many appreciated my covering for their mistakes. One developer surprised me by telling me to write up the bug report even though she had fixed the problem while I was talking to her. When I questioned her on this, she explained that the monthly bug report distributed to the entire department forced developers to do a better job of debugging their code. It also forced management to recognize that unrealistic deadlines led to bad code.

Sadly, today's system test department is an unfunded, loosely organized group of technologists, commonly referred to as hackers. Many hackers provide exploit code to demonstrate the bug in question -- just as I did when I was in system test. The big difference is that these hackers release the exploit to the public at large, not just to the vendor. Some people, particularly Marcus Ranum (of TIS FWTK fame), object to this practice and feel it causes more harm than good.

"Ranum in the Lion's Den," Lewis Z. Koch (Inter@ctive Week, September 21, 2000) -- (http://www.zdnet.com/intweek/stories/columns/0,4164,2630983,00.html).

Others, particularly Mudge (of L0pht fame), vehemently disagree.

"The Other Side of the Story" Lewis Z. Koch (Inter@ctive Week, September 28, 2000) -- sequel to above story (http://www.zdnet.com/intweek/stories/columns/0,4164,2634819,00.html).

Despite what Ranum would like to believe, most software manufacturers lack the self-motivation to fix bugs. Motivation is provided by fear of public embarrassment. Ranum seems to believe that the danger of script kiddies using an exploit is reason enough to obscure information about vulnerabilities. Great, we'll be protected from script kiddies who aren't bright enough to figure out that Back Orifice won't work on a Unix system.

If you're protecting your systems from script kiddies, you're wasting a huge amount of time and money. Script kiddies, though highly annoying and often immature, generally don't know what to do with a system once they've broken in. Command-line access is their electronic equivalent of a fantasy date: they seek it, but none of them know what to do once they've got it.

The real danger is from corporate spies who use an unknown exploit and cover all signs of their intrusion. Mostly, the exploits they use are not public knowledge -- and they don't want them to be. If the information about the vulnerability is made public, companies can analyze the exploit and properly evaluate their risk. If there's no patch or workaround, they have the option (and justification!) to take certain critical systems off the Net and monitor the rest more closely. Admittedly, many corporations will not do this. They also will not apply patches once the vendor gets around to issuing them. Those who truly are concerned about security should not have to suffer for this negligence.

A Dutch hacker who recently broke into Nasdaq and several other financial sites did not brag of his achievement in hacker chat rooms. Instead, he emailed the administrators, who gratefully patched the holes in their systems on his advice. He claims to have a new exploit that he wrote himself but will not publish, because "people will start using it and that's just too dangerous." How noble of him. Of course, no malicious hacker will ever figure out the same thing. This type of security through obscurity makes the corporate spy's job much easier, and why do that?
