Consider me a career-long computer security curmudgeon. When a vendor guarantees its latest and greatest will defend the world against all computer maliciousness, I yawn. Been there; it didn't pan out.
All computer security vendors want us to think that signing on the dotted line and sending them a check will mean our worries are over. Rarely do they deliver. And although a little marketing hype never really hurts -- we're all used to taking it with a grain of salt -- some vendors can be caught outright lying, expecting us to buy what amounts to security snake oil.
If you're a hardened IT security pro, you've probably had these tactics run by you over and over. It's never only one vendor touting unbelievable claims but many. It's like a pathology of the computer security industry, this all-too-frequent underhanded quackery used in the hopes of duping an IT organization into buying dubious claims or overhyped wares.
Following are seven computer security claims or technologies that, when mentioned in the sales pitch, should get your snake-oil radar up and primed for false promises.
Security snake oil No. 1: Unbreakable software

Believe it or not, vendors and developers alike have claimed their software is without vulnerability. In fact, "Unbreakable" was the name of one famous vendor's public relations campaign. The formula for this snake oil is simple: The vendor claims that its competitors are weak and don't know how to make invulnerable code the way it does. Buy the vendor's software and live in a world forever without exploits.
The last vendor to claim this had its software exploited so badly, so quickly that it should serve as notice to every computer security organization never to make such a claim again. Amazingly, even as exploit after exploit was discovered in the vendor's software (the vendor is best known for database software), the "Unbreakable" ad campaign continued for another year. We security professionals wondered how many CEOs might have fallen for the PR pitch, not realizing that the vendor's support queues were full of calls demanding quick patches. To this day, dozens of exploits are found every year in that vendor's software.
Of course, this vendor isn't alone with its illusions of invulnerability. Browser vendors used to kick Microsoft for making an overly vulnerable browser in Internet Explorer. But then they would release their invulnerable browsers, only to learn they had more uncovered public vulnerabilities than the browser they claimed was overly vulnerable. You don't hear browser vendors bragging about making perfectly secure browsers anymore.
And then there's the infamous University of Illinois at Chicago professor who consistently lambasts software vendors for making software full of security holes. He chides and belittles them and says they should be subject to legal prosecution for making imperfect software. He even made his own software programs and challenged people to find even one security bug, backing this challenge with a reward. Not surprisingly, people found bugs. Initially he tried to claim that the first found vulnerability wasn't an exploitable bug "within the parameters of the guarantee." Most people disagreed. Then someone found a second bug, in another of his programs, and he paid the reward. Turns out making invulnerable software is pretty difficult.
I don't mean to negate that professor's contributions to computer security. He's one of the best computer security experts in the world -- truly a hero to the cause. But you won't hear him claim anymore that perfect software can be made.
Remember these high-profile lessons in humility the next time you hear a vendor claim that its software is invulnerable.
Security snake oil No. 2: 1,000,000-bit crypto

Every year a vendor or coder no one has heard of claims to have made unbreakable crypto. And, with few exceptions, they fail miserably. Although it's a claim similar to unbreakable software, technical discussion will illuminate a very different flavor of snake oil at work here.
Good crypto is hard to make; even the best in the world don't have the guts (or sanity) to claim theirs can't be broken. In fact, you'll be lucky to get them to concede that their encryption is anything but "nontrivial" to compromise. I trust the encryption expert who doesn't trust himself. Anything else means trusting a snake-oil salesman trying to sell you flawed crypto.
Case in point: A few years ago a vendor came on the scene claiming he had unbreakable crypto. What made his encryption so incredible was that he used a huge key and distributed part (or parts) of the secret key in the cloud. Because the key was never in one place, it would be impossible to compromise. And the encryption algorithm and routine were secure because they were a secret, too.
Most knowledgeable security pros recognize that a good cipher should always have a known encryption algorithm that stands up to public review. Not this vendor.
But the best (and most hilarious) part was the vendor's claim that his superior cipher was backed by a million-bit key. Never mind that strong encryption today is backed by key sizes of 256-bit (symmetric) or 2,048-bit (asymmetric). This company was promising an encryption key that was orders of magnitude bigger.
Cryptologists chuckled at this for two reasons. First, when you have a good encryption routine, the involved key size can be small because no one can brute-force all the possible permutations of even relatively small encryption keys -- think, more than the "number of atoms in the known universe" type of stuff. Instead, to break ciphers today, cryptologists find flaws in the cipher's mathematics, which allow them to rule out very large parts of the populations of possible keys. In a nutshell, found cryptographic weaknesses allow attackers to develop shortcuts to faster guessing of the valid possible keys.
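The scale of that brute-force problem is easy to check with simple arithmetic. The following sketch uses the commonly cited ~10^80 estimate for atoms in the observable universe and a deliberately generous guessing rate, both chosen only for illustration:

```python
# Back-of-envelope: why brute-forcing a modern symmetric key is hopeless.
# The ~10^80 atom count for the observable universe is the commonly
# cited estimate, used here only for a sense of scale.
keyspace = 2 ** 256                   # possible 256-bit keys
guesses_per_second = 10 ** 18         # a wildly generous cracking rig
universe_age_seconds = int(4.35e17)   # roughly 13.8 billion years

seconds_needed = keyspace // guesses_per_second
lifetimes = seconds_needed // universe_age_seconds

print(f"256-bit keyspace: ~10^{len(str(keyspace)) - 1} keys")
print(f"Universe lifetimes to exhaust it: ~10^{len(str(lifetimes)) - 1}")
```

Even with every assumption tilted in the attacker's favor, exhaustive search never finishes, which is why real attacks target the cipher's mathematics rather than the keyspace.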
All things being equal, a proven cipher with a smaller key size is considered more secure. A prime example is ECC (elliptic curve cryptography) versus RSA. Today, an RSA-protected key must be 2,048 bits or larger to be considered relatively secure. With ECC, 384 bits is considered sufficient. RSA (the original algorithm) is probably nearing the end of its usefulness, and ECC is just starting to become a primary player.
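The rough equivalences behind those numbers come from NIST's key-management guidance (SP 800-57); the table below restates them as ballpark comparisons, not exact security claims:

```python
# Approximate comparable strengths, per NIST SP 800-57 guidance.
# Mapping: symmetric key bits -> (RSA modulus bits, ECC key bits)
comparable_strength = {
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 521),
}

for sym, (rsa, ecc) in comparable_strength.items():
    print(f"{sym}-bit symmetric ~ RSA-{rsa} ~ ECC-{ecc}")
```

The takeaway matches the article's point: ECC delivers comparable strength at a fraction of RSA's key size, which is why key size alone says nothing about a cipher's quality.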
So saying you have a million-bit key is akin to saying your invented cipher is so sucky it takes a million bits of obscurity (versus 384 bits) to keep the protected data secure. Five thousand bits would be overkill from any good cipher, because no one is known to be able to come close to breaking even 3,000-bit keys from a really good cipher. When you make a million-bit key, you're absolutely saying you don't trust your cipher to be good at smaller key sizes. This paradox is perhaps only understood by cipher enthusiasts, but, believe me, you'd slay the audience at any crypto convention by repeating this story.
Second, if you were required to use a million-bit key, you would somehow have to communicate that huge mother from sender to receiver -- and a million bits works out to roughly 125KB of key material alone. Suppose you encrypted an email containing a single character: the key needed to protect that one byte would outweigh it by five orders of magnitude. That's pretty wasteful.
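The overhead is simple arithmetic (an illustration, assuming the key material has to travel with or alongside the message):

```python
# How much does a million-bit key actually weigh?
key_bits = 1_000_000
key_bytes = key_bits // 8        # 125,000 bytes, roughly 122 KB
message_bytes = 1                # a one-character email

overhead = key_bytes / message_bytes
print(f"Key material: {key_bytes:,} bytes (~{key_bytes // 1024} KB)")
print(f"Overhead vs. a 1-byte message: {overhead:,.0f}x")
```

Compare that to a 256-bit symmetric key, which fits in 32 bytes.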
A "secret" million-bit cipher being split among the cloud was enough to do that crypto in. No one took it seriously, and at least one impressive encryption expert, Bruce Schneier, publicly mocked it.
The worst part was that the vendor claimed to have proof that it sold $5 million of its crypto to the military. I hope the vendor was lying; otherwise, the military purchaser has a lot of explaining to do.
Security snake oil No. 3: 100% accurate antivirus software

Also akin to the claim of unbreakable software is the claim from multiple vendors that their anti-malware detection is 100% accurate. And they almost all say this detection rate has been "verified independently in test after test."
Ever wonder why these buy-once-and-never-worry-again solutions don't take over the world? It's because they're a lie. No anti-malware software is, or can be, 100% accurate. Antivirus software wasn't 100% accurate when we only had a few viruses to contend with, and today's world has tens of millions of mutating malware programs. In fact, today's malware is pretty good at changing its form. Many malicious programs use "mutation engines" coupled with the very same good encryption mentioned above. Good encryption introduces realistic randomness, and malware uses the same properties to hide itself. Plus, most malware creators run their latest creations against every available anti-malware program before they begin to propagate, and then they self-update every day. It's a never-ending battle, and the bad guys are winning.
Some vendors, using general behavior-detection techniques known as heuristics and change-detecting emulation environments, have valiantly tried to up their accuracy. What they've discovered is that as you enter the upper ranges of detection, you run into the problem of false positives. As it turns out, programs that detect malware at extremely accurate rates are bad at not detecting legitimate programs as malicious. Show me a 100% accurate anti-malware program, and I'll show you a program that flags nearly everything as malicious.
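The base-rate math behind that trade-off is worth seeing once. The numbers below are hypothetical, chosen only to illustrate why a highly sensitive scanner drowns in false alarms on a machine where almost every file is legitimate:

```python
# Hypothetical scanner: catches 99.9% of malware (sensitivity) but
# also flags 1% of legitimate files (false-positive rate).
total_files = 300_000
malicious = total_files * 0.001          # 0.1% of files are malware: 300
legit = total_files - malicious          # 299,700 legitimate files

true_alerts = malicious * 0.999          # ~300 real detections
false_alerts = legit * 0.01              # ~2,997 false alarms

precision = true_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are real malware: {precision:.0%}")
```

With these (made-up) rates, barely one alert in eleven is a real infection -- push sensitivity higher and the false-alarm flood only gets worse.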
Even worse, as accuracy increases, performance decreases. Some antivirus programs make their host systems so slow that they're unusable. I know users who would rather knowingly compute with active malware than run antivirus software. With tens of millions of malware programs that must be checked against hundreds of thousands of files contained on a typical computer, doing a perfectly accurate comparison would simply take too long. Anti-malware vendors are acutely aware of these sad paradoxes, and, in the end, they all make the decision to be less accurate.
Counterintuitively, being less accurate actually helps security vendors sell more of their products. I don't mean that lowered accuracy allows malware to propagate, thereby ensuring security vendors can sell more software. It's that the trade-offs of extremely accurate anti-malware detection are unacceptable to those shopping for security software.
And if you do find yourself buying the claim of 100% accuracy, just don't ask your vendor to put it in writing or ask for a refund when something slips by. They won't back the claim.
Security snake oil No. 4: Network intrusion detection

IDSes (intrusion detection systems) have been around even longer than antivirus software. My first experience was with Ross Greenberg's Flu-Shot program back in the mid-1980s. Although often described, even by the author, as an early antivirus program, it was more of a behavioral-detection/prevention program. Early versions didn't have "signatures" with which to detect early malware; it was quickly defeated by malware.
During the past two decades, more sophisticated IDSes were invented and released. Popular ones are in use in nearly every company in America. Commercial, professional versions can easily cost in the hundreds of thousands of dollars for only a few sensors. I know many companies that won't put up a network without first deploying an NIDS (network-based IDS).
Unfortunately, IDSes have worse accuracy and performance issues than antivirus programs. Most NIDSes work by intercepting network packets. The average computer gets hundreds of packets per second, if not more. An NIDS has to compare known signatures against all those network packets, and if it did so, even somewhat accurately, it would slow down network traffic so much that the computer's network communications, and the applications involved, would become unbearably sluggish.
So what NIDSes do is compare network traffic against a few dozen or hundred signatures. I've never seen an NIDS with even two hundred signatures activated -- paltry in comparison to the tens of millions of malware and thousands of network attack signatures they should be checking to be truly accurate. Instead, we've become accustomed to the fact that NIDSes can't be configured to be meaningfully accurate, so we "fine-tune" them to be somewhat accurate against things antivirus software is less accurate at detecting.
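A back-of-envelope calculation shows the scaling pressure. The figures are illustrative, and real engines use much smarter multi-pattern matching than this naive per-packet model, but the gap between "tuned" and "truly accurate" stays enormous either way:

```python
# Naive cost model: compare every packet against every signature.
packets_per_second = 1_000
signatures_tuned = 200            # a typical fine-tuned deployment
signatures_ideal = 10_000_000     # what full coverage would demand

checks_tuned = packets_per_second * signatures_tuned
checks_ideal = packets_per_second * signatures_ideal

print(f"Tuned: {checks_tuned:,} signature checks/sec")
print(f"Ideal: {checks_ideal:,} signature checks/sec")
print(f"Gap:   {checks_ideal // checks_tuned:,}x more work")
```

A 50,000-fold increase in matching work per packet is why nobody ships a "truly accurate" NIDS configuration.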
Security snake oil No. 5: Firewalls

I spend part of my professional career telling people to make sure they use firewalls. If you don't have one, I'll probably write up an audit finding. But the truth is that firewalls (traditional or advanced) rarely protect us against anything.
Firewalls block unauthorized traffic from reaching vulnerable, exploitable listening services. Today, we don't have that many vulnerable services or truly remote attacks. We still have vulnerable services, such as the recent OpenSSL Heartbleed vulnerability, but even most of those attacks would not have been stopped by a firewall.