The purported veering of a jetliner at the hands of an onboard hacker points to a larger problem, experts say: airlines and other service providers may be blind to the value security researchers can offer in the name of public safety.
While it’s far from clear that security researcher Chris Brown actually did commandeer the avionics system of an airplane and force it to steer to one side, the story is prompting other security experts to call for better cooperation between white-hat hackers and industries whose infrastructures they probe.
Airlines have to get used to the idea that this type of hacking can be useful, and they ought to make test environments that simulate aircraft systems available for researchers to hack against, says Jeremiah Grossman, founder of WhiteHat Security.
Corporate executives have an emotional reaction against admitting their systems have weaknesses, he says, but as their view matures, they become more cooperative with the research community. “Their first reaction is only authorized people can test their network. Google got over it. Facebook got over it,” he says.
He points to his own experience in 2000 with Yahoo, when he hacked his own Yahoo Mail account, an exploit he could have carried out against anyone else’s account. But instead he told the company, and it not only worked with him to resolve the issue without penalizing him, it gave him a job, he says.
But that’s not always the case, and hackers with the best of intentions are warned against probing for flaws without permission. The first thing Grossman tells researchers is, “Never touch or test anything when you don’t have express written consent or own it. Otherwise you’re at the whim of the target.”
If that whim is to call in law enforcement, the consequences can be dire not only for the hacker personally but also for public safety in general, says Josh Corman, CTO of Sonatype and founder of I Am The Cavalry, a grassroots group “focused on issues where computer security intersects public safety and human life.”
The group has been working closely with the auto industry for more than a year to build enough trust between hackers and the car manufacturers so they can work toward safer vehicles, he says. And it is hoping to do the same with airlines and medical-device manufacturers.
A big hurdle is that leaders in industries that need protection are not well versed in the nature of cyber security. Their worlds are governed by static sets of facts and principles of science like those used by the engineers who make their products. “Cyber lacks physics,” Corman says, which makes security an elusive target. “Assumptions move or are incomplete or are no longer sound.” It’s a slow process to get industries to first understand the difference and then build enough trust with security researchers so they can work together toward safer products and services, he says.
The problem is compounded by these industries wanting to project an image of safety and security to the public, says Paul Kocher. For example, services that store data online want customers to trust that their data is safe. “If I’m operating a service my financial interest is usually in trying to make my customers feel comfortable, which is not necessarily to disclose accurately what the risks are,” he says.
So they are understandably reluctant to allow hackers to test their systems, particularly if the flaws are revealed publicly, even if finding vulnerabilities and fixing them is desirable. “If you want to have researchers providing this level of authentic, unfiltered information – at least when people are doing things badly – it’s going to be challenging,” he says.
He thinks there are black and white areas covering what is and is not appropriate for hackers to do, and an enormous gray area in between where things are not so clear.
His company tries to stay “very, very far into that white zone, but I recognize the challenges. The question of messing with a flying plane’s avionics is pretty clearly for me in the black area. I see a lot of places where there’s a big debate, but I don’t see this as one where the shades of gray are as nuanced as they will be in future situations that will come along.”
One such area is medical devices, which must go through a lengthy FDA approval process; when changes are made, devices must be reapproved. “But that doesn’t fit very well into your zero-day response cycles,” Kocher says. “How do you patch the Linux install running inside your implantable medical device? We haven’t worked these things out yet, and perhaps we never will. It’s just one of these horribly messy problems that we’ll be struggling with for a long time.”
But that doesn’t mean hackers and industry shouldn’t come to an agreement, Corman says. Constant testing of existing systems is necessary, even if they have previously been deemed secure. For example, the Shellshock bugs lived in the Unix Bash shell from 1989 until they were exposed in 2014. Similarly, the Heartbleed bug existed in the OpenSSL library from 2011 to 2014.
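To make the Shellshock example concrete, the widely circulated check for the original CVE-2014-6271 flaw is short enough to script. The sketch below is an illustration added here rather than part of the original reporting; it assumes Python 3 and a locally installed bash, passes the well-known probe environment variable alongside a harmless bash command, and reports whether the shell executed the injected code.

    import os
    import subprocess

    # Classic Shellshock (CVE-2014-6271) probe: an environment variable that
    # looks like a bash function definition followed by an extra command. An
    # unpatched bash runs the extra command while importing the variable.
    probe_env = dict(os.environ, PROBE="() { :;}; echo INJECTED")

    # The command bash is asked to run is harmless; only the environment is hostile.
    result = subprocess.run(
        ["bash", "-c", "echo test"],
        env=probe_env,
        capture_output=True,
        text=True,
    )

    if "INJECTED" in result.stdout:
        print("bash executed the injected command (vulnerable to Shellshock)")
    else:
        print("bash ignored the injected command (patched)")

Run against a 2014-era bash, the probe prints the injected marker; a patched shell simply echoes the harmless command.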
Hostile reactions to responsible bug disclosures by altruistic hackers could hurt the security of vital systems, he says. “We have a role to play and are willing and able to help,” he says.
If white-hat hackers face legal consequences for finding flaws in avionics networks, for example, they may abandon studying them. “They can easily research something less risky and not run afoul of the law,” he says. “If it’s too risky or politically loaded, who in their right mind would do it?”
One obvious answer: black-hat hackers.
This story, "Should hackers be tolerated to test public systems?" was originally published by Network World.