November 05, 2012, 1:34 PM — Three years ago, when electric grid operators were starting to talk about the need to protect critical infrastructure from cyberattacks, few utilities had even hired a chief information security officer.
Then came Stuxnet.
In 2010, that malware, widely reported to have been created by the U.S. and Israel, destroyed some 1,000 centrifuges that Iran was using to enrich uranium after taking over the computerized systems that operated them.
Gen. Michael Hayden, principal at security consultancy The Chertoff Group, was director of the National Security Agency, and then the CIA, during the years leading up to the event. "I have to be careful about this," he says, "but in a time of peace, someone deployed a cyberweapon to destroy what another nation would describe as its critical infrastructure." In taking this step, the perpetrator not only demonstrated that control systems are vulnerable, but also legitimized this kind of activity by a nation-state, he says.
The attack rattled the industry. "Stuxnet was a game-changer because it opened people's eyes to the fact that a cyber event can actually result in physical damage," says Mark Weatherford, deputy undersecretary for cybersecurity in the National Protection Programs Directorate at the U.S. Department of Homeland Security.
In another development that raised awareness of the threat of cyberwar, the U.S. government in October accused Iran of launching distributed denial-of-service (DDoS) attacks against U.S. financial institutions. In a speech intended to build support for stalled legislation known as the Cybersecurity Act that would enable greater information sharing and improved cybersecurity standards, Defense Secretary Leon Panetta warned that the nation faced the possibility of a "cyber Pearl Harbor" unless action was taken to better protect critical infrastructure.