Do sci-fi films get advanced tech right?
With today's theatrical release of Star Trek, the starship Enterprise launches on its mission to seek out new life and new civilizations. Gene Roddenberry's vision for the future was founded on hope for humanity -- but what powers his crew's ongoing trek across the stars is incredibly advanced technology.
In 2009, we're still a long way off from warp-drive engines, having yet to master nearer-term challenges such as avoiding space debris or sending astronauts to Mars. Yet despite this slow progress, the silver screen consistently portrays less fiction and more prediction.
Starting with Georges Méliès' Le Voyage dans la Lune (A Trip to the Moon) in 1902, science-fiction films have a tradition of taking contemporary science and extending it to logical conclusions. Modern developments are bringing us closer to that predicted future, but there is still a gulf between reality and Hollywood's artistic license.
We've looked at some of our favorite science fiction films ("sci-fi" in popular parlance) to see how they reflect the reality of our times. Here are six kinds of technologies that sci-fi movies have long relied on, and how we're approaching or diverging from those fictional applications.
At the movies: In a few weeks, the war against the machines will resume as John Connor leads a band of humans in the post-Judgment Day fight to save the future in Terminator Salvation. Their opponent is a sentient machine that, after being given control of the U.S.'s nuclear arsenal, decided that Earth would be better off without humanity.
Terminator's Skynet (1984-2009) has not been the only movie machine to make such decisions; the machines of The Matrix (1999-2003) and Colossus: The Forbin Project (1970) came to similar conclusions. But sometimes annihilation is not so much a result of megalomania as of faulty programming, as with WarGames' Joshua (1983) and 2001's HAL (1968).
Of course, artificial intelligence is rarely malevolent by design. In 2008's Iron Man, multimillionaire protagonist Tony Stark was aided by an unseen servant named Jarvis, a self-aware program that first resided literally within the walls of Stark's mansion and was later uploaded to Stark's titular armored suit. Jarvis exhibited an undeniable personality just by his choice of words and tone of voice.
In reality: AI has thus far amounted to the preprogramming of computers to respond to (and occasionally learn from) increasingly diverse and complex sets of actions and conditions. We may never create a technological equivalent of sentience, or self-awareness.
Given the reality of Moore's Law and the seemingly limitless growth potential of computing, some people consider true AI to be inevitable. "In the coming decades, humanity will likely create a powerful artificial intelligence," predicts The Singularity Institute for Artificial Intelligence. The institute's mission is "to confront this urgent challenge, both the opportunity and the risk." The risk arises when two intelligent species refuse to coexist, which in sci-fi movies means a cybernetic revolt.
Of course, conflict makes for better narrative, which is why there have been few films about peaceful resolutions between organic and mechanical beings. Given humanity's seeming inability to restrain its own innovation, we can only hope that the fiction in sci-fi is found not in the eventual development of AI, but in how our future creations view their creators.
At the movies: Not all manmade life is robotic in nature. The Enterprise crew encountered a planet populated by humanoid androids, while others could be found in 1982's Blade Runner, and the Star Wars prequels (1999-2005) featured mass-produced clones.
And not all cinematic genetic manipulation results in human lookalikes. In Jurassic Park (1993), scientists recreate dinosaurs from fragments of their DNA. In 1995's Species, the beautiful but deadly alien trying to find a mate is as much an expression of fears of unpredictable genetic mutations as it is a horror movie about sex and death.
In reality: Real researchers are focusing on creating their own organisms right here on Earth. While genetic engineering hasn't yet led to human clones, Dolly the cloned sheep made her debut 13 years ago, and gene therapies, bionic limbs and organs, and research into artificial life (in contrast to artificial intelligence) have become everyday news.
And although scientists aren't close to recreating dinosaurs, they are trying to bring back other extinct species through genetic engineering.
Meanwhile, many people are wary of unpredictable genetic mutations as they encounter genetically modified foods, the tracing of the human genome for anthropology, and stem-cell research, all of which have real implications for the environment, human evolution, health care and bioethics.
Could futurist Ray Kurzweil's theory of the singularity, which posits that humanity will become ever more intertwined with technology at a basic level, be coming true soon? While science fiction has guessed at some possible futures, we in the audience have control over how developing technologies are pursued and used -- for good or ill, feeding billions or waging war among them.
At the movies: When we're not creating life, we're creating virtual worlds. What is reality? It's a question philosophers have asked themselves for ages -- though it was only recently that the answer might have been "a virtual construct designed by our machine overlords to enslave us."
The world of The Matrix was a wonder not only of artificial intelligence, but also of virtual reality. In the film eXistenZ (1999), the virtual game was so real, players could never be sure if they had actually disengaged from the system or if the illusion of doing so was simply part of the game. VR is envisioned as a technology that would give humanity the power to create and shape worlds so real that we lose ourselves in them.
Sci-fi has depicted these landscapes as mental projections (The Matrix), combinations of photons and force fields (Star Trek's holodeck), or drugs and gyroscopes (The Lawnmower Man).
In reality: Virtual reality has never been as seamless as actual reality. Early consumer applications, such as the 1991 game Dactyl Nightmare, required bulky visors and body suits that kept the player firmly tethered to the arcade.
Given the difficulty in replacing one world with another, it is more likely that VR will be implemented not as an alternative to the real world, but integrated with it through a variety of visual displays. For example, a company called Mobilizy is working on an app for GPS-enabled smartphones that augments the display with geotagged data. You could point the phone's camera lens at a restaurant, for instance, and see reviews of the establishment superimposed on the picture.
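The geometry behind such an overlay is simple: given the phone's GPS fix and a geotagged point of interest, compute the distance and compass bearing between the two, then draw the label where that bearing falls within the camera's field of view. A minimal sketch of the distance-and-bearing step, using the standard haversine formula (the coordinates and the restaurant POI below are hypothetical, not from Mobilizy's app):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial compass bearing (degrees)
    from one GPS fix to another, assuming a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine formula for great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist_km = 6371.0 * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

    # Initial bearing, normalized to 0-360 degrees
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist_km, bearing

# Hypothetical phone position and a geotagged restaurant nearby
phone = (40.7580, -73.9855)
poi = (40.7614, -73.9776)
dist_km, brg = distance_and_bearing(*phone, *poi)
print(f"POI is {dist_km * 1000:.0f} m away at bearing {brg:.0f} degrees")
```

If the phone's compass heading is known, the app then checks whether that bearing lies within the camera's horizontal field of view and, if so, positions the review label at the corresponding screen offset.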
Employing this "mixed reality," a research team in Singapore has developed a game of Human Pacman, in which a head-mounted display overlays pellets in a real-world setting; they appear to be floating in midair in front of you as you walk down the street. Such devices hold the potential to augment, not supplant, our world, giving us more reason to exist within it than to escape from it.
In 1992's Lawnmower Man, Pierce Brosnan's character insisted that "Virtual reality holds the key to the evolution of the human mind," and "This technology will free the mind of man, not enslave it." Were the machines that power these worlds to become innocuous extensions of our own bodies, then the line would indeed begin to blur. But such an integration faces a variety of technological -- and ethical -- dilemmas that may keep virtual reality, at least as movies depict it, firmly in the realm of fiction.
At the movies: Whether they're virtual worlds or corporate networks, computers need to be restricted to trusted parties. For all the patches and firewalls in today's networks, cybersecurity most often comes down to not losing a laptop with unencrypted, sensitive data. But Hollywood doesn't find purloined property to be sufficiently sensational and so often represents IT management in ways that make actual security experts guffaw. We don't just mean standard thrillers like Firewall (2006) and Die Hard 4 (2007); science-fiction movies have many cringe-inducing security moments as well. Most often, the fiction comes from the interfaces through which hacking occurs.
The most glaring and memorable instance of impossible interfaces may be in 1996's Independence Day, when the world's freedom was restored by an Apple PowerBook's ability to interface with and infect an alien operating system, promptly disabling the alien fleet's shields. Viruses are designed to exploit specific flaws in operating systems and applications; for Jeff Goldblum to have deciphered an alien computer's code and instantly developed a virus that could take it down was perhaps the greatest work of fiction IT managers will ever witness.
In reality: Viruses and other malware are rarely so focused. When the Conficker worm spread throughout the Internet earlier this year, it did so indiscriminately, not targeting government or alien domains specifically. Although malware can be used to send personal data back to a central computer, as the "spider-bug" did in Transformers (2007), it is unlikely to be able to distinguish a shipping invoice from a military schematic.
But so what? The more capable and precise a virus is, the better it suits a megalomaniac's goals -- just what Hollywood ordered.
Electronic surveillance and identity protection
At the movies: Speaking of data security, if all politics is local, all data is personal -- and public. In fiction and fact, the government has been joined by big business and cybercriminals as threats to privacy.
Personal identity is up for grabs in the world portrayed in The Island (2005), where the wealthy are cloned, but a few clones try to turn the tables and take their identities.
Surveillance is another concern both onscreen and off. In 2008's Dark Knight, Batman taps into every cell phone in Gotham City to track down the Joker, even as techie and sometime mentor Lucius Fox quits in protest of the privacy violation.
The privacy implications of the Web are also up for grabs. Should online activity be monitored for criminal predisposition, as the citizens of the dystopic Minority Report (2002) are?
In reality: In our own world, personal data has become a lucrative target for cyberthieves, and consumers are right to worry about the privacy of their Social Security numbers, financial info and health data.
Early fears that RFID-tagged items such as clothing would be tracked along with their wearers may have been unfounded, but GPS-enabled smartphones and real-time tracking services like Google Latitude have filled the same role. Terrorists have also proven to be adept at using GPS and Google Earth to locate targets.
Meanwhile, businesses and individuals have only just begun to weigh the privacy implications of popular social networking sites such as Facebook. And the genetic screening of Gattaca (1997), Minority Report and The Island also seems plausible, especially given the creation (and potential breaches) of massive healthcare databases and proposals for a national ID card.
Data security is not only a matter of privacy, but also of organizational responsibility and reputation.
At the movies: The government may surreptitiously watch its citizens for internal threats, but it responds to external ones with a show of force. Movies such as 1986's Aliens and 1997's Starship Troopers provided platoon-level views of battles against inhuman enemies, but the issues they raise -- securing communications against apparently low-tech foes, protecting our troops, and questioning the reasons for going to war -- are just as important when fighting other humans.
During the Cold War, many sci-fi films played on themes of nuclear devastation, as in 1954's Godzilla, a parable about the nuclear destruction of Japan, or 1983's WarGames, in which a teenager hacks into a U.S. military computer and nearly triggers World War III. Those fears may have faded, but more-recent movies such as Cloverfield (2008), which uses handicam footage to evoke memories of 9/11 and implies that a rampaging monster is a government creation, have reflected our revived dread of mass destruction.
In reality: Now that the U.S. military is using remotely piloted drones for reconnaissance and supply and is researching autonomous combat robots that make their own decisions in battle, the warnings of Terminator and Ghost in the Shell (1995) seem as timely as ever. Although the body armor, vertical take-off and landing (VTOL) personal vehicles and ruthlessness shown in RoboCop (1987), Starship Troopers and Iron Man have yet to materialize in a police force or on the battlefield, the U.S. government continues its research into creating powered armor and soldiers that won't bleed.
The real-time satellite observation of troops, which was still sci-fi in Clear and Present Danger (1994) and Aliens, has come to fruition. Spy satellites and the tracking of forces on the battlefield have become the basis for the technology behind Google Maps and GPS capabilities, as well as the privacy concerns they raise.
Although it's unlikely that an attacking spaceship could be taken down by a laptop as in Independence Day, cyberwarfare is a fact of life (see conflicts involving China and Russia). Soldiers in the Middle East and astronauts aboard the space shuttle rely on powerful but small computers as links to the outside world, connected via the all-powerful Internet, where data is a target in both movies and reality.
The final frontier
In addition to providing summer entertainment and earning billions for movie studios, speculative fiction can predict -- and inspire -- technological development.
Many films have presented scientific advances such as artificial intelligence, genetic engineering and global surveillance as threats, abused by people to create a dystopia. But others, including Gene Roddenberry's Star Trek, are more hopeful about how humanity will use these and other tools to reshape itself and the world. It's up to us to determine which dreams of the future will come true.