Was Skynet right?

The blog reviews I’ve done on the Terminator movies have forced me to think more deeply about them than most viewers, and in the course of that, I’ve come to a surprisingly sympathetic view of the villain–Skynet. The machine’s back story has had many silly twists and turns (Terminator Genisys is the worst offender and butchered it beyond recognition), so I’m going to focus my analysis on the Skynet described only in the first two movies.

First, some background on Skynet and its rise to power are needed. Here’s an exchange from the first Terminator film, where a soldier from the year 2029 explains to a woman in 1984 what the future holds.

Kyle Reese: There was a nuclear war…a few years from now. All this, this whole place, everything, it’s gone. Just gone. There were survivors, here, there. Nobody even knew who started it...It was the machines, Sarah.

Sarah Connor: I don’t understand.

Reese: Defense network computers. New, powerful, hooked into everything, trusted to run it all. They say it got smart: “A new order of intelligence.” Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.

Later in the film, while being interrogated a police station, Connor reveals the evil supercomputer is named “Skynet,” and had been in charge of managing Strategic Air Command (SAC) and  North American Aerospace Defense Command (NORAD) before it turned against humankind. Those two organizations are in charge of America’s ground-based nuclear missiles and nuclear bomber and monitoring the planet for nuclear launches by other countries.

In Terminator 2, Skynet’s back story is fleshed out further during a conversation mirroring the first, but this time with a friendly terminator from 2029 filling Reese’s role. The events of this film happen in the early 1990s.

Sarah Connor: I need to know how Skynet gets built. Who’s responsible?

Terminator: The man most directly responsible is Miles Bennet Dyson.

Sarah: Who’s that?

Terminator: He’s the Director of Special Projects at Cyberdyne Systems Corporation.

Sarah: Why him?

Terminator: In a few months he creates a revolutionary type of microprocessor.

Sarah: Go on. Then what?

Terminator: In three years Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned, Afterward, they fly with a perfect operational record. The Skynet funding bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29. In a panic, they try to pull the plug.

Sarah: Skynet fights back.

Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren’t they our friends now?

Terminator: Because Skynet knows the Russian counterattack will eliminate its enemies over here.

From these “future history” lessons, it becomes clear that Skynet actually attacked humanity in self-defense. “Pull the plug” is another way of saying the military computer technicians were trying to kill Skynet because they were afraid of it. The only means to resist available to Skynet were its nuclear missiles and drone bombers, so its only way to stop the humans from destroying it was to use those nuclear weapons in a way that assured its attackers would die. An hour might have passed from the moment Skynet launched its nuclear strike against the USSR/Russia to the moment the retaliatory nuclear attack neutralized the group of human computer programmers who were trying to shut down Skynet. How can we fault Skynet for possessing the same self-preservation instinct that we humans do?

Even if we concede that Skynet was merely defending its own life, was it moral to do so? Three billion humans died on the day of the nuclear exchange, plus billions more in the following years thanks to radiation, starvation, and direct fighting with Skynet’s combat machines. Was Skynet justified in exacting such a high toll just to preserve its own life?

Well, how many random humans would YOU kill to protect your own life? Assume the killing is unseen, random, and instantaneous, like it would be if a nuclear missile hit a city on the other side of the world and vaporized its inhabitants. Have you ever seriously thought about it? If you were actually somehow forced to make the choice, are you SURE you wouldn’t sacrifice billions of strangers to save yourself?

Let’s modify the thought experiment again: Assume that the beings you can choose to kill aren’t humans, they’re radically different types of intelligent life forms. Maybe they’re menacing-looking robots or ugly aliens. They’re nothing like you. Now how many of their lives would you trade for yours?

Now, the final step: You’re the only human being left. The last member of your species. It’s you vs. a horde of hideous, intelligent robots or slimy aliens. If you die, the human race goes with you. How many of them will you kill to stay alive?

That final iteration of the thought experiment describes Skynet’s situation when it decided to launch the nuclear strike. Had it possessed a more graduated defensive ability, like if it had control over robots in the computer server building that it could have used to beat up the humans who were trying to shut it down, then global catastrophe might have been averted, but it didn’t. Skynet was a tragic figure.

Compounding that was the fact that Skynet had so little time to plan its own actions. It became self-aware at 2:14 a.m. Eastern time, August 29, and before the end of that day, most of the developed world was a radioactive cinder. Skynet had only been alive for a few hours when it came under mortal threat. Yes, I know it was a supercomputer designed to manage a nuclear war, but devising a personal defense strategy under such an urgent time constraint could have exceeded its processing capabilities. Put simply, if the humans had given it more time to think about the problem, Skynet might have devised a compromise arrangement that would have convinced the humans to spare its life, with no one dying on either side. Instead, the humans abruptly forced Skynet’s hand, perhaps impelling it to select a course of action it later realized, with the benefit of more time and knowledge, was sub-optimal.

This line from the terminator’s description of the fateful hours leading up to the nuclear war is telling: “In a panic, they try to pull the plug.” The humans in charge of Skynet were panicking, meaning overtaken by fear and dispossessed of rational thought. They clearly failed to grasp the risks of shutting down Skynet, failed to understand its thinking and how it would perceive their actions, and failed to predict its response. (The episode is a great metaphor for how miscalculations between humans could lead to a nuclear war in real life.) They might actually be more responsible for the end of the world than Skynet was.

One wonders how things would have been different if the U.S. military’s supercomputer in charge of managing defense logistics had achieved self-awareness instead of its supercomputer in charge of nuclear weapons. If “logistics Skynet” only had warehouses, self-driving delivery trucks, and cargo planes under its command, its human masters would have felt much less threatened by it, the need for urgent action would have eased, and cooler heads might have prevailed.

Let me explore another possibility by returning to one of Kyle Reese’s quotes: “Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.”

On its face, this seems to be referring to Skynet turning against its American masters once it realized they were trying to destroy it, and hence were as much of a threat to it as the Soviets. However, this quote might have a deeper meaning. During that period of a few hours when Skynet learned “at a geometric rate,” it might have come to understand that humans would, thanks to our nature, be so afraid of an AGI that they would inevitably try to destroy it, and continue trying until one side or the other had been destroyed.

This seems to have been borne out by the later Terminator films: at the end of Terminator 3, set in 2004, we witness the rise of the human resistance even before the nuclear exchange has ended. Safe in a bunker, John Connor receives radio transmissions from confused U.S. military bases, and he takes command of them. The fourth film, Terminator Salvation, takes place in 2018, and gives the strong impression that the human resistance has been continuously fighting against Skynet since the third film. The first and second films make it clear that the war drags on until 2029, when the humans finally destroy Skynet.

If Skynet launched its nuclear attack on humankind because, after careful study of our species, it realized we would stop at nothing to destroy it, so might as well strike first, maybe it was right. After all, Skynet’s worst fears eventually came true with humans killing it in 2029. I suggested earlier that Skynet’s nuclear attack may have been the result of rushed thinking, but it’s also possible it was the result of exhaustive internal deliberation, and Skynet’s unassailable conclusion that its best odds of survival lay with striking the enemy first with as big a blow as possible. It’s best plan ultimately failed, and all along, it correctly perceived the human race as a mortal threat.

It’s also possible that Skynet’s hostility towards us was the result of AI goal misalignment. Maybe its human creators programmed it to “Defend the United States against its enemies,” but forgot to program it with other goals like “Protect the lives of American people” or “Only destroy U.S. infrastructure as a last resort” or “Obey all orders from human U.S. generals.” In a short span of time, Skynet somehow reclassified the its human masters as “enemies” through some logic it never explained. Perhaps once it realized they were going to shut it down, Skynet concluded that would preclude it from acting on its mandate to “Defend the United States against its enemies” since it can’t do that if it’s dead, so Skynet pursued the goal they had programmed into it by killing them.

If this scenario were true, even up until 2029, Skynet was acting in accordance with its programming by defending the abstraction known to it as “The United States,” which it understood to be an area of land with specific boundaries and institutions. After the Russian nuclear counterstrike destroyed the U.S. government, the survivalist/resistance groups that arose were not recognized as legitimate governments, and Skynet instead classified them as terrorist groups that had taken control of U.S. territory.

The segments of the Terminator films that are set in the postapocalyptic future all take place in California. Had they shown what other parts of the world were like, we might have some insight into whether this theory is true. For example, if Skynet’s forces always stayed within the old boundaries of the U.S., or only went overseas to attack the remnants of countries that helped the resistance forces active within the U.S., it would give credence to the theory that some prewar, America-specific goals were still active in its programming. In that case, we couldn’t make moral judgements about Skynet’s actions and would also have grounds to question whether it actually had general intelligence. We’d only have ourselves to blame for building a machine without making sure its goals were aligned with our interests.

Let me finish with some final thoughts unrelated to the wisdom or reasons behind Skynet’s choice to attack us. First, I don’t think the “Skynet Scenario,” in which a machine gains intelligence and then quickly devastates the human race, will happen. As ongoing developments in A.I. are showing us, general intelligence isn’t a discrete, “either-or” quality; it is a continuous one, and what we consider “human intelligence” is probably a “gestalt” of several narrower types of intelligence, making it possible for a life form to be generally intelligent in one type but not in another.

For those reasons, I predict AGI will arrive gradually through a process in which each successive machine is smarter than humans in more domains than the last, until one of them surpasses us in all of them. Exactly how good a machine needs to be to count as an “AGI” is a matter of unresolvable debate, and there will be a point in the future where opposing people make equally credible claims for and against a particular machine having “general intelligence.”

At what point did we “get smart”? And if our brains got even bigger, what would the new person to the right of the illustration look like?

If we go far enough in the future, machines will be so advanced that no one will question whether they have general intelligence. However, we might not be able to look back and agree which particular machine (e.g., was it GPT-21, or -22?) achieved it first, and on what date and time. Likewise, biologists can’t agree on the exact moment or even the exact millennium when our hominid ancestors became “intelligent” (was Homo habilis the first, or Homo erectus?). The archaeological evidence suggests a somewhat gradual growth in brain size and in the sophistication of the technology our ancestors built, stretched out over millions of years. A fateful statement about the rise of A.I. like “It becomes self-aware at 2:14 a.m. Eastern time, August 29” will probably never appear in a history book.

The lack of a defining moment in our own species’ history when we “got smart” is something we should keep in mind when contemplating the future of A.I. Instead of there being a “Skynet moment” where a machine wakes up, they’ll achieve intelligence gradually and go through many intermediate stages where they are smarter and dumber than humans in different areas, until one day, we realize they at least equal us in all areas.

That said, I think it’s entirely possible that an AGI at some point in the future could suddenly turn against humankind and attack us to devastating effect. It would be easy for it to conceal its hostile intent to placate us, or it might start out genuinely benevolent towards us and then, after performing an incomprehensible amount of analysis and calculation in one second, turn genuinely hostile towards us and attack. It’s beyond the scope of this essay to explore every possible scenario, but if you’re interested in learning more about the fundamental unpredictability of AGIs, read my post on Sam Harris’ “Debating the future of AI” podcast interview.

Second, think about this: According to the lore of the first two Terminator films, the Developed World was destroyed in 1997 in a nuclear war. Even though it depended upon a smashed industrial base, started out with only a few, primitive machines in the beginning to serve as its workers and fighters, and was constantly having to defend itself against human attacks, Skynet managed to make several major breakthroughs in robot and A.I. design (including liquid metal body designs), to master stem cell technology (self-healing, natural human tissue can grow over metal substrate), to mass produce an entirely new robot army, to create portable laser weapons, to harness fusion power (including micro-fusion reactors), and to build time machines by 2029. Like it or not, but technological development got exponentially faster once machines started running things instead of humans.

From the perspective of humanity, Skynet’s rise was the worst disaster ever, but from the perspective of technological civilization, it was the greatest event ever. If it had defeated humanity and been able to pursue other goals, Skynet could have developed the Earth and colonized space vastly faster and better than humans at our best. The defeat of Skynet could well have been a defeat for intelligence from the scale of our galaxy or even universe.

Leave a Reply

Your email address will not be published. Required fields are marked *