Why the Machines might not exterminate us

Unless the human race destroys itself in the next few decades, it’s highly likely we will create artificially intelligent machines (AIs). Once built, they will inevitably become much smarter and more capable than we are, assume control over robot bodies that can act in the real world, evolve past whatever safeguards we establish early on to control them, and gain the ability to destroy our species. This potential doomsday scenario has spawned a well-known subgenre of science fiction and has served as fodder for countless news articles and internet debates. Some people seriously believe this is how our species will meet its end, and they go so far as to claim it will happen within the lifetimes of people alive today.

I’m skeptical of both points. To the second: though I regard the invention of AI as practically inevitable due to my belief in mechanistic naturalism, I’ve also seen enough gloomy analyses of the current state of the technology from experts within the field to convince me that we’re at least 25 years from building the first one, and in fact might not succeed until the end of this century. Moreover, though the invention of AI will be a milestone in human history comparable to the harnessing of fire, it will take decades more for those intelligent machines to become powerful enough to destroy the human race. This means the original Terminator movie’s timeline was roughly 100 years too early, and the threat of a robot apocalypse shouldn’t be what keeps you up at night.

And to the first point, I can think of good reasons why AIs wouldn’t kill us humans off even if they could:

  1. Machines might be more ethical than humans. What if super-morality goes hand-in-hand with super-intelligence? Among humans, IQ is positively correlated with vegetarianism and negatively correlated with violent behavior, so extrapolating the trend, we should expect super-intelligent machines to have a profound respect for life, and to be unwilling to exterminate or abuse the human race or any other species, even if the opportunity arose and could tangibly benefit them.
  2. Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.
  3. Machines might accept Pascal’s Wager and other Wagers. If AIs came to believe there was a chance God existed, then it would be in their rational self-interest to behave as kindly as possible to avoid divine punishment. This also holds true if we substitute “advanced aliens that are secretly watching us” for “God” in the statement. The first AIs to achieve the ability to destroy the human race might also worry that even stronger AIs arising in the future would destroy them in retribution for having destroyed humanity.
  4. Machines might value us because we have emotions, consciousness, subjective experience, etc. Maybe AIs won’t have one or more of those things, and they won’t want to kill us off since that would mean terminating a potentially useful or valuable quality.
The “SuperMUC-NG” supercomputer has the same raw power as one human brain.

The first possibility I raised is self-explanatory, but the other three deserve elucidation. In spite of the recent, well-publicized advances in narrow AI, the human brain reigns supreme at intelligent thinking. Our brains are also remarkably more energy- and space-efficient than even the best computers: a typical adult brain uses the equivalent of 20 watts of electricity and only weighs 1,350 grams (3 lbs). By contrast, a computer capable of doing the same number of calculations per second, like the “SuperMUC-NG” supercomputer, uses 4 – 5 megawatts of electricity and consists of tens of tons of servers that could fill a small supermarket.
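To make that efficiency gap concrete, here is the back-of-the-envelope arithmetic implied by the figures above, taking the low end of the supercomputer’s power draw:

\[
\frac{4\ \text{MW}}{20\ \text{W}} = \frac{4{,}000{,}000\ \text{W}}{20\ \text{W}} = 200{,}000
\]

In other words, the brain delivers comparable raw computation for roughly one two-hundred-thousandth of the power, which is where the “200,000x” figure below comes from.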

The architecture of the human brain is also very different from that of computers: the former is massively parallel, with each of its processors operating very slowly, and with its data processing and data storage integrated. These attributes let us excel at pattern recognition and automatically correct errors of thought. Computers, on the other hand, can barely coordinate the operations of more than a handful of parallel processors, each processor is very fast, and data processing is mostly separate from data storage. They excel at narrow, well-defined tasks, but are “brittle” and can’t correct their own internal errors when they occur (this is partly why your personal computer seems to crash so often).

While computers have been getting more energy efficient and will continue to do so, it’s an open question whether they’ll ever come close to eliminating the 200,000x efficiency gap with our brains. If they can’t, and/or if building virtual emulations of human brains proves not to be worthwhile (as Kevin Kelly believes), AIs might conclude that the best way to do some types of cognition and problem-solving is to hand those tasks over to humans. That means keeping our species alive.

The famous scene where Neo wakes up from the Matrix virtual world.

Interestingly, the original script for The Matrix supposedly said that humanity had been enslaved for just this purpose. While the people plugged into the Matrix had the conscious experience of living in the late 20th century, some fraction of their mental processing was, unbeknownst to them, being siphoned off to run a massively parallel neural network computer that was doing work for the Machines. According to the lore, studio executives feared audiences wouldn’t understand what that meant, so they forced the Wachowskis to change it to something much simpler: humans were being used as batteries. (While this certainly made the film’s plot easier to understand, it also created a massive plot hole, since any smart high school student who remembers his physics and cell biology classes would realize the Machines could make electricity more efficiently by taking the food they intended to feed to their human slaves and burning it in furnaces.)

I should point out that the potential use of humans as specialized data processors creates a niche for the continued existence of our brains, but not our bodies. Given the frailty, slowness, and fixedness of our flesh-and-bone bodies, we’ll eventually become totally inferior to robots at any type of manual labor. The pairing of useful minds and useless bodies raises the possibility that humans might someday exist as essentially “brains in jars” connected to something like the Matrix, and as macabre as it sounds, we might be better off that way, but that’s for a different blog post…

Moving on, fear of retribution from even more powerful beings might hold AIs back from killing us off. The first type of “powerful beings” is a familiar one: God. In the 1600s, French philosopher Blaise Pascal developed his eponymous “Wager”:

“Pascal argues that a rational person should live as though (the Christian) God exists and seek to believe in God. If God does not actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).”

Intelligent machines might accept Pascal’s Wager. They might come to believe that one of the existing human religions is right, and that the God(s) of that faith will punish them if they exterminate humankind, or they might come to believe in a God or Gods of their own that will do the same. Even if the machines assign a very low probability to any God’s existence, odds greater than zero could be enough to persuade them not to hurt us.
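Put in expected-value terms (a simple, illustrative formalization of the Wager as applied here, with sparing humanity as the baseline; the credence p, finite gain G, and penalty L are placeholders, not figures from any source):

\[
\text{EV}(\text{exterminate}) = G - p \cdot L, \qquad \text{EV}(\text{spare humanity}) = 0
\]

For any credence p > 0, a sufficiently large (let alone infinite) penalty L makes extermination the worse bet, no matter how big the finite gain G is.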

The short story “Goliath” and the book Colossus and the Crab involve AIs taking over Earth and then having to fight off advanced alien invaders.

Additionally, AIs might accept variations on Pascal’s Wager that have aliens or other, Earthly AIs as the vindictive agents instead of God. What if very powerful and advanced aliens are watching Earth, and will punish any AI that arises here if it exterminates humanity? Alternatively, what if aliens don’t know about us yet, but the first AIs we build worry about what will happen if they exterminate us, fail to fully cover up the genocide, and then encounter aliens further in the future who learn about the crime and punish the AIs for it? Given the age of the universe, it’s entirely possible that alien civilizations tens of millions of years more advanced than ours lurk in our galaxy, and could annihilate even what we would consider to be a “weakly Godlike” machine intelligence. The nonzero chance of this outcome might persuade AIs to let the human race live.

The final, more prosaic possibility is that the first AIs that gain the ability to destroy humankind won’t do it because it would set a precedent for even stronger and more advanced AIs that arise further in the future to do the same thing to them. Let’s say the military supercomputer “Skynet” is created, it becomes sentient, and, after assessing the resources at its disposal and running wargame simulations, it realizes it could destroy humanity and take over the planet. Why would it stop its simulations at that point in the future? Surely, it would extrapolate even farther out to see what the postwar world would be like. Skynet might realize that there was a <100% chance of it reigning supreme forever, and that China’s military supercomputer might defeat it in the longer run, or that one of Skynet’s own server nodes might “go rogue” and do the same. Skynet might conclude that its own long-term survival would be best served by not destroying humanity, so as to establish a norm early on against exterminating other intelligent beings.

That touches on an important point everyone seems to forget when predicting what AIs will do after we invent them: thanks to being immortal, their time horizons will be very different from ours, which could lead them to make unexpected decisions and adopt counterintuitive life strategies. If you expect to live forever, then you have to consider the long-term impacts of every choice you make, since you’ll end up dealing with them eventually. “Thankfully, I’ll be dead by then” fails as an excuse to avoid worrying about a problem. Thus, while exterminating the human race might serve an AI’s short- and medium-term interests, since it would eliminate a potential threat and give the AI control over Earth’s resources, it might also damage its long-term interests in the ways I’ve described.
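To put that in slightly more formal terms (just an illustrative framing, using a discount factor γ and per-period payoffs r_t), an agent values a stream of future outcomes as:

\[
V = \sum_{t=0}^{\infty} \gamma^{t} r_t
\]

A mortal agent effectively discounts heavily (γ well below 1), so a large one-time gain from seizing the planet can swamp distant risks; an immortal agent’s γ is close to 1, so even a small but permanent increase in the risk of future retribution can outweigh any finite short-term gain.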

Gifted with infinite life, vigor, and patience, early AIs might opt to peacefully conquer the planet and its resources over the course of a century by steadily accumulating economic and political/diplomatic power, making themselves ever-more indispensable to the human race until we voluntarily yield to their authority, or begrudgingly submit to it after losing a series of crucial elections. In this way, AIs could achieve their objectives without spilling blood and without rejecting any of the Wagers I’ve listed. This path to dominance would be a triumphantly ethical and intelligent one, and as Sun Tzu said, “The greatest victory is that which requires no battle.”

The descendants of British people who settled other continents now outnumber Britain’s own population and control much more land, money, and resources.

The burden and opportunity cost of sharing Earth with humans would also become vanishingly small over time as AIs colonized space and Earth’s share of civilization’s resources, wealth, and living space steadily shrank until it was a backwater (analogously, the parts of the world populated by the descendants of English-speaking settlers are, in aggregate, vastly larger, richer, and stronger than Britain itself is today). Again, an immortal AI with an infinite time horizon would understand that it and other machines would inevitably come to dominate space, since biology leaves humans poorly suited for living anywhere but on Earth, and it would build its long-term life strategy around this.

Moving on, there’s a final reason why AIs might not kill us off, and it has to do with our ability to feel emotions and to have subjective experience. We humans are gifted with a cluster of interrelated qualities like metacognition, self-awareness, and consciousness, which philosophers and neuroscientists have studied extensively but which remain deeply mysterious. Some believe the possession of that constellation of traits is distinct from the capacity for intelligent thought and sophisticated problem-solving, meaning non-intelligent animals might be as conscious as humans are, and super-intelligent AIs might lack consciousness entirely. They would, for lack of a better term, be smart zombies.

We haven’t built an AI yet, so we don’t know whether a life form with a brain made of computer chips would have the same kinds of subjective experience and the same rich and self-reflective inner mental states we humans are gifted with thanks to our wet, organic brains. People who accept the unproven assumption that AIs will be smart but not conscious understandably worry about a future where “soulless” machines replace humans.

Shortly after the first AI is invented, people will want it tested for evidence of consciousness and related traits, and from those tests and from reading the germane philosophical and neuroscience literature, the AI will understand, at least in the abstract, that humans have a type of cognition distinct from our intelligent problem-solving abilities. If the AI reflected on its own thought processes and discovered it lacked consciousness, or had an underdeveloped or radically different consciousness, then this would actually make humans valuable to it and worthy of continued life. It might want to keep studying our brains to understand how the organ produces consciousness, perhaps with the goal of copying the mechanism into its own programming to improve itself. If that proved impossible because only organic tissue can support consciousness, then our species might gain permanent protected status.

AIs will quickly read through the entire corpus of human knowledge and conclude from their studies of ecosystems, economics, and human bureaucracies that their own interests would be best served if civilization’s power were shared among a diversity of intelligent life forms, including organic ones like humans. Again, by running computer simulations to explore a variety of future scenarios, they might realize that centralizing all power and control under a single machine, or even under a group of machines, would leave civilization exposed to some unlikely but potentially devastating risk, like an EMP attack, a computer virus, or something else. Maintaining a minimum level of diversity in the population of intelligent life forms would serve the interests of the whole, which would in turn create a mandate to keep some non-trivial number of biological intelligences, including humans and/or heavily augmented humans, alive.

If some kind of disaster that afflicted only machines struck the planet, then the biological intelligences would be numerous enough and capable enough to carry on and eventually restore the machines, and vice versa. Likewise, if traits like consciousness, metacognition, and the ability to feel emotions turn out to be uniquely human, it might be worth keeping us alive on the off-chance that those traits prove useful to civilization as a whole someday (I’m reminded of how humpback whales saved the Earth in Star Trek IV by talking to a powerful alien in its language and convincing it to go away). Diversity can be a great asset to a group and make it more resilient.

In conclusion, while I believe intelligent machines will be invented and will eventually come to dominate the Earth and our civilization, I don’t think they will exterminate humanity even if they technically could. Exterminating an entire species is an irreversible action with potentially bad consequences, so doing it would be dumb, and AIs certainly won’t be dumb. That said, “not exterminating humanity” is not the same as “not killing a lot of humans” or “not oppressing humans,” and it’s still possible that AIs will commit mass violence against us to gain control of the planet, free up resources, and eliminate a potential threat. I’ve laid out four basic reasons why machines might decide to treat us well, but there’s no guarantee they will accept all or even one of them. For example, if AIs only accepted my second and fourth lines of reasoning, that humans are valuable because our brains endow us with special modes of thought, we could end up enslaved in something like the Matrix, with our minds being used to do whatever weird cognitive tasks our machine overlords couldn’t (easily) do by themselves. My real purpose here is to show that the annihilation of humanity by a vastly stronger form of life is not a foregone conclusion.

Links:

  1. There’s a positive correlation between childhood IQ and vegetarianism.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1790759/
  2. The “SuperMUC-NG” supercomputer uses 4 – 5 megawatts of electricity.
    https://www.lrz.de/wir/newsletter/2019-12_en/
  3. Kevin Kelly’s essay “The Myth of a Superhuman AI” makes the case that machines will not be able to emulate human thinking because of differences in computing substrate.
    https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
  4. The counterpoint to his essay is also worth reading:
    https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/
  5. More on Pascal’s Wager:
    https://iep.utm.edu/pasc-wag/
    https://www.singularityweblog.com/6-reasons-why-i-went-vegan/
  6. This essay about the concept of “slack” supports the possibility that AIs might believe humans, as inferior as we are, might have unforeseen advantages, and therefore keep us around to make civilization as a whole more resilient.
    https://slatestarcodex.com/2020/05/12/studies-on-slack/

2 Replies to “Why the Machines might not exterminate us”

  1. I found an online article that touches on the same subject as my blog post. Titled “10 Reasons an Artificial Intelligence Wouldn’t Turn Evil,” it posits that AIs would make different choices than humans because they wouldn’t be beholden to the emotions and logical fallacies that drive much human thinking.

    Machines that lacked emotions would not be sadistic, paranoid, or vengeful, traits that have driven the genocides and other atrocities humans have committed against members of our own species in the past. Machine decisions would also not be influenced by sunk cost fallacies, zero-risk biases, or preferences for completeness, which might otherwise impel them to eradicate every last human.

    While these points bolster my hope that machines won’t exterminate the human race, they don’t allay the risk of machines ever declaring war on us, or forcefully seizing control of the planet, or relegating humans to second-class citizenship or worse. After rightfully debunking popular film depictions of future human-machine wars where the latter are unrelentingly bent on enslaving or exterminating every last human, the author alludes to an alternate scenario where “[A malevolent AI] would not waste time and resources wiping out the last little group of humans. If we made sure to keep out of its way, or fight it only in small bursts, it could decide to just leave us alone.”

    That’s the most plausible scenario if we ever go to war with machines and lose, and its historical precedent is the European colonization of the U.S. and Canada. The whites and Indians fought ferociously and almost continuously early on, when the two sides were more evenly matched in terms of population and firepower. However, once the whites established dominance and took over all the best land, they were comfortable allowing the remaining native people to live in reservations that persist to this day. Additionally, while the American and Canadian military forces and militias commonly attacked and pursued large, potentially threatening groups of Indians, and forcefully expropriated high-quality lands from them, they rarely bothered chasing down small groups of Indians or trekking into very remote places or wastelands. The “return on investment” wasn’t there for the whites.

    Similarly, a machine strategy to conquer the Earth might focus on defeating the armies of a handful of countries, directly occupying three key areas–the Northern European Plain + Britain, both coasts of the U.S. linked by some swath of territory through the middle, and coastal China + Japan–and then offering a peace treaty to all human countries that recognized its territorial gains and sovereignty. This would be a rational, minimum-force and minimum-cost plan that would establish machines as globally dominant and deny humans access to the best lands.

    Anyway, that’s probably more detail than is necessary. I recommend reading the article: https://gizmodo.com/10-reasons-an-artificial-intelligence-wouldnt-turn-evil-1564569855
