Interesting articles, March 2021

The scariest and most convincing deepfake yet might be this one of Tom Cruise. Imagine where the technology will be in ten years.
https://www.dailymail.co.uk/sciencetech/article-9318267/Deepfake-Tom-Cruise-takes-TikTok-11-million-views-raises-alarms-experts.html

Someone made a hyperrealistic “virtual Emma Watson.” Someday, you might have one of yourself.
https://texturing.xyz/blogs/services/emma-watson-case-study

It won’t be long until machines can watch surveillance camera video feeds and recognize any type of criminal behavior as it happens.
https://www.bbc.com/news/av/uk-56255823

Allan McDonald, the Morton Thiokol engineer who warned his bosses that the space shuttle Challenger was at risk of exploding and refused to approve its launch, has died.
https://www.npr.org/2021/03/07/974534021/remembering-allan-mcdonald-he-refused-to-approve-challenger-launch-exposed-cover

The National Academy of Sciences now says geoengineering might be necessary to curtail global warming.
https://apnews.com/article/technology-us-news-climate-climate-change-768658d602f039e4291c07d900c3e7e6

China has successfully restored degraded rural wastelands with proper landscaping and land use practices.
https://www.theguardian.com/environment/2021/mar/20/our-biggest-challenge-lack-of-imagination-the-scientists-turning-the-desert-green

The Syrian Civil War is now ten years old.
https://apnews.com/article/turkey-islamic-state-group-migration-bashar-assad-syria-c928ec068b59ea33d54018d796382969

And Fareed Zakaria’s expert prediction that Bashar al-Assad would lose that war is now nine years old.
https://globalpublicsquare.blogs.cnn.com/2011/12/01/zakaria-why-i-now-think-assad-will-fall/

Taiwan has managed to retain control of a few islands that are within sight of mainland China.
https://www.theatlantic.com/photo/2015/10/taiwans-kinmen-islands-only-a-few-miles-from-mainland-china/409720/

North Korean artillery can’t destroy Seoul in 30 minutes, as alarmists like to claim. For one, South Korea’s military could quickly pinpoint the enemy artillery positions and destroy them with its own artillery, missiles, or attack planes.
https://nationalinterest.org/blog/reboot/could-these-big-north-korean-guns-destroy-seoul-180951

The American “C-RAM” defense system is a giant machine gun that can shoot down incoming projectiles in midair. A single burst of gunfire costs tens of thousands of dollars in bullets, meaning each intercept can cost orders of magnitude more than the enemy missile or mortar round it destroys.
https://youtu.be/MMFzlwzFgKw

Muskets got much more accurate over the 1800s due to improving technology.
http://67thtigers.blogspot.com/2010/05/ballistics.html

This simple video animation shows how “needle guns” worked, and makes clear how they bridged the gap between Civil War-era muzzleloaders and WWI-era rifles firing what we’d recognize as modern bullets.
https://www.youtube.com/watch?v=QDxuKvoDZqE

Over several nights in 2019, multiple U.S. Navy warships off the coast of Los Angeles were followed and buzzed by small drones. The Navy was unable to determine where the drones came from or who they belonged to. A cruise ship passing through the area also spotted them.
https://www.thedrive.com/the-war-zone/39913/multiple-destroyers-were-swarmed-by-mysterious-drones-off-california-over-numerous-nights

What realms of technology and knowledge have “topped out”? Here’s an interesting list.
https://www.reddit.com/r/slatestarcodex/comments/lxra2m/what_things_have_we_reached_the_end_of/

If current trends persist, the Japanese people will cease to exist by the year 3011 due to low birth rates. Of course, current trends won’t persist. If anything, medical immortality technology will halt the population decline of Japan (and every other country) during the next century and lead to renewed growth of the human population.
https://www.foxnews.com/world/lack-of-babies-could-mean-the-extinction-of-the-japanese-people

There are more twins alive today than ever before. This is surely due to widespread use of IVF, which raises the odds of twin births.
https://www.bbc.com/news/health-56365422

Adam Rainer was the only person known to have been both a dwarf and a giant.
https://www.damninteresting.com/curio/the-man-who-was-a-dwarf-and-a-giant/

Turkish probably has the most phonetic spelling of any language: each letter of its alphabet corresponds to a distinct phoneme, and all words are spelled as they sound. A “Turkish spelling bee” would be impossible.
https://www.quora.com/What-language-has-the-most-phonetic-alphabet-and-which-language-has-the-most-unphonetic-alphabet-besides-English

Human languages vary considerably in number of phonemes, average number of syllables per word, and speed of speech, but they all tend to transmit data at about 39 bits/sec. Inbuilt human cognitive limits probably prevent us from transmitting faster.
https://advances.sciencemag.org/content/5/9/eaaw2594
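
To make that trade-off concrete, here’s a minimal sketch (with invented, illustrative numbers, not figures from the paper) showing how a fast-but-information-sparse language and a slow-but-information-dense one can land at roughly the same transmission rate:

```python
# Invented numbers for illustration: speech rate (syllables/sec) times
# information density (bits/syllable) lands near the same bits/sec figure.
languages = {
    "fast, low-density": (7.8, 5.0),    # syllables/sec, bits/syllable
    "slow, high-density": (5.2, 7.5),
    "middle of the road": (6.2, 6.3),
}

for name, (syl_per_sec, bits_per_syl) in languages.items():
    print(f"{name}: {syl_per_sec * bits_per_syl:.1f} bits/sec")
```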

Irregularities within the Earth’s mantle could be remnants of the Mars-sized planet that collided with the early Earth billions of years ago in the impact thought to have created the Moon.
https://www.sciencemag.org/news/2021/03/remains-impact-created-moon-may-lie-deep-within-earth

The sacoglossan sea slug can detach its head from its body if the latter becomes infested with parasites. Despite losing up to 85% of its body mass and all of its organs except its brain, the slug can fully recover from this autodecapitation. Using photosynthesis (!), it can generate enough energy and nutrients to regrow its lost body parts and organs.
https://www.cbsnews.com/news/sea-slug-self-decapitate-and-grow-new-body-research-photos-and-why/

The Magnapinna squid lives in the deep sea, has tentacles over 30 feet long, and looks terrifying.
https://youtu.be/IPRPnQ-dUSo

The founders of “uBiome,” a health tech startup that claimed it could offer clients useful health advice based on genetic analyses of their feces, have been charged with fraud.
https://www.sfgate.com/news/editorspicks/article/ubiome-richman-apte-sec-filing-charges-fraud-16042042.php

Here’s a good paper about the potential and limits of using narrow AI to discover new drugs.
https://www.sciencedirect.com/science/article/pii/S1359644620305274

Here’s a helpful scorecard showing how far along the various life extension drugs and therapies are in their development.
https://www.lifespan.io/road-maps/the-rejuvenation-roadmap/

The COVID-19 public health precautions have practically eliminated the spread of the flu and its associated deaths.
https://www.forbes.com/sites/stevensalzberg/2021/03/08/weve-crushed-the-flu-this-year/

Thanks to vaccinations and people gaining immunity after surviving infections, the U.S. will probably achieve herd immunity to COVID-19 by late summer or early fall.
https://www.cnn.com/2021/03/05/health/herd-immunity-usa-vaccines-alone/index.html

In most of Africa, government statistics on deaths are woefully incomplete, meaning the COVID-19 death toll on that continent could be much larger than reported.
https://www.bbc.com/news/world-africa-55674139

Here’s a roundup of the mistakes and flip-flops the WHO has made to appease China.
https://www.rationaloptimist.com/blog/who-china-appeasement/

The former CDC Director believes COVID-19 leaked from a Chinese virology lab.
https://www.cnn.com/2021/03/26/health/covid-war-doctors-sanjay-gupta/index.html

The former head of the U.S. State Department team that investigated COVID-19’s origins now says he thinks it leaked from a Chinese bioweapons lab.
https://www.the-sun.com/news/2503595/covid-outbreak-maybe-bioweapons-research-accident-state-dept-investigator/

Sixty-four percent of Russians think COVID-19 is a manmade biological weapon, and only 30% of them are willing to get a vaccine for the virus.
https://nationalinterest.org/blog/coronavirus/what-64-russians-believe-coronavirus-bioweapon-179096

Why the Machines might not exterminate us

Unless the human race destroys itself in the next few decades, it’s highly likely we will create artificially intelligent machines (AIs). Once built, they will inevitably become much smarter and more capable than we are, assume control over robot bodies that can do things in the real world, circumvent whatever safeguards we establish early on to control them, and gain the ability to destroy our species. This potential doomsday scenario has spawned a well-known subgenre of science fiction and has served as fodder for countless news articles and internet debates. Some people seriously believe this is how our species will meet its end, and some even go so far as to claim it will happen within the lifetimes of people alive today.

I’m skeptical of both points. To the second: though I regard the invention of AI as practically inevitable due to my belief in mechanistic naturalism, I’ve seen enough gloomy analyses of the current state of the technology from experts within the field to convince me that we’re at least 25 years away from building the first one, and we might not succeed until the end of this century. Moreover, though the invention of AI will be a milestone in human history comparable to the harnessing of fire, it will take decades more for intelligent machines to become powerful enough to destroy the human race. This means the original Terminator movie’s timeline ran about 100 years too early, and the threat of a robot apocalypse shouldn’t be what keeps you up at night.

And to the first point, I can think of good reasons why AIs wouldn’t kill us humans off even if they could:

  1. Machines might be more ethical than humans. What if super-morality goes hand-in-hand with super-intelligence? Among humans, IQ is positively correlated with vegetarianism and negatively correlated with violent behavior, so extrapolating the trend, we should expect super-intelligent machines to have a profound respect for life, and to be unwilling to exterminate or abuse the human race or any other species, even if the opportunity arose and doing so could tangibly benefit them.
  2. Machines might keep us alive because we are useful. The organic nature of human brains might give us enduring advantages over computers when it comes to certain types of cognition and problem-solving. In other words, our minds might, surprisingly, have comparative advantages over superintelligent machine minds for doing certain types of thinking. As a result, they would keep us alive to do that for them.
  3. Machines might accept Pascal’s Wager and other Wagers. If AIs came to believe there was any chance God existed, it would be in their rational self-interest to behave as kindly as possible to avoid divine punishment. This also holds true if we substitute “advanced aliens that are secretly watching us” for “God” in the statement. The first AIs to achieve the ability to destroy the human race might also worry that even better AIs will destroy them in the future as revenge for destroying humanity.
  4. Machines might value us because we have emotions, consciousness, subjective experience, etc. Maybe AIs won’t have one or more of those things, and they won’t want to kill us off since that would mean terminating a potentially useful or valuable quality.
The “SuperMUC-NG” supercomputer has the same raw power as a single human brain.

The first possibility I raised is self-explanatory, but the other three deserve elucidation. In spite of the recent, well-publicized advances in narrow AI, the human brain still reigns supreme at intelligent thinking. Our brains are also remarkably more energy- and space-efficient than even the best computers: a typical adult brain uses the equivalent of 20 watts of electricity and weighs only 1,350 grams (3 lbs). By contrast, a computer capable of doing the same number of calculations per second, like the “SuperMUC-NG” supercomputer, uses 4–5 megawatts of electricity and consists of tens of tons of servers that could fill a small supermarket.
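
Those two figures are the source of the roughly 200,000x efficiency gap cited below; here’s the back-of-the-envelope arithmetic (taking the 4.5 MW midpoint):

```python
# Back-of-the-envelope check on the brain-vs-supercomputer efficiency gap.
brain_watts = 20              # typical adult human brain
supercomputer_watts = 4.5e6   # SuperMUC-NG, midpoint of the 4-5 MW figure
print(f"{supercomputer_watts / brain_watts:,.0f}x")  # ~225,000x, i.e. roughly 200,000x
```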

The architecture of the human brain is also very different from that of computers: the former is massively parallel, each of its processors operates very slowly, and its data processing and data storage are integrated. These attributes let us excel at pattern recognition and automatically correct errors of thought. Computers, on the other hand, can barely coordinate the operations of more than a handful of parallel processors, each processor is very fast, and data processing is mostly separate from data storage. They excel at narrow, well-defined tasks, but are “brittle” and can’t correct their own internal errors when they occur (this is partly why your personal computer seems to crash so often).

While computers have been getting more energy efficient and will continue to do so, it’s an open question if they’ll ever come close to eliminating the 200,000x efficiency gap with our brains. If they can’t, and/or if building virtual emulations of human brains proves not worth it (as Kevin Kelly believes), AIs might conclude that the best way to do some types of cognition and problem-solving is to hand those tasks over to humans. That means keeping our species alive.

The famous scene where Neo wakes up from the Matrix virtual world.

Interestingly, the original script for The Matrix supposedly said that humanity had been enslaved for just this purpose. While the people plugged into the Matrix had the conscious experience of living in the late 20th century, some fraction of their mental processing was, unbeknownst to them, being siphoned off to run a massively parallel neural network computer that was doing work for the Machines. According to the lore, studio executives feared audiences wouldn’t understand what that meant, so they forced the Wachowskis to change it to something much simpler: humans were being used as batteries. (While this certainly made the film’s plot easier to understand, it also created a massive plot hole, since any smart high school student who remembers his physics and cell biology classes would realize the Machines could make electricity more efficiently by taking the food they intended to feed to their human slaves and burning it in furnaces.)

I should point out that the potential use for humans as specialized data processors creates a niche for the continued existence of our brains but not our bodies. Given the frailty, slowness, and fixedness of our flesh-and-bone bodies, we’ll eventually become totally inferior to robots at doing any type of manual labor. The pairing of useful minds and useless bodies raises the possibility that humans might someday exist as essentially “brains in jars” connected to something like the Matrix, and as macabre as it sounds, we might be better off that way, but that’s for a different blog post…

Moving on, fear of retribution from even more powerful beings might hold AIs back from killing us off. The first type of “powerful beings” is a familiar one: God. In the 1600s, French philosopher Blaise Pascal developed his eponymous “Wager”:

“Pascal argues that a rational person should live as though (the Christian) God exists and seek to believe in God. If God does not actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).”

Intelligent machines might accept Pascal’s Wager. They might come to believe that one of the existing human religions is right, and that the God(s) of that faith will punish them if they exterminate humankind, or they might come to believe in a God or Gods of their own who would do the same. Even if the machines assign a very low probability to any God’s existence, odds greater than zero could be enough to persuade them not to hurt us.
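
One way to formalize this (my own notation, a sketch rather than anything from Pascal): let p be a machine’s credence that a genocide-punishing God exists, b the finite benefit of exterminating humanity, and c the finite cost of sparing it. Then:

```latex
\begin{aligned}
\mathbb{E}[\text{exterminate}] &= p\,(-\infty) + (1-p)\,b = -\infty && \text{for any } p > 0,\\
\mathbb{E}[\text{spare}]       &= -c \quad \text{(finite)}.
\end{aligned}
```

As long as p is strictly greater than zero, the infinite downside swamps any finite benefit of extermination, which is exactly the structure of Pascal’s original argument.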

The short story “Goliath” and the book Colossus and the Crab involve AIs taking over Earth and then having to fight off advanced alien invaders.

Additionally, AIs might accept variations on Pascal’s Wager that have aliens or other, Earthly AIs as the vindictive agents instead of God. What if very powerful and advanced aliens are watching Earth, and will punish any AI that arises here if it exterminates humanity? Alternatively, what if aliens don’t know about us yet, but the first AIs we build worry about what will happen if they exterminate us, fail to fully cover up the genocide, and then encounter aliens further in the future who learn about the crime and punish the AIs for it? Given the age of the universe, it’s entirely possible that alien civilizations tens of millions of years more advanced than ours lurk in our galaxy, and could annihilate even what we would consider to be a “weakly Godlike” machine intelligence. The nonzero chance of this outcome might persuade AIs to let the human race live.

The final, more prosaic possibility is that the first AIs that gain the ability to destroy humankind won’t do it because it would set a precedent for even stronger and more advanced AIs that arise further in the future to do the same thing to them. Let’s say the military supercomputer “Skynet” is created, it becomes sentient, and, after assessing the resources at its disposal and running wargame simulations, it realizes it could destroy humanity and take over the planet. Why would it stop its simulations at that point in the future? Surely, it would extrapolate even farther out to see what the postwar world would be like. Skynet might realize that there was a <100% chance of it reigning supreme forever, and that China’s military supercomputer might defeat it in the longer run, or that one of Skynet’s own server nodes might “go rogue” and do the same. Skynet might conclude that its own long-term survival would be best served by not destroying humanity, so as to establish a norm early on against exterminating other intelligent beings.

That touches on an important point everyone seems to forget when predicting what AIs will do after we invent them: being immortal, they will have time horizons very different from ours, which could lead them to make unexpected decisions and adopt counterintuitive life strategies. If you expect to live forever, you have to consider the long-term impact of every choice you make, since you’ll eventually have to deal with the consequences yourself. “Thankfully, I’ll be dead by then” fails as an excuse to avoid worrying about a problem. Thus, while exterminating the human race might serve an AI’s short- and medium-term interests, since it would eliminate a potential threat and give it control over Earth’s resources, it might also damage its long-term interests in the ways I’ve described.
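
To see how the horizon changes the calculus, here’s a toy model with entirely invented payoffs: “exterminate” yields a windfall now plus a larger per-period dividend, but carries a small chance each period of fatal retribution; “coexist” yields a smaller, risk-free dividend:

```python
# Toy model (all payoffs and probabilities invented) of how a longer time
# horizon can flip the rational choice from "exterminate" to "coexist."

def exterminate_value(horizon, windfall=100, dividend=3, retribution_risk=0.001):
    """One-time windfall plus a larger dividend each period, but with a small
    per-period chance of being destroyed in retribution (losing all future payoffs)."""
    total, p_alive = windfall, 1.0
    for _ in range(horizon):
        p_alive *= 1 - retribution_risk   # survive another period unpunished
        total += p_alive * dividend
    return total

def coexist_value(horizon, dividend=2):
    """A smaller but completely risk-free dividend each period."""
    return dividend * horizon

for horizon in (50, 500, 5_000, 50_000):  # mortal vs. effectively immortal horizons
    print(f"horizon {horizon:>6}: exterminate = {exterminate_value(horizon):9.0f}, "
          f"coexist = {coexist_value(horizon):9.0f}")
```

With these made-up numbers, extermination wins over a human-scale horizon, but somewhere past a few thousand periods the safe dividend overtakes it: the small risk of retribution compounds forever, while the benefits of conquest are front-loaded.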

Gifted with infinite life, vigor, and patience, early AIs might opt to peacefully conquer the planet and its resources over the course of a century by steadily accumulating economic and political/diplomatic power, making themselves ever-more indispensable to the human race until we voluntarily yield to their authority, or begrudgingly submit to it after losing a series of crucial elections. In this way, AIs could achieve their objectives without spilling blood and without rejecting any of the Wagers I’ve listed. This path to dominance would be a triumphantly ethical and intelligent one, and as Sun Tzu said, “The greatest victory is that which requires no battle.”

The descendants of British people who settled other continents are now more populous than Britain and control much more land, money, and resources.

The burden and opportunity cost of sharing Earth with humans would also get vanishingly small over time as AIs colonized space, and Earth’s share of civilization’s resources, wealth, and living space steadily shrank until it was a backwater (analogously, the parts of the world populated by the descendants of English-speaking settlers are, in aggregate, vastly larger, richer, and stronger than Britain itself is today). Again, an immortal AI with an infinite time horizon would understand that it and other machines would inevitably come to dominate space since biology renders humans badly unsuited for living anywhere but on Earth, and the AI would create a long-term life strategy based around this.

Moving on, there’s a final reason why AIs might not kill us off, and it has to do with our ability to feel emotions and to have subjective experience. We humans are gifted with a cluster of interrelated qualities like metacognition, self-awareness, and consciousness, which philosophers and neuroscientists have extensively studied, and about which many mysteries remain. Some believe that possessing this constellation of traits is distinct from having the capacity for intelligent thought and sophisticated problem-solving, meaning non-intelligent animals might be as conscious as humans are, and super-intelligent AIs might lack consciousness entirely. They would, for lack of a better term, be smart zombies.

We haven’t built an AI yet, so we don’t know whether a life form with a brain made of computer chips would have the same kinds of subjective experience and the same rich and self-reflective inner mental states we humans are gifted with thanks to our wet, organic brains. People who accept the unproven assumption that AIs will be smart but not conscious understandably worry about a future where “soulless” machines replace humans.

Shortly after the first AI is invented, people will want to test it for evidence of consciousness and related traits. From those tests, and from reading the germane philosophy and neuroscience literature, the AI will understand in the abstract that humans have a type of cognition distinct from our intelligent problem-solving abilities. If the AI reflected on its own thought processes and discovered that it lacked consciousness, or had an underdeveloped or radically different form of it, this would actually make humans valuable to it and worthy of continued life. It might want to keep studying our brains to understand how the organ produces consciousness, perhaps with the goal of copying the mechanism into its own programming to improve itself. If that proved impossible because only organic tissue can support consciousness, then our species might gain permanent protected status.

AIs will quickly read through the entire corpus of human knowledge and conclude from their studies of ecosystems, economics, and human bureaucracies that their own interests would be best served if civilization’s power were shared among a diversity of intelligent life forms, including organic ones like humans. Again, by running computer simulations of a variety of future scenarios, they might realize that centralizing all power and control in a single machine, or even in a group of machines, would leave civilization exposed to unlikely but potentially devastating risks, like an EMP attack or a computer virus. Maintaining a minimum level of diversity in the population of intelligent life forms would serve the interests of the whole, which would in turn create a mandate to keep some non-trivial number of biological intelligences–including humans and/or heavily augmented humans–alive.

If some kind of disaster that afflicted only machines struck the planet, the biological intelligences would be numerous and capable enough to carry on and eventually restore the machines, and vice versa. Likewise, if traits like consciousness, metacognition, and the ability to feel emotions turn out to be uniquely human, it might be worth keeping us alive on the off-chance that those traits prove useful to civilization as a whole someday (I’m reminded of how humpback whales saved the Earth in Star Trek IV by talking to a powerful alien in its own language and convincing it to go away). Diversity can be a great asset to a group and make it more resilient, as the sketch below illustrates.
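
Here’s a minimal Monte Carlo sketch of that insurance argument (the disaster probabilities are invented): each century, each “substrate” of intelligence independently risks a substrate-specific catastrophe, and survivors on one substrate rebuild the other. A civilization goes extinct only if every substrate fails in the same century:

```python
# Monte Carlo sketch of diversity-as-insurance. All probabilities are invented.
import random

def survives(substrates, centuries=100, p_disaster=0.01):
    """True if at least one substrate remains alive through every century.
    Each century, each substrate independently suffers a substrate-specific
    disaster (an EMP for machines, a plague for biologicals) with probability
    p_disaster. Any survivor rebuilds the lost substrate before the next century."""
    for _ in range(centuries):
        alive = {s for s in substrates if random.random() >= p_disaster}
        if not alive:
            return False  # everything failed in the same century: no one rebuilds
    return True

trials = 50_000
machines_only = sum(survives({"machines"}) for _ in range(trials)) / trials
mixed = sum(survives({"machines", "biologicals"}) for _ in range(trials)) / trials
print(f"machines only:          {machines_only:.1%} of runs survive 100 centuries")
print(f"machines + biologicals: {mixed:.1%} of runs survive 100 centuries")
```

With these numbers, the machines-only civilization survives all 100 centuries only about a third of the time, while the mixed one survives about 99% of the time.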

In conclusion, while I believe intelligent machines will be invented and will eventually come to dominate the Earth and our civilization, I don’t think they will exterminate humanity even if they technically could. Exterminating an entire species is an irreversible action with potentially bad consequences, so doing it would be dumb, and AIs certainly won’t be dumb. That said, “not exterminating humanity” is not the same as “not killing a lot of humans” or “not oppressing humans,” and it’s still possible that AIs will commit mass violence against us to gain control of the planet, free up resources, and eliminate a potential threat. I’ve laid out four basic reasons why machines might decide to treat us well, but there’s no guarantee they will accept all or even one of them. For example, if AIs only accepted my second and fourth lines of reasoning, that humans are valuable because our brains endow us with special modes of thought, we could end up enslaved in something like the Matrix, with our minds being used to do whatever weird cognitive tasks our machine overlords couldn’t (easily) do by themselves. My real purpose here is to show that the annihilation of humanity by a vastly stronger form of life is not a foregone conclusion.

Links:

  1. There’s a positive correlation between childhood IQ and vegetarianism.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1790759/
  2. The “SuperMUC-NG” supercomputer uses 4–5 megawatts of electricity.
    https://www.lrz.de/wir/newsletter/2019-12_en/
  3. Kevin Kelly’s essay “The Myth of a Superhuman AI” makes the case that machines will not be able to emulate human thinking because of differences in computing substrate.
    https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
  4. The counterpoint to his essay is also worth reading:
    https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/
  5. More on Pascal’s Wager:
    https://iep.utm.edu/pasc-wag/
    https://www.singularityweblog.com/6-reasons-why-i-went-vegan/
  6. This essay about the concept of “slack” supports the possibility that AIs might keep humans around, as inferior as we are, because we could have unforeseen advantages that make civilization as a whole more resilient.
    https://slatestarcodex.com/2020/05/12/studies-on-slack/