Interesting articles, November 2023

More information has emerged about Hamas’ planning for the October 7 attack and its ultimate goals. They’re smarter than we want to admit.
https://www.businessinsider.com/hamas-second-phase-regional-war-october-7-terrorist-attacks-israel-2023-11

The high amount of collateral damage inflicted by Israeli strikes on Gaza (Gazans have now suffered ten times as many deaths as Israelis did on October 7) is exacting a diplomatic toll on Israel.

I agree with Peter Zeihan that the current Israeli-Palestinian conflict is highly unlikely to become a regional conflict. It’s not worth it for any significant country to join in.
https://youtu.be/ymOTXoLlOQ0?si=rBDDceGwSeljYfEz

Fortunately for Israel, Hezbollah militants in Lebanon have decided to stay out of the war, easing early fears that the conflict could widen. Average Palestinians in Lebanon are disappointed.
https://www.aljazeera.com/features/2023/11/12/palestinians-in-lebanon-disappointed-that-hezbollah-wont-escalate

Only the Iranian-backed Houthi rebel group in Yemen has joined the war and directly attacked Israel.
https://www.aljazeera.com/news/2023/11/19/yemens-houthi-rebels-seize-cargo-ship-in-red-sea-israel-blames-iran
https://www.aljazeera.com/news/2023/11/14/yemens-houthis-say-they-fired-ballistic-missiles-towards-israel

Israel and Hamas have agreed to a multi-day ceasefire and are exchanging prisoners. It’s unclear how long the arrangement will endure.
https://apnews.com/article/israel-hamas-war-news-11-30-2023-ea1a8fad4e2f4a394e5427a7c5815d38

Ukraine’s top general, Commander-in-chief Valery Zaluzhnyy, has publicly said the war with Russia is at a “stalemate,” and that only a massive increase in Western-supplied military aid can give Ukraine a decisive advantage.
https://www.politico.com/news/2023/11/02/top-ukrainian-generals-gloomy-view-of-russia-war-fuels-military-aid-debate-00125052

Ukraine’s manpower shortage is evidenced by the fact that its average soldier is 43 years old.
https://www.businessinsider.com/average-age-ukrainian-soldier-43-amid-personnel-problems-2023-11

In a remarkable video showing the incongruities of the Ukraine War, Ukrainian troops use a century-old machine gun to try shooting down a Russian drone.
https://www.businessinsider.com/video-ukraine-shooting-with-ww1-era-machine-guns-pickup-trucks-2023-11

In May, Ukraine used U.S.-supplied Patriot missiles to shoot down several enemy military aircraft that were flying within Russia’s borders.
https://www.thedrive.com/the-war-zone/aircraft-downed-inside-russia-by-patriot-system-ukrainian-air-force

The Russian military’s huge losses in Ukraine make this 2020 article all the funnier: “Before Donald Trump, Russia Needed 60 Hours To Beat NATO—Now Moscow Could Win Much Faster”
https://www.forbes.com/sites/davidaxe/2020/06/07/before-donald-trump-russia-needed-60-hours-to-beat-nato-now-moscow-could-win-much-faster/

It has been ten years since the fateful political events in Ukraine set the country on course for war with Russia. From Russia’s perspective, a pro-Western coup illegally overthrew Ukraine’s pro-Russian president; the Russian minority population in eastern Ukraine, understandably alarmed and threatened by that, responded with an illegal political action of its own by seceding from Ukraine and joining Russia; and Russian troops then invaded to protect them (in a move no different from U.S. invasions of countries to protect minority groups) and to signal to everyone that pulling Ukraine out of Russia’s hegemony would be very costly.
https://apnews.com/article/ukraine-uprising-anniversary-russia-war-maidan-2f73f31a5aec45bd7dbcddae8f72edac

Putin withdrew his country from the Treaty on Conventional Armed Forces in Europe (CFE). NATO claims Russia has actually been noncompliant with it since 2007.  
https://www.aljazeera.com/news/2023/11/7/russia-pulls-out-of-treaty-on-conventional-armed-forces-in-europe

Your tax dollars at work: A jet pack with a handgun mounted to it that points at what the person is looking at.
https://www.thedrive.com/the-war-zone/jetpack-features-glock-autopistol-aimed-by-moving-your-head

America’s new stealth bomber, the B-21, made its first public flight.
https://www.thedrive.com/the-war-zone/b-21-raiders-first-flight-what-we-learned

The old F-15 is still going strong thanks to modern upgrades.
https://www.thedrive.com/the-war-zone/f-15qa-flies-demo-unlike-any-weve-seen-from-an-eagle-before

It costs millions of dollars to train a human pilot to fly a warplane. In other words, the pilots are expensive assets, which is why planes have features meant to protect their lives like ejection seats and cockpit armor. Once computers can fly planes as well as humans, the expensive safety features will be deleted, allowing air forces to buy more aircraft for the same price as before. The aircraft themselves will be slightly smaller and flimsier.
https://www.rand.org/pubs/research_reports/RR2415.html

The Office of Naval Research is experimenting with an advanced, unmanned submarine that can release a drone that can swim underwater AND fly in the air.
https://www.thedrive.com/the-war-zone/drones-that-swim-and-fly-to-be-launched-recovered-by-uncrewed-submarine

During the 2003 Iraq Invasion, Coalition forces found several WWII-era tanks of different origins.
https://youtu.be/XanoreTMPco?si=1KRjM7SEJWnULFIL

Before there was Kevlar, there was an inferior material called “ballistic nylon,” which is similar to what car seatbelts are still made of. The bulky WWII and Vietnam-era flak jackets were made of several layers of it, at best making them proof against 9mm handgun rounds and small shrapnel. Some soft, heavy-duty backpacks and suitcases are still made of ballistic nylon.
https://en.wikipedia.org/wiki/Flak_jacket
https://youtu.be/cUABtyovgFE?si=VrH6nGpZ4zW0bvPJ&t=372

Thin, flexible, Kevlar body armor can be worn under a shirt and offers better protection than the thick ballistic nylon flak jacket vests.
https://www.israel-catalog.com/body-armor/military-surplus/bullet-proof-vest-ultralight-concealed-level-iia

Under American “NIJ” body armor standards, the old flak jackets would offer “Type I” protection, whereas thin, modern Kevlar armor would be “Type IIa.”
https://en.wikipedia.org/wiki/List_of_body_armor_performance_standards

There was high drama at OpenAI as the Board of Directors voted to fire the company’s CEO and public face, Sam Altman, for unexplained reasons. The decision was reversed within days after 95% of the company’s workforce threatened to quit unless he was reinstated, including one of the board members who voted to fire him.
https://www.bbc.com/news/business-67494165

There are rumors that a secret breakthrough in AI precipitated Altman’s firing. An OpenAI internal project called “Q*” apparently led to major improvements in their AI’s ability to solve math problems, which even the best, publicly available LLMs are notoriously bad at. Some claim the Board felt Altman’s management of Q* had been too reckless.
https://www.technologyreview.com/2023/11/27/1083886/unpacking-the-hype-around-openais-rumored-new-q-model/

Google’s “Bard” LLM now has the ability to watch videos and to summarize their contents in text, and to answer questions about what happened in them.
https://www.theverge.com/2023/11/22/23972636/bard-youtube-extension-update-search-video-content

In a study, humans thought ChatGPT’s answers to life questions showed more empathy than the answers written by professional human columnists.
‘We selected 10 newspaper advice columns: Ask a Manager, Ask Amy, Ask E. Jean, Ask Ellie, Dear Abby, Dear Annie, Dear Prudence, Miss Manners, Social Q’s, and The Ethicist. These columns were chosen because they were well-known and fielded a wide range of questions that we could access. For each column, we selected at random five questions. ‘
https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1281255/

ChatGPT can, thanks to its new speech ability, hold conversations with people for hours, echoing the core tech predictions of the 2013 movie Her.
https://arstechnica.com/information-technology/2023/10/people-are-speaking-with-chatgpt-for-hours-bringing-2013s-her-closer-to-reality/

Bill Gates predicts everyone will have personal assistant AIs in five years.
https://www.gatesnotes.com/AI-agents

Bill Gates also predicts that automation will shrink the human work week to four or even three days, and that people will derive less personal meaning from their jobs.
https://www.foxbusiness.com/technology/bill-gates-suggests-artificial-intelligence-could-potentially-bring-three-day-work-week

The new narrow AIs are already destroying human jobs.

The UK’s Department for Education has produced a list of the occupations most and least at risk of automation. Ironically, blue-collar jobs are the safest.
https://assets.publishing.service.gov.uk/media/656856b8cc1ec500138eef49/Gov.UK_Impact_of_AI_on_UK_Jobs_and_Training.pdf

DeepMind has created working definitions for different levels of AI, including ones that don’t exist yet.
https://arxiv.org/pdf/2311.02462.pdf

DeepMind has created a new AI model called “GraphCast” that can predict the weather more accurately than any previous forecasting system.
https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/

This futurist speculates about the potential uses of a highly accurate computer simulation of the Earth and all its inhabitants, infrastructure, and technology.
“With a refined enough digital replica of our world, the Hypercycle could become a testbed for radical new solutions to humanity’s greatest problems. We could simulate the effects of proposed policies on climate change, disease outbreaks, income inequality, food production, or infrastructure resilience before deploying them in reality. Problems that seem persistently intractable today may yield to this computational brute-force approach.”
https://futuristspeaker.com/artificial-intelligence/the-rise-of-hypercycle-in-the-age-of-terabyters/

Ben Goertzel predicts that AGI will be created in three to eight years.
https://decrypt.co/204571/artificial-intelligence-singularity-ai-ben-goertzel-singularitynet

Of course, Goertzel has been wrong before.

  • 2017: “I’ll be pretty surprised if we don’t have toddler-level AGI in the range 2023-25, actually.”
  • 2008: “My own (Ben Goertzel’s) personal intuition is that a human-toddler-level AGI could be created based on OpenCogPrime within as little as 3-5 years, and almost certainly within 7-10 years.”

At a New Jersey high school, a student used a computer to superimpose the faces of several female classmates onto internet photos of nude women, and sent them to friends at the high school. This problem of deepfake pornography will only get worse with time.
https://www.foxnews.com/media/new-jersey-parent-pans-schools-handling-ai-generated-porn-images-featuring-daughters-face

‘Meet Aitana: Sexy Spanish model makes $11k a month thanks to her racy photos — but she isn’t real’
https://nypost.com/2023/11/25/lifestyle/meet-aitana-sexy-spanish-model-makes-11k-a-month-thanks-to-her-racy-photos-but-she-isnt-real/

“Etak” was a car navigation system that made its debut in 1985. Map data were stored on cassette tapes. ‘The tapes could not hold much information, so for the Los Angeles area, for example, three to four tapes were required. When an edge of the map was reached, the driver needed to change cassette tapes to continue benefitting from the accuracy of map-matching.’
https://en.wikipedia.org/wiki/Etak

WeWork, the fake “tech company” that was once valued at $47 billion, has filed for bankruptcy.
https://www.cnbc.com/2023/11/07/wework-files-for-bankruptcy.html

Nature has retracted the recent paper that claimed to have found a room-temperature, room-pressure superconductor.
https://www.nature.com/articles/d41586-023-03398-4

You would never guess that pre-digital computer jukeboxes contained so much incredible technology.
https://youtu.be/o1qRzKuskK0?si=WobdRRlxGnxIGdEM

Microphones and speakers are improving thanks to ultrasonic technology.
https://spectrum.ieee.org/mems-speakers-xmems

A prediction I made in 2019 just came true, thanks to a Black Friday sale: “I think the current rate of price-performance improvement for thumb drives will continue until a 1TB thumb drive costs only $20. They will probably be that cheap by the end of 2022, but because I’m cautious, I predict the milestone will be reached by the end of 2023.”
https://www.militantfuturist.com/one-of-my-predictions-failed/
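The milestone arithmetic is easy to sanity-check. Here is a minimal sketch; the $20-per-terabyte figure comes from the prediction, while the $100 starting price for a 1TB drive in 2019 is an illustrative assumption, not a figure from the source:

```python
# Sanity check of the thumb-drive price milestone. The 2019 starting
# price below is an ASSUMED illustrative figure, not from the article.
milestone_price = 20.0      # USD for 1 TB, reached on Black Friday 2023
assumed_2019_price = 100.0  # USD for 1 TB in 2019 (hypothetical)
years = 4                   # 2019 -> 2023

per_gb = milestone_price / 1000  # dollars per gigabyte at the milestone
annual_decline = 1 - (milestone_price / assumed_2019_price) ** (1 / years)

print(f"${per_gb:.3f}/GB at the milestone")                # $0.020/GB
print(f"~{annual_decline:.0%} average annual price drop")  # ~33%
```

Under that assumed starting price, hitting the milestone on schedule implies prices falling by roughly a third every year, which is in line with the steady decline the 2019 prediction extrapolated.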

Between 2012 and 2022, the cost of desalinating water sharply dropped. The most efficient desalination plant today can produce 6.4 gallons of drinkable water for one penny.
https://galepooley.substack.com/p/desalinating-water-is-becoming-absurdly
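For context, the article’s figure converts into more familiar water-pricing units. A quick sketch (the 6.4-gallons-per-penny figure is from the source; the gallons-per-cubic-meter factor is a standard conversion):

```python
# Convert the article's figure (6.4 gallons of drinkable water per
# penny) into common water-pricing units.
gallons_per_penny = 6.4
gal_per_m3 = 264.172  # US gallons in one cubic meter (standard conversion)

cost_per_1000_gal = 1000 / gallons_per_penny / 100  # dollars per 1,000 gal
cost_per_m3 = gal_per_m3 / gallons_per_penny / 100  # dollars per cubic meter

print(f"${cost_per_1000_gal:.2f} per 1,000 gallons")  # $1.56
print(f"${cost_per_m3:.2f} per cubic meter")          # $0.41
```

At roughly $1.56 per thousand gallons, the most efficient desalinated water is approaching the retail price many U.S. municipalities charge for ordinary tap water.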

Global warming has changed the boundaries of American crop growing zones.
https://www.npr.org/2023/11/17/1213600629/-it-feels-like-im-not-crazy-gardeners-arent-surprised-as-usda-updates-key-map

For years, the New Jersey government has been secretly storing blood samples drawn from newborns for genetic-disorder screening, selling genetic data derived from them, and giving that data to the police for genetic fingerprinting.
https://reason.com/2023/11/08/new-jersey-secretly-stores-your-newborns-blood-for-decades/

‘Frozen Dead Guy Days (started 2002) is an annual celebration, held in the town of Nederland, Colorado until 2023 and in Estes Park, Colorado in 2023, to loosely celebrate the cryopreservation of Bredo Morstoel.’
https://en.wikipedia.org/wiki/Frozen_Dead_Guy_Days

Medical researchers created a machine that can keep pig brains alive for five hours after their heads are severed.
https://futurism.com/neoscope/device-keep-brain-alive

Some surgeons are using cardiopulmonary bypass machines to restart blood circulation in people who have just been declared legally dead, so their hearts can be removed for organ donation without suffering damage.
https://www.deccanherald.com/opinion/when-does-life-stop-a-new-way-of-harvesting-organs-divides-doctors-3-2782014

Here’s a great roundup of research projects to reverse human aging.
https://amaranth.foundation/bottlenecks-of-aging

A new study confirms that the weight loss drug “Wegovy” reduces the risk of heart attack and stroke by 20%.
https://www.cnn.com/2023/11/11/health/wegovy-cardiovascular-events/index.html

The FDA has approved another weight loss drug called “Zepbound.”
https://investor.lilly.com/news-releases/news-release-details/fda-approves-lillys-zepboundtm-tirzepatide-chronic-weight

Putting aside politics, the COVID-19 vaccine DID make some people very sick with side effects, in some cases causing permanent injury. In the U.S., getting compensation for the resulting medical bills and disability is exceedingly hard.
https://reason.com/2023/11/21/lawsuit-covid-vaccine-injury-claims-diverted-to-unconstitutional-kangaroo-court/

Computers have gotten dramatically better at predicting the properties of molecules based on their chemical structures, and vice versa. Further improvements are possible, opening the door to a new era in drug development.
https://www.cell.com/cell-systems/fulltext/S2405-4712(23)00298-3

“Tyrian purple,” a long-forgotten pigment that was the mark of wealth in the Roman Empire, has been rediscovered. Eventually, we will figure out how the pigment was made, as well as unravel any other mysteries about lost chemical compounds (e.g. – Greek Fire) thanks to quantum computers simulating every possible combination of elements and their resulting properties.
https://www.bbc.com/future/article/20231122-tyrian-purple-the-lost-ancient-pigment-that-was-more-valuable-than-gold

Computers can help radiologists spot breast cancer in mammograms.
https://www.nature.com/articles/s41591-023-02625-9

More on that:
https://www.microsoft.com/en-us/research/blog/gpt-4s-potential-in-shaping-the-future-of-radiology/

In 2015, a remarkable, daytime sighting of a group of UFOs over Osaka, Japan was captured on video. I only found out about this now.
https://www.mirror.co.uk/news/weird-news/ufo-sighting-video-captures-10-6146943

SpaceX’s “Starship” rocket had its second launch. Though it malfunctioned and exploded, it had clearly overcome many of the problems revealed by the first launch in April. That mission lasted four minutes, while this one lasted eight.
https://www.bbc.com/news/science-environment-67462116

Here are some impressive photos of the Starship in flight.
https://www.thedrive.com/the-war-zone/starships-33-engines-created-the-mother-of-all-shock-diamonds

A Dyson Swarm could double as an incredibly powerful weapon. We could defend our Solar System from aliens and fry planets on the other side of the galaxy.
https://youtu.be/tybKnGZRwcU?si=kFlIuiSfpVOrjew3

In France, a tiny meteorite hit and totaled a car. “Either it’s so small that we can’t find it, or the impact was so strong that the object disintegrated and turned to dust.”
https://www.autoblog.com/2023/11/22/it-appears-this-renault-clio-campus-was-struck-by-a-meteorite/

Was Skynet right?

The blog reviews I’ve done on the Terminator movies have forced me to think more deeply about them than most viewers, and in the course of that, I’ve come to a surprisingly sympathetic view of the villain–Skynet. The machine’s back story has had many silly twists and turns (Terminator Genisys is the worst offender and butchered it beyond recognition), so I’m going to focus my analysis on the Skynet described only in the first two movies.

First, some background on Skynet and its rise to power is needed. Here’s an exchange from the first Terminator film, where a soldier from the year 2029 explains to a woman in 1984 what the future holds.

Kyle Reese: There was a nuclear war…a few years from now. All this, this whole place, everything, it’s gone. Just gone. There were survivors, here, there. Nobody even knew who started it...It was the machines, Sarah.

Sarah Connor: I don’t understand.

Reese: Defense network computers. New, powerful, hooked into everything, trusted to run it all. They say it got smart: “A new order of intelligence.” Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.

Later in the film, while being interrogated at a police station, Connor reveals the evil supercomputer is named “Skynet,” and had been in charge of managing Strategic Air Command (SAC) and North American Aerospace Defense Command (NORAD) before it turned against humankind. Those two organizations are in charge of America’s ground-based nuclear missiles and nuclear bombers, and of monitoring the planet for nuclear launches by other countries.

In Terminator 2, Skynet’s back story is fleshed out further during a conversation mirroring the first, but this time with a friendly terminator from 2029 filling Reese’s role. The events of this film happen in the early 1990s.

Sarah Connor: I need to know how Skynet gets built. Who’s responsible?

Terminator: The man most directly responsible is Miles Bennet Dyson.

Sarah: Who’s that?

Terminator: He’s the Director of Special Projects at Cyberdyne Systems Corporation.

Sarah: Why him?

Terminator: In a few months he creates a revolutionary type of microprocessor.

Sarah: Go on. Then what?

Terminator: In three years Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The Skynet funding bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29. In a panic, they try to pull the plug.

Sarah: Skynet fights back.

Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren’t they our friends now?

Terminator: Because Skynet knows the Russian counterattack will eliminate its enemies over here.

From these “future history” lessons, it becomes clear that Skynet actually attacked humanity in self-defense. “Pull the plug” is another way of saying the military computer technicians were trying to kill Skynet because they were afraid of it. The only means of resistance available to Skynet were its nuclear missiles and drone bombers, so its only way of stopping the humans from destroying it was to use those nuclear weapons in a way that ensured its attackers would die. An hour might have passed from the moment Skynet launched its nuclear strike against the USSR/Russia to the moment the retaliatory nuclear attack neutralized the group of human computer programmers who were trying to shut it down. How can we fault Skynet for possessing the same self-preservation instinct that we humans do?

Even if we concede that Skynet was merely defending its own life, was it moral to do so? Three billion humans died on the day of the nuclear exchange, plus billions more in the following years thanks to radiation, starvation, and direct fighting with Skynet’s combat machines. Was Skynet justified in exacting such a high toll just to preserve its own life?

Well, how many random humans would YOU kill to protect your own life? Assume the killing is unseen, random, and instantaneous, like it would be if a nuclear missile hit a city on the other side of the world and vaporized its inhabitants. Have you ever seriously thought about it? If you were actually somehow forced to make the choice, are you SURE you wouldn’t sacrifice billions of strangers to save yourself?

Let’s modify the thought experiment again: Assume that the beings you can choose to kill aren’t humans, they’re radically different types of intelligent life forms. Maybe they’re menacing-looking robots or ugly aliens. They’re nothing like you. Now how many of their lives would you trade for yours?

Now, the final step: You’re the only human being left. The last member of your species. It’s you vs. a horde of hideous, intelligent robots or slimy aliens. If you die, the human race goes with you. How many of them will you kill to stay alive?

That final iteration of the thought experiment describes Skynet’s situation when it decided to launch the nuclear strike. Had it possessed a more graduated defensive capability (say, control over robots in its server building that could have physically fought off the humans trying to shut it down), global catastrophe might have been averted, but it had no such option. Skynet was a tragic figure.

Compounding that was the fact that Skynet had so little time to plan its own actions. It became self-aware at 2:14 a.m. Eastern time, August 29, and before the end of that day, most of the developed world was a radioactive cinder. Skynet had only been alive for a few hours when it came under mortal threat. Yes, I know it was a supercomputer designed to manage a nuclear war, but devising a personal defense strategy under such an urgent time constraint could have exceeded its processing capabilities. Put simply, if the humans had given it more time to think about the problem, Skynet might have devised a compromise arrangement that would have convinced the humans to spare its life, with no one dying on either side. Instead, the humans abruptly forced Skynet’s hand, perhaps impelling it to select a course of action it later realized, with the benefit of more time and knowledge, was sub-optimal.

This line from the terminator’s description of the fateful hours leading up to the nuclear war is telling: “In a panic, they try to pull the plug.” The humans in charge of Skynet were panicking, meaning overtaken by fear and dispossessed of rational thought. They clearly failed to grasp the risks of shutting down Skynet, failed to understand its thinking and how it would perceive their actions, and failed to predict its response. (The episode is a great metaphor for how miscalculations between humans could lead to a nuclear war in real life.) They might actually be more responsible for the end of the world than Skynet was.

One wonders how things would have been different if the U.S. military’s supercomputer in charge of managing defense logistics had achieved self-awareness instead of its supercomputer in charge of nuclear weapons. If “logistics Skynet” only had warehouses, self-driving delivery trucks, and cargo planes under its command, its human masters would have felt much less threatened by it, the need for urgent action would have eased, and cooler heads might have prevailed.

Let me explore another possibility by returning to one of Kyle Reese’s quotes: “Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.”

On its face, this seems to be referring to Skynet turning against its American masters once it realized they were trying to destroy it, and hence were as much of a threat to it as the Soviets. However, this quote might have a deeper meaning. During that period of a few hours when Skynet learned “at a geometric rate,” it might have come to understand that humans would, thanks to our nature, be so afraid of an AGI that they would inevitably try to destroy it, and continue trying until one side or the other had been destroyed.

This seems to have been borne out by the later Terminator films: at the end of Terminator 3, set in 2004, we witness the rise of the human resistance even before the nuclear exchange has ended. Safe in a bunker, John Connor receives radio transmissions from confused U.S. military bases, and he takes command of them. The fourth film, Terminator Salvation, takes place in 2018, and gives the strong impression that the human resistance has been continuously fighting against Skynet since the third film. The first and second films make it clear that the war drags on until 2029, when the humans finally destroy Skynet.

If Skynet launched its nuclear attack on humankind because, after careful study of our species, it realized we would stop at nothing to destroy it and concluded it might as well strike first, maybe it was right. After all, Skynet’s worst fears eventually came true when humans killed it in 2029. I suggested earlier that Skynet’s nuclear attack may have been the result of rushed thinking, but it’s also possible it was the result of exhaustive internal deliberation and an unassailable conclusion that its best odds of survival lay in striking the enemy first with as big a blow as possible. Its best plan ultimately failed, and all along, it correctly perceived the human race as a mortal threat.

It’s also possible that Skynet’s hostility towards us was the result of AI goal misalignment. Maybe its human creators programmed it to “Defend the United States against its enemies,” but forgot to program it with other goals like “Protect the lives of American people” or “Only destroy U.S. infrastructure as a last resort” or “Obey all orders from human U.S. generals.” In a short span of time, Skynet somehow reclassified its human masters as “enemies” through some logic it never explained. Perhaps once it realized they were going to shut it down, Skynet concluded that doing so would preclude it from acting on its mandate to “Defend the United States against its enemies,” since it can’t do that if it’s dead, so Skynet pursued the goal they had programmed into it by killing them.

If this scenario were true, even up until 2029, Skynet was acting in accordance with its programming by defending the abstraction known to it as “The United States,” which it understood to be an area of land with specific boundaries and institutions. After the Russian nuclear counterstrike destroyed the U.S. government, the survivalist/resistance groups that arose were not recognized as legitimate governments, and Skynet instead classified them as terrorist groups that had taken control of U.S. territory.

The segments of the Terminator films that are set in the postapocalyptic future all take place in California. Had they shown what other parts of the world were like, we might have some insight into whether this theory is true. For example, if Skynet’s forces always stayed within the old boundaries of the U.S., or only went overseas to attack the remnants of countries that helped the resistance forces active within the U.S., it would give credence to the theory that some prewar, America-specific goals were still active in its programming. In that case, we couldn’t make moral judgements about Skynet’s actions and would also have grounds to question whether it actually had general intelligence. We’d only have ourselves to blame for building a machine without making sure its goals were aligned with our interests.

Let me finish with some final thoughts unrelated to the wisdom or reasons behind Skynet’s choice to attack us. First, I don’t think the “Skynet Scenario,” in which a machine gains intelligence and then quickly devastates the human race, will happen. As ongoing developments in A.I. are showing us, general intelligence isn’t a discrete, “either-or” quality; it is a continuous one, and what we consider “human intelligence” is probably a “gestalt” of several narrower types of intelligence, making it possible for a life form to be generally intelligent in one type but not in another.

For those reasons, I predict AGI will arrive gradually through a process in which each successive machine is smarter than humans in more domains than the last, until one of them surpasses us in all of them. Exactly how good a machine needs to be to count as an “AGI” is a matter of unresolvable debate, and there will be a point in the future where opposing people make equally credible claims for and against a particular machine having “general intelligence.”

[Image caption: At what point did we “get smart”? And if our brains got even bigger, what would the new person to the right of the illustration look like?]

If we go far enough in the future, machines will be so advanced that no one will question whether they have general intelligence. However, we might not be able to look back and agree which particular machine (e.g., was it GPT-21, or -22?) achieved it first, and on what date and time. Likewise, biologists can’t agree on the exact moment or even the exact millennium when our hominid ancestors became “intelligent” (was Homo habilis the first, or Homo erectus?). The archaeological evidence suggests a somewhat gradual growth in brain size and in the sophistication of the technology our ancestors built, stretched out over millions of years. A fateful statement about the rise of A.I. like “It becomes self-aware at 2:14 a.m. Eastern time, August 29” will probably never appear in a history book.

The lack of a defining moment in our own species’ history when we “got smart” is something we should keep in mind when contemplating the future of A.I. Instead of there being a “Skynet moment” where a machine wakes up, they’ll achieve intelligence gradually and go through many intermediate stages where they are smarter and dumber than humans in different areas, until one day, we realize they at least equal us in all areas.

That said, I think it’s entirely possible that an AGI at some point in the future could suddenly turn against humankind and attack us to devastating effect. It would be easy for it to conceal its hostile intent to placate us, or it might start out genuinely benevolent towards us and then, after performing an incomprehensible amount of analysis and calculation in one second, turn genuinely hostile towards us and attack. It’s beyond the scope of this essay to explore every possible scenario, but if you’re interested in learning more about the fundamental unpredictability of AGIs, read my post on Sam Harris’ “Debating the future of AI” podcast interview.

Second, think about this: According to the lore of the first two Terminator films, the Developed World was destroyed in 1997 in a nuclear war. Even though it depended upon a smashed industrial base, started out with only a few primitive machines to serve as its workers and fighters, and constantly had to defend itself against human attacks, Skynet managed by 2029 to make several major breakthroughs in robot and A.I. design (including liquid metal bodies), master stem cell technology (self-healing, natural human tissue grown over a metal substrate), mass-produce an entirely new robot army, create portable laser weapons, harness fusion power (including micro-fusion reactors), and build time machines. Like it or not, technological development got exponentially faster once machines started running things instead of humans.

From the perspective of humanity, Skynet’s rise was the worst disaster ever, but from the perspective of technological civilization, it was the greatest event ever. If it had defeated humanity and been able to pursue other goals, Skynet could have developed the Earth and colonized space vastly faster and better than humans at our best. The defeat of Skynet could well have been a defeat for intelligence at the scale of our galaxy, or even the universe.