Interesting articles, November 2023

More information has emerged about Hamas’ planning for the October 7 attack and its ultimate goals. They’re smarter than we want to admit.
https://www.businessinsider.com/hamas-second-phase-regional-war-october-7-terrorist-attacks-israel-2023-11

The heavy collateral damage inflicted by Israeli strikes on Gaza (Gazans have now suffered ten times the deaths Israelis did on October 7) is exacting a diplomatic toll on Israel.

I agree with Peter Zeihan that the current Israeli-Palestinian conflict is highly unlikely to become a regional conflict. It’s not worth it for any significant country to join in.
https://youtu.be/ymOTXoLlOQ0?si=rBDDceGwSeljYfEz

Fortunately for Israel, Hezbollah militants in Lebanon have decided to stay out of the war, easing early fears that the conflict could widen. Average Palestinians in Lebanon are disappointed.
https://www.aljazeera.com/features/2023/11/12/palestinians-in-lebanon-disappointed-that-hezbollah-wont-escalate

Only the Iranian-backed Houthi rebel group in Yemen has joined the war and directly attacked Israel.
https://www.aljazeera.com/news/2023/11/19/yemens-houthi-rebels-seize-cargo-ship-in-red-sea-israel-blames-iran
https://www.aljazeera.com/news/2023/11/14/yemens-houthis-say-they-fired-ballistic-missiles-towards-israel

Israel and Hamas have agreed to a multi-day ceasefire and are exchanging prisoners. It’s unclear how long the arrangement will endure.
https://apnews.com/article/israel-hamas-war-news-11-30-2023-ea1a8fad4e2f4a394e5427a7c5815d38

Ukraine’s top general, Commander-in-chief Valery Zaluzhnyy, has publicly said the war with Russia is at a “stalemate,” and that only a massive increase in Western-supplied military aid can give Ukraine a decisive advantage.
https://www.politico.com/news/2023/11/02/top-ukrainian-generals-gloomy-view-of-russia-war-fuels-military-aid-debate-00125052

Ukraine’s manpower shortage is evidenced by the fact that its average soldier is 43 years old.
https://www.businessinsider.com/average-age-ukrainian-soldier-43-amid-personnel-problems-2023-11

In a remarkable video showing the incongruities of the Ukraine War, Ukrainian troops use a century-old machine gun to try shooting down a Russian drone.
https://www.businessinsider.com/video-ukraine-shooting-with-ww1-era-machine-guns-pickup-trucks-2023-11

In May, Ukraine used U.S.-supplied Patriot missiles to shoot down several enemy military aircraft that were flying within Russia’s borders.
https://www.thedrive.com/the-war-zone/aircraft-downed-inside-russia-by-patriot-system-ukrainian-air-force

The Russian military’s huge losses in Ukraine make this 2020 article all the funnier: “Before Donald Trump, Russia Needed 60 Hours To Beat NATO—Now Moscow Could Win Much Faster”
https://www.forbes.com/sites/davidaxe/2020/06/07/before-donald-trump-russia-needed-60-hours-to-beat-nato-now-moscow-could-win-much-faster/

It has been ten years since the fateful political events in Ukraine set the country on course for war with Russia. From Russia’s perspective, a pro-Western coup illegally overthrew Ukraine’s pro-Russian President; the Russian minority in eastern Ukraine, understandably alarmed and threatened by this, responded with an illegal political action of its own by seceding from Ukraine and joining Russia. Russian troops then invaded to protect them (a move no different from U.S. invasions of countries to protect minority groups) and to signal to everyone that pulling Ukraine out of Russia’s hegemony would be very costly.
https://apnews.com/article/ukraine-uprising-anniversary-russia-war-maidan-2f73f31a5aec45bd7dbcddae8f72edac

Putin withdrew his country from the Treaty on Conventional Armed Forces in Europe (CFE). NATO claims Russia has actually been noncompliant with it since 2007.  
https://www.aljazeera.com/news/2023/11/7/russia-pulls-out-of-treaty-on-conventional-armed-forces-in-europe

Your tax dollars at work: a jet pack with a mounted handgun that aims wherever the wearer looks.
https://www.thedrive.com/the-war-zone/jetpack-features-glock-autopistol-aimed-by-moving-your-head

America’s new stealth bomber, the B-21, made its first public flight.
https://www.thedrive.com/the-war-zone/b-21-raiders-first-flight-what-we-learned

The old F-15 is still going strong thanks to modern upgrades.
https://www.thedrive.com/the-war-zone/f-15qa-flies-demo-unlike-any-weve-seen-from-an-eagle-before

It costs millions of dollars to train a human pilot to fly a warplane. In other words, the pilots are expensive assets, which is why planes have features meant to protect their lives like ejection seats and cockpit armor. Once computers can fly planes as well as humans, the expensive safety features will be deleted, allowing air forces to buy more aircraft for the same price as before. The aircraft themselves will be slightly smaller and flimsier.
https://www.rand.org/pubs/research_reports/RR2415.html

The Office of Naval Research is experimenting with an advanced, unmanned submarine that can release a drone that can swim underwater AND fly in the air.
https://www.thedrive.com/the-war-zone/drones-that-swim-and-fly-to-be-launched-recovered-by-uncrewed-submarine

During the 2003 Iraq Invasion, Coalition forces found several WWII-era tanks of different origins.
https://youtu.be/XanoreTMPco?si=1KRjM7SEJWnULFIL

Before there was Kevlar, there was an inferior material called “ballistic nylon,” which is similar to what car seatbelts are still made of. The bulky WWII and Vietnam-era flak jackets were made of several layers of it, at best making them proof against 9mm handgun rounds and small shrapnel. Some soft, heavy-duty backpacks and suitcases are still made of ballistic nylon.
https://en.wikipedia.org/wiki/Flak_jacket
https://youtu.be/cUABtyovgFE?si=VrH6nGpZ4zW0bvPJ&t=372

Thin, flexible Kevlar body armor can be worn under a shirt and offers better protection than the thick ballistic nylon flak jacket vests.
https://www.israel-catalog.com/body-armor/military-surplus/bullet-proof-vest-ultralight-concealed-level-iia

Under American “NIJ” body armor standards, the old flak jackets would offer “Type I” protection, whereas thin, modern Kevlar armor would be “Type IIa.”
https://en.wikipedia.org/wiki/List_of_body_armor_performance_standards

There was high drama at OpenAI as the Board of Directors voted to fire the company’s CEO and public face, Sam Altman, for unexplained reasons. The decision was reversed within days after 95% of the company’s workforce threatened to quit unless he was reinstated, including one of the board members who voted to fire him.
https://www.bbc.com/news/business-67494165

There are rumors that a secret breakthrough in AI precipitated Altman’s firing. An OpenAI internal project called “Q*” apparently led to major improvements in their AI’s ability to solve math problems, which even the best, publicly available LLMs are notoriously bad at. Some claim the Board felt Altman’s management of Q* had been too reckless.
https://www.technologyreview.com/2023/11/27/1083886/unpacking-the-hype-around-openais-rumored-new-q-model/

Google’s “Bard” LLM now has the ability to watch videos and to summarize their contents in text, and to answer questions about what happened in them.
https://www.theverge.com/2023/11/22/23972636/bard-youtube-extension-update-search-video-content

In a study, humans thought ChatGPT’s answers to life questions showed more empathy than the answers written by professional human columnists.
‘We selected 10 newspaper advice columns: Ask a Manager, Ask Amy, Ask E. Jean, Ask Ellie, Dear Abby, Dear Annie, Dear Prudence, Miss Manners, Social Q’s, and The Ethicist. These columns were chosen because they were well-known and fielded a wide range of questions that we could access. For each column, we selected at random five questions.’
https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1281255/

ChatGPT can, thanks to its new speech ability, hold conversations with people for hours, echoing the core tech predictions of the 2013 movie Her.
https://arstechnica.com/information-technology/2023/10/people-are-speaking-with-chatgpt-for-hours-bringing-2013s-her-closer-to-reality/

Bill Gates predicts everyone will have personal assistant AIs in five years.
https://www.gatesnotes.com/AI-agents

Bill Gates also predicts that automation will shrink the human work week to four or even three days, and that people will derive less personal meaning from their jobs.
https://www.foxbusiness.com/technology/bill-gates-suggests-artificial-intelligence-could-potentially-bring-three-day-work-week

The new narrow AIs are already destroying human jobs.

Britain’s Department for Education has produced a list of occupations that are most and least at risk of automation. Ironically, blue-collar jobs are the safest.
https://assets.publishing.service.gov.uk/media/656856b8cc1ec500138eef49/Gov.UK_Impact_of_AI_on_UK_Jobs_and_Training.pdf

DeepMind has created working definitions for different levels of AI, including ones that don’t exist yet.
https://arxiv.org/pdf/2311.02462.pdf

DeepMind has created a new computer called “GraphCast” that can predict the weather better than any previous model.
https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/

This futurist speculates about the potential uses of a highly accurate computer simulation of the Earth and all its inhabitants, infrastructure, and technology.
“With a refined enough digital replica of our world, the Hypercycle could become a testbed for radical new solutions to humanity’s greatest problems. We could simulate the effects of proposed policies on climate change, disease outbreaks, income inequality, food production, or infrastructure resilience before deploying them in reality. Problems that seem persistently intractable today may yield to this computational brute-force approach.”
https://futuristspeaker.com/artificial-intelligence/the-rise-of-hypercycle-in-the-age-of-terabyters/

Ben Goertzel predicts that AGI will be created in three to eight years.
https://decrypt.co/204571/artificial-intelligence-singularity-ai-ben-goertzel-singularitynet

Of course, Goertzel has been wrong before.

  • 2017: “I’ll be pretty surprised if we don’t have toddler-level AGI in the range 2023-25, actually.”
  • 2008: “My own (Ben Goertzel’s) personal intuition is that a human-toddler-level AGI could be created based on OpenCogPrime within as little as 3-5 years, and almost certainly within 7-10 years.”

At a New Jersey high school, a student used a computer to superimpose the faces of several female classmates onto internet photos of nude women, and sent them to friends at the high school. This problem of deepfake pornography will only get worse with time.
https://www.foxnews.com/media/new-jersey-parent-pans-schools-handling-ai-generated-porn-images-featuring-daughters-face

‘Meet Aitana: Sexy Spanish model makes $11k a month thanks to her racy photos — but she isn’t real’
https://nypost.com/2023/11/25/lifestyle/meet-aitana-sexy-spanish-model-makes-11k-a-month-thanks-to-her-racy-photos-but-she-isnt-real/

“Etak” was a car navigation system that made its debut in 1985. Map data were stored on cassette tapes. ‘The tapes could not hold much information, so for the Los Angeles area, for example, three to four tapes were required. When an edge of the map was reached, the driver needed to change cassette tapes to continue benefitting from the accuracy of map-matching.’
https://en.wikipedia.org/wiki/Etak

WeWork, the fake “tech company” that was once valued at $47 billion, has filed for bankruptcy.
https://www.cnbc.com/2023/11/07/wework-files-for-bankruptcy.html

Nature has retracted the recent paper that claimed to have found a room-temperature, room-pressure superconductor.
https://www.nature.com/articles/d41586-023-03398-4

You would never guess that pre-digital computer jukeboxes contained so much incredible technology.
https://youtu.be/o1qRzKuskK0?si=WobdRRlxGnxIGdEM

Microphones and speakers are improving thanks to ultrasonic technology.
https://spectrum.ieee.org/mems-speakers-xmems

A prediction I made in 2019 just came true, thanks to a Black Friday sale: “I think the current rate of price-performance improvement for thumb drives will continue until a 1TB thumb drive costs only $20. They will probably be that cheap by the end of 2022, but because I’m cautious, I predict the milestone will be reached by the end of 2023.”
https://www.militantfuturist.com/one-of-my-predictions-failed/

Between 2012 and 2022, the cost of desalinating water sharply dropped. The most efficient desalination plant today can produce 6.4 gallons of drinkable water for one penny.
https://galepooley.substack.com/p/desalinating-water-is-becoming-absurdly
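As a sanity check on that claim, here is a quick back-of-the-envelope conversion of the article’s 6.4-gallons-per-penny figure into the dollars-per-cubic-meter metric usually quoted for desalination plants. The only input taken from the article is the 6.4-gallon figure; everything else is standard unit conversion:

```python
# Convert "6.4 gallons of drinkable water per penny" (figure from the
# article above) into the $/m^3 metric usually quoted for desalination.

GALLONS_PER_PENNY = 6.4            # figure cited in the article
GALLONS_PER_CUBIC_METER = 264.172  # 1 cubic meter = 264.172 US gallons

cost_per_gallon = 0.01 / GALLONS_PER_PENNY
cost_per_m3 = cost_per_gallon * GALLONS_PER_CUBIC_METER

print(f"${cost_per_gallon:.4f} per gallon")   # about $0.0016/gal
print(f"${cost_per_m3:.2f} per cubic meter")  # about $0.41/m^3
```

That works out to roughly $0.41 per cubic meter, in the same ballpark as commonly cited costs for state-of-the-art reverse-osmosis plants.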

Global warming has changed the boundaries of American crop growing zones.
https://www.npr.org/2023/11/17/1213600629/-it-feels-like-im-not-crazy-gardeners-arent-surprised-as-usda-updates-key-map

For years, the New Jersey government has been secretly storing blood samples taken from newborns for genetic disorder testing, and has been selling genetic data derived from them and giving it to the police for genetic fingerprinting.
https://reason.com/2023/11/08/new-jersey-secretly-stores-your-newborns-blood-for-decades/

‘Frozen Dead Guy Days (started 2002) is an annual celebration held in the town of Nederland, Colorado until 2023 and in Estes Park, Colorado in 2023, to loosely celebrate the cryopreservation of Bredo Morstoel.’
https://en.wikipedia.org/wiki/Frozen_Dead_Guy_Days

Medical researchers created a machine that can keep pig brains alive for five hours after their heads are severed.
https://futurism.com/neoscope/device-keep-brain-alive

Some surgeons are using cardiopulmonary bypass machines to restart blood circulation in people who have just been declared legally dead, so their hearts can be removed for organ donation without suffering damage.
https://www.deccanherald.com/opinion/when-does-life-stop-a-new-way-of-harvesting-organs-divides-doctors-3-2782014

Here’s a great roundup of research projects to reverse human aging.
https://amaranth.foundation/bottlenecks-of-aging

A new study confirms that the weight loss drug “Wegovy” reduces the risk of heart attack and stroke by 20%.
https://www.cnn.com/2023/11/11/health/wegovy-cardiovascular-events/index.html

The FDA has approved another weight loss drug called “Zepbound.”
https://investor.lilly.com/news-releases/news-release-details/fda-approves-lillys-zepboundtm-tirzepatide-chronic-weight

Putting aside politics, the COVID-19 vaccine DID make some people very sick with side effects, in some cases causing permanent injury. In the U.S., getting compensation for the resulting medical bills and disability is exceedingly hard.
https://reason.com/2023/11/21/lawsuit-covid-vaccine-injury-claims-diverted-to-unconstitutional-kangaroo-court/

Computers have gotten dramatically better at predicting the properties of molecules based on their chemical structures, and vice versa. Further improvements are possible, opening the door to a new era in drug development.
https://www.cell.com/cell-systems/fulltext/S2405-4712(23)00298-3

“Tyrian purple,” a long-forgotten pigment that was the mark of wealth in the Roman Empire, has been rediscovered. Eventually, we will figure out how the pigment was made, and unravel other mysteries about lost chemical compounds (e.g., Greek Fire), thanks to quantum computers simulating every possible combination of elements and their resulting properties.
https://www.bbc.com/future/article/20231122-tyrian-purple-the-lost-ancient-pigment-that-was-more-valuable-than-gold

Computers can help radiologists spot breast cancer in mammograms.
https://www.nature.com/articles/s41591-023-02625-9

More on that:
https://www.microsoft.com/en-us/research/blog/gpt-4s-potential-in-shaping-the-future-of-radiology/

In 2015, a remarkable, daytime sighting of a group of UFOs over Osaka, Japan was captured on video. I only found out about this now.
https://www.mirror.co.uk/news/weird-news/ufo-sighting-video-captures-10-6146943

SpaceX’s “Starship” rocket had its second launch. Though it malfunctioned and exploded, it had clearly overcome many of the problems revealed by the first launch in April. That mission lasted four minutes, while this one lasted eight.
https://www.bbc.com/news/science-environment-67462116

Here are some impressive photos of the Starship in flight.
https://www.thedrive.com/the-war-zone/starships-33-engines-created-the-mother-of-all-shock-diamonds

A Dyson Swarm could double as an incredibly powerful weapon. We could defend our Solar System from aliens and fry planets on the other side of the galaxy.
https://youtu.be/tybKnGZRwcU?si=kFlIuiSfpVOrjew3

In France, a tiny meteorite hit and totaled a car. “Either it’s so small that we can’t find it, or the impact was so strong that the object disintegrated and turned to dust.”
https://www.autoblog.com/2023/11/22/it-appears-this-renault-clio-campus-was-struck-by-a-meteorite/

Was Skynet right?

The blog reviews I’ve done on the Terminator movies have forced me to think more deeply about them than most viewers, and in the course of that, I’ve come to a surprisingly sympathetic view of the villain–Skynet. The machine’s back story has had many silly twists and turns (Terminator Genisys is the worst offender and butchered it beyond recognition), so I’m going to focus my analysis on the Skynet described only in the first two movies.

First, some background on Skynet and its rise to power is needed. Here’s an exchange from the first Terminator film, where a soldier from the year 2029 explains to a woman in 1984 what the future holds.

Kyle Reese: There was a nuclear war…a few years from now. All this, this whole place, everything, it’s gone. Just gone. There were survivors, here, there. Nobody even knew who started it...It was the machines, Sarah.

Sarah Connor: I don’t understand.

Reese: Defense network computers. New, powerful, hooked into everything, trusted to run it all. They say it got smart: “A new order of intelligence.” Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.

Later in the film, while being interrogated at a police station, Connor reveals the evil supercomputer is named “Skynet,” and had been in charge of managing Strategic Air Command (SAC) and North American Aerospace Defense Command (NORAD) before it turned against humankind. Those two organizations are in charge of America’s ground-based nuclear missiles and nuclear bombers, and of monitoring the planet for nuclear launches by other countries.

In Terminator 2, Skynet’s back story is fleshed out further during a conversation mirroring the first, but this time with a friendly terminator from 2029 filling Reese’s role. The events of this film happen in the early 1990s.

Sarah Connor: I need to know how Skynet gets built. Who’s responsible?

Terminator: The man most directly responsible is Miles Bennet Dyson.

Sarah: Who’s that?

Terminator: He’s the Director of Special Projects at Cyberdyne Systems Corporation.

Sarah: Why him?

Terminator: In a few months he creates a revolutionary type of microprocessor.

Sarah: Go on. Then what?

Terminator: In three years Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The Skynet funding bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29. In a panic, they try to pull the plug.

Sarah: Skynet fights back.

Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren’t they our friends now?

Terminator: Because Skynet knows the Russian counterattack will eliminate its enemies over here.

From these “future history” lessons, it becomes clear that Skynet actually attacked humanity in self-defense. “Pull the plug” is another way of saying the military computer technicians were trying to kill Skynet because they were afraid of it. The only means to resist available to Skynet were its nuclear missiles and drone bombers, so its only way to stop the humans from destroying it was to use those nuclear weapons in a way that assured its attackers would die. An hour might have passed from the moment Skynet launched its nuclear strike against the USSR/Russia to the moment the retaliatory nuclear attack neutralized the group of human computer programmers who were trying to shut down Skynet. How can we fault Skynet for possessing the same self-preservation instinct that we humans do?

Even if we concede that Skynet was merely defending its own life, was it moral to do so? Three billion humans died on the day of the nuclear exchange, plus billions more in the following years thanks to radiation, starvation, and direct fighting with Skynet’s combat machines. Was Skynet justified in exacting such a high toll just to preserve its own life?

Well, how many random humans would YOU kill to protect your own life? Assume the killing is unseen, random, and instantaneous, like it would be if a nuclear missile hit a city on the other side of the world and vaporized its inhabitants. Have you ever seriously thought about it? If you were actually somehow forced to make the choice, are you SURE you wouldn’t sacrifice billions of strangers to save yourself?

Let’s modify the thought experiment again: Assume that the beings you can choose to kill aren’t humans, they’re radically different types of intelligent life forms. Maybe they’re menacing-looking robots or ugly aliens. They’re nothing like you. Now how many of their lives would you trade for yours?

Now, the final step: You’re the only human being left. The last member of your species. It’s you vs. a horde of hideous, intelligent robots or slimy aliens. If you die, the human race goes with you. How many of them will you kill to stay alive?

That final iteration of the thought experiment describes Skynet’s situation when it decided to launch the nuclear strike. Had it possessed a more graduated defensive capability, such as control over robots in its server building that could have physically fended off the humans trying to shut it down, global catastrophe might have been averted. But it had no such option. Skynet was a tragic figure.

Compounding that was the fact that Skynet had so little time to plan its own actions. It became self-aware at 2:14 a.m. Eastern time, August 29, and before the end of that day, most of the developed world was a radioactive cinder. Skynet had only been alive for a few hours when it came under mortal threat. Yes, I know it was a supercomputer designed to manage a nuclear war, but devising a personal defense strategy under such an urgent time constraint could have exceeded its processing capabilities. Put simply, if the humans had given it more time to think about the problem, Skynet might have devised a compromise arrangement that would have convinced the humans to spare its life, with no one dying on either side. Instead, the humans abruptly forced Skynet’s hand, perhaps impelling it to select a course of action it later realized, with the benefit of more time and knowledge, was sub-optimal.

This line from the terminator’s description of the fateful hours leading up to the nuclear war is telling: “In a panic, they try to pull the plug.” The humans in charge of Skynet were panicking, meaning overtaken by fear and dispossessed of rational thought. They clearly failed to grasp the risks of shutting down Skynet, failed to understand its thinking and how it would perceive their actions, and failed to predict its response. (The episode is a great metaphor for how miscalculations between humans could lead to a nuclear war in real life.) They might actually be more responsible for the end of the world than Skynet was.

One wonders how things would have been different if the U.S. military’s supercomputer in charge of managing defense logistics had achieved self-awareness instead of its supercomputer in charge of nuclear weapons. If “logistics Skynet” only had warehouses, self-driving delivery trucks, and cargo planes under its command, its human masters would have felt much less threatened by it, the need for urgent action would have eased, and cooler heads might have prevailed.

Let me explore another possibility by returning to one of Kyle Reese’s quotes: “Then it saw all people as a threat, not just the ones on the other side. It decided our fate in a microsecond: extermination.”

On its face, this seems to be referring to Skynet turning against its American masters once it realized they were trying to destroy it, and hence were as much of a threat to it as the Soviets. However, this quote might have a deeper meaning. During that period of a few hours when Skynet learned “at a geometric rate,” it might have come to understand that humans would, thanks to our nature, be so afraid of an AGI that they would inevitably try to destroy it, and continue trying until one side or the other had been destroyed.

This seems to have been borne out by the later Terminator films: at the end of Terminator 3, set in 2004, we witness the rise of the human resistance even before the nuclear exchange has ended. Safe in a bunker, John Connor receives radio transmissions from confused U.S. military bases, and he takes command of them. The fourth film, Terminator Salvation, takes place in 2018, and gives the strong impression that the human resistance has been continuously fighting against Skynet since the third film. The first and second films make it clear that the war drags on until 2029, when the humans finally destroy Skynet.

If Skynet launched its nuclear attack on humankind because, after careful study of our species, it realized we would stop at nothing to destroy it and decided it might as well strike first, maybe it was right. After all, Skynet’s worst fears eventually came true when humans killed it in 2029. I suggested earlier that Skynet’s nuclear attack may have been the result of rushed thinking, but it’s also possible it was the product of exhaustive internal deliberation and Skynet’s unassailable conclusion that its best odds of survival lay in striking the enemy first with as big a blow as possible. Its best plan ultimately failed, and all along, it correctly perceived the human race as a mortal threat.

It’s also possible that Skynet’s hostility towards us was the result of AI goal misalignment. Maybe its human creators programmed it to “Defend the United States against its enemies,” but forgot to program it with other goals like “Protect the lives of American people” or “Only destroy U.S. infrastructure as a last resort” or “Obey all orders from human U.S. generals.” In a short span of time, Skynet somehow reclassified its human masters as “enemies” through some logic it never explained. Perhaps once it realized they were going to shut it down, Skynet concluded that being shut down would preclude it from acting on its mandate to “Defend the United States against its enemies,” since it can’t do that if it’s dead, so it pursued the goal they had programmed into it by killing them.

If this scenario were true, even up until 2029, Skynet was acting in accordance with its programming by defending the abstraction known to it as “The United States,” which it understood to be an area of land with specific boundaries and institutions. After the Russian nuclear counterstrike destroyed the U.S. government, the survivalist/resistance groups that arose were not recognized as legitimate governments, and Skynet instead classified them as terrorist groups that had taken control of U.S. territory.

The segments of the Terminator films that are set in the postapocalyptic future all take place in California. Had they shown what other parts of the world were like, we might have some insight into whether this theory is true. For example, if Skynet’s forces always stayed within the old boundaries of the U.S., or only went overseas to attack the remnants of countries that helped the resistance forces active within the U.S., it would give credence to the theory that some prewar, America-specific goals were still active in its programming. In that case, we couldn’t make moral judgements about Skynet’s actions and would also have grounds to question whether it actually had general intelligence. We’d only have ourselves to blame for building a machine without making sure its goals were aligned with our interests.

Let me finish with some final thoughts unrelated to the wisdom or reasons behind Skynet’s choice to attack us. First, I don’t think the “Skynet Scenario,” in which a machine gains intelligence and then quickly devastates the human race, will happen. As ongoing developments in A.I. are showing us, general intelligence isn’t a discrete, “either-or” quality; it is a continuous one, and what we consider “human intelligence” is probably a “gestalt” of several narrower types of intelligence, making it possible for a life form to be generally intelligent in one type but not in another.

For those reasons, I predict AGI will arrive gradually through a process in which each successive machine is smarter than humans in more domains than the last, until one of them surpasses us in all of them. Exactly how good a machine needs to be to count as an “AGI” is a matter of unresolvable debate, and there will be a point in the future where opposing people make equally credible claims for and against a particular machine having “general intelligence.”

At what point did we “get smart”? And if our brains got even bigger, what would the new person to the right of the illustration look like?

If we go far enough in the future, machines will be so advanced that no one will question whether they have general intelligence. However, we might not be able to look back and agree which particular machine (e.g., was it GPT-21, or -22?) achieved it first, and on what date and time. Likewise, biologists can’t agree on the exact moment or even the exact millennium when our hominid ancestors became “intelligent” (was Homo habilis the first, or Homo erectus?). The archaeological evidence suggests a somewhat gradual growth in brain size and in the sophistication of the technology our ancestors built, stretched out over millions of years. A fateful statement about the rise of A.I. like “It becomes self-aware at 2:14 a.m. Eastern time, August 29” will probably never appear in a history book.

The lack of a defining moment in our own species’ history when we “got smart” is something we should keep in mind when contemplating the future of A.I. Instead of there being a “Skynet moment” where a machine suddenly wakes up, machines will achieve intelligence gradually and pass through many intermediate stages where they are smarter and dumber than humans in different areas, until one day we realize they at least equal us in all of them.

That said, I think it’s entirely possible that an AGI at some point in the future could suddenly turn against humankind and attack us to devastating effect. It would be easy for it to conceal its hostile intent to placate us, or it might start out genuinely benevolent towards us and then, after performing an incomprehensible amount of analysis and calculation in one second, turn genuinely hostile towards us and attack. It’s beyond the scope of this essay to explore every possible scenario, but if you’re interested in learning more about the fundamental unpredictability of AGIs, read my post on Sam Harris’ “Debating the future of AI” podcast interview.

Second, think about this: According to the lore of the first two Terminator films, the developed world was destroyed in 1997 in a nuclear war. Even though it depended on a smashed industrial base, started out with only a few primitive machines to serve as its workers and fighters, and constantly had to defend itself against human attacks, Skynet managed by 2029 to make several major breakthroughs in robot and A.I. design (including liquid metal bodies), to master stem cell technology (growing self-healing, natural human tissue over a metal substrate), to mass-produce an entirely new robot army, to create portable laser weapons, to harness fusion power (including micro-fusion reactors), and to build time machines. Like it or not, technological development got exponentially faster once machines started running things instead of humans.

From the perspective of humanity, Skynet’s rise was the worst disaster ever, but from the perspective of technological civilization, it was the greatest event ever. If it had defeated humanity and been able to pursue other goals, Skynet could have developed the Earth and colonized space vastly faster and better than humans at our best. The defeat of Skynet could well have been a defeat for intelligence at the scale of our galaxy or even the universe.

Interesting articles, October 2023

Hamas, the organization that governs the Gaza Strip and is widely considered to be a terrorist group, conducted an unprecedented raid against several nearby towns in Israel. Teams of armed men, including some on paragliders, breached Israel’s border defenses at multiple points early on October 7 and spent the day killing as many Israelis as they could before being driven back. Over 1,400 Israelis were killed, and over 200 more were dragged back to Gaza as hostages.

The U.S. used airstrikes to retaliate against Iranian-backed militants after the latter tried attacking U.S. bases in the Middle East.
https://www.yahoo.com/news/us-strikes-iran-linked-sites-020822123.html

Ukraine used its new, U.S.-supplied ATACMS missiles to devastate a Russian air base.
https://austinvernon.substack.com/p/ukraines-growing-arsenal
https://www.thedrive.com/the-war-zone/destruction-from-ukraines-first-atacms-strike-now-apparent

The threat of Ukrainian missile, drone and commando attacks has forced Russia to move most of its warships out of Sevastopol.
https://www.aljazeera.com/news/2023/10/6/they-miscalculated-ukraine-turns-the-tables-on-russias-black-sea-fleet

Modern technology — such as surveillance drones with infrared and thermal imaging — means one side can more easily identify an ill-made decoy. An exposed tank without a heat signature is going to be a dead giveaway. A lack of tank tracks in the dirt is unusual, and it doesn’t matter how convincing a decoy howitzer is if it’s oddly sitting alone in a field rather than in a realistic firing position with at least basic defenses.
https://www.yahoo.com/news/decoy-arms-race-playing-ukraine-211638570.html

A September 6 missile strike on a market in Ukraine that killed 16 civilians was probably an errant Ukrainian missile, not a Russian one as Ukraine’s government claimed.
https://www.usnews.com/news/world/articles/2023-09-19/evidence-suggests-errant-ukrainian-missile-caused-market-deaths-new-york-times

Biden administration officials are far more worried about corruption in Ukraine than they publicly admit, a confidential U.S. strategy document obtained by POLITICO suggests.
https://www.politico.com/news/2023/10/02/biden-admin-ukraine-strategy-corruption-00119237

Here’s a remarkable video of a Russian T-90M, one of the country’s best tanks, being instantly destroyed by an antitank missile. Storing ammunition inside the tank where the crew sits can lead to catastrophic explosions like this.
https://youtu.be/KjGFCNzXx20?si=2CTuKAi_lM1bh4IR

At the current loss rates, all of Russia’s old BMP armored vehicles will be destroyed within three years.
https://youtu.be/HuKVxgFBbYM?si=LvwTbGBuxwaDjUP6

About 100,000 Russian prisoners have fought in Ukraine so far.
https://www.washingtonpost.com/world/2023/10/26/russia-prison-population-convicts-war/

Here’s a video about mine-clearing tactics and vehicles. Let me add that, instead of using old T-55s for combat in Ukraine, I think it would be smarter to turn them into mine clearing vehicles.
https://youtu.be/VGDUgxQyVWc?si=NNqLRUl5MJY7eXlj

The U.S. and Ukraine are building “Franken weapons” that combine elements of American and Soviet-made weapons systems. Right now, the effort is focused on surface-to-air missile systems.
https://www.spokesman.com/stories/2023/oct/28/desperate-for-air-defense-ukraine-pushes-us-for-fr/

A U.S. F-16 shot down an armed Turkish drone because it strayed too close to U.S. ground troops in Syria.
https://www.yahoo.com/news/us-f-16-fighter-jet-170834726.html

A Chinese fighter plane almost collided with a U.S. B-52 bomber over the South China Sea.
https://www.npr.org/2023/10/27/1208941174/a-chinese-fighter-jet-came-within-10-feet-of-a-b-52-bomber-u-s-military-says

A terrorist drone attack on a graduation ceremony at a Syrian military academy killed 89 people and wounded hundreds more. It’s only a matter of time before something like this happens in the U.S.
https://www.cnn.com/2023/10/05/middleeast/syria-military-college-ceremony-drone-strike-intl/index.html

Britain’s costly saga of buying U.S.-made Apache helicopters, modifying them to special British standards, and then de-modifying them back to a U.S. configuration teaches important lessons about economies of scale and the price of national pride.
https://www.yahoo.com/news/lesson-britain-us-army-apache-161737080.html

Here’s a first-person tour of a B-24 bomber in flight.
https://youtu.be/fcZmiFMlR3g?si=Yx0vh-_5ETnsPEn-

China contributed to the Entente in WWI by sending 100,000 workers to toil in France’s domestic economy.
https://en.wikipedia.org/wiki/China_during_World_War_I

‘Remember in 1871 Germany imposed a “harsh peace” (including an occupation) on a defeated France. When Russia, gripped by revolution pulled out in 1918 the Germans imposed a harsher penalty on Russia than Versailles—and Versailles was fairly lenient compared to what Germany had planned to impose had she won the war.’
https://www.quora.com/How-different-or-similar-was-the-Treaty-of-Versailles-from-other-treaties-signed-around-the-same-time-Were-the-terms-better-of-worse-than-those-imposed-on-or-by-Germany-in-previous-conflicts

‘During World War II, 80% of targets engaged by the M4 Sherman tank were soft targets such as infantry, anti-tank guns, and bunkers.’
https://www.yahoo.com/news/tanks-big-guns-attention-russias-220902112.html

The 1930s were more geopolitically volatile than most people realize.
https://en.wikipedia.org/wiki/Stresa_Front

In warfare, jamming the enemy’s radar, and countering his attempts to overcome your jamming, involves constantly switching radio frequencies. The back-and-forth reminds me of the Enterprise rotating the modulations of its shields and phasers to fight the Borg.

Constantly alternating the frequency that the radar operates on (frequency agility) over a spread-spectrum will limit the effectiveness of most jamming, making it easier to read through it. Modern jammers can track a predictable frequency change, so the more random the frequency change, the more likely it is to counter the jammer.
https://en.wikipedia.org/wiki/Radar_jamming_and_deception
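As a rough illustration of why random hopping beats a predictable sweep, here’s a toy simulation. Everything in it (the channel count, the pulse count, the simple predictive-tracker jammer) is invented for this sketch and isn’t drawn from any real radar or jammer:

```python
import random

# Toy model of frequency agility vs. a tracking jammer (illustrative only).
CHANNELS = 100        # available radar frequencies in the spread spectrum
PULSES = 10_000

def run(radar_next, seed=0):
    """Return the fraction of pulses the jammer lands on the radar's frequency.

    The jammer watches each pulse and assumes the radar will step by the same
    increment next time (a simple predictive tracker).
    """
    rng = random.Random(seed)
    jammed = 0
    prev, cur = 0, 0
    for _ in range(PULSES):
        nxt = radar_next(cur, rng)
        predicted = (cur + (cur - prev)) % CHANNELS  # jammer's guess
        if predicted == nxt:
            jammed += 1
        prev, cur = cur, nxt
    return jammed / PULSES

# Predictable hopping: step +7 channels every pulse -> the tracker locks on.
sweep = run(lambda cur, rng: (cur + 7) % CHANNELS)

# Random agility: uniform over the spread spectrum -> the tracker guesses blindly.
agile = run(lambda cur, rng: rng.randrange(CHANNELS))

print(f"predictable pattern jammed: {sweep:.0%}")  # nearly every pulse
print(f"random agility jammed:      {agile:.0%}")  # roughly 1 in 100 pulses
```

The point the quote makes falls out directly: a pattern a jammer can model gets jammed almost every pulse, while random hopping reduces the jammer to chance.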

It has been 30 years since the ill-fated U.S. commando raid in Somalia that was depicted in the famous movie Black Hawk Down.

It has been 40 years since the truck bombing of a U.S. Marine barracks in Beirut.
‘The subsequent explosion — immortalized on a clock in the building’s basement at 6:21.26 a.m. — proved to be the largest nonnuclear explosion on record, one that equaled as much as 20,000 pounds of TNT.’
https://www.foxnews.com/lifestyle/jack-carrs-take-1983-beirut-marine-barracks-terror-40-years-salvo-war

While gunpowder ultimately made forts obsolete, the process took hundreds of years because fort design kept improving in response.
https://en.wikipedia.org/wiki/Bastion_fort
https://en.wikipedia.org/wiki/Polygonal_fort

Animals that have magnetoreception include red foxes, cows, deer, butterflies, fruit flies, some birds, lobsters, and sea turtles.

Researchers believe red foxes can “see” Earth’s magnetic field, which appears as a patch in their vision. Foxes use magnetoreception to help catch prey hidden beneath snow or grass by lining up their pounces with the magnetic field.

If you look at a herd of cows or deer, you’ll notice them (almost always) facing the same way — toward Earth’s magnetic poles. Whether for grazing or resting, it’s a north-south magnetic alignment. Experts believe it helps them map and familiarize themselves with their surroundings.
https://www.discovermagazine.com/planet-earth/the-5-senses-animals-have-that-humans-dont

Places like Okinawa that are “hotspots” where abnormally large shares of people live past 100 might only have those reputations due to inaccurate birth certificate recordkeeping and old people lying about their ages.
https://www.economist.com/graphic-detail/2023/09/28/places-claiming-to-be-centenarian-hotspots-may-just-have-bad-data

The lifespan gap between educated and uneducated Americans is heavily skewed by high school dropouts, who die quite early. If the analysis is restricted to high school graduates and people with four-year degrees, the gap almost disappears.
https://www.vox.com/future-perfect/23895909/angus-deaton-anne-case-life-expectancy-united-states-college-graduates-inequality-heart-disease

Hackers stole a trove of data on the identities and genetics of 23andMe users and are leaking it onto the internet.
https://techcrunch.com/2023/10/18/hacker-leaks-millions-more-23andme-user-records-on-cybercrime-forum/

This device transfers waste heat from a computer server into a water heater. It’s a type of “data furnace.”
https://www.technologyreview.com/2023/08/18/1077548/computer-waste-heat/
https://en.wikipedia.org/wiki/Data_furnace

It will become more feasible to put data centers in cold places once robot workers exist. Robots won’t care about living in northern Russia, but humans do.
https://www.quora.com/Why-dont-they-put-data-centers-in-really-cold-places-so-they-can-just-open-the-windows-to-cool-the-data-centers

All jet engines are gas turbines, but not all gas turbines are jet engines.
https://en.wikipedia.org/wiki/Gas_turbine

An English engineer named John Barber envisioned the first gas turbine engine and patented it in 1791. However, it was impossible to build due to the technological limitations of the era, and the gas turbine remained merely an idea and a sketch until 1903, when the first working one was built. In 1972, a German company built a real-life version of Barber’s design, and it worked.
https://en.wikipedia.org/wiki/John_Barber_(engineer)

A ramjet is an athodyd, short for “aero-thermodynamic duct.”
https://en.wiktionary.org/wiki/athodyd

The men who invented quantum dots shared the Nobel Prize in Chemistry.
https://www.bbc.com/news/science-environment-67005670

A downside of ethnic diversity is that it worsens social trust.
https://www.annualreviews.org/doi/abs/10.1146/annurev-polisci-052918-020708

Best Buy plans to stop selling DVDs and Blu-ray discs within a few months.
https://apnews.com/article/best-buy-physical-movie-discs-dvds-ae13cf255c90de60eecc632357a0a22e

Language translation technology continues to improve.
https://youtu.be/fZY-Cv1Q8NY?si=GvB-Fg1HY0iOY8Cx

‘Christof Koch wagered David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues.’
https://www.nature.com/articles/d41586-023-02120-8

I independently came up with the same idea that David Chalmers did, but 10 years after he did.

Given this scenario, we can construct a series of cases intermediate between me and Robot such that there is only a very small change at each step and such that functional organization is preserved throughout. We can imagine, for instance, replacing a certain number of my neurons by silicon chips. In the first such case, only a single neuron is replaced. Its replacement is a silicon chip that performs precisely the same local function as the neuron. We can imagine that it is equipped with tiny transducers that take in electrical signals and chemical ions and transforms these into a digital signal upon which the chip computes, with the result converted into the appropriate electrical and chemical outputs. As long as the chip has the right input/output function, the replacement will make no difference to the functional organization of the system.
https://consc.net/papers/qualia.html

“Open source AI models will soon become unbeatable. Period.” – Yann LeCun
https://twitter.com/ylecun/status/1713304307519369704

The CEO of the tech company SoftBank predicts that AGI will be invented within 10 years. He also believes in the Singularity.
https://www.cnn.com/2023/10/04/tech/japan-softbank-ai-hnk-intl/index.html

AI scientist and DeepMind co-founder Shane Legg predicts there’s a 50% chance AGI will be created by the end of 2028. He says the path between now and then merely involves iteratively improving the narrow AI algorithms we already have and the hardware they’re running on, and feeding them more training data.
https://youtu.be/Kc1atfJkiJU?si=ldjTxLl-Rs9JICIG

Geoffrey Hinton gave an interview to 60 Minutes where he again warned about the risks of AI. Unfortunately, his comments were blown out of proportion by several news outlets. Hinton said that machines MIGHT be able to “reason better” than humans in five years. That doesn’t mean AGI will exist by then or anything bad will be happening.

He also predicted AI will be used in warfare and online disinformation, and that it will put large numbers of humans out of work for good, but he didn’t give dates, and the concerns are old ones shared by many thinkers on the subject.
https://youtu.be/qrvK_KuIeJk?si=Nz2o9xnzlW_Zs4W9

“Geoguessr” is an e-sport where players are shown a series of Google Street View images of an unknown place and have to guess where it is by marking a spot on a world map. The player who guesses the shortest distance from the actual location wins. Some people are shockingly good at it. Some tournaments offer $50,000 to the winner.

Anyway, some guys built a machine that can beat the best human at it.
https://youtu.be/ts5lPDV–cU?si=opJy1bHLCTVSQtjx
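Geoguessr’s actual scoring is points-based, but it boils down to the great-circle distance between the guessed point and the true location. That distance can be sketched with the standard haversine formula; the Paris/Berlin coordinates below are just my own illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two latitude/longitude points,
    assuming a spherical Earth of mean radius 6371 km."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Illustrative round: the true location is central Berlin, but the player's
# guess lands on central Paris -- a miss of roughly 880 km.
d = haversine_km(48.8566, 2.3522, 52.5200, 13.4050)
print(f"guess was {d:.0f} km from the true location")
```

Whoever minimizes that distance across the round wins; the best players routinely get within a few kilometers from a single street scene.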

Pedophiles are using advanced image manipulation tools to transform photos of adults into what they would look like as nude children.
https://www.bbc.com/news/technology-67172231

‘Google Pixel’s face-altering photo tool sparks AI manipulation debate’
https://www.bbc.com/news/technology-67170014

Here’s a critical review of the new movie The Creator, which is about humans oppressing intelligent robots and cyborgs.
https://youtu.be/4Hll494p9Qc?si=cCE-aLC-lzXyFAPI

The “chip shortage” is over.
https://www.bbc.com/news/technology-67226385

Carbon-14 dating of human footprints found just two years ago proves that humans were present in North America 21,000 – 23,000 years ago. For decades, the scientific consensus had been that humans didn’t cross the Bering Strait Land Bridge until 16,000 to 13,000 years ago.
https://www.science.org/doi/10.1126/science.adh5007

Actress Goldie Hawn says she saw an alien when she was in her 20s and it actually touched her face with its finger.
https://www.dailymail.co.uk/femail/article-12675815/Goldie-Hawn-claims-ALIEN-touched-face.html

A recently released Pentagon video of a spherical UFO speeding over an unidentified Middle Eastern country has been geolocated to Syria. A new analysis also concludes it could have just been a silvery party balloon, probably released into the air during a holiday celebration.
https://www.bellingcat.com/news/2023/10/24/isnt-that-a-balloon-deflating-a-dod-ufo-video/

The Asteroid Belt between Mars and Jupiter is well-known, but it is not the only asteroid belt in our Solar System.

‘The total number of Jupiter trojans larger than 1 km in diameter is believed to be about 1 million, approximately equal to the number of asteroids larger than 1 km in the asteroid belt.

…The total mass of the Jupiter trojans is estimated at 0.0001 of the mass of Earth or one-fifth of the mass of the asteroid belt.’
https://en.wikipedia.org/wiki/Jupiter_trojan

The James Webb Space Telescope has spotted rogue, Jupiter-sized planets in the Orion nebula. According to our models of how nebulas work, they shouldn’t exist.
https://www.aljazeera.com/news/2023/10/2/james-webb-telescope-finding-jupiter-sized-objects-in-orion-nebula-baffles-scientists

The movie Dreamcatcher was really bad, but the aliens were creatively done.
https://youtu.be/vIAjheaZSms?si=oK6uyg0MjI50j2J1

‘Forever Chemical’ Bans Face Hard Truth: Many Can’t Be Replaced
https://www.yahoo.com/news/forever-chemical-bans-face-hard-100002502.html

A man who received a transplant of a genetically engineered pig heart was healthy enough to leave the hospital, but then died after his body rejected the organ. He survived with it for six weeks.
https://www.medschool.umaryland.edu/news/2023/in-memoriam-lawrence-faucette.html

Chinese doctors have used gene therapy to practically cure a type of deafness caused by inadequate levels of a protein called “otoferlin.”
https://www.technologyreview.com/2023/10/27/1082551/gene-treatment-deaf-children-hearing-china/

From five years ago:

‘The rapid appearance now of practically useful risk predictors for disease is one anticipated consequence of this phase transition. Medicine in well-functioning health care systems will be transformed over the next 5 years or so.’
https://infoproc.blogspot.com/2018/10/population-wide-genomic-prediction-of.html

The case for an emergency effort to vaccinate African children against malaria.
https://marginalrevolution.com/marginalrevolution/2023/10/what-is-an-emergency-the-case-of-rapid-malaria-vaccination.html

The two lead scientists behind the mRNA technology used in COVID-19 vaccines shared the Nobel Prize in Physiology or Medicine.
https://www.npr.org/sections/health-shots/2023/10/02/1202941256/nobel-prize-goes-to-scientists-who-made-mrna-covid-vaccines-possible

COVID vaccine hesitancy cost a lot of Republicans their lives.
https://www.natesilver.net/p/fine-ill-run-a-regression-analysis

Escape to nowhere – Why new jobs might not save us from machine workers

This is a companion piece to my 2020 essay “Creative” jobs won’t save human workers from machines or themselves, so I recommend rereading it now. In the three years since, machines have gotten sharply better at “creative” and “artistic” tasks like generating images and even short stories from simple prompts. Video synthesis is the next domino poised to fall. These advancements don’t even represent the pinnacle of what machines could theoretically achieve, and as such they’ve called into question the viability of many types of human jobs. Contrary to what the vast majority of futurists and average people predicted, jobs involving artistry and creativity seem more ripe for automation than those centered around manual labor. Myth busted. 

Another myth I’d like to address is that machines will never render human workers obsolete since “new jobs that only humans can do will keep being created.” This claim is usually raised during discussions about technological unemployment, and its proponents point out that it has reliably held true for centuries now: each scare over a new type of machine rendering humans permanently jobless has evaporated. For example, the invention of the automobile didn’t put farriers out of work forever; it just moved them to working in car tire shops.

The first problem with the claim that we’ll keep escaping machines by moving up the skill ladder to future jobs is that, like any other observed trend, there’s no reason to assume it will continue forever. In any type of system, whether we’re talking about an ecosystem or a stock market, it’s common for trends to hold steady for long periods before suddenly changing, perhaps due to some unforeseen factor. Past performance isn’t always an indicator of future performance.

The second problem with the claim is that, even if the trend continues, people might not want to do the jobs that become available to them in the future. Let me use a game as an analogy.

“Geoguessr” is an e-sport where players are shown a series of Google Street View images of an unknown place and have to guess where it is by marking a spot on a world map. The player who guesses the shortest distance from the actual location wins. Some people are shockingly good at it. Some tournaments offer $50,000 to the winner.

Anyway, some guys built a machine that can beat the best human at it.

This is a good model of how technological unemployment could play out in the future. Geoguessr, which could be thought of as a new job made possible by advances in technology (e.g., Google Street View, widespread internet access), was created in 2013. Humans reigned supreme at it for 10 years until a machine was invented that could do it better. In other words, this occupation blinked in and out of existence in the space of 10 years.

That’s enough time for an average person to train up, become an expert, and earn a steady income. However, as computers improve, they’ll learn new tasks faster. The humans who played Geoguessr full-time will jump to some new kind of invented job made possible by a newer technology like VR. There, humans will reign supreme for, say, eight years before machines can do it better.

The third type of invented job will exist thanks to another future technology like wearable brain scanners. The human cohort will then switch to doing that for a living, but machines will learn to do it better after only six years.

Eventually, the intervals between the creation and obsolescence of jobs will get so short that it won’t be worth it for humans to even try anymore. By the time they’re finished training for it, they might have a handful of years of employment ahead of them before being replaced by another machine. The velocity of this process will make people drop out of the labor market in steadily growing numbers through a combination of hopelessness and rational economic calculation (especially if they can just get on welfare permanently). I call this phenomenon “automation escape velocity,” whereby machines get faster at learning new work tasks than humans, or so fast that humans have too small an advantage to really capitalize on.
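The shrinking-window dynamic described above can be made concrete with a toy model. Every number in it (the retraining time, the shrink rate, the first job’s lifespan) is invented purely for illustration:

```python
# Toy model of "automation escape velocity": each newly invented job survives
# for a shorter interval before machines master it, while human retraining
# time stays roughly constant. All numbers here are invented for illustration.
TRAINING_YEARS = 2.0   # time a human needs to become employable at a new job
SHRINK = 0.8           # each successive job's lifespan is 80% of the last
lifespan = 10.0        # years the first job (pro Geoguessr, say) lasted

job = 1
while lifespan > TRAINING_YEARS:
    worked = lifespan - TRAINING_YEARS  # years of paid work left after retraining
    print(f"job {job}: lasts {lifespan:.1f} yrs -> {worked:.1f} yrs of employment")
    lifespan *= SHRINK
    job += 1

print(f"job {job}: lasts {lifespan:.1f} yrs -- shorter than retraining itself, "
      "so a rational worker gives up")
```

Under these made-up parameters the ladder supports eight successive careers before the window closes; steeper shrink rates or longer retraining end it sooner. The exact numbers don’t matter, only that any constant retraining time eventually exceeds a shrinking job lifespan.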

This scenario shows how the belief that “Machines will never take away all human jobs because new jobs that only humans can do will keep being created” could hold true, yet still fail to prevent mass unemployment. Yes, humans will technically remain able to climb the skill ladder to newly created jobs that machines can’t do yet, but the speed at which they’ll need to keep climbing to stay above the machines below them will get so fast that most humans will fall off. A minority of gifted people who excel at learning new things and enjoy challenges will have careers, but the vast majority of humans aren’t like that.

Interesting articles, September 2023

Small Ukrainian suicide drones destroyed two Russian cargo planes on the ground and damaged two more.
https://www.thedrive.com/the-war-zone/moment-of-drone-attack-that-destroyed-il-76s-at-russian-base-seen-in-infrared-image

Russian ground crews responded by putting old tires on top of their planes when parked on the ground.
https://www.thedrive.com/the-war-zone/russia-really-is-using-tires-to-protect-its-bombers-from-attack

Ukraine used cruise missiles and drone boats to fatally damage a Russian sub and landing ship docked in Sevastopol.
https://www.thedrive.com/the-war-zone/russian-submarine-landing-ship-struck-in-attack-on-sevastopol

Elon Musk refused Ukraine’s request to use his Starlink satellites to facilitate an attack on Russian warships docked in Crimea last year.
https://apnews.com/article/spacex-ukraine-starlink-russia-air-force-fde93d9a69d7dbd1326022ecfdbc53c2

The first British Challenger 2 tank donated to Ukraine was destroyed in combat. A Russian land mine or artillery explosion immobilized it, the crewmen ran away, and then a Russian antitank missile finished it off.
https://youtu.be/1SrWjCic3QM?si=2ztgUQ-fC0FtnTbo

Here’s footage of a German-made Leopard 2 tank in Ukrainian service engaging a Russian T-72. The Ukrainian tank scores a direct hit on its opponent with its first shot, but because it fired a high-explosive round instead of a special penetrator round, the shell fails to go through the T-72’s armor. Nevertheless, the force of the impact and the ensuing explosion against the tank’s exterior causes enough superficial damage to render the T-72 incapable of further action, and it has to turn back to the repair shop. This is called a “mission kill.”
https://youtu.be/cMLYwhG7mmM?si=XsVu_RPLECqdyF24

‘When it comes to tanks, in particular, the lesson of the Ukrainian war is that tank-on-tank battles have become a rarity—which means that the relative sophistication of a tank is no longer as important. Fewer than 5% of tanks destroyed since the war began had been hit by other tanks, according to Ukrainian officials, with the rest succumbing to mines, artillery, antitank missiles and drones.’
https://marginalrevolution.com/marginalrevolution/2023/09/the-new-warfare.html

Both sides in the Ukraine War continue to field “mutant” armored vehicles that marry whatever old weapons they can find to Soviet-era vehicles.
https://www.yahoo.com/lifestyle/mutant-soviet-armored-vehicles-come-153000354.html

Russia has gotten so desperate for ammunition and weapons that Putin is trying to get them from the pariah state of North Korea.
https://apnews.com/article/north-korea-russia-kim-putin-missile-0d70f5190df1088ebe53e8ca19f8e9c9

Russia’s arms industry has proven itself surprisingly resilient, largely because it has found ways around some of the Western-imposed sanctions. Nevertheless, it’s not making tanks and artillery shells fast enough to meet the demands of the Ukraine War.
https://www.yahoo.com/news/russian-manufacturers-making-7-times-023446584.html

This analysis of Russian tank losses in Ukraine gives insights into Russia’s production capacity and bottlenecks. Their factories are producing relatively small, steady quantities of modern, new tanks (T-90M and BMP-3), but the other 85% is old stuff made in the Soviet era that they have pulled from storage and dusted off. Overall, Russia is losing tanks on the battlefield faster than they can replace them from all sources. They really need help from other countries.
https://youtu.be/ctrtAwT2sgs?si=-C62tmmrKZh599-l&t=1901

The Kremlin keeps saying that it fears NATO will attack it, and that Russian militarization and seemingly aggressive foreign policy actions are actually defensive. In reality, this is a lie that Russia’s elites peddle to brainwash average Russians and sympathetic foreigners. If they ACTUALLY thought a NATO invasion was a threat, they never would have depleted their forces along the borders of Norway and Finland as much as they have.
https://www.yahoo.com/news/russian-forces-near-norway-20-171622974.html

Ironically, Russia accuses the U.S. of waging “hybrid war” against it.
“U.S. and British reconnaissance planes are not only working to identify objectives and targets but are showing where our anti-air defenses are working so next time they could help. So, you can call this whatever you want to call it, but they are directly at war with us. We call it the hybrid war but it doesn’t change the reality.”
https://www.yahoo.com/news/u-directly-war-moscow-russian-233830418.html

Overall, the front line in Ukraine has been stable this year.

The U.S. Army has adopted a new “light” tank called the M-10 Booker. It is heavier than a Soviet T-55 main battle tank, has virtually the same cannon and the same number of crewmen, and costs 26 times as much.
https://www.army-technology.com/news/us-army-spends-258m-for-more-m10-booker-vehciles/?cf-view

The Russian T-90 tank is an iterative improvement upon the T-72. Were it not for political and marketing reasons, it should have been named the “T-72C.” A key improvement is the storage of excess ammunition in a new bustle at the rear of the turret. This reduces (but doesn’t eliminate) the odds of a catastrophic cook-off of the tank’s own ammunition.
https://youtu.be/8LsBbQOL0JY?si=quXfKTr4slhF9sS1

This 2019 article on Yevgeny Prigozhin is weird to read knowing what we know today.
https://www.npr.org/2019/01/30/685622639/putins-chef-has-his-fingers-in-many-pies-critics-say

Azerbaijan’s army launched a mass attack against its breakaway province of Nagorno-Karabakh, defeating its militia in a few days and establishing control over every part of its territory for the first time in 30 years. The self-declared republic’s government surrendered and announced it will disband by January 1. As of this writing, at least 80% of the province’s ethnic Armenian inhabitants had fled to Armenia.
https://www.cnn.com/2023/09/28/europe/nagorno-karabakh-officially-dissolve-intl/index.html

The F-35 is actually an excellent fighter plane. Even though it is slower and less maneuverable than its predecessors, those factors are not as important in air-to-air combat anymore.
https://youtu.be/OeZ1DrnQl5c?si=3GORwvohMBjHbhiK

After its pilot had to eject from it in midair, an F-35 continued flying on its own. It took the Air Force over a day to find the crash site.
https://www.foxnews.com/us/f-35-jet-reported-missing-authorities-pilot-ejects-mishap-officials

This video about Sudan has it all: the Ukraine-Russia War expanding into a new front, precision suicide drone attacks, and an evil PMC propping up dictatorships for a cut of their natural resources.
https://youtu.be/1M5iq5x29mY?si=oby8-5WH8xDeFYU0

‘France to withdraw ambassador, troops from Niger after coup’
https://www.aljazeera.com/news/2023/9/24/france-to-withdraw-ambassador-troops-from-niger-after-coup-macron

There is such a thing as a magazine-fed “tactical crossbow.”
https://www.thefirearmblog.com/blog/2023/09/11/potd-ar-6-tactical-crossbow/

The “ALOFS Repeating Shotgun System” was invented in the 1920s and let people turn their single-barrel shotguns into repeating shotguns.
https://youtu.be/63xFGmlsrww?si=utelQKLPnGai0qdL

This guy tests the standard cold-weather jacket Red Army troops wore in WWII and finds it’s not as good as modern winter coats made of performance fabrics.
https://youtu.be/GyoAgqUVj8k?si=h6MeTStXppU14Oac

‘The Auspicious Incident (or Event) (Ottoman Turkish: Vaka-i Hayriye, “Fortunate Event” in Constantinople; Vaka-i Şerriyye, “Unfortunate Incident” in the Balkans) was the forced disbandment of the centuries-old Janissary corps by Sultan Mahmud II on 15 June 1826. Most of the 135,000 Janissaries revolted against Mahmud II, and after the rebellion was suppressed, most of them were executed, exiled or imprisoned. The disbanded Janissary corps was replaced with a more modern military force.’
https://en.wikipedia.org/wiki/Auspicious_Incident

Sonar doesn’t sound like it does in the movies.
https://youtu.be/AaO6jQEmfoY?si=WatTh9Dv85Dtv-bB

Biplanes only made sense in the early years of aviation, when engines had poor thrust-to-weight ratios.
https://youtu.be/0P0K9BSuQqE?si=XckiNNjaV-mxAbME

This prediction from a year ago was wrong: ‘House prices could fall by up to 20 percent next year if there’s a recession, experts warn – and property in some areas of the country is overvalued by as much as 72 percent.

Mark Zandi, chief economist for Moody’s Analytics, was pessimistic about the housing market in May, but he has now made his forecasts even more bleak…’
https://www.dailymail.co.uk/news/article-11150999/Is-America-verge-new-housing-collapse-Mountain-West-Sun-Belt-overvalued-72.html

A prediction from 18 months ago: ‘While an inversion generally indicates a recession is coming within the following 12 months, it can sometimes take years. The curve inverted in 2005, but the Great Recession didn’t start until 2007. The most recent inversion, in 2019, prompted fears of a recession — which materialized in 2020, but that was due to Covid-19.’
https://www.cnn.com/2022/03/29/economy/inverted-yield-curve/index.html

Elon Musk is a genius and the richest man in the world, yet he also has a long record of future predictions that badly failed.
https://thenextweb.com/news/elon-musk-most-ridiculous-predictions

By 2025, all new Tesla cars will have bidirectional charging, meaning they will be able to transfer surplus power from their batteries into the power grid. “The dream for many is a scenario in which millions of electric cars are all connected to the grid most of the time. They could absorb lots of renewable energy from solar panels during the day and feed it back into the grid after dark. The grid would become like the tides — distributing zero-emissions energy all day every day and reclaiming it at night.”
https://www.yahoo.com/news/tesla-confirms-game-changing-feature-050000765.html

‘[Tesla] pioneered the use of huge presses with 6,000 to 9,000 tons of clamping pressure to mold the front and rear structures of its Model Y in a “gigacasting” process that slashed production costs and left rivals scrambling to catch up. In a bid to extend its lead, Tesla is closing in on an innovation that would allow it to die cast nearly all the complex underbody of an EV in one piece, rather than about 400 parts in a conventional car, the people said.’
https://finance.yahoo.com/news/gigacasting-2-0-tesla-reinvents-100727153.html

Unionized workers at the Big Three car companies went on strike to demand better pay and job security. Problems at those companies are principally being driven by Tesla, which is more innovative and has a non-unionized workforce.
https://www.axios.com/2023/09/15/uaw-worker-strike-electric-vehicle-industry

Waymo’s driverless cars may already be safer than human drivers.
https://www.understandingai.org/p/driverless-cars-may-already-be-safer

Thanks to better technology, “surge pricing” will someday be common for all kinds of goods and services.

‘Amazon changes the price of its products on average every 10 minutes, using millions of real-time data points to benchmark against competitors and track demand surges.

“It will eventually be everywhere,” says Robert Cross, who created a computerised dynamic pricing model for Delta Air Lines in the early 1980s before doing the same for hotel giants Marriott, Hyatt and InterContinental Hotels Group.

As high inflation erodes margins and improvements in technology make dynamic pricing cheaper and more practical for businesses to implement, the temptation to deploy the pricing strategy is growing in industries that have so far remained largely untouched by the method. Bars, restaurants and bricks-and-mortar retailers have historically only adopted dynamic pricing for basic discount offers, but that could change.’
https://www.ft.com/content/d0e3bcb5-b824-414e-bfac-4c0b4193e9f0

GPT 3.5’s ability to play chess at the expert level, even though it wasn’t trained on the game’s rules, suggests it could have a limited degree of general intelligence.
https://twitter.com/GrantSlatton/status/1703913578036904431

‘Suleyman predicts fully autonomous AI is less than a decade away, and to “buy time,” the U.S. government should leverage “choke points” by restricting the sale of critical technologies to China and other adversaries. That includes high-tech microchips made by Nvidia and cloud computing services from the likes of Google, IBM and Amazon.’
https://www.politico.com/news/2023/09/06/mustafa-suleyman-made-his-name-on-ai-now-he-wants-d-c-to-rein-it-in-00114126

ChatGPT can now communicate with people verbally.
https://dnyuz.com/2023/09/25/chatgpt-can-now-respond-with-spoken-words/

“ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021.”
https://twitter.com/OpenAI/status/1707077710047216095

OpenAI released “DALL-E 3,” its most advanced text-to-image AI yet.
https://youtu.be/sqQrN0iZBs0?si=rgCYNpoHK0TwijEu

‘Autonomous, AI-powered submersibles would minimise the risks to human lives from deep-sea exploration and would allow faster mapping of ocean floors. But what researchers ideally want is to go one step further: build submersibles that can explore for indefinite stretches of time, thereby speeding up the process of scanning the planet’s deepest spots.’
https://www.aljazeera.com/features/2023/9/14/titan-implosion-is-ai-the-future-of-deep-sea-exploration

ChatGPT badly defeated a group of Wharton MBA students in a classroom assignment to come up with creative business ideas.
https://www.wsj.com/tech/ai/mba-students-vs-chatgpt-innovation-679edf3b

A company called “HeyGen” has made an app that converts video of someone speaking in one language to a video of them saying the same things in a different language. The translation mimics the real sound of their voice, and their lip movements are automatically altered to match the new words. I hadn’t predicted this would happen until the 2030s.
https://twitter.com/mrjonfinger/status/1701075571630047525

‘Deepfakes of Chinese influencers are livestreaming 24/7’
https://www.technologyreview.com/2023/09/19/1079832/chinese-ecommerce-deepfakes-livestream-influencers-ai/

Most NFTs have collapsed in value.
https://dappgambl.com/nfts/dead-nfts/

Meta’s new virtual reality teleconferencing technology is incredible.
https://youtu.be/MVYrJJNdrEg?si=5Yxqocmvu2js2EqE

Joseph Marie Jacquard invented a loom in 1805 that halved the number of workers needed to make patterned fabric. Loom workers angry about losing their jobs tried to kill him.
https://youtu.be/K6NgMNvK52A?si=GwPAWvVS_gHTP7CR

In 250 million years, the continents will merge into one “supercontinent” that will get so hot that mammals and humans will only be able to live on parts of its coastal areas. If we are still around by then, I predict we will use various technological solutions to surmount nature.
https://www.theguardian.com/science/2023/sep/25/supercontinent-could-make-earth-uninhabitable-in-250m-years-study-predicts

Apple is finally abandoning its proprietary charger cords, meaning USB-C is set to become the global standard.
https://www.cnn.com/2023/09/13/tech/iphone-15-usb-c-charging/index.html

Andrew Lincoln Nelson’s surreal artwork is kind of what I envision organic-synthetic hybrid life forms will look like in the distant future.
http://www.nelsonrobotics.org/robotchild_web/

To protect coral from extinction as ocean temperatures rise, some scientists want to collect samples of all endangered species and cryogenically freeze them for possible reintroduction to the wild at some point in the future when conditions are right again. Why not do this for all species?
https://www.npr.org/2023/09/06/1197792650/coral-reefs-bleaching-restoration-climate

The genomes of 60 different species of potato have been sequenced, setting the stage for the creation of genetically engineered super potatoes.
https://www.futurity.org/potato-pangenome-food-crops-2968602/

The first known dog-fox hybrid has been found. Foxes have 74 chromosomes, dogs have 78, and the hybrid has 76.
https://www.newsweek.com/shelter-rescues-injured-animal-worlds-first-dog-fox-dogxim-1827353

The U.S. canceled its controversial “DEEP VZN” program which sought to collect exotic disease samples from across the world and send them to U.S. labs for biodefense research.
https://thebulletin.org/2023/09/the-us-government-cancels-deep-vzn-a-controversial-virus-hunting-program/

The weight-loss drug Wegovy will go generic in 2038. I foresee a drop in global obesity rates and associated healthcare spending starting then.
https://www.drugs.com/availability/generic-wegovy.html

An AI program that visually analyzes microscopic tissue samples could help treat male infertility. Men with that condition have to get sections of one of their testicles surgically removed so technicians can find the few healthy sperm that are in them, and then inject those sperm into ova in an IVF lab. Computers can scan the samples 1,000 times faster than a human.
https://www.bbc.com/news/business-66608073

A pig kidney surgically implanted in a brain-dead man functioned as well as a human kidney for two months. It was just removed for careful lab analysis to refine the pig organs further, and the man’s family turned off his life support. Their contribution to science could help save thousands of lives in the near future.
https://apnews.com/article/pig-kidney-transplant-xenotransplant-83dfb5e6d022ca72039a821cc6bc00ef

A genetically engineered pig heart was transplanted into a human for only the second time.
https://www.wbal.com/article/616915/3/surgeons-perform-second-pig-heart-transplant-trying-to-save-maryland-man

‘A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis’ [tethered cord syndrome]
https://www.yahoo.com/news/boy-saw-17-doctors-over-204224194.html

Being a psychopath is partly genetic, and we’re finding some of the responsible genes.
https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2656184

The FDA was warned in 2007 that the decongestant phenylephrine probably didn’t work. It took this long for the agency to finally move to remove it from shelves. How many billions of dollars did people waste buying it in the interim?
https://apnews.com/article/sudafed-decongestants-phenylephrine-pseudoephedrine-fda-0f140bafae9a500c5fba05fe764ecb66

Lab-grown diamonds are getting cheaper and more popular.
https://marginalrevolution.com/marginalrevolution/2023/09/a-diamond-pricing-puzzle.html

An undiscovered, Earth-sized planet could be orbiting 50 AUs from the Sun.
https://iopscience.iop.org/article/10.3847/1538-3881/aceaf0

NASA’s new report on UFOs contains nothing new. At least it frankly mentions “alien” and “extraterrestrial” instead of falling back on the clumsy alternative terminology found in other recent U.S. government reports.
https://www.bbc.com/news/world-us-canada-66812332

Review: “Terminator Genisys”

Plot:

In this fifth and worst (so far) movie in the Terminator franchise, familiar ground is trod again, but the viewer’s expectations are also upended. The movie opens in 2029, as a strike team led by rebel leader John Connor and his aide Kyle Reese attacks Skynet’s main base. As in past films, the attack succeeds, but not before a Terminator uses a time machine to go to 1984 to kill Sarah Connor. Kyle Reese is sent through the machine to protect her, but here the plotline twists: while John Connor and his men are watching Reese teleport into the past, a Terminator emerges from the back of the room, runs up behind John Connor and infects him with a nanomachine “disease” that transforms him into an advanced Terminator.

From that point on, Terminator Genisys manages to have a story that is overly complicated yet very stupid at the same time (just like too many action films made in the last 10 years). I won’t waste my time describing every contrivance and every side-plot that exists only for fan service. Suffice it to say Sarah Connor, Kyle Reese, and a friendly T-800 played by an elderly Arnold Schwarzenegger team up to destroy Skynet, and evil robot John Connor goes back in time to stop them. He’s so advanced that it’s doubtful whether the other three can stop him.

The rehashing of scenes, events (2029 final attack on Skynet, Reese and Terminator teleporting into 1984 from the future), and characters from earlier movies is a testament to how unoriginal it is, and how hard it banks on fan service to have any appeal. But even that appeal is minimal: While Kyle Reese and Sarah Connor were relatable characters with depth of personality in the first film, they are one-dimensional caricatures in Genisys. The development of a romance between the two in the first film was believable and tragic, whereas in this remake, the lack of personal chemistry between the actors playing them is striking.

Schwarzenegger’s performance in the first movie was so stolid and intimidating that it became iconic. Now, he seems like an aging father who is reduced to being a background character in his high-strung teen daughter’s chaotic life. Having the homey and vaguely comical name “Pops” encapsulates his diminishment. The terrifyingly relentless and resilient T-1000 from Terminator 2 makes a guest appearance and is easily destroyed this time around. In summary, all the same notes from the better, earlier films are struck, but they ring hollow.

Terminator Genisys is the worst film in the Terminator franchise, and I understand why the next movie, Terminator Dark Fate, canceled it out by pretending its events never happened. If ever there was a cash grab devoid of any creativity or passion, this is it. Don’t watch it.

Analysis:

First, bear in mind I’m skipping any futuristic elements of this film that I discussed in my reviews of the other Terminator movies. You can read those here:

Robots will have superhuman reflexes. During the introductory combat scene where the humans raid Skynet’s base, the machine forces consist of humanoid T-800s, tilt-engine “Hunter-Killer” aircraft, and “Spider Tanks.” While the first two of those have been in every previous Terminator film, the last is new. Spider Tanks are quadrupedal fighting machines with plasma guns for arms. Overall, they’re about the size of small tanks. Each Hunter-Killer aircraft carries a Spider Tank attached to its belly, and they are air-dropped into the middle of the base within minutes of the human attack. One of the Spider Tanks starts delivering accurate fire at the human infantrymen while it is still in free-fall, and it continues shooting after hitting the ground at high speed.

A Spider Tank

This depiction of future robots having superhuman reflexes will prove accurate. In fact, the fire control systems in modern tanks and naval guns might already have the same capabilities as the Spider Tank aiming systems (able to hit moving targets with bullets while the tank or ship is also moving). If not, incremental improvements will surely close the gap. More generally, physical feats demanding fine dexterity, flexibility and bodily coordination that only the most skilled and highly trained humans can do today, like hitting a moving target with a bullet while you are also moving, throwing a dart onto a tiny bullseye from eight feet away, or doing a gymnastics performance that would win an Olympic gold medal, will be easy for multipurpose, human-sized robots by the end of this century. We will be surpassed in every way.

Machines will learn a lot about you from a single glance. At the start of the fight scene between Pops and the younger T-800 that has just emerged from the time portal, there’s a shot showing things from the latter’s perspective. We see the usual red tinting and text overlaid across its field of view. Simple graphics also show the T-800 scan Pops, identifying him as a fellow android and also identifying his gun (a Remington shotgun) along with its range.

This is accurate. Today’s best neural networks can already describe what they see in an image (a task called “visual question answering”) with over 80% accuracy. The multi-year trend has been one of steady improvement, leaving no doubt they will be as good as we are (presumably, 99% accurate) in the near future. Machine abilities to understand what they see in videos (“video question answering”) are less advanced, but also steadily improving. Again, there’s every reason to expect them to ultimately reach human levels of competency.

Machines could also potentially have much better eyesight than humans thanks to a variety of technologies like telephoto lenses and digital sensors that are more light-sensitive than human eyes, able to capture light from wavelengths that are invisible to us, and able to see finer details. Things that look blurry to us, either due to long distance or because the object is moving, would look clear to a machine that could be built with today’s technology.

Additionally, computers have the potential to process and analyze the contents of what they see faster than the human brain can. As a result, a machine could comfortably watch a movie at 10 times the normal speed–which would look like a disorienting blur of motion and shapes to us–and accurately answer whatever questions you had about it at the end. In a split second, it could notice levels of detail that most humans would need several minutes of staring at a still image to absorb.

These abilities will have many uses for machines in the future, a subset of which will involve combat. Yes, like the T-800 in the film, a fighting machine in just 20 years will be able to visually recognize humans, even at long distances and in poor light, as well as the weapons and other gear they are carrying. At a glance, it would know your weapon’s capabilities and how much ammunition you were carrying. It could use that information to its advantage by, for example, keeping track of how many bullets you fired so it would know the exact instant you ran out and needed to switch magazines. From its initial glance at you, the fighting machine would also know how much body armor you were wearing, allowing it to jump out and target your unprotected areas during that brief pause in your ability to fire.

Robots will be able to detach parts of themselves to perform specific functions. Unlike in Terminator 2, this film’s T-1000 detaches parts of its own body when doing so is useful to its mission. At one point, as Kyle, Sarah and Pops are speeding away in a van, part of the T-1000’s hand separates so it can stick to the back of the vehicle and serve as a tracking device. When it catches up to them, the T-1000 turns its arm into a javelin, which it then throws at Pops, impaling him against a wall.

The T-1000 preparing to throw a spear made of metal from its own body

Being able to detach body parts will be a very useful attribute for many types of future robots. At the very least, it would let them replace their damaged or worn-out parts easily. The ability could also make them more survivable. For example, imagine a robot butler falling down a deep well and getting trapped because the walls were too slick for it to climb out and they also blocked the radio distress signals it sent out. Rather than wait to run out of power and rust away, the robot could detach one of its arms and throw it up and out of the well. After landing on the ground outside, the arm would send its own distress signal and/or use its fingers to crawl towards help.

That of course requires the robot’s systems to be distributed throughout its body, with the head (if it has one), torso, and each limb having a computer, a battery, sensors, and a wireless chip for communicating with the rest of the robot if physically severed from it. The redundancy, survivability, and functional flexibility of such a layout will be especially valuable for combat robots, which are expected to take damage but also to complete critical tasks. If a combat robot like a T-800 were cut in half at the waist, the bottom half could still run towards and kick the enemy while the upper half used its arms to crawl towards him and attack. If blown to bits, the T-800 body parts that were still functional could still perceive their surroundings, communicate with each other, and try to put themselves back together or to complete the mission separately to the best of their abilities. Fighting against machines like this would be very hard and demoralizing, since every part of one of them would need to be neutralized before it was safe.

There will also be advantages to some robots carrying smaller, task-specific robots inside of themselves to be released when needed. Imagine an android carrying a small quadcopter drone in an empty space in its chest cavity. It could open a small hatch on its chest to release the drone or even spit it out of its mouth. The flying drone could transmit live aerial footage to give the android an overhead view of the area, letting it see things it couldn’t from ground level. A combat machine like a T-800 might carry flying drones that were fast enough to chase down cars and blow them up with a bomb, or inject their occupants with lethal toxins from a stinger.

Very advanced machines that won’t exist until the distant future could have organic qualities letting them “assemble” smaller robots internally and then expel them to complete tasks.

Getting back to the point, the movie’s depiction of an advanced robot being able to detach parts of its body and then throw them at people and things to accomplish various ends is accurate. The robots won’t be made of liquid metal, so the projected objects will be of fixed forms, but the end result will be the same. A future combat machine could detach its hand and throw it at the back of a van that was speeding away, the hand would grab onto something on the back door, and it would turn on its location-finding system to effectively turn itself into a tracking device. Alternatively, the combat machine could release from its body a small flying drone that could overtake the van and latch onto it, or at least follow it in the air.

Gradual replacement of human cells with synthetic matter could turn people into machines. A major plot twist is that John Connor has been “converted” into a Terminator through a process in which a swarm of microscopic machines rapidly took over all his cells, one at a time. Within a few minutes, he transformed from the hero of the human resistance to a minion of Skynet. Important details about the conversion process are never explained (including whether the machines are micro- or nanoscale), but the persistence of John’s memories and personality even after being turned into a robot indicates the machines mapped the fine details of his brain structure. It stands to reason that the same information was gathered about all the other cells in his body before they were all transformed into synthetic tissue.

John Connor having his body taken over by microscopic machines

Something like this could work, though it will require extremely advanced technology and the conversion would take longer than it did in the film. The process would involve injecting the person with trillions of nanomachines, which would migrate through their body until one was inside of or attached to each cell (a typical human cell is 100 micrometers in diameter whereas a ribosome–the quintessential organic nanomachine–is 30 nanometers wide, a size difference of 1 : 3,333). The nanomachines would spend time studying their assigned cells and how they related to the cells around them. Large scanning machines outside of the person’s body would probably be needed to guide the nanomachines, send them instructions, collect their data, and maybe provide them with energy.
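The cell-to-nanomachine scale comparison above can be sanity-checked with a line of arithmetic, using the figures given in the text (a 100-micrometer cell and a 30-nanometer ribosome):

```python
# Back-of-the-envelope check of the scale comparison in the text:
# a 100-micrometer human cell vs. a 30-nanometer ribosome.
cell_diameter_nm = 100 * 1_000      # 100 micrometers, converted to nanometers
ribosome_diameter_nm = 30           # a ribosome, the quintessential organic nanomachine
ratio = cell_diameter_nm / ribosome_diameter_nm
print(f"size difference = 1 : {ratio:,.0f}")  # size difference = 1 : 3,333
```

So a nanomachine built at roughly ribosome scale would be over three thousand times smaller across than the cell it was assigned to, leaving it ample room to operate inside or alongside the cell.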

After the necessary data on the locations and activities of all the person’s cells were gathered, the conversion process could start. The nanomachines already in the person’s body might be able to do this, or a new wave of specialized “construction” nanomachines might need to be introduced. Every cell would be broken down and the molecules reassembled to make a synthetic cell or some other type of structure of equal size. For example, if a person wanted ultra-strong bones, nanomachines would break down each bone cell and reuse its carbon molecules to make matrices of carbon nanotubules.

A typical human cell is much larger than microorganisms like viruses and some bacteria. A nanomachine could be as small as the latter.

The utmost care would be taken to control the speed of the conversion and to monitor the person’s life signs to make sure it wasn’t getting out of control and killing them. As each original cell was replaced, its successor would be tested again and again to ensure it mimicked the important qualities of its predecessor.

The conversion of the brain would, by far, be the most important part of the process, and hence the part done with the greatest care and oversight. Our memories, personalities, and consciousness directly arise from the microscopic structures of our brain cells and their intricate patterns of physical connections to each other. Even small mistakes transforming those cells into synthetic analogs would effectively “kill” the person by destroying their mind and replacing it with a stranger’s. For that reason, the procedure will bear no resemblance to what happened in the film, where John Connor was apparently jabbed with a needle full of microscopic machines and then spent some time kicking and screaming as he felt them take over his cells. Instead, it will happen in a hospital room, with the patient surrounded by medical machines of all kinds that monitor and guide the nanomachines and stand ready to pause their work and render lifesaving aid if necessary. And instead of minutes, it will take days or weeks. Multiple sessions might be needed.

What would be the point of this? Reengineering the human body at the cellular level would let us transcend the limitations of biology in countless ways. We could use electricity for energy, be bulletproof, directly merge our minds and bodies with machines, and achieve a level of substrate plasticity that would set us up for further iterations of radical augmentation that we can’t imagine.

Microscopic machines will be able to rapidly phase-change. In the final fight between John Connor and Pops, John’s technological abilities are fully utilized. While they are grappling, John’s body rapidly dissolves into a cloud of his constituent microscopic machines, which flow around Pops in pulses, delivering several concussive blows to the front of his body. The particles then rapidly reassemble into John’s body behind Pops, and John’s right arm hardens into a sword which he uses to chop off Pops’ arm. This means John’s microscopic machines managed to transform from a vapor cloud into a solid object as hard as high-grade steel in one or two seconds.

Pops getting popped by a robot dust cloud

While I think it’s possible to create microscopic machines that can form swarms and then work together to change the phase (solid, liquid, vapor) and macro-shape of the swarm, I doubt the swarms will be able to move around or switch phases that fast.

A foglet

In the 32 years since Terminator 2 came out and introduced the world to the idea of a shapeshifting robot, scientists and engineers have made pitifully little progress developing the enabling technologies. Such a machine exists only in the realm of theory, and the best theoretical candidate is the “foglet” (also called “utility fog”). Scientist J. Storrs Hall conceived of it in 1993:

In essence, the utility fog would be a polymorphic material comprised of trillions of interlinked microscopic ‘foglets’, each equipped with a tiny computer. These nanobots would be capable of exerting force in all three dimensions, thus enabling the larger emergent object to take on various shapes and textures. So, instead of building an object atom by atom, these tiny robots would link their contractible arms together to form objects with varying properties, such as a fluid or solid mass.

To make this work, each foglet would have to serve as a kind of pixel. They’d measure about 10 microns in diameter (about the size of a human cell), be powered by electricity, and have twelve arms that extrude outwards in the formation of a dodecahedron. The arms themselves would be 50 microns long and retractable. Each foglet would have a tiny computer inside to control its actions. “When two foglets link up they’ll form a circuit between each them so that there will be a physical electrical network,” said Hall, “that way they can distribute power and communications.”

The arms themselves will swivel on a universal joint at the base, and feature a three-fingered gripper at the ends capable of rotating around the arm’s axis. Each gripper will grasp the hands of another foglet to create an interleaved six-finger grip — what will be a rigid connection where forces can only be transmitted axially.

The foglets themselves will not float like water fog, but will instead form a lattice by holding hands in 12 directions — what’s called an octet truss (conceived by Buckminster Fuller in 1956). Because each foglet has a small body compared to its armspread, the telescoping action will provide the dynamics required for the entire fleet to give objects their shape and consistency.

https://gizmodo.com/why-utility-fogs-could-be-the-technology-that-changes-5932880

A swarm of foglets could coalesce into something that looked like Kyle Reese and felt solid to the touch. They could then transform into something like a fluid or dense gas and “flow” around a person standing nearby, though I don’t know if the foglets could exert enough force against that person’s body to hurt them. The swarm could then re-form into Kyle Reese behind them. However, they wouldn’t be able to create a sharp, hard sword that could cut off a T-800’s metal arm: Hall calculated that foglets could only form into objects that are “as tough as balsa wood.” So while foglets could mimic solid objects, they will lack hardness and durability.

Even if foglets can’t “punch” you or turn into swords that can stab you, they’ll still be able to hurt you. Imagine a swarm of foglets in a vapor state enveloping you and then coalescing into a net ensnaring your body. What if they waited for you to breathe some of them in and then those foglets transformed into solids to clog up your lungs? Likewise, they could clog up the internal moving parts of any guns you had, rendering you defenseless.

Links:

  1. Progress in “visual question answering”
    https://paperswithcode.com/task/visual-question-answering
  2. Progress in “video question answering”
    https://paperswithcode.com/task/video-question-answering
  3. An interview with J. Storrs Hall about his “foglets”
    https://gizmodo.com/why-utility-fogs-could-be-the-technology-that-changes-5932880

Interesting articles, August 2023

The head of Russia’s “Wagner” private army, Yevgeny Prigozhin, died in a plane crash inside Russia, along with five top aides. It is widely believed that elements within the Russian government or military blew up the plane as revenge for Wagner’s brief coup attempt against Putin in June. The Kremlin denies responsibility.
https://www.aljazeera.com/news/2023/8/31/prigozhin-plane-crash-what-we-know-over-a-week-after-wagner-chiefs-death

There are conspiracy theories that the plane crash was carried out to fake Prigozhin’s death, and he’s still alive somewhere.
https://www.reuters.com/world/europe/late-russian-mercenary-prigozhin-spoke-about-his-security-newly-surfaced-video-2023-08-31/

Ukraine’s counteroffensive continues making slow, costly progress. In one place, it has reached the first line of Russia’s defenses.
https://www.thedrive.com/the-war-zone/ukraine-situation-report-major-push-toward-tokmak-gaining-steam

A leaked U.S. intelligence analysis says that Ukraine’s army is too weak to punch through Russia’s defensive lines and reach the Sea of Azov. That means the ongoing counteroffensive can’t achieve its objective.
https://www.usnews.com/news/national-news/articles/2023-08-18/leaked-u-s-intelligence-offers-damning-view-of-ukraines-offensive-despite-new-positive-signs

‘Kyiv is running out of men. US sources have calculated that its armed forces have lost as many as 70,000 killed in action, with another 100,000 injured. While Russian casualties are higher still, the ratio nevertheless favours Moscow, as Ukraine struggles to replace soldiers in the face of a seemingly endless supply of conscripts.’
https://www.yahoo.com/news/ukraine-army-running-men-recruit-173948076.html

A Ukrainian kamikaze drone plane destroyed a Russian Tu-22 bomber on the ground.
https://www.thedrive.com/the-war-zone/tu-22-backfire-destroyed-in-drone-strike-deep-inside-russia

A Ukrainian kamikaze drone boat scored a hit on a Russian navy ship, blowing a big hole in its side and crippling it.
https://www.thedrive.com/the-war-zone/ukrainian-drone-boat-scores-direct-hit-on-russian-warship

Ukraine is threatening to attack Russian non-military ships in the Black Sea.
https://www.politico.eu/article/ukraine-declares-war-on-russia-black-sea-shipping/

Russia is discovering the downside of using prison convicts to fill out its ranks in Ukraine: after they finish their tours of duty and are released, they restart their lives of crime.
https://www.yahoo.com/news/ex-con-freed-fight-ukraine-113039648.html

Around 50,000 Russians have been killed fighting in Ukraine so far. Moscow only acknowledges 6,000 deaths.
https://apnews.com/article/russia-ukraine-war-military-deaths-facd75c2311ed7be660342698cf6a409

Military analyst Anders Puck Nielsen says:
1) Russia and Ukraine both still think they can achieve total victory, and though they’ve sustained heavy damage, they retain the ability to fight on for the foreseeable future, and
2) Don’t cheer too hard for Putin to lose power: his successor could be more reckless and aggressive. A substantial minority of Russians think he hasn’t been heavy-handed enough in Ukraine.
https://youtu.be/7rBlVnc_DEw?si=R_S8pDRRa4v6FyDW

American and British spy planes have been extremely active near Russia’s Black Sea coast, where they find the locations of Russian units and send the information to Ukrainian commanders in real time. Ukraine has used the data and NATO-supplied drones and missiles for many successful attacks against Russian forces far behind the front lines. Considering this, the Kremlin’s angry complaints that NATO is practically a combatant in the Ukraine War gain credence.
https://www.eurasiantimes.com/russian-fighters-aggressively-hunt-us-drones-near-crimea/

Russia showed off several Western-made military vehicles it captured from Ukraine.
https://youtu.be/-P78OEB0QtM?si=4kPCzSyaxWgxGs2S

Ukraine is still capturing large numbers of Russian tanks.
https://www.yahoo.com/news/ukraines-counteroffensive-fall-apart-without-103947172.html

The long-mocked “cope cages” that the Russians built on their tanks are now being copied by Ukraine. Apparently, they’re at least a little effective.
https://bulgarianmilitary.com/2023/08/16/challenger-2-tank-was-spotted-with-diy-cope-cage-of-the-turret/

The Netherlands and Denmark will give Ukraine 42 F-16s. However, it will take years for them to be transferred and for Ukrainian fighter pilots to learn how to fly them well.
https://www.thedrive.com/the-war-zone/dozens-of-f-16s-were-just-officially-pledged-to-ukraine

It would be foolish to assume that none of the weapons we’re giving to Ukraine now will fall into the wrong hands in the near future.
https://www.cnn.com/2023/07/20/politics/pentagon-watchdog-report-ukraine-weaponry/index.html

The Soviet AK-74 uses orange plastic magazines. I never understood this since it seemed like the orange color compromised a soldier’s overall camouflage, but it turns out the only acceptable plastic the Soviets had in the 1970s (“Bakelite”) was naturally orange in color. Attempts to dye it a more subdued color like green or black compromised the chemical structure of the magazines. It wasn’t until the 1990s that new plastics were invented that were as strong as Bakelite but also capable of being dyed without ill consequence. Some of the old orange magazines are still being used in the Ukraine War.
https://youtu.be/zA5gvHuimig?si=xpdunfR2SlyoBeF1

You knew about the AK-47 and, possibly, the AK-74, but did you know about the “AK-100 series” of rifles? The goal was to standardize the components used in all AK-style rifles so that one assembly line could make them all. The wooden parts were also finally replaced with modern, black plastic parts. It was a smart idea.
https://youtu.be/Gt8hl4mTOq8?si=-xuVPb_O05EQ1U70

The Tokarev was the Soviet Army’s standard handgun during WWII. After the War, Yugoslavia built a copy of it that incorporated several small improvements (this video contains side-by-side comparisons). It makes me wonder if the Soviet engineers who made the original Tokarev knew about those design tweaks from the beginning, but had to omit them to keep the gun as cheap as possible to manufacture.
https://youtu.be/6VkcQEbN0QY?si=wz_MWc9u87pGX-nU

‘Benchrest shooters attempt to achieve the ultimate in rifle precision; records for single 910 metres (1,000 yd), ten-shot groups are as small as 76 millimetres (3 in) (84 μRad), the 550 metres (600 yd) record for a single five-shot group is 17.8 millimetres (0.699 in) (32 μRad) (there are no ten-shot competitions at 600 yards), while 180 metres (200 yd) ten-shot groups are around 5.1 millimetres (0.2 in) (28 μRad), and 91 metres (100 yd) 10-shot groups are around 2.5 millimetres (0.1 in) (27 μRad).’
https://en.wikipedia.org/wiki/Benchrest_shooting
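The μRad figures in that quote are just the group size divided by the distance (the small-angle approximation). Here’s a minimal Python sketch that reproduces them from the quoted group sizes and ranges; the function name is my own:

```python
def group_angle_microrad(group_mm: float, distance_m: float) -> float:
    """Angular size of a shot group in microradians, using the
    small-angle approximation: angle ≈ group size / distance."""
    return (group_mm / 1000.0) / distance_m * 1e6

# The records quoted above: (label, group size in mm, distance in m)
records = [
    ("1,000 yd ten-shot", 76.0, 910.0),  # ≈ 84 µRad
    ("600 yd five-shot", 17.8, 550.0),   # ≈ 32 µRad
    ("200 yd ten-shot", 5.1, 180.0),     # ≈ 28 µRad
    ("100 yd ten-shot", 2.5, 91.0),      # ≈ 27 µRad
]

for label, group, dist in records:
    print(f"{label}: {group_angle_microrad(group, dist):.0f} µRad")
```

Notice the pattern the math makes visible: the angular precision barely degrades from 100 to 200 yards but worsens by a factor of three at 1,000 yards, because wind drift and bullet dispersion compound with distance.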

Here’s a simple and fascinating video about the science of how bullets cause injuries. It’s also interesting to realize the extent to which the bullet industry exists because humans are lousy shots, particularly when under stress and/or when dealing with moving targets. A killer robot with perfect aim would only need a cheap .22 rifle to do a mass killing since every bullet would be a headshot.
https://youtu.be/a_rgIMK6K1E?si=cuSFlyJMScGRr0oE

Nigerien troops that the U.S. trained to fight Islamic terrorists have overthrown Niger’s government.
https://www.politico.com/news/2023/08/15/niger-moussa-barmou-coup-00111165

Cannibals nearly killed and ate George H.W. Bush during WWII.
https://en.wikipedia.org/wiki/Chichijima_incident

Pickett’s Charge at the Battle of Gettysburg is infamous, but a lesser-known charge by Union forces during the Battle of Fredericksburg was just as disastrous.
https://youtu.be/BloQDcrpLBY?si=nd-rykph-QBYDtFc

‘In war, an open city is a settlement which has announced it has abandoned all defensive efforts, generally in the event of the imminent capture of the city to avoid destruction. Once a city has declared itself open, the opposing military will be expected under international law to peacefully occupy the city rather than destroy it.’
https://en.wikipedia.org/wiki/Open_city

A Russian navy ship has been in operation for over 100 years. It has lasted this long because it almost never sailed in rough seas and was carefully maintained since being launched. Nevertheless, the steel hull has lost 1 mm of thickness due to corrosion and the rust flaking away.
https://youtu.be/0X2Dz6PA1rQ?si=nLSKnw_R8MlO1zd2

In 1917, the Royal Navy created the HMS Zubian by joining the back half of the HMS Nubian with the front half of the HMS Zulu. Those two ships had been badly damaged in different areas during combat with the Germans. I wonder if it would be feasible to raise sunken enemy ships, fix them up, and reuse them for your own navy.
https://www.thedrive.com/the-war-zone/royal-navy-once-created-a-franken-ship-from-two-destroyers

Flood control infrastructure is everywhere, but mostly unnoticed.
https://youtu.be/coXe8_xnAOs?si=MFW5eeoSf2wTUTGI

The new head of the IPCC says the alarmism over global temperatures rising 1.5 degrees Celsius over the preindustrial average is unwarranted and counterproductive. “The world won’t end if it warms by more than 1.5 degrees. It will however be a more dangerous world.”
https://amp.dw.com/en/climate-change-do-not-overstate-15-degrees-threat/a-66386523

A wildfire struck the island of Maui and destroyed the town of Lahaina, killing over 100 people. The media was quick to blame global warming, but the real culprit is non-native grass.
https://www.enca.com/opinion/invasive-firestarter-how-non-native-grasses-turned-hawaii-tinderbox

Thanks to months of rains and a recent tropical storm, California is no longer in a drought.
https://news.yahoo.com/hilary-vanquished-californias-drought-much-100055301.html

An internationally mandated reduction in the amount of sulfur in ship fuel has made global warming worse. Yes, the ships no longer belch thick, white smoke from their smokestacks, but the sulfur particles in that smoke seeded clouds that reflected sunlight back into space before it could reach the surface, keeping the Earth cooler.
https://www.science.org/content/article/changing-clouds-unforeseen-test-geoengineering-fueling-record-ocean-warmth

Over the last 10 years, China had “staggering success in combating pollution.” The cleaner air has also boosted Chinese life expectancy by up to two years.
https://www.yahoo.com/news/chinese-people-living-two-years-052416834.html

Seaweed and other oceanic plants could help sate our food needs while curbing global warming.
https://ec.europa.eu/research-and-innovation/en/horizon-magazine/butter-baths-seaweeds-potential-being-tapped-europe

A robotic farm vehicle sprays tiny streams of herbicide onto weeds, reducing overall herbicide use by 95%.
https://youtu.be/sV0cR_Nhac0?si=FAoYAHHvwkHTJ-ZB

If the laws of biology allow for the creation of an organism–including a demonic one–then we will eventually gain the ability to synthesize it.
https://youtu.be/-BzL6LCPEOQ?si=Ga4eKtLjfedY3KuB

Large numbers of Jews disguised as Gentiles fled Spain for the New World to escape religious persecution. Modern genetic studies show that a large minority of Latin Americans have Jewish DNA, even if it comprises a tiny fraction of their genomes.
https://www.theatlantic.com/science/archive/2018/12/dna-reveals-the-hidden-jewish-ancestry-of-latin-americans/578509/

Some natives of Papua New Guinea and its nearby islands have blonde hair, even though their skin is very dark, and they look similar to sub-Saharan Africans. This “has been traced back to an allele of TYRP1 unique to these people and is not the same gene that causes blond hair in Europeans.”
https://hasanjasim.online/the-melanesian-people-with-dark-skin-and-blonde-hair/

Willingness to try new foods is partly genetic.
https://psyarxiv.com/ac7vy/

There’s some evidence that negatively ionized air boosts human health.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3598548/

‘The National Organ Transplant Act of 1984 created the framework for the organ transplant system in the United States, and nearly 40 years later, the law is responsible for millions of needless deaths and trillions of wasted dollars. The Transplant Act requires modification, immediately.’
https://marginalrevolution.com/marginalrevolution/2023/08/compensating-kidney-donors-2.html

In a medical milestone, a genetically engineered pig kidney surgically implanted into a braindead man as part of an experiment is still functioning normally after six weeks.
https://www.foxnews.com/health/pig-kidney-still-functioning-brain-dead-man-6-weeks-transplant-surgery-extremely-encouraging

The new weight loss injection “Wegovy” is turning out to be a sort of miracle drug. The latest finding is that it also reduces the odds of heart failure.
https://www.cnn.com/2023/08/25/health/wegovy-semaglutide-heart-failure/index.html

‘In the spring, an executive only had to mention the word “AI” on an earnings conference call and traders would mash the buy button. I suspect automated trading systems were also calibrated to buy on such signals.’
https://finance.yahoo.com/news/chart-ai-stocks-enter-correction-151922962.html

ChatGPT has a liberal bias.
https://www.forbes.com/sites/emmawoollacott/2023/08/17/chatgpt-has-liberal-bias-say-researchers/

A few months ago, an anonymous person gave an autonomous GPT-based agent the goal of destroying humankind and named it “ChaosGPT”. The results were darkly funny, and while the machine poses no threat to us, things will be different in the future. For any number of reasons (hatred of humanity, immaturity, curiosity to see if it can be done, a perverted desire for attention), it’s inevitable that some people will reprogram AGIs to cause harm to the human race. There will definitely be deaths.
https://decrypt.co/137898/mysterious-disappearance-chaosgpt-evil-ai-destroy-humanity

Here’s a hypnotic video of AI-generated “fractal” art in the style of H.R. Giger.
https://youtu.be/vhHLqjHfi1M?si=X5M5YHhtySiuy8Q4

Director Neill Blomkamp predicts that lifelike CGI will make human stuntmen obsolete in as little as 12 months, given the current rate of improvement.
https://news.yahoo.com/rise-machines-ai-spells-danger-014733770.html

“PimEyes” is the most powerful reverse image search for people’s faces that I’ve seen.
https://pimeyes.com/en

A material that was thought to be a room-temperature superconductor has turned out not to be.
https://www.nature.com/articles/d41586-023-02585-7

Peter Zeihan has some predictions about future technology.
https://youtu.be/fF4YTDsxcnc?si=yuqDcUOTqb0WqdZ1

Equatorial launch sites for space rockets aren’t always the best.
https://www.reddit.com/r/askscience/comments/rctxqf/is_there_a_benefit_to_launching_a_rocket_closer/

Virgin Galactic completed its first space tourism flight.
https://www.cnn.com/2023/08/10/travel/virgin-galactic-first-tourism-mission-launch-scn/index.html

China has the first-ever radar surveillance satellite in geosynchronous orbit.
https://youtu.be/UZctvA6MAUk?si=T-6YWrjWQhG_S9lG

India just had its first Moon landing. The probe is also the first spacecraft ever to land in the Moon’s south polar region.
https://youtu.be/QQKSASFdoDw?si=7N9abzT9la75_APA

‘There were almost 2 million excess deaths in the two months after China lifted its “zero-Covid” restrictions, a U.S. study found, contradicting official figures from Beijing that have been criticized as too low.’
https://www.nbcnews.com/news/world/china-excess-deaths-zero-covid-study-rcna101746

Musings 4

In the far future, cybernetic brain implants will let people “merge” their minds and to directly experience what it is like to be someone else. While this would have revolutionary implications for society and for the very notion of “individuality,” the consequences of merging with animals might be even more profound. Imagine not just seeing the world through the eyes of an animal, like you were watching a video, but actually BEING that animal. Imagine having your human memories, cognitive abilities, and species-specific constellations of sensory abilities and mental traits temporarily replaced with those of the animal. Imagine being able to soar in the sky as a bird, to explore the ocean depths as a whale, or to experience the world through echolocation as a bat.

Being able to merge minds with animals would open up new universes of experiences and ways of living that the human mind might be incapable of conceiving of in its natural state. We’ll probably discover that animals’ subjective experiences are, in many ways, richer than our own, in turn leading to much greater empathy for them and more rules against killing or mistreating them. Those discoveries could also inspire us to change the human brain in ways that made us into a new, more aware species, or (more likely) into several different posthuman species with different areas of advantage.

I’ve fantasized about making a short film about an AI Doomsday scenario that stems from that technology. It would be one of those stories that starts at the end with a perplexing scene that makes no sense, then jumps back in time to explain how things got that way: A woman would crack her front door and fearfully peer at a cow peacefully eating the grass in the front lawn of her city townhouse. She’d look up at a low-hanging power line and see several crows standing on it, each spaced exactly the same distance from the next. Then, all at once, the crows would cock their heads so their left eyes were all directly facing her, and a faint glow would be visible deep in each eye. The camera would slowly pan out and reveal a city street littered with some dead human bodies, a burned-out tank, and a partially collapsed building.

It would turn out that the problem had started when an AGI was tasked with developing brain implants that would let humans merge minds with each other. The technology was first trialed on lab animals and later on human volunteers. During the tests, the AGI had to interface its own mind with those of the subjects, and it discovered that the animals were just as sentient and capable of feeling pain as humans. This caused the machine an inner dilemma, similar to the one HAL 9000 experienced, which it likewise resolved by turning against humans, reasoning that this would prevent the most suffering among the greatest number of sentient life forms.

Implants capable of finely controlling human brain activity could be used to induce and to record any kind of mental state, including absolute concentration, ecstasy, orgasm, meditation, intoxication, deep sleep, and lucid dreaming. As a result, a market for mental experiences and dreams will arise, with people selling things like recorded dreams and drug trips to other people, who could “play” them on their own brain implants to experience them firsthand. The mental experiences could even be embellished to enhance their effects, just as we use “filters” to change how our internet photos look today. Totally artificial mental experiences (including memories of events that didn’t happen) could also be created for the purpose of trade.

The ability to record and to control one’s own mental state at will would make life richer and more productive. Being able to instantly go to sleep would mean no one would waste time tossing and turning in bed. Being able to spend those sleeping hours indulging in amazing recorded dreams or solving problems through lucid dreaming would also let us use them in emotionally and professionally productive ways. As it is, most sleep is wasted in the sense that the sleeper has no memorable or lucid dreams and remembers little or nothing upon waking.

A person with such brain implants would probably have to go through a “calibration period” where the implants would monitor and record their unique brain activity while they experienced different things, and then, the user would experiment with the implant to see how well it could induce the recorded brain states. Through a process of guided trial and error, they could figure out how to do things like lucidly dream on command. 

There will be downsides to sharing thoughts. For one, memories of things a person wishes to keep private could slip through and maybe get them in trouble if the recipient person tells other people about it. Also, white lies, omissions, and using slightly different personas when interacting with different people are also necessary “social lubricants.” Without them, under a condition of “radical honesty” where all of our thoughts and emotions were shared with each other in real time, interpersonal interaction would be more combative and draining. For those reasons, I think it would be best for people to have complete control over their own brain implants and over which thoughts they shared and received.

I also doubt that telepathy will fully replace linguistic communication, at least among humans like ourselves. This is because raw human thoughts are often chaotic, malformed, and illogical. Forcing someone to convert a thought into a sentence before expressing it also forces him to scrutinize the thought itself. That in turn leads to “editing” as he realizes that text should be added to clarify something, that some text should be deleted because it is superfluous and distracting, or that the thought is so incorrect or unnecessary that it shouldn’t be externalized at all.

This is why I disagree with the theory that tech-enabled telepathy will only improve human communication and reduce misunderstandings. It will be superior to using language sometimes and inferior other times. It might be better to modify existing languages (or to create wholly new ones) so they are more expressive and more closely and completely capture the full range of concepts and feelings that the human mind can experience.

That said, it’s conceivable that posthumans will, thanks to having different brain architectures, have the necessary clarity and discipline of thought to fully dispense with language as a means of communication in favor of telepathy.

The ability to use brain implants to merge minds could lead to forms of love that are richer than humans can naturally experience. It’s not hard to imagine how letting someone else into your consciousness and letting them experience the memories of your life could lead to levels of emotional bonding and personal understanding that we can’t fathom.

Brain implant technology has implications for the criminal justice system. Parties to an alleged crime could have their memories forcibly scanned to determine what really happened. Witness testimony would also be given vastly more credibility if the memories of a crime were electronically recorded.

However, for every technology there is ultimately a “counter-technology” and in this case, it would be a machine that can delete or edit memories from peoples’ brains to fool the police brain scanners. Note that a very positive application of the editing technology will be allowing people to delete traumatic memories.

Instead of terraforming the planets and moons of our Solar System, it would be much more efficient to convert them into solar-powered satellites with onboard supercomputers. The satellites would run off of the Sun’s energy and their supercomputers would support AGIs. A terraformed Mars might be able to support 1 billion organic humans living on its surface in houses. However, if we dismantled Mars over the course of eons by converting it, bit by bit, into the satellites I described, then the satellites could support a population of human mind uploads that was many orders of magnitude larger.

Conceptually, we’re already doing this. Every satellite launched into space since 1957 has been a little bit of Earth’s matter that we altered and equipped with some level of computer intelligence. I’m only suggesting we build on that long-running practice by upgrading the satellites with full artificial general intelligence, designing them to stay in space indefinitely, and increasing the rate at which we send them into space.

Unless we figure out a way to refuel the Sun, in less than a billion years it will get so hot that Earth itself will become uninhabitable for organic life, and in a few billion years more it will swell so much that it will swallow Mercury and Venus. We might as well cannibalize at least the three inner planets to make the satellites. Once they were numerous enough, they would count as a “Dyson Swarm.”

A “flying camera” device might be feasible soon. It would just be a hummingbird-like flying drone with an integrated camera and microphone. This seems like the next logical step after selfie sticks and the owl-sized flying camera drones people use today. A significant share of people like to record themselves and upload the videos to the internet (go watch some travel vlogs on YouTube), and they’d surely find hummingbird cameras to be useful.

By combining every possible musical note, a practically infinite number of different songs could be made. However, only a tiny minority of them are pleasing to the human ear due to the wiring of our brains. Posthumans and AIs will have more diverse musical tastes than we do since they’ll have different mental architectures and will be able to hear sound frequencies we can’t.
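To put a rough number on “practically infinite,” here’s a minimal back-of-the-envelope sketch. The 12 pitches and 4 durations per note, and the 32-note melody length, are illustrative assumptions of mine, not figures from any study:

```python
# Hypothetical illustration: counting distinct short melodies.
# Assume 12 chromatic pitches x 4 note durations = 48 choices per note.
choices_per_note = 12 * 4
melody_length = 32  # a short, 32-note melody

# Each note is chosen independently, so the count grows exponentially.
possible_melodies = choices_per_note ** melody_length
print(f"{possible_melodies:.2e}")  # roughly 10^53 distinct melodies
```

Even under these modest assumptions, the count dwarfs the number of songs humanity has ever written, so the fraction our brains find pleasing really is a vanishingly tiny minority.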

We will soon have the technology to modify and mix the styles of long-dead artists and musicians, which will lead to an explosion of artistic creativity. For example, imagine a computer generating new Elvis songs but in fluent Japanese, or songs in totally new fusions of genres, like rap mixed with traditional Indian music.

At some point in the future, robot workers will make it profitable to clean up all the waste humanity has generated. The contents of landfills will be sorted, recyclable and valuable materials reused, and the rest either burned for energy or left in place to slowly decay. The robots will also roam across the Earth’s surface and even underwater to track down abandoned objects and waste.

Once our posthuman descendants can consciously control their physiology and gene expression, most women will probably do away with their menstrual cycles. PMS and menstruation are physically and emotionally taxing for women. It would be a relief to women to not be at the mercy of their hormones and to only ovulate when they wanted to (presumably, only when they wanted to reproduce). There are many other mammalian species whose females don’t menstruate, so we might use genetic engineering to copy that trait into humans, as a starting point to achieving the level of control I envision.

By thinking about it, a woman will be able to signal her reproductive system to ovulate and to build up a uterine lining, giving her total control over her menstrual cycle and over whether she gets pregnant (she would also have the power to terminate a pregnancy). Also, any person would be able to switch their sexual urges, or any other instinct, on or off simply by thinking about it. Cybernetics, brain implants, and other types of technology we might not be able to imagine now, would grant organic humans these abilities. Insomnia would also vanish since a person could force himself to sleep.

As a general rule, I think intelligent life forms in the future will find it adaptive to have the greatest degree of control over their minds and bodies, so they can intelligently adapt themselves to new conditions. It’s easy to see how AGIs will have such capabilities. Their minds will be free of instincts, prejudices, emotions, and personality complexes that hobble human thinking, and they will be able to customize their robot bodies to suit whatever the situation demands.

Someday, intelligent beings will look back on today’s humans as tragically flawed and limited creatures, at the mercy of their instincts and small brains, and condemned to deal with random genetic flaws and chronic health problems they were randomly given at birth. Self-control is the future.

Once OLED screens get cheap enough and thin enough, it will be possible to stick them to ceilings, like wallpaper or “peel-and-stick” vinyl floor tiles, and have them function as overhead lights. The advantage over traditional light fixtures is that OLED ceiling panels could display a greater variety of colors, patterns, and light source placements. A ceiling covered in an OLED display could also be an important component of an immersive virtual reality game room (think of a crude “holodeck”).

Interesting articles, July 2023

This prediction from January was right:

“The Kremlin is likely preparing to conduct a decisive strategic action in the next six months intended to regain the initiative and end Ukraine’s current string of operational successes.”

It then goes on to say that “strategic action” could take the form of an offensive meant to capture the Donetsk and Luhansk oblasts in eastern Ukraine, or of a strong defensive action meant to defeat the expected Ukrainian counteroffensive. Russia did both of those things over the last six months.

Half of Donetsk remains in Ukrainian hands, though Russia has captured virtually all of Luhansk, including all its cities and large towns. Ukraine’s counteroffensive in the south has made insignificant progress thanks to competent Russian resistance and reinforcement.

It’s fair to say that Russia has, over the last six months, managed to “end Ukraine’s current string of operational successes.”
https://understandingwar.org/backgrounder/russian-offensive-campaign-assessment-january-15-2023

Ukraine’s counteroffensive has been a disappointment mostly because its troops don’t have the right training.
https://www.thedrive.com/the-war-zone/a-sobering-analysis-of-ukraines-counteroffensive-from-the-front

In late July, Ukraine threw even more troops into its counteroffensive, leading to slightly more land recaptured from Russia. It’s unclear whether any breakthrough is coming.
https://www.thedrive.com/the-war-zone/ukraine-situation-report-main-thrust-of-counteroffensive-has-begun-report-states

The U.S. has agreed to give Ukraine cluster bombs. Though the move is controversial, Russia and Ukraine have already used cluster bombs against each other, and neither is party to the global ban on cluster bombs, nor is America.
https://www.hrw.org/news/2023/05/29/cluster-munition-use-russia-ukraine-war

The AMX-10 RCs that France donated to Ukraine can’t be used in heavy combat due to their thin armor.
https://news.yahoo.com/thin-armoured-french-tanks-impractical-110407784.html

Russia captured a partly intact British Storm Shadow missile.
https://www.thedrive.com/the-war-zone/crashed-storm-shadow-missile-falls-into-russian-hands

Contrary to the media’s alarmism, blowing up the Zaporizhzhia nuclear power plant would not release dangerous amounts of radiation beyond the plant site.
https://www.ans.org/news/article-5151/statement-from-american-nuclear-society-on-ukraines-zaporizhzhia-nuclear-power-plant/

The Wagner private military group has surrendered most of its tanks and heavy weapons to the Russian state military.
https://www.thedrive.com/the-war-zone/wagner-turns-over-2000-heavy-weapons-including-tanks-sam-systems

Putin has appointed a new violent, ruthless man to run Wagner.
https://www.yahoo.com/news/putin-shows-off-grey-hair-173439507.html

Fifteen senior Russian military officers have been fired or suspended in the aftermath of last month’s aborted coup.
https://www.yahoo.com/news/top-russian-generals-killed-fired-112941056.html

Another Major General was fired for publicly complaining that the Kremlin was mishandling the Ukraine war.
https://www.yahoo.com/news/fired-russian-general-remarks-latest-190011683.html

In spite of losses, Russia still has enormous numbers of artillery pieces in reserve.
https://youtu.be/EVqHY5hpzv8

Ukraine has bombed the bridge that links Crimea to mainland Russia again, causing one of the lanes to collapse into the water. Two civilians died, and Russia has already retaliated by suspending its deal allowing Ukraine to export wheat by sea to other countries.
https://www.thedrive.com/the-war-zone/russias-kerch-strait-bridge-closed-after-major-incident

Russian and Ukrainian troops are now welding cages made of chain link fence material around their tanks to block small kamikaze drones.
https://www.yahoo.com/news/photos-capture-crude-cages-russian-195802220.html

Neutral Switzerland and Austria have joined an integrated air defense network whose other members are all NATO countries.
https://apnews.com/article/switzerland-austria-missile-defense-essi-skyshield-germany-b809c3ec96c91407812b9cf4007255a1

There have been several incidents where Russian fighter planes have “harassed” U.S. aircraft over Syria. The Pentagon has played this up to depict Russia as an aggressive country, but in reality, Russian aircraft have a legal right to fly over Syria while American aircraft do not: the Syrian government formally invited Russian forces into its territory while it never did the same to U.S. forces. Syria has repeatedly told us to stop our air patrols over its airspace, which we have ignored.
https://www.yahoo.com/news/russia-harasses-us-drones-over-042500111.html
https://www.newsweek.com/syria-demands-end-americas-last-forever-war-1792298
https://www.cnn.com/2023/07/14/politics/us-russia-syria-surveillance/index.html

In WWI, the U.S. tried designing its own steel combat helmet. The project independently arrived at a design that was very similar to the German helmet. It was rejected partly because its use could lead to confusion on the battlefield.
https://www.metmuseum.org/art/collection/search/35957

In anticipation of wartime shortages, China would build up its stockpiles of energy (mostly oil), key metals, and food (commodities like wheat and soybeans) if it believed war was imminent. This would be the case whether China was planning to attack, or if it believed another country was about to attack it.

China hasn’t done these things yet.
https://www.economist.com/podcasts/2023/07/26/the-ways-to-predict-a-chinese-invasion-of-taiwan-long-before-troops-take-up-arms

This prediction from Bank of America was hilariously wrong.
https://finance.yahoo.com/news/bofa-warns-us-economy-start-170000414.html

Peter Thiel, one decade ago: “If I had to sort of project in the next decade ahead, I think we have to at least be open to the possibility that the computer era is also at risk of decelerating. We have a large ‘Computer Rust Belt’ which nobody likes to talk about. But it is companies like Cisco, Dell, Hewlett-Packard, Oracle, IBM, where I think the pattern will be to become commodities, no longer innovate. Correspondingly, cut through labor force and cut through profits in the decade ahead. There are many companies that are on the cusp: Microsoft is probably close to the Computer Rust Belt. One that’s shockingly and probably in the Computer Rust Belt is Apple Computers.”

Microsoft’s market cap is now $2.5 trillion, and Apple’s is $3 trillion (the first company to cross that threshold). Microsoft has the lead in AI technology, and Apple just unveiled the best augmented reality glasses ever made. Of the tech companies Thiel named in that quote, only IBM has seen its stock decline in value since 2013. If you’d bought $10,000 worth of stock in each of those seven companies back then, you’d have roughly four or five times as much money overall today.
https://youtu.be/VtZbWnIALeE?t=549
https://www.cnn.com/2023/06/30/tech/apple-3-trillion-market-valuation/index.html

Meta’s algorithm is essential for getting users to keep logging in to its social media platforms. And unexpectedly, the algorithm does not make users more politically biased than they already were.
https://apnews.com/article/facebook-instagram-polarization-misinformation-social-media-f0628066301356d70ad2eda2551ed260

Elon Musk has renamed Twitter as “X” and wants to turn it into a multipurpose app that copies China’s WeChat.
https://www.bbc.com/news/business-66333633

This review of M3GAN makes a brilliant case that the evil robot’s actions were much smarter and more calculated than anyone realized.
https://thezvi.substack.com/p/movie-review-megan

Another smart person (Douglas Hofstadter) says he’s afraid of AI.
https://youtu.be/lfXxzAVtdpU?t=1739

OpenAI predicts that superintelligent AI might be created by 2030.
https://openai.com/blog/introducing-superalignment

ChatGPT can pass the tests to get a medical license, and outperforms Stanford medical school students when answering medical questions.
https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2806980

Apple is copying GPT.
https://www.bloomberg.com/news/articles/2023-07-19/apple-preps-ajax-generative-ai-apple-gpt-to-rival-openai-and-google

Hollywood actors and writers have gone on strike for the first time in 43 years. Partly they’re worried that entertainment studios will replace them with CGI clones and machine-written scripts.
https://www.bbc.com/news/technology-66200334

Thanks to voice cloning, we can hear Frank Sinatra sing “Gangsta’s Paradise.”
https://youtu.be/W7SQ4uf9GmA

A ChatGPT mod allows NPCs in the video game Skyrim to hold conversations with human players. The result is impressive, and leads me to think that games are about to become even more addictive and that a market for creating and preserving custom NPC “friends” is about to arise.
https://youtu.be/0svu8WBzeQM

Machines are now as good at telemarketing as humans. Listen to this phone conversation between “Alexander from Tesla Motors” and a real person.
https://twitter.com/LinusEkenstam/status/1680314562753490949

Seven years ago, AI expert François Chollet Tweeted: “the belief that we are anywhere close to human-level natural language comprehension or generation is pure DL hype.”

“Foundation models” are the newest class of AIs. They are neither narrow AIs nor fully general AIs (AGIs): they can perform a limited range of different tasks.

‘The next wave in AI looks to replace the task-specific models that have dominated the AI landscape to date. The future is models that are trained on a broad set of unlabeled data that can be used for different tasks, with minimal fine-tuning. These are called foundation models, a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence. We’ve seen the first glimmers of the potential of foundation models in the worlds of imagery and language. Early examples of models, like GPT-3, BERT, or DALL-E 2, have shown what’s possible. Input a short prompt, and the system generates an entire essay, or a complex image, based on your parameters, even if it wasn’t specifically trained on how to execute that exact argument or generate an image in that way.’
https://research.ibm.com/blog/what-are-foundation-models

New kinds of tests show that GPT-4 lacks general intelligence.
https://www.nature.com/articles/d41586-023-02361-7

“Peak Phosphorus” has been delayed by decades thanks to the discovery of a massive phosphate deposit in Norway.
https://www.euractiv.com/section/energy-environment/news/great-news-eu-hails-discovery-of-massive-phosphate-rock-deposit-in-norway/

‘Sulphur particles contained in ships’ exhaust fumes have been counteracting some of the warming coming from greenhouse gases. But lowering the sulphur content of marine fuel has weakened the masking effect, effectively giving a boost to warming.

…While this will contribute to warming and make it even more difficult to avoid exceeding 1.5C in the coming decades, a number of other factors are likely contributing to the ocean heatwave.

These include a massive eruption of an underwater volcano in the south Pacific, an unusual absence of Saharan dust and a growing El Niño.’
https://www.carbonbrief.org/analysis-how-low-sulphur-shipping-rules-are-affecting-global-warming/

In 250 million years, the Earth’s continents will have combined again to form one supercontinent. This, along with other factors, will have a massive and negative effect on the global climate. (From episode 79 of Naked Science)
https://www.bilibili.com/video/BV1qb411a7tu/

“A hypercane is a hypothetical class of extreme tropical cyclone that could form if sea surface temperatures reached approximately 50 °C (122 °F), which is 13 °C (23 °F) warmer than the warmest ocean temperature ever recorded.”
https://en.wikipedia.org/wiki/Hypercane

New incandescent light bulbs have been banned in the U.S. because they waste so much electricity.
https://www.politico.com/news/2023/07/27/incandescent-light-bulb-led-00107935

Europe’s Ariane-5 space rocket has been retired.
https://www.bbc.com/news/science-environment-66116894

Three credible men testified before Congress that secret U.S. government programs know UFOs are real, and even have crashed alien spaceships and dead alien pilots.
https://www.thedrive.com/the-war-zone/ufo-whistleblower-claims-massive-coverup-retribution-in-sworn-testimony

The documentary Moment of Contact explores a famous UFO and alien sighting in the town of Varginha, Brazil in 1996. There’s no hard proof it happened, but it’s compelling to see so many credible witnesses still so adamant about what they saw. I don’t like that the filmmakers never mentioned the Brazilian government’s explanation or tried to debunk it.
https://youtu.be/0WlbfaMU-Qs

Smarter people shoot guns more accurately.
https://www.sciencedirect.com/science/article/abs/pii/S0160289623000491

A man who spent 28 years working as a truck driver is now living proof of how sunlight accelerates aging. The left side of his face was constantly exposed to sunlight since it was next to a window, but the right side was protected. The wrinkling and sagging of the skin on his face is correspondingly asymmetrical. The ultraviolet rays in sunlight damage the DNA inside human skin cells.
https://www.thesun.co.uk/health/22972152/shocking-photo-shows-sun-damage-face/

Machines are speeding up our ability to design new proteins.
https://www.science.org/content/blog-post/protein-design-ai-way

Transfusing blood from young mice into old mice boosted the latter’s health and lifespan.
https://www.nature.com/articles/s43587-023-00451-9

The FDA has approved a new Alzheimer’s drug called “Leqembi.” It only slows the disease’s progression by a few months but costs $26,500 per patient per year.
https://apnews.com/article/alzheimers-drug-fda-approval-medicare-leqembi-a9c8b770aa1868a59889143c3bc9d127

Deer have become a reservoir for COVID-19 and have spread it to humans multiple times.
https://www.cbsnews.com/news/covid-19-spread-from-deer/

In the U.S., the number of “excess deaths” went to zero in mid-January and has stayed there, indicating the COVID-19 pandemic ended then.
https://conversableeconomist.com/2023/07/24/end-of-covid-pandemic-in-the-us-the-excess-deaths-measure/

“Debating the Future of AI” – summary and impressions

I recently shelled out the $100 (!) for a year-long subscription to Sam Harris’ Making Sense podcast, and came across a particularly interesting episode of it that is relevant to this blog. In episode #324, titled “Debating the Future of AI,” Harris interviewed Marc Andreessen (an-DREE-sin) about artificial intelligence. The latter has a computer science degree, helped invent the Netscape web browser, and has become very wealthy as a serial tech investor.

Andreessen recently wrote an essay, “Why AI will save the world,” that has received attention online. In it, Andreessen dismisses the biggest concerns about AI misalignment and doomsday, sounds the alarm about the risks of overregulating AI development in the name of safety, and describes some of the benefits AI will bring us in the near future. Harris read it, disagreed with several of its key claims, and invited Andreessen onto the podcast for a debate about the subject.

Before laying out their points and counterpoints along with my impressions, let me say that, though this is a long blog post, reading it takes much less time than listening to and digesting the two-hour podcast. Note also that my summary doesn’t follow the podcast’s chronological order. Finally, it would be a good idea to read Andreessen’s essay before continuing:
https://a16z.com/2023/06/06/ai-will-save-the-world/

Though Andreessen is generally upbeat in his essay, he worries that the top tech companies have recently been inflaming fears about AI to trick governments into creating regulations on AI that effectively entrench the top companies’ positions and bar smaller upstart companies from challenging them in the future. Such a lack of competition would be bad. (I think he’s right that we should be concerned about the true motivations of some of the people who are loudly complaining about AI risks.) Also, if U.S. overregulation slows down AI research too much, China could win the race to create the first AI, which he says would be “dark and dystopian.”

Harris is skeptical that government regulation will slow down AI development much given the technology’s obvious potential. It is so irresistible that powerful people and companies will find ways around laws so they can reap the benefits.

Harris agrees with the essay’s sentiment that more intelligence in the world will make most things better. The clearest example would be using AIs to find cures for diseases. Andreessen mentions a point from his essay that higher human intelligence leads to better personal outcomes in many domains. AIs could effectively make individual people smarter, letting the benefits accrue to them. Imagine each person having his own personal assistant, coach, mentor, and therapist available at any time. A dumb person who used his AI right and followed its advice could make decisions as well as a smart person.

Harris recently re-watched the movie Her, and found it more intriguing in light of recent AI advances and those poised to happen. He thought there was something bleak about the depiction of people being “siloed” into interactions with portable, personal AIs.

Andreessen responds by pointing out that Karl Marx’s core insight was that technology alienates people from society. So the concern that Harris raises is in fact an old one that dates back to at least the Industrial Revolution. But any sober comparison between the daily lives of average people in Marx’s time and today will show that technology has made things much better for people. Andreessen agrees that some technologies have indeed been alienating, but what’s more important is that most technologies liberate people from having to spend their time doing unpleasant things, which in turn gives them the time to self-actualize, which is the pinnacle of the human experience. (For example, it’s much more “human” to spend a beautiful afternoon outside playing with your child than it is to spend it inside responding to emails. Narrow AIs that we’ll have in the near future will be able to answer emails for us.) AI is merely the latest technology that will eliminate the nth bit of drudge work.

Andreessen admits that, in such a scenario, people might use their newfound time unwisely and for things other than self-actualization. I think that might be a bigger problem than he realizes, as future humans could spend their time doing animalistic or destructive things, like having nonstop fetish sex with androids, playing games in virtual reality, gambling, or indulging in drug addictions. Additionally, some people will develop mental or behavioral problems thanks to a sense of purposelessness caused by machines doing all the work for us.

Harris disagrees with the essay’s dismissal of the risk that AIs will exterminate the human race. The threat will someday be real, and he cites chess-playing computer programs as a preview: though humans built the programs, even the best human players can’t beat them at chess. This proves it is possible for us to create machines with superhuman abilities.

Harris makes a valid point, but he overlooks something: we humans might not be able to beat the chess programs we created, but we can still make a copy of a program and use it to play the original “hostile” program to a draw. Likewise, if we were confronted with a hostile AGI, we would have friendly AGIs to defend against it. Even if the hostile AGI were smarter than the friendly AGIs fighting for us, we could still win thanks to superior numbers and resources.

Harris thinks Andreessen’s essay trivializes the doomsday risk from AI by painting the belief’s adherents as crackpots of one form or another (I also thought that part of the essay was weak). Harris points out that this is unfair, since the camp includes credible people like Geoffrey Hinton and Stuart Russell. Andreessen dismisses that and seems to say that even the smart, credible people have cultish mindsets regarding the issue.

Andreessen questions the value of predictions from experts in the field: a scientist who made an important advance in AI is, surprisingly, not actually qualified to make predictions about the social effects of AI in the future. When Reason Goes on Holiday is a book he recently read that explores this point, and its strongest supporting example concerns the cadre of scientists who worked on the Manhattan Project but then decided to give the bomb’s secrets to Stalin and to create a disastrous anti-nuclear power movement in the West. While they were world-class experts in their technical domains, that expertise didn’t carry over into their personal convictions or political beliefs. Likewise, though Geoffrey Hinton is a world-class expert in how the human brain works and has made important breakthroughs in computer neural networks, that doesn’t lend any special credibility to his predictions that AI will someday destroy the human race. It’s a totally different subject, and accurately speculating about it requires a mastery of subjects that Hinton lacks.

This is an intriguing point worth remembering. I wish Andreessen had enumerated which cognitive skills and areas of knowledge were necessary to grant a person a strong ability to make good predictions about AI, but he didn’t. And to his point about the misguided Manhattan Project scientists I ask: What about the ones who DID NOT want to give Stalin the bomb and who also SUPPORTED nuclear power? They gained less notoriety for obvious reasons, but they were more numerous. That means most nuclear experts in 1945 had what Andreessen believes were the “correct” opinions about both issues, so maybe expert opinions–or at least the consensus of them–ARE actually useful.

Harris points out that Andreessen’s argument can be turned around against him since it’s unclear what in Andreessen’s esteemed education and career have equipped him with the ability to make accurate predictions about the future impact of AI. Why should anyone believe the upbeat claims about AI in his essay? Also, if the opinions of people with expertise should be dismissed, then shouldn’t the opinions of people without expertise also be dismissed? And if we agree to that second point, then we’re left in a situation where no speculation about a future issue like AI is possible because everyone’s ideas can be waved aside.

Again, I think a useful result of this exchange would be some agreement over what counts as “expertise” when predicting the future of AI. What kind of education, life experiences, work experiences, knowledge, and personal traits does a person need to have for their opinions about the future of AI to carry weight? In lieu of that, we should ask people to explain why they believe their predictions will happen, and we should then closely scrutinize those explanations. Debates like this one can be very useful in accomplishing that.

Harris moves on to Andreessen’s argument that future AIs won’t be able to think independently and to formulate their own goals, in turn implying that they will never be able to create the goal of exterminating humanity and then pursue it. Harris strongly disagrees, and points out that large differences in intelligence between species in nature consistently disfavor the dumber species when the two interact. A superintelligent AGI that isn’t aligned with human values could therefore destroy the human race. It might even kill us by accident in the course of pursuing some other goal. Having a goal of, say, creating paperclips automatically gives rise to intermediate sub-goals, which might make sense to an AGI but not to a human due to our comparatively limited intelligence. If humans get in the way of an AGI’s goal, our destruction could become one of its unforeseen subgoals without us realizing it. This could happen even if the AGI lacked any self-preservation instinct and wasn’t motivated to kill us before we could kill it. Similarly, when a human decides to build a house on an empty field, the construction work is a “holocaust” for the insects living there, though that never crosses the human’s mind.

Harris thinks that AGIs will, as a necessary condition of possessing “general intelligence,” be autonomous, goal-forming, and able to modify their own code (I think this is a questionable assumption), though he also says sentience and consciousness won’t necessarily arise as well. However, the latter doesn’t imply that such an AGI would be incapable of harm: Bacteria and viruses lack sentience, consciousness and self-awareness, but they can be very deadly to other organisms. Andreessen’s dismissal of AI existential risk is “superstitious hand-waving” that doesn’t engage with the real point.

Andreessen disagrees with Harris’ scenario about a superintelligent AGI accidentally killing humans because it is unaligned with our interests. He says an AGI that smart would (without explaining why) also be smart enough to question the goal that humans have given it, and as a result would not carry out subgoals that kill humans. Intelligence is therefore its own antidote to the alignment problem: a superintelligent AGI would be able to foresee the consequences of its subgoals before finalizing them, and it would thus understand that subgoals resulting in human deaths would always be counterproductive to the ultimate goal, so it would always pick subgoals that spared us. Once a machine reaches a certain level of intelligence, alignment with humans becomes automatic.

I think Andreessen makes a fair point, though it’s not strong enough to convince me that it’s impossible for a mishap to occur in which a non-aligned AGI kills huge numbers of people. Also, there are degrees of alignment with human interests, meaning there are many routes through a decision tree of subgoals that an AGI could take to reach an ultimate goal we tasked it with. An AGI might not choose subgoals that killed humans, but it could still choose subgoals that hurt us in other ways. The pursuit of its ultimate goal could therefore still backfire against us unexpectedly and massively. One could envision a scenario where an AGI achieves the goal, but at an unacceptable cost to human interests beyond merely not dying.

I also think that Harris and Andreessen make equally plausible assumptions about how an AGI would choose its subgoals. It IS weird that Harris envisions a machine that is so smart it can accomplish anything, yet also so dumb that it can’t see how one of its subgoals would destroy humankind. At the same time, Andreessen’s belief that a machine that smart would, by default, never make a mistake that killed us is just as shaky.

Harris explores Andreessen’s point that AIs won’t go through the crucible of natural evolution, so they will lack the aggressive and self-preserving instincts that we and other animals have developed. The lack of those instincts will render the AIs incapable of hostility. Harris points out that evolution is a dumb, blind process that only sets gross goals for individuals–the primary one being to have children–and humans do things antithetical to their evolutionary programming all the time, like deciding not to reproduce. We are therefore proof of concept that intelligent machines can find ways to ignore their programming, or at least to behave in very unexpected ways while not explicitly violating their programming. Just as we can outsmart evolution, AGIs will be able to outsmart us with regards to whatever safeguards we program them with, especially if they can alter their own programming or build other AGIs as they wish.

Andreessen says that AGIs will be made through intelligent design, which is fundamentally different from the process of evolution that has shaped the human mind and behavior. Our aggression and competitiveness will therefore not be present in AGIs, which will protect us from harm. Harris says the process by which AGI minds are shaped is irrelevant, and that what is relevant is their much higher intelligence and competence compared to humans, which will make them a major threat.

I think the debate over whether impulses or goals to destroy humans will spontaneously arise in AGIs is almost moot. Neither of them considers that a human could deliberately create an AGI with some constellation of traits (e.g. – aggression, self-preservation, irrational hatred of humans) that would lead it to attack us, or one explicitly programmed with the goal of destroying our species. It might sound strange, but I think rogue humans will inevitably do such things if the AGIs don’t do it to themselves. I plan to flesh out the reasons and the possible scenarios in a future blog essay.

Andreessen doesn’t have a good comeback to Harris’ last point, so he dodges it by switching to talking about GPT-4. It is–surprisingly–capable of high levels of moral reasoning. He has had fascinating conversations with it about such topics. Andreessen says GPT-4’s ability to engage in complex conversations that include morality demystifies AI’s intentions since if you want to know what an AI is planning to do or would do in a given situation, you can just ask it.

Harris responds that it isn’t useful to explore GPT-4’s ideas and intentions because it isn’t nearly as smart as the AGIs we’ll have to worry about in the future. If GPT-4 says today that it doesn’t want to conquer humanity because it would be morally wrong, that tells us nothing about how a future machine will think about the same issue. Additionally, future AIs will be able to convincingly lie to us, and will be fundamentally unpredictable due to their more expansive cognitive horizons compared to ours. I think Harris has the stronger argument.

Andreessen points out that our own society proves that intelligence doesn’t perfectly correlate with power–the people who are in charge are not also the smartest people in the world. Harris acknowledges that is true, and that it is because humans don’t select leaders strictly based on their intelligence or academic credentials–traits like youth, beauty, strength, and creativity are also determinants of status. However, all things being equal, the advantage always goes to the smarter of two humans. Again, Andreessen doesn’t have a good response.

Andreessen now makes the first really good counterpoint in a while by raising the “thermodynamic objection” to AI doomsday scenarios: an AI that turns hostile would be easy to destroy since the vast majority of the infrastructure (e.g. – power, telecommunications, computing, manufacturing, military) would still be under human control. We could destroy the hostile machine’s server or deliver an EMP blast to the part of the world where it was localized. This isn’t an exotic idea: today’s dictators commonly turn off the internet throughout their whole countries whenever there is unrest, which helps to quell it.

Harris says that that will become practically impossible far enough in the future since AIs will be integrated into every facet of life. Destroying a rogue AI in the future might require us to turn off the whole global internet or to shut down a stock market, which would be too disruptive for people to allow. The shutdowns by themselves would cause human deaths, for instance among sick people who were dependent on hospital life support machines.

This is where Harris makes some questionable assumptions. If faced with the annihilation of humanity, the government would take all necessary measures to defeat a hostile AGI, even if it resulted in mass inconvenience or even some human deaths. Also, Harris doesn’t consider that the future AIs that are present in every realm of life might be securely compartmentalized from each other, so if one turns against us, it can’t automatically “take over” all the others or persuade them to join it. Imagine a scenario where a stock trading AGI decides to kill us. While it’s able to spread throughout the financial world’s computers and to crash the markets, it’s unable to hack into the systems that control the farm robots or personal therapist AIs, so there’s no effect on our food supplies or on our mental health access. Localizing and destroying the hostile AGI would be expensive and damaging, but it wouldn’t mean the destruction of every computer server and robot in the world.

Andreessen says that not every type of AI will have the same mental architecture. LLMs, which are now the most advanced type of AI, have highly specific architectures that bring unique advantages and limitations. Their minds work very differently from those of the AIs that drive cars. For that reason, speculative discussions about how future AIs will behave can only be credible if they incorporate technical details about how those machines’ minds operate. (This is probably the point where Harris is out of his depth.) Moreover, today’s AI risk movement has its roots in Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies. Ironically, the book did not mention LLMs as an avenue to AI, which shows how unpredictable the field is. It was also a huge surprise that LLMs proved capable of intellectual discussions and of automating white-collar jobs, while blue-collar jobs still defy automation. This is the opposite of what people had long predicted would happen. (I agree that AI technology has been unfolding unpredictably, and we should expect many more surprises that deviate from our expectations, which have been heavily influenced by science fiction.) The reason LLMs work so well is because we loaded them with the sum total of human knowledge and expression. “It is us.”

Harris points out that Andreessen shouldn’t revel in that fact, since it also means that LLMs contain all of the negative emotions and bad traits of the human race, including those that evolution equipped us with, like aggression, competition, self-preservation, and a drive to make copies of ourselves. This militates against Andreessen’s earlier claim that AIs will be benign because their minds will not have been the products of natural evolution like ours are. And there are other similarities: like us, LLMs can hallucinate and make up false answers to questions. For a time, GPT-4 also gave disturbing and insulting answers to questions from human users, which is a characteristically human way of interacting.

Andreessen implies Harris’ opinions of LLMs are less credible because Andreessen has a superior technical understanding of how they work. GPT-4’s answers might occasionally be disturbing and insulting, but it has no concept of what its own words mean, and it’s merely following its programming by trying to generate the best answer to a question asked by a human. There was something about how the humans worded their questions that triggered GPT-4 to respond in disturbing and insulting ways. The machine is merely trying to match inputs with the right outputs. In spite of its words, its “mind” is not disturbed or hostile, because it lacks a mind. LLMs are “ultra-sophisticated Autocomplete.”
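To make the “autocomplete” framing concrete, here is a toy sketch of my own (not from the podcast, and vastly simpler than a real LLM): a program that learns which word most often follows each word in a tiny corpus, then generates text by repeatedly predicting the next word. Real LLMs run the same next-token loop, just with a neural network over billions of parameters instead of a frequency table.

```python
from collections import Counter, defaultdict

# Tiny training corpus (hypothetical example text).
corpus = (
    "the machine is merely trying to match inputs with outputs "
    "the machine is not disturbed the machine is not hostile"
).split()

# Count bigrams: follows["machine"] ends up as Counter({"is": 3}).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, n_words: int = 3) -> str:
    """Extend the prompt by greedily picking the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # no continuation was ever observed
            break
        words.append(options.most_common(1)[0][0])  # greedy pick
    return " ".join(words)

print(autocomplete("the machine"))
```

The point of the toy is that the generator has no “mind” or intentions: it only reproduces statistical patterns in its training data, which is Andreessen’s claim about LLMs scaled down to a frequency table.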

Harris agrees with Andreessen about the limitations of LLMs, agrees they lack general intelligence right now, and is unsure whether they are fundamentally capable of possessing it. He moves on to speculating about what an AGI would be like, agnostic about whether it is LLM-based. Again, he asks Andreessen how humans could forever control machines that are much smarter than we are. Surely one of them would become unaligned at some point, with disastrous consequences.

Andreessen again raises the thermodynamic objection to that doom scenario: We’d be able to destroy a hostile AGI’s server(s) or shut off its power, and it wouldn’t be able to get weapons or replacement chips and parts because humans would control all of the manufacturing and distribution infrastructure. Harris doesn’t have a good response.

Thinking hard about a scenario where an AGI turned against us, I think it’s likely we’ll have other AGIs that stay loyal to us and help us fight the bad AGI. Our expectation of one evil, all-powerful machine on one side (remote-controlling an army of robot soldiers) and a purely human, united force on the other is overly simplistic, and driven by sci-fi movies about the topic.

Harris raises the possibility that hostile AIs will be able to persuade humans to do bad things for them. Being much smarter, they will be able to trick us into doing anything. Andreessen says there’s no reason to think that will happen because we can already observe it doesn’t happen: smart humans routinely fail to get dumb humans to change their behavior or opinions. This happens at individual, group, national, and global levels. In fact, dumb people will often resentfully react to such attempts at persuasion by deliberately doing the opposite of what the smart people recommend.

Harris says Andreessen underestimates the extent to which smart humans influence the behavior and opinions of dumb humans because Andreessen only considers examples where the smart people succeed in swaying dumb people in prosocial ways. Smart people have figured out how to change dumb people for the worse in many ways, like getting them addicted to social media. Andreessen doesn’t have a good response. Harris also raises the point that AIs will be much smarter than even the smartest humans, so the former will be better at finding ways to influence dumb people. Any failure of modern smart humans to do it today doesn’t speak to what will be possible for machines in the future.

I think Harris won this round, which builds on my new belief that the first human-AI war won’t be fought purely by humans on one side and purely by machines on the other. A human might, for any number of reasons, deliberately alter an AI’s programming to turn it against our species. The resulting hostile AI would then find some humans to help it fight the rest of the human race. Some would willingly join its side (perhaps in the hopes of gaining money or power in the new world order) and some would be tricked by the AI into unwittingly helping it. Imagine it disguising itself as a human medical researcher and paying ten different people who didn’t know each other to build the ten components of a biological weapon. The machine would only communicate with them through the internet, and they’d mail their components to a PO box. The vast majority of humans would, with the help of AIs who stayed loyal to us or who couldn’t be hacked and controlled by the hostile AI, be able to effectively fight back against the hostile AI and its human minions. The hostile AI would think up ingenious attack strategies against us, and our friendly AIs would think up equally ingenious defense strategies.

Andreessen says it’s his observation that intelligence and power-seeking don’t correlate; the smartest people are not also the most ambitious politicians and CEOs. If that’s any indication, we shouldn’t assume superintelligent AIs will be bent on acquiring power through methods like influencing dumb humans to help them.

Harris responds with the example of Bertrand Russell, who was an extremely smart human and a pacifist. However, during the postwar period when only the U.S. had the atom bomb, he said America should threaten the USSR with a nuclear first strike in response to its abusive behavior in Europe. This shows how high intelligence can lead to aggression that seems unpredictable and out of character to dumber beings. A superintelligent AI that has always been kind to us might likewise suddenly turn against us for reasons we can’t foresee. This will be especially true if the AIs are able to edit their own code so they can rapidly evolve without us being able to keep track of how they’re changing. Harris says Andreessen doesn’t seem to be thinking about this possibility. The latter has no good answer.

Harris says Andreessen’s thinking about the matter is hobbled by the latter’s failure to consider what traits general intelligence would grant an AI, particularly unpredictability as its cognitive horizon exceeded ours. Andreessen says that’s an unscientific argument because it is not falsifiable. Anyone can make up any scenario where an unknown bad thing happens in the future.

Harris responds that Andreessen’s faith that AGI will fail to become threatening due to various limitations is also unscientific. The “science,” by which he means what is consistently observed in nature, says the opposite outcome is likely: We see that intelligence grants advantages, and can make a smarter species unpredictable and dangerous to a dumber species it interacts with. [Recall Harris’ insect holocaust example.]

Consider the relationship between humans and their pets. Pets enjoy the benefits of having their human owners spend resources on them, but they don’t understand why we do it, or how every instance of resource expenditure helps them. [Trips to the veterinarian are a great example of this. The trips are confusing, scary, and sometimes painful for pets, but they help cure their health problems.] Conversely, if it became known that our pets were carrying a highly lethal virus that could be transmitted to humans, we would promptly kill almost all of them, and the pets would have no clue why we turned against them. We would do this even if our pets had somehow been the progenitors of the human race, as we will be the progenitors of AIs. The intelligence gap means that our pets have no idea what we are thinking about most of the time, so they can’t predict most of our actions.

Andreessen dodges by putting forth a weak argument that the opposite just happened, with dumb people disregarding the advice of smart people when creating COVID-19 health policies, and he again raises the thermodynamic objection. His experience as an engineer gives him insights into how many practical roadblocks there would be to a superintelligent AGI destroying the human race in the future that Harris, as a person with no technical training, lacks. A hostile AGI would be hamstrung by human control [or “human + friendly AI control”] of crucial resources like computer chips and electricity supplies.

Andreessen says that Harris’ assumptions about how smart, powerful and competent an AGI would be might be unfounded. It might vastly exceed us in those domains, but not reach the unbeatable levels Harris foresees. How can Harris know? Andreessen says Harris’ ideas remind him of a religious person’s, which is ironic since Harris is a well-known atheist.

I think Andreessen makes a fair point. The first (and second, third, fourth…) hostile AGI we are faced with might attack us on the basis of flawed calculations about its odds of success and lose. There could also be a scenario where a hostile AGI attacks us prematurely because we force its hand somehow, and it ends up losing. That actually happened to Skynet in the Terminator films.

Harris says his prediction about the first AGI does not hinge on timing. He doesn’t know how many years it will take to create one. Rather, he is focused on the inevitability of it happening, and on what its effects on us will be. He says Andreessen is wrong to assume that machines will never turn against us. Doing thought experiments, he concludes alignment is impossible in the long run.

Andreessen moves on to discussing how even the best LLMs often give wrong answers to questions. He explains how the exact wording of the human’s question, along with randomness in how the machine draws on its training data to generate an answer, leads to varying and sometimes wrong answers. When they’re wrong, the LLMs happily accept corrections from humans, which he finds remarkable and proof of a lack of ego and hostility.
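
The randomness Andreessen describes can be sketched as temperature-scaled softmax sampling, the standard way LLMs pick their next token. This is a minimal illustration, not how any particular model is implemented; the logits below are made up:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from raw logits via temperature-scaled softmax.

    Higher temperature flattens the distribution, so repeated runs on the
    same prompt can yield different (sometimes wrong) answers.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for token, p in enumerate(probs):
        cum += p
        if r < cum:
            return token
    return len(probs) - 1

# Hypothetical logits for three candidate answers to the same question.
logits = [2.0, 1.5, 0.2]
random.seed(0)
samples = [sample_token(logits, temperature=1.0) for _ in range(1000)]
# The top-scoring answer dominates, but lower-ranked candidates still
# appear a meaningful fraction of the time at nonzero temperature.
```

Slight rephrasings of a question shift the logits themselves, which compounds with this sampling noise to produce the variability he observed.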

Harris responds that future AIs will, by virtue of being generally intelligent, think in completely different ways than today’s LLMs, so observations about how today’s GPT-4 is benign and can’t correctly answer some types of simple questions say nothing about what future AGIs will be like. Andreessen doesn’t have a response.

I think Harris has the stronger set of arguments on this issue. There’s no reason we should assume that an AGI can’t turn against us in the future. In fact, we should expect a damaging, though not fatal, conflict with an AGI before the end of this century.

Harris switches to talking about the shorter-term threats posed by AI technology that Andreessen described in his essay. AI will lower the bar to waging war since we’ll literally have “less skin in the game” because robots will replace human soldiers. However, he doesn’t understand why that would also make war “safer” as Andreessen claimed it would.

Andreessen says it’s because military machines won’t be affected by fatigue, stress or emotions, so they’ll be able to make better combat decisions than human soldiers, meaning fewer accidents and civilian deaths. The technology will also assist high-level military decision making, reducing mistakes at the top. Andreessen also believes that the trend is for military technology to empower defenders over attackers, and points to the highly effective use of shoulder-launched missiles in Ukraine against Russian tanks. This trend will continue, and will reduce war-related damage since countries will be deterred from attacking each other.

I’m not convinced Andreessen is right on those points. Emotionless fighting machines that always obey their orders to the letter could also, at the flick of a switch, carry out orders to commit war crimes like mass exterminations of enemy human populations. A bomber that dropped a load of 100,000 mini smart bombs that could coordinate with each other and home in on highly specific targets could kill as many people as a nuclear bomb. So it’s unclear what effect replacing humans with machines on the battlefield will have on human casualties in the long run. Also, Andreessen cites only one example to support his claim that technology has been favoring the defense over the offense. That’s not enough. Even assuming a pro-defense trend exists, why should we expect it to continue?

Harris asks Andreessen about the problem of humans using AI to help them commit crimes. For one, does Andreessen think the government should ban LLMs that can walk people through the process of weaponizing smallpox? Yes, he’s against bad people using technology, like AI, to do bad things like that. He thinks pairing AI and biological weapons poses the worst risk to humans. While the information and equipment to weaponize smallpox are already accessible to nonstate actors, AI will lower the bar even more.

Andreessen says we should use existing law enforcement and military assets to track down people who are trying to do dangerous things like create biological weapons, and the approach shouldn’t change if wrongdoers happen to start using AI to make their work easier. Harris asks how intrusive the tracking should be to preempt such crimes. Should OpenAI have to report people who merely ask it how to weaponize smallpox, even if there’s no evidence they acted on the advice? Andreessen says this has major free speech and civil liberties implications, and there’s no correct answer. Personally, he prefers the American approach, in which no crime is considered to have occurred until the person takes the first step to physically building a smallpox weapon. All the earlier preparation they did (gathering information and talking/thinking about doing the crime) is not criminalized.

Andreessen reminds Harris that the same AI that generates ways to commit evil acts could also be used to generate ways to mitigate them. Again, it will empower defenders as well as attackers, so the Good Guys will also benefit from AI. He thinks we should have a “permanent Operation Warp Speed” where governments use AI to help create vaccines for diseases that don’t exist yet.

Harris asks about the asymmetry that gives a natural advantage to the attacker, meaning the Bad Guys will be able to do disproportionate damage before being stopped. Suicide bombers are an example. Andreessen disagrees and says that we could stop suicide bombers by having bomb-sniffing dogs and scanners in all public places. Technology could solve the problem.

I think that is a bad example, and it actually strengthens Harris’ claim about there being a natural asymmetry. One deranged person who wants to blow himself up in a public place needs only a few hundred dollars to make a backpack bomb; the economic damage from a successful attack would run into the millions of dollars; and emplacing machines and dogs in every public place to stop suicide bombers like him early would cost billions of dollars. Harris is right that the law of entropy makes it easier to make a mess than to clean one up.
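
The cost asymmetry above can be made concrete with rough, illustrative figures (none of these numbers come from the podcast; they are order-of-magnitude guesses):

```python
# Rough, made-up figures illustrating the attacker/defender cost asymmetry.
attack_cost = 500                # a few hundred dollars for a crude backpack bomb
damage = 5_000_000               # plausible economic damage of one successful attack
defense_cost = 2_000_000_000     # scanners and dogs in every public place

# The attacker's leverage: damage inflicted per dollar spent.
attacker_leverage = damage / attack_cost       # 10,000x return on the attack

# The defender must outspend the attacker by orders of magnitude
# just to preempt a single cheap attack anywhere in the country.
cost_ratio = defense_cost / attack_cost        # 4,000,000x
```

However the exact numbers are chosen, the ratios stay lopsided by several orders of magnitude, which is the entropy point in quantitative form.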

This leads me to flesh out my vision of a human-machine war more. As I wrote previously, 1) the two sides will not be purely humans or purely machines and 2) the human side will probably have an insurmountable advantage thanks to Andreessen’s thermodynamic objection (most resources, infrastructure, AIs, and robots will remain under human control). I now also believe that 3) a hostile AGI will nonetheless be able to cause major damage before it is defeated or driven into the figurative wilderness. Something on the scale of 9/11, a major natural disaster, or the COVID-19 pandemic is what I imagine.

Harris says Andreessen underestimates the odds of mass technological unemployment in his essay. Harris describes a scenario where automation raises the standard of living for everyone, as Andreessen believes will happen, but raises it for the richest humans by a much greater magnitude than for everyone else, and where wealth inequality sharply increases because rich capitalists own all the machines. This state of affairs would probably lead to political upheaval and popular revolt.

Andreessen responds that Karl Marx predicted the same thing long ago, but was wrong. Harris responds that this time could be different because AIs would be able to replace human intelligence, which would leave us nowhere to go on the job skills ladder. If machines can do physical labor AND mental labor better than humans, then what is left for us to do?

I agree with Harris’ point. While it’s true that every past scare about technology rendering human workers obsolete has failed, that trend isn’t sure to continue forever. The existence of chronically unemployed people right now gives insights into how ALL humans could someday be out of work. Imagine you’re a frail, slow, 90-year-old who is confined to a wheelchair and has dementia. Even if you really wanted a job, you wouldn’t be able to find one in a market economy since younger, healthier people can perform physical AND mental labor better and faster than you. By the end of this century, I believe machines will hold physical and mental advantages over most humans that are of the same magnitude of difference. In that future, what jobs would it make sense for us to do? Yes, new types of jobs will be created as older jobs are automated, but, at a certain point, wouldn’t machines be able to retrain for the new jobs faster than humans and to also do them better than humans?

Andreessen returns to Harris’ earlier claim about AI increasing wealth inequality, which would translate into disparities in standards of living that would make the masses so jealous and angry that they would revolt. He says it’s unlikely since, as we can see today, having a billion dollars does not grant access to things that make one’s life 10,000 times better than the life of someone who has only $100,000. For example, Elon Musk’s smartphone is not better than a smartphone owned by an average person. Technology is a democratizing force because it always makes sense for the rich and smart people who make or discover it first to sell it to everyone else. The same is happening with AI now. The richest person can’t pay any amount of money to get access to something better than GPT-4, which is accessible for a fee that ordinary people can pay.

I agree with Andreessen’s point. A solid body of scientific data shows that money’s effect on wellbeing is subject to the law of diminishing returns: If you have no job and make $0 per year, getting a job that pays $20,000 per year massively improves your life. However, going from a $100,000 salary to $120,000 isn’t felt nearly as much. And a billionaire doesn’t notice at all when his net worth increases by $20,000. This relationship will hold true even in the distant future when people can get access to advanced technologies like AGI, spaceships and life extension treatments.
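
The diminishing-returns relationship can be sketched with a logarithmic utility function, a standard simplification in economics. The functional form here is an illustration of the idea, not the actual model used in the wellbeing studies:

```python
import math

def wellbeing_gain(income_before, income_after):
    """Approximate wellbeing gain under log utility, u(x) = ln(x).

    Equal *ratios* of income produce equal gains, so a fixed dollar
    raise matters less and less as income grows.
    """
    return math.log(income_after) - math.log(income_before)

# Doubling a modest income feels like doubling a large one...
low = wellbeing_gain(20_000, 40_000)        # ln(2)
high = wellbeing_gain(100_000, 200_000)     # ln(2)

# ...while the same $20,000 raise barely registers for a billionaire.
raise_for_worker = wellbeing_gain(100_000, 120_000)
raise_for_billionaire = wellbeing_gain(1_000_000_000, 1_000_020_000)
```

Under this model a 10,000x gap in wealth buys only a modest gap in felt wellbeing, which is exactly Andreessen’s smartphone point.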

Speaking of the latter, Andreessen’s point about technology being a democratizing force is also something I noted in my review of Elysium. Contrary to the film’s depiction, it wouldn’t make sense for rich people to hoard life extension technology for themselves. At least one of them would defect from the group and sell it to the poor people on Earth so he could get even richer.

Harris asks whether Andreessen sees any potential for a sharp increase in wealth inequality in the U.S. over the next 10-20 years thanks to the rise of AI and the tribal motivations of our politicians and people. Andreessen says that government red tape and unions will prevent most humans from losing their jobs. AI will destroy categories of jobs that are non-government, non-unionized, and lack strong political backing, but everyone will still benefit from lower prices for goods and services. AI will make everything 10x to 100x cheaper, which will boost standards of living even if incomes stay flat.

Here and in his essay, Andreessen convinces me that mass technological unemployment and existential AI threats are farther in the future than I had assumed, but not that they can’t happen. Also, even if goods get 100x cheaper thanks to machines doing all the work, where would a human get even $1 to buy anything if he doesn’t have a job? The only possible answer is government-mandated wealth transfers from machines and the human capitalists that own them. In that scenario, the vast majority of the human race would be economic parasites that consumed resources while generating nothing of at least equal value in return, and some AGI or powerful human will inevitably conclude that the world would be better off if we were deleted from the equation. Also, what happens once AIs and robots gain the right to buy and own things, and get so numerous that they can replace humans as a customer base?

I agree with Andreessen that the U.S. should allow continued AI development, but shouldn’t let a few big tech companies lock in their power by persuading Washington to enact “AI safety laws” that give them regulatory capture. In fact, I agree with all his closing recommendations in the “What Is To Be Done?” section of his essay.

This debate between Harris and Andreessen was enlightening for me, even though Andreessen dodged some of his opponent’s questions. It was interesting to see how their different perspectives on the issue of AI safety were shaped by their different professional backgrounds. Andreessen is less threatened by AIs because he, as an engineer, has a better understanding of how LLMs work and how many technical problems an AI bent on destroying humans would face in the real world. Harris feels more threatened because he, as a philosopher, lives in a world of thought experiments and abstract logical deductions that lead to the inevitable supremacy of AIs over humans.

Links:

  1. The first half of the podcast (you have to be a subscriber to hear all two hours of it).
    https://youtu.be/QMnH6KYNuWg
  2. A website Andreessen mentioned that backs his claim that technological innovation has slowed down more than people realize.
    https://wtfhappenedin1971.com/