Interesting articles, July 2023

This prediction from January was right:

“The Kremlin is likely preparing to conduct a decisive strategic action in the next six months intended to regain the initiative and end Ukraine’s current string of operational successes.”

It then goes on to say that “strategic action” could take the form of an offensive meant to capture the Donetsk and Luhansk oblasts in eastern Ukraine, or of a strong defensive action meant to defeat the expected Ukrainian counteroffensive. Russia did both of those things over the last six months.

Half of Donetsk remains in Ukrainian hands, though Russia has captured virtually all of Luhansk, including all of its cities and large towns. Ukraine’s counteroffensive in the south has made only insignificant progress, thanks to competent Russian resistance and reinforcement.

It’s fair to say that Russia has, over the last six months, managed to “end Ukraine’s current string of operational successes.”
https://understandingwar.org/backgrounder/russian-offensive-campaign-assessment-january-15-2023

Ukraine’s counteroffensive has been a disappointment, mostly because its troops don’t have the right training.
https://www.thedrive.com/the-war-zone/a-sobering-analysis-of-ukraines-counteroffensive-from-the-front

In late July, Ukraine threw even more troops into its counteroffensive, leading to slightly more land recaptured from Russia. It’s unclear whether any breakthrough is coming.
https://www.thedrive.com/the-war-zone/ukraine-situation-report-main-thrust-of-counteroffensive-has-begun-report-states

The U.S. has agreed to give Ukraine cluster bombs. Though the move is controversial, Russia and Ukraine have already used cluster bombs against each other, and neither they nor the U.S. is party to the global ban on cluster munitions.
https://www.hrw.org/news/2023/05/29/cluster-munition-use-russia-ukraine-war

The AMX-10 RC vehicles that France donated to Ukraine can’t be used in heavy combat due to their thin armor.
https://news.yahoo.com/thin-armoured-french-tanks-impractical-110407784.html

Russia captured a partly intact British Storm Shadow missile.
https://www.thedrive.com/the-war-zone/crashed-storm-shadow-missile-falls-into-russian-hands

Contrary to the media’s alarmism, blowing up the Zaporizhzhia nuclear power plant would not cause a dangerous release of radiation.
https://www.ans.org/news/article-5151/statement-from-american-nuclear-society-on-ukraines-zaporizhzhia-nuclear-power-plant/

The Wagner private military group has surrendered most of its tanks and heavy weapons to the Russian state military.
https://www.thedrive.com/the-war-zone/wagner-turns-over-2000-heavy-weapons-including-tanks-sam-systems

Putin has appointed another violent, ruthless man to run Wagner.
https://www.yahoo.com/news/putin-shows-off-grey-hair-173439507.html

Fifteen senior Russian military officers have been fired or suspended in the aftermath of last month’s aborted coup.
https://www.yahoo.com/news/top-russian-generals-killed-fired-112941056.html

Another Russian major general was fired for publicly complaining that the Kremlin was mishandling the Ukraine war.
https://www.yahoo.com/news/fired-russian-general-remarks-latest-190011683.html

In spite of losses, Russia still has enormous numbers of artillery pieces in reserve.
https://youtu.be/EVqHY5hpzv8

Ukraine has again attacked the bridge that links Crimea to mainland Russia, causing part of its roadway to collapse into the water. Two civilians died, and Russia has already retaliated by suspending the deal that allowed Ukraine to export grain by sea to other countries.
https://www.thedrive.com/the-war-zone/russias-kerch-strait-bridge-closed-after-major-incident

Russian and Ukrainian troops are now welding cages made of chain link fence material around their tanks to block small kamikaze drones.
https://www.yahoo.com/news/photos-capture-crude-cages-russian-195802220.html

Neutral Switzerland and Austria have joined an integrated air defense network whose other members are all NATO countries.
https://apnews.com/article/switzerland-austria-missile-defense-essi-skyshield-germany-b809c3ec96c91407812b9cf4007255a1

There have been several incidents where Russian fighter planes have “harassed” U.S. aircraft over Syria. The Pentagon has played this up to depict Russia as an aggressive country, but in reality, Russian aircraft have a legal right to fly over Syria while American aircraft do not: the Syrian government formally invited Russian forces into its territory while it never did the same to U.S. forces. Syria has repeatedly told us to stop flying patrols in its airspace, demands we have ignored.
https://www.yahoo.com/news/russia-harasses-us-drones-over-042500111.html
https://www.newsweek.com/syria-demands-end-americas-last-forever-war-1792298
https://www.cnn.com/2023/07/14/politics/us-russia-syria-surveillance/index.html

In WWI, the U.S. tried designing its own steel combat helmet. The project independently arrived at a design that was very similar to the German helmet. It was rejected partly because its use could lead to confusion on the battlefield.
https://www.metmuseum.org/art/collection/search/35957

If China believed war was imminent, it would build up stockpiles of energy (mostly oil), key metals, and food (commodities like wheat and soybeans) in anticipation of wartime shortages. This would be the case whether China was planning to attack or believed another country was about to attack it.

China hasn’t done these things yet.
https://www.economist.com/podcasts/2023/07/26/the-ways-to-predict-a-chinese-invasion-of-taiwan-long-before-troops-take-up-arms

This prediction from Bank of America was hilariously wrong.
https://finance.yahoo.com/news/bofa-warns-us-economy-start-170000414.html

Peter Thiel, one decade ago: “If I had to sort of project in the next decade ahead, I think we have to at least be open to the possibility that the computer era is also at risk of decelerating. We have a large ‘Computer Rust Belt’ which nobody likes to talk about. But it is companies like Cisco, Dell, Hewlett-Packard, Oracle, IBM, where I think the pattern will be to become commodities, no longer innovate. Correspondingly, cut through labor force and cut through profits in the decade ahead. There are many companies that are on the cusp: Microsoft is probably close to the Computer Rust Belt. One that’s shockingly and probably in the Computer Rust Belt is Apple Computers.”

Microsoft’s market cap is now $2.5 trillion, and Apple’s is $3 trillion (the first company to cross that threshold). Microsoft has the lead in A.I. technology, and Apple just unveiled the best augmented reality headset ever made. Of the tech companies Thiel named in that quote, only IBM has seen its stock value decline since 2013. If you’d bought $10,000 worth of stock in each of those seven companies back then, you’d have roughly four or five times as much money overall today.
https://youtu.be/VtZbWnIALeE?t=549
https://www.cnn.com/2023/06/30/tech/apple-3-trillion-market-valuation/index.html
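
As a sanity check on that back-of-the-envelope claim, here is a minimal sketch of the arithmetic. The per-company growth multiples are hypothetical placeholders I made up for illustration, not actual 2013-2023 returns; only the method matters.

# Sketch of the portfolio math: equal $10,000 stakes in the seven companies Thiel named.
# The growth multiples are hypothetical placeholders, NOT actual 2013-2023 returns.
stake = 10_000
growth_multiples = {
    "Cisco": 2.0, "Dell": 3.0, "Hewlett-Packard": 2.0, "Oracle": 3.0,
    "IBM": 0.8, "Microsoft": 9.0, "Apple": 10.0,
}
final_value = sum(stake * multiple for multiple in growth_multiples.values())
overall_multiple = final_value / (stake * len(growth_multiples))
print(f"Overall: {overall_multiple:.1f}x the original $70,000")  # ~4.3x with these placeholder values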

Meta’s recommendation algorithm is essential for keeping users engaged with its social media platforms. And unexpectedly, the algorithm does not make users more politically polarized than they already were.
https://apnews.com/article/facebook-instagram-polarization-misinformation-social-media-f0628066301356d70ad2eda2551ed260

Elon Musk has renamed Twitter as “X” and wants to turn it into a multipurpose app that copies China’s WeChat.
https://www.bbc.com/news/business-66333633

This review of M3GAN makes a brilliant case that the evil robot’s actions were much smarter and more calculated than anyone realized.
https://thezvi.substack.com/p/movie-review-megan

Another smart person (Douglas Hofstadter) says he’s afraid of AI.
https://youtu.be/lfXxzAVtdpU?t=1739

OpenAI predicts that superintelligent AI might be created by 2030.
https://openai.com/blog/introducing-superalignment

ChatGPT can pass the tests to get a medical license, and outperforms Stanford medical school students when answering medical questions.
https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2806980

Apple is copying GPT.
https://www.bloomberg.com/news/articles/2023-07-19/apple-preps-ajax-generative-ai-apple-gpt-to-rival-openai-and-google

Hollywood actors have gone on strike for the first time in 43 years, joining screenwriters who are already on strike. Partly they’re worried that entertainment studios will replace them with CGI clones and machine-written scripts.
https://www.bbc.com/news/technology-66200334

Thanks to voice cloning, we can hear Frank Sinatra sing “Gangsta’s Paradise.”
https://youtu.be/W7SQ4uf9GmA

A ChatGPT mod allows NPCs in the video game Skyrim to hold conversations with human players. The result is impressive, and leads me to think that games are about to become even more addictive and that a market for creating and preserving custom NPC “friends” is about to arise.
https://youtu.be/0svu8WBzeQM

Machines are now as good at telemarketing as humans. Listen to this phone conversation between “Alexander from Tesla Motors” and a real person.
https://twitter.com/LinusEkenstam/status/1680314562753490949

Seven years ago, AI expert François Chollet tweeted: “the belief that we are anywhere close to human-level natural language comprehension or generation is pure DL hype.”

“Foundation models” are the newest kind of AI. They are not narrow AIs, but they also aren’t fully general AIs (AGIs): each one can do a limited range of different tasks.

‘The next wave in AI looks to replace the task-specific models that have dominated the AI landscape to date. The future is models that are trained on a broad set of unlabeled data that can be used for different tasks, with minimal fine-tuning. These are called foundation models, a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence. We’ve seen the first glimmers of the potential of foundation models in the worlds of imagery and language. Early examples of models, like GPT-3, BERT, or DALL-E 2, have shown what’s possible. Input a short prompt, and the system generates an entire essay, or a complex image, based on your parameters, even if it wasn’t specifically trained on how to execute that exact argument or generate an image in that way.’
https://research.ibm.com/blog/what-are-foundation-models
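
To make the “short prompt in, long output out” idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and a small public model. The model name and prompt are illustrative choices of mine, not anything from the IBM article.

# Minimal sketch of prompting a pre-trained foundation model for open-ended text generation.
# Assumes the Hugging Face "transformers" library is installed; the model and prompt below
# are illustrative choices, not taken from the IBM article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, publicly available model

prompt = "Write a short essay on why foundation models matter:"
result = generator(prompt, max_new_tokens=100, do_sample=True)

# The same pre-trained model can be pointed at many different tasks (summarization, Q&A,
# drafting) with little or no task-specific fine-tuning, which is the defining property
# of a foundation model.
print(result[0]["generated_text"])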

New kinds of tests show that GPT-4 lacks general intelligence.
https://www.nature.com/articles/d41586-023-02361-7

“Peak Phosphorus” has been delayed by decades thanks to the discovery of a massive phosphate deposit in Norway.
https://www.euractiv.com/section/energy-environment/news/great-news-eu-hails-discovery-of-massive-phosphate-rock-deposit-in-norway/

‘Sulphur particles contained in ships’ exhaust fumes have been counteracting some of the warming coming from greenhouse gases. But lowering the sulphur content of marine fuel has weakened the masking effect, effectively giving a boost to warming.

…While this will contribute to warming and make it even more difficult to avoid exceeding 1.5C in the coming decades, a number of other factors are likely contributing to the ocean heatwave.

These include a massive eruption of an underwater volcano in the south Pacific, an unusual absence of Saharan dust and a growing El Niño.’
https://www.carbonbrief.org/analysis-how-low-sulphur-shipping-rules-are-affecting-global-warming/

In 250 million years, the Earth’s continents will have combined again to form one supercontinent. This, along with other factors, will have a massive and negative effect on the global climate. (From episode 79 of Naked Science)
https://www.bilibili.com/video/BV1qb411a7tu/

“A hypercane is a hypothetical class of extreme tropical cyclone that could form if sea surface temperatures reached approximately 50 °C (122 °F), which is 13 °C (23 °F) warmer than the warmest ocean temperature ever recorded.”
https://en.wikipedia.org/wiki/Hypercane

New incandescent light bulbs have been banned in the U.S. because they waste so much electricity.
https://www.politico.com/news/2023/07/27/incandescent-light-bulb-led-00107935

Europe’s Ariane-5 space rocket has been retired.
https://www.bbc.com/news/science-environment-66116894

Three credible men testified before Congress that secret U.S. government programs know UFOs are real, and even possess crashed alien spacecraft and the bodies of their dead pilots.
https://www.thedrive.com/the-war-zone/ufo-whistleblower-claims-massive-coverup-retribution-in-sworn-testimony

The documentary Moment of Contact explores a famous UFO and alien sighting in the town of Varginha, Brazil in 1996. There’s no hard proof it happened, but it’s compelling to see so many credible witnesses still so adamant about what they saw. I don’t like that the filmmakers never mentioned the Brazilian government’s explanation or tried to debunk it.
https://youtu.be/0WlbfaMU-Qs

Smarter people shoot guns more accurately.
https://www.sciencedirect.com/science/article/abs/pii/S0160289623000491

A man who spent 28 years working as a truck driver is now living proof of how sunlight accelerates aging. The left side of his face was constantly exposed to sunlight since it was next to a window, but the right side was protected. The wrinkling and sagging of the skin on his face is correspondingly asymmetrical. The ultraviolet rays in sunlight damage the DNA inside human skin cells.
https://www.thesun.co.uk/health/22972152/shocking-photo-shows-sun-damage-face/

Machines are speeding up our ability to design new proteins.
https://www.science.org/content/blog-post/protein-design-ai-way

Transfusing blood from young mice into old mice boosted the latter’s health and lifespan.
https://www.nature.com/articles/s43587-023-00451-9

The FDA has approved a new Alzheimer’s drug called “Leqembi.” It only slows the disease’s progression by a few months but costs $26,500 per patient per year.
https://apnews.com/article/alzheimers-drug-fda-approval-medicare-leqembi-a9c8b770aa1868a59889143c3bc9d127

Deer have become a reservoir for COVID-19 and have spread it to humans multiple times.
https://www.cbsnews.com/news/covid-19-spread-from-deer/

In the U.S., the number of “excess deaths” fell to zero in mid-January and has stayed there, indicating that the COVID-19 pandemic ended then.
https://conversableeconomist.com/2023/07/24/end-of-covid-pandemic-in-the-us-the-excess-deaths-measure/
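
For anyone unfamiliar with the metric, “excess deaths” are just observed deaths minus a baseline forecast projected from pre-pandemic years. A minimal sketch of the calculation, with made-up weekly numbers rather than real CDC data:

# Sketch of the "excess deaths" metric: observed deaths minus an expected baseline
# projected from pre-pandemic years. The weekly figures below are made up for
# illustration, not real CDC data.
observed_weekly_deaths = [61_000, 60_400, 59_900]   # hypothetical recent weeks
expected_weekly_deaths = [60_800, 60_900, 60_200]   # hypothetical baseline forecast

excess = [obs - exp for obs, exp in zip(observed_weekly_deaths, expected_weekly_deaths)]
print(excess)  # values hovering around or below zero mean no pandemic-level excess mortality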

“Debating the Future of AI” – summary and impressions

I recently shelled out the $100 (!) for a year-long subscription to Sam Harris’ Making Sense podcast, and came across a particularly interesting episode that is relevant to this blog. In episode #324, titled “Debating the Future of AI,” Harris interviewed Marc Andreessen (an-DREE-sin) about artificial intelligence. Andreessen has a computer science degree, helped invent the Netscape web browser, and has become very wealthy as a serial tech investor.

Andreessen recently wrote an essay, “Why AI Will Save the World,” that has received attention online. In it, Andreessen dismisses the biggest concerns about AI misalignment and doomsday, sounds the alarm about the risks of overregulating AI development in the name of safety, and describes some of the benefits AI will bring us in the near future. Harris read it, disagreed with several of its key claims, and invited Andreessen onto the podcast for a debate about the subject.

Before I lay out their points and counterpoints, as well as my impressions, let me say that, though this is a long blog post, reading it takes much less time than listening to and digesting the two-hour podcast. My notes also don’t follow the podcast’s chronological order. Finally, it would be a good idea to read Andreessen’s essay before continuing:
https://a16z.com/2023/06/06/ai-will-save-the-world/

Though Andreessen is generally upbeat in his essay, he worries that the top tech companies have recently been inflaming fears about AI to trick governments into creating regulations on AI that effectively entrench the top companies’ positions and bar smaller upstart companies from challenging them in the future. Such a lack of competition would be bad. (I think he’s right that we should be concerned about the true motivations of some of the people who are loudly complaining about AI risks.) Also, if U.S. overregulation slows down AI research too much, China could win the race to create the first AI, which he says would be “dark and dystopian.”

Harris is skeptical that government regulation will slow down AI development much given the technology’s obvious potential. It is so irresistible that powerful people and companies will find ways around laws so they can reap the benefits.

Harris agrees with the essay’s sentiment that more intelligence in the world will make most things better. The clearest example would be using AIs to find cures for diseases. Andreessen mentions a point from his essay that higher human intelligence levels lead to better personal outcomes in many domains. AIs could effectively make individual people smarter, letting the benefits accrue to them. Imagine each person having his own personal assistant, coach, mentor, and therapist available at any time. If a dumb person used his AI right and followed its advice, he could make decisions as well as a smart person.

Harris recently re-watched the movie Her, and found it more intriguing in light of recent AI advances and those poised to happen. He thought there was something bleak about the depiction of people being “siloed” into interactions with portable, personal AIs.

Andreessen responds by pointing out that Karl Marx’ core insight was that technology alienates people from society. So the concern that Harris raises is in fact an old one that dates back to at least the Industrial Revolution. But any sober comparison between the daily lives of average people in Marx’ time vs. today will show that technology has made things much better for people. Andreessen agrees that some technologies have indeed been alienating, but what’s more important is that most technologies liberate people from having to spend their time doing unpleasant things, freeing them to self-actualize, which is the pinnacle of the human experience. (For example, it’s much more “human” to spend a beautiful afternoon outside playing with your child than it is to spend it inside responding to emails. Narrow AIs that we’ll have in the near future will be able to answer emails for us.) AI is merely the latest technology that will eliminate the nth bit of drudge work.

Andreessen admits that, in such a scenario, people might use their newfound time unwisely and for things other than self-actualization. I think that might be a bigger problem than he realizes, as future humans could spend their time doing animalistic or destructive things, like having nonstop fetish sex with androids, playing games in virtual reality, gambling, or indulging in drug addictions. Additionally, some people will develop mental or behavioral problems thanks to a sense of purposelessness caused by machines doing all the work for us.

Harris disagrees with the essay’s dismissal of the risk that AIs will exterminate the human race. The threat will someday be real, and he cites chess-playing computer programs as evidence: though humans built the programs, even the best human players can’t beat them. This shows it is possible for us to create machines that have superhuman abilities.

Harris makes a valid point, but he overlooks something: we humans might not be able to beat the chess programs we created, but we can still make a copy of a program to play against the original “hostile” program and fight it to a draw. Likewise, if we were confronted with a hostile AGI, we would have friendly AGIs to defend against it. Even if the hostile AGI were smarter than the friendly AGIs fighting for us, we could still win thanks to superior numbers and resources.

Harris thinks Andreessen’s essay trivializes the doomsday risk from AI by painting the belief’s adherents as crackpots of one form or another (I also thought that part of the essay was weak). Harris points out that this is unfair, since the camp includes credible people like Geoffrey Hinton and Stuart Russell. Andreessen dismisses that and seems to say that even the smart, credible people have cultish mindsets regarding the issue.

Andreessen questions the value of predictions from experts in the field: a scientist who made an important advance in AI is, perhaps surprisingly, not actually qualified to predict the future social effects of AI. He recently read a book, When Reason Goes on Holiday, that explores this point; its strongest supporting example concerns the cadre of Manhattan Project scientists who went on to give the bomb’s secrets to Stalin and to fuel a disastrous anti-nuclear-power movement in the West. While they were world-class experts in their technical domains, that wisdom didn’t carry over into their personal convictions or political beliefs. Likewise, though Geoffrey Hinton is a world-class expert in how the human brain works and has made important breakthroughs in computer neural networks, that doesn’t lend any special credibility to his predictions that AI will someday destroy the human race. It’s a totally different subject, and accurately speculating about it requires a mastery of subjects that Hinton lacks.

This is an intriguing point worth remembering. I wish Andreessen had enumerated which cognitive skills and areas of knowledge were necessary to grant a person a strong ability to make good predictions about AI, but he didn’t. And to his point about the misguided Manhattan Project scientists I ask: What about the ones who DID NOT want to give Stalin the bomb and who also SUPPORTED nuclear power? They gained less notoriety for obvious reasons, but they were more numerous. That means most nuclear experts in 1945 had what Andreessen believes were the “correct” opinions about both issues, so maybe expert opinions–or at least the consensus of them–ARE actually useful.

Harris points out that Andreessen’s argument can be turned around against him, since it’s unclear what in Andreessen’s esteemed education and career has equipped him with the ability to make accurate predictions about the future impact of AI. Why should anyone believe the upbeat claims about AI in his essay? Also, if the opinions of people with expertise should be dismissed, then shouldn’t the opinions of people without expertise also be dismissed? And if we agree to that second point, then we’re left in a situation where no speculation about a future issue like AI is possible because everyone’s ideas can be waved aside.

Again, I think a useful result of this exchange would be some agreement over what counts as “expertise” when predicting the future of AI. What kind of education, life experiences, work experiences, knowledge, and personal traits does a person need to have for their opinions about the future of AI to carry weight? In lieu of that, we should ask people to explain why they believe their predictions will happen, and we should then closely scrutinize those explanations. Debates like this one can be very useful in accomplishing that.

Harris moves on to Andreessen’s argument that future AIs won’t be able to think independently and to formulate their own goals, in turn implying that they will never be able to create the goal of exterminating humanity and then pursue it. Harris strongly disagrees, and points out that large differences in intelligence between species in nature consistently disfavor the dumber species when the two interact. A superintelligent AGI that isn’t aligned with human values could therefore destroy the human race. It might even kill us by accident in the course of pursuing some other goal. Having a goal of, say, creating paperclips automatically gives rise to intermediate sub-goals, which might make sense to an AGI but not to a human due to our comparatively limited intelligence. If humans get in the way of an AGI’s goal, our destruction could become one of its unforeseen subgoals without us realizing it. This could happen even if the AGI lacked any self-preservation instinct and wasn’t motivated to kill us before we could kill it. Similarly, when a human decides to build a house on an empty field, the construction work is a “holocaust” for the insects living there, though that never crosses the human’s mind.

Harris thinks that AGIs will, as a necessary condition of possessing “general intelligence,” be autonomous, goal-forming, and able to modify their own code (I think this is a questionable assumption), though he also says sentience and consciousness won’t necessarily arise. However, the latter doesn’t imply that such an AGI would be incapable of harm: bacteria and viruses lack sentience, consciousness, and self-awareness, but they can be very deadly to other organisms. To Harris, Andreessen’s dismissal of AI existential risk is “superstitious hand-waving” that doesn’t engage with the real point.

Andreessen disagrees with Harris’ scenario about a superintelligent AGI accidentally killing humans because it is unaligned with our interests. He says (without explaining why) that an AGI that smart would also be smart enough to question the goal that humans have given it, and as a result would not carry out subgoals that kill humans. Intelligence is therefore its own antidote to the alignment problem: A superintelligent AGI would be able to foresee the consequences of its subgoals before finalizing them, and it would thus understand that subgoals resulting in human deaths would always be counterproductive to the ultimate goal, so it would always pick subgoals that spared us. Once a machine reaches a certain level of intelligence, alignment with humans becomes automatic.

I think Andreessen makes a fair point, though it’s not strong enough to convince me that it’s impossible to have a mishap where a non-aligned AGI kills huge numbers of people. Also, there are degrees of alignment with human interests, meaning there are many routes through a decision tree of subgoals that an AGI could take to reach an ultimate goal we tasked it with. An AGI might not choose subgoals that killed humans, but it could still choose different subgoals that hurt us in other ways. The pursuit of its ultimate goal could therefore still backfire against us unexpectedly and massively. One could envision a scenario where an AGI achieves the goal, but at an unacceptable cost to human interests beyond merely not dying.

I also think that Harris and Andreessen make equally plausible assumptions about how an AGI would choose its subgoals. It IS weird that Harris envisions a machine that is so smart it can accomplish anything, yet also so dumb that it can’t see how one of its subgoals would destroy humankind. At the same time, Andreessen’s belief that a machine that smart would, by default, be incapable of mistakes that kill us is no better supported.

Harris explores Andreessen’s point that AIs won’t go through the crucible of natural evolution, so they will lack the aggressive and self-preserving instincts that we and other animals have developed. The lack of those instincts will render the AIs incapable of hostility. Harris points out that evolution is a dumb, blind process that only sets gross goals for individuals–the primary one being to have children–and humans do things antithetical to their evolutionary programming all the time, like deciding not to reproduce. We are therefore proof of concept that intelligent machines can find ways to ignore their programming, or at least to behave in very unexpected ways while not explicitly violating their programming. Just as we can outsmart evolution, AGIs will be able to outsmart whatever safeguards we program into them, especially if they can alter their own programming or build other AGIs as they wish.

Andreessen says that AGIs will be made through intelligent design, which is fundamentally different from the process of evolution that has shaped the human mind and behavior. Our aggression and competitiveness will therefore not be present in AGIs, which will protect us from harm. Harris says the process by which AGI minds are shaped is irrelevant, and that what is relevant is their much higher intelligence and competence compared to humans, which will make them a major threat.

I think the debate over whether impulses or goals to destroy humans will spontaneously arise in AGIs is almost moot. Neither of them considers that a human could deliberately create an AGI that had some constellation of traits (e.g. – aggression, self-preservation, irrational hatred of humans) that would lead it to attack us, or that was explicitly programmed with the goal of destroying our species. It might sound strange, but I think rogue humans will inevitably do such things if the AGIs don’t do it to themselves. I plan to flesh out the reasons and the possible scenarios in a future blog essay.

Andreessen doesn’t have a good comeback to Harris’ last point, so he dodges it by switching to talking about GPT-4. It is–surprisingly–capable of high levels of moral reasoning. He has had fascinating conversations with it about such topics. Andreessen says GPT-4’s ability to engage in complex conversations that include morality demystifies AI’s intentions since if you want to know what an AI is planning to do or would do in a given situation, you can just ask it.

Harris responds that it isn’t useful to explore GPT-4’s ideas and intentions because it isn’t nearly as smart as the AGIs we’ll have to worry about in the future. If GPT-4 says today that it doesn’t want to conquer humanity because it would be morally wrong, that tells us nothing about how a future machine will think about the same issue. Additionally, future AIs will be able to convincingly lie to us, and will be fundamentally unpredictable due to their more expansive cognitive horizons compared to ours. I think Harris has the stronger argument.

Andreessen points out that our own society proves that intelligence doesn’t perfectly correlate with power–the people who are in charge are not also the smartest people in the world. Harris acknowledges that is true, and that it is because humans don’t select leaders strictly based on their intelligence or academic credentials–traits like youth, beauty, strength, and creativity are also determinants of status. However, all things being equal, the advantage always goes to the smarter of two humans. Again, Andreessen doesn’t have a good response.

Andreessen now makes the first really good counterpoint in a while by raising the “thermodynamic objection” to AI doomsday scenarios: an AI that turns hostile would be easy to destroy since the vast majority of the infrastructure (e.g. – power, telecommunications, computing, manufacturing, military) would still be under human control. We could destroy the hostile machine’s server or deliver an EMP blast to the part of the world where it was localized. This isn’t an exotic idea: Today’s dictators commonly turn off the internet throughout their whole countries whenever there is unrest, which helps to quell it.

Harris says that that will become practically impossible far enough in the future since AIs will be integrated into every facet of life. Destroying a rogue AI in the future might require us to turn off the whole global internet or to shut down a stock market, which would be too disruptive for people to allow. The shutdowns by themselves would cause human deaths, for instance among sick people who were dependent on hospital life support machines.

This is where Harris makes some questionable assumptions. If faced with the annihilation of humanity, the government would take all necessary measures to defeat a hostile AGI, even if it resulted in mass inconvenience or even some human deaths. Also, Harris doesn’t consider that the future AIs that are present in every realm of life might be securely compartmentalized from each other, so if one turns against us, it can’t automatically “take over” all the others or persuade them to join it. Imagine a scenario where a stock trading AGI decides to kill us. While it’s able to spread throughout the financial world’s computers and to crash the markets, it’s unable to hack into the systems that control the farm robots or personal therapist AIs, so there’s no effect on our food supplies or on our mental health access. Localizing and destroying the hostile AGI would be expensive and damaging, but it wouldn’t mean the destruction of every computer server and robot in the world.

Andreessen says that not every type of AI will have the same type of mental architecture. LLMs, which are now the most advanced type of AI, have highly specific architectures that bring unique advantages and limitations. An LLM’s mind works very differently from that of an AI that drives a car. For that reason, speculative discussions about how future AIs will behave can only be credible if they incorporate technical details about how those machines’ minds operate. (This is probably the point where Harris is out of his depth.) Moreover, today’s AI risk movement has its roots in Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies. Ironically, the book did not mention LLMs as an avenue to AI, which shows how unpredictable the field is. It was also a huge surprise that LLMs proved capable of intellectual discussions and of automating white-collar jobs, while blue-collar jobs still defy automation. This is the opposite of what people had long predicted would happen. (I agree that AI technology has been unfolding unpredictably, and we should expect many more surprises in the future that deviate from our expectations, which have been heavily influenced by science fiction.) The reason LLMs work so well is because we loaded them with the sum total of human knowledge and expression. “It is us.”

Harris points out that Andreessen shouldn’t revel in that fact since it also means that LLMs contain all of the negative emotions and bad traits of the human race, including those that evolution equipped us with, like aggression, competition, self-preservation, and a drive to make copies of ourselves. This militates against Andreessen’s earlier claim that AIs will be benign since their minds will not have been the products of natural evolution like ours were. And there are other similarities: Like us, LLMs can hallucinate and make up false answers to questions. For a time, GPT-4 also gave disturbing and insulting answers to questions from human users, which is a characteristically human way of interacting.

Andreessen implies Harris’ opinions of LLMs are less credible because Andreessen has a superior technical understanding of how they work. GPT-4’s answers might occasionally be disturbing and insulting, but it has no concept of what its own words mean, and it’s merely following its programming by trying to generate the best answer to a question asked by a human. There was something about how the humans worded their questions that triggered GPT-4 to respond in disturbing and insulting ways. The machine is merely trying to match inputs with the right outputs. In spite of its words, its “mind” is not disturbed or hostile, because it doesn’t really have one. LLMs are “ultra-sophisticated Autocomplete.”

Harris agrees with Andreessen about the limitations of LLMs, agrees they lack general intelligence right now, and is unsure if they are fundamentally capable of possessing it. Harris moves on to speculating about what an AGI would be like, agnostic about whether it is LLM-based. Again, he asks Andreessen how humans could forever control machines that are much smarter than we are. Surely, one of them would become unaligned at some point, with disastrous consequences.

Andreessen again raises the thermodynamic objection to that doom scenario: We’d be able to destroy a hostile AGI’s server(s) or shut off its power, and it wouldn’t be able to get weapons or replacement chips and parts because humans would control all of the manufacturing and distribution infrastructure. Harris doesn’t have a good response.

Thinking hard about a scenario where an AGI turned against us, I think it’s likely we’ll have other AGIs who stay loyal to us and help us fight the bad AGI. Our expectation that there will be one evil, all-powerful machine on one side (that is also remote-controlling an army of robot soldiers) and a purely human, united force on the other is an overly simplistic one that is driven by sci-fi movies about the topic.

Harris raises the possibility that hostile AIs will be able to persuade humans to do bad things for them. Being much smarter, they will be able to trick us into doing anything. Andreessen says there’s no reason to think that will happen because we can already observe it doesn’t happen: smart humans routinely fail to get dumb humans to change their behavior or opinions. This happens at individual, group, national, and global levels. In fact, dumb people will often resentfully react to such attempts at persuasion by deliberately doing the opposite of what the smart people recommend.

Harris says Andreessen underestimates the extent to which smart humans influence the behavior and opinions of dumb humans because Andreessen only considers examples where the smart people succeed in swaying dumb people in prosocial ways. Smart people have figured out how to change dumb people for the worse in many ways, like getting them addicted to social media. Andreessen doesn’t have a good response. Harris also raises the point that AIs will be much smarter than even the smartest humans, so the former will be better at finding ways to influence dumb people. Any failure of modern smart humans to do it today doesn’t speak to what will be possible for machines in the future.

I think Harris won this round, which builds on my new belief that the first human-AI war won’t be fought by purely humans on one side and purely machines on the other. A human might, for any number of reasons, deliberately alter an AI’s program to turn it against our species. The resulting hostile AI would then find some humans to help it fight the rest of the human race. Some would willingly join its side (perhaps in the hopes of gaining money or power in the new world order) and some would be tricked by the AI into unwittingly helping it. Imagine it disguising itself as a human medical researcher and paying ten different people who didn’t know each other to build the ten components of a biological weapon. The machine would only communicate with them through the internet, and they’d mail their components to a PO box. The vast majority of humans would, with the help of AIs who stayed loyal to us or who couldn’t be hacked and controlled by the hostile AI, be able to effectively fight back against the hostile AI and its human minions. The hostile AI would think up ingenious attack strategies against us, and our friendly AIs would think up equally ingenious defense strategies.

Andreessen says it’s his observation that intelligence and power-seeking don’t correlate; the smartest people are also not the most ambitious politicians and CEOs. If that’s any indication, we shouldn’t assume superintelligent AIs will be bent on acquiring power through methods like influencing dumb humans to help it.

Harris responds with the example of Bertrand Russell, who was an extremely smart human and a pacifist. However, during the postwar period when only the U.S. had the atom bomb, he said America should threaten the USSR with a nuclear first strike in response to its abusive behavior in Europe. This shows how high intelligence can lead to aggression that seems unpredictable and out of character to dumber beings. A superintelligent AI that has always been kind to us might likewise suddenly turn against us for reasons we can’t foresee. This will be especially true if the AIs are able to edit their own code so they can rapidly evolve without us being able to keep track of how they’re changing. Harris says Andreessen doesn’t seem to be thinking about this possibility. The latter has no good answer.

Harris says Andreessen’s thinking about the matter is hobbled by the latter’s failure to consider what traits general intelligence would grant an AI, particularly unpredictability as its cognitive horizon exceeded ours. Andreessen says that’s an unscientific argument because it is not falsifiable. Anyone can make up any scenario where an unknown bad thing happens in the future.

Harris responds that Andreessen’s faith that AGI will fail to become threatening due to various limitations is also unscientific. The “science,” by which he means what is consistently observed in nature, says the opposite outcome is likely: We see that intelligence grants advantages, and can make a smarter species unpredictable and dangerous to a dumber species it interacts with. [Recall Harris’ insect holocaust example.]

Consider the relationship between humans and their pets. Pets enjoy the benefits of having their human owners spend resources on them, but they don’t understand why we do it, or how every instance of resource expenditure helps them. [Trips to the veterinarian are a great example of this. The trips are confusing, scary, and sometimes painful for pets, but they help cure their health problems.] Conversely, if it became known that our pets were carrying a highly lethal virus that could be transmitted to humans, we would promptly kill almost all of them, and the pets would have no clue why we turned against them. We would do this even if our pets had somehow been the progenitors of the human race, as we will be the progenitors of AIs. The intelligence gap means that our pets have no idea what we are thinking about most of the time, so they can’t predict most of our actions.

Andreessen dodges by putting forth a weak argument that the opposite just happened, with dumb people disregarding the advice of smart people when creating COVID-19 health policies, and he again raises the thermodynamic objection. His experience as an engineer gives him insights into how many practical roadblocks there would be to a superintelligent AGI destroying the human race in the future that Harris, as a person with no technical training, lacks. A hostile AGI would be hamstrung by human control [or “human + friendly AI control”] of crucial resources like computer chips and electricity supplies.

Andreessen says that Harris’ assumptions about how smart, powerful and competent an AGI would be might be unfounded. It might vastly exceed us in those domains, but not reach the unbeatable levels Harris foresees. How can Harris know? Andreessen says Harris’ ideas remind him of a religious person’s, which is ironic since Harris is a well-known atheist.

I think Andreessen makes a fair point. The first (and second, third, fourth…) hostile AGI we are faced with might attack us on the basis of flawed calculations about its odds of success and lose. There could also be a scenario where a hostile AGI attacks us prematurely because we force its hand somehow, and it ends up losing. That actually happened to Skynet in the Terminator films.

Harris says his prediction about the creation of the first AGI doesn’t hinge on timing; he doesn’t know how many years it will take. Rather, he is focused on the inevitability of it happening and on what its effects on us will be. He says Andreessen is wrong to assume that machines will never turn against us. From thought experiments, he concludes that alignment is impossible in the long run.

Andreessen moves on to discussing how even the best LLMs often give wrong answers to questions. He explains how the exact wording of the human’s question, along with randomness in how the machine works through its own training data to generate an answer, leads to varying and sometimes wrong answers. When they’re wrong, the LLMs happily accept corrections from humans, which he finds remarkable and proof of a lack of ego and hostility.

Harris responds that future AIs will, by virtue of being generally intelligent, think in completely different ways than today’s LLMs, so observations about how today’s GPT-4 is benign and can’t correctly answer some types of simple questions say nothing about what future AGIs will be like. Andreessen doesn’t have a response.

I think Harris has the stronger set of arguments on this issue. There’s no reason we should assume that an AGI can’t turn against us in the future. In fact, we should expect a damaging, though not fatal, conflict with an AGI before the end of this century.

Harris switches to talking about the shorter-term threats posed by AI technology that Andreessen described in his essay. AI will lower the bar to waging war since we’ll literally have “less skin in the game” because robots will replace human soldiers. However, he doesn’t understand why that would also make war “safer” as Andreessen claimed it would.

Andreessen says it’s because military machines won’t be affected by fatigue, stress or emotions, so they’ll be able to make better combat decisions than human soldiers, meaning fewer accidents and civilian deaths. The technology will also assist high-level military decision making, reducing mistakes at the top. Andreessen also believes that the trend is for military technology to empower defenders over attackers, and points to the highly effective use of shoulder-launched missiles in Ukraine against Russian tanks. This trend will continue, and will reduce war-related damage since countries will be deterred from attacking each other.

I’m not convinced Andreessen is right on those points. Emotionless fighting machines that always obey their orders to the letter could also, at the flick of a switch, carry out orders to commit war crimes like mass exterminations of enemy human populations. A bomber that dropped a load of 100,000 mini smart bombs that could coordinate with each other and home in on highly specific targets could kill as many people as a nuclear bomb. So it’s unclear what effect replacing humans with machines on the battlefield will have on human casualties in the long run. Also, Andreessen only cites one example to support his claim that technology has been favoring the defense over the offense. It’s not enough. Even assuming that a pro-defense trend exists, why should we expect it to continue that way?

Harris asks Andreessen about the problem of humans using AI to help them commit crimes. For one, does Andreessen think the government should ban LLMs that can walk people through the process of weaponizing smallpox? Andreessen says yes; he’s against bad people using technology like AI to do bad things like that. He thinks pairing AI and biological weapons poses the worst risk to humans. While the information and equipment to weaponize smallpox are already accessible to nonstate actors, AI will lower the bar even more.

Andreessen says we should use existing law enforcement and military assets to track down people who are trying to do dangerous things like create biological weapons, and the approach shouldn’t change if wrongdoers happen to start using AI to make their work easier. Harris asks how intrusive the tracking should be to preempt such crimes. Should OpenAI have to report people who merely ask it how to weaponize smallpox, even if there’s no evidence they acted on the advice? Andreessen says this has major free speech and civil liberties implications, and there’s no correct answer. Personally, he prefers the American approach, in which no crime is considered to have occurred until the person takes the first step to physically building a smallpox weapon. All the earlier preparation they did (gathering information and talking/thinking about doing the crime) is not criminalized.

Andreessen reminds Harris that the same AI that generates ways to commit evil acts could also be used to generate ways to mitigate them. Again, it will empower defenders as well as attackers, so the Good Guys will also benefit from AI. He thinks we should have a “permanent Operation Warp Speed” where governments use AI to help create vaccines for diseases that don’t exist yet.

Harris asks about the asymmetry that gives a natural advantage to the attacker, meaning the Bad Guys will be able to do disproportionate damage before being stopped. Suicide bombers are an example. Andreessen disagrees and says that we could stop suicide bombers by having bomb-sniffing dogs and scanners in all public places. Technology could solve the problem.

I think that is a bad example, and it actually strengthens Harris’ claim about there being a natural asymmetry. One deranged person who wants to blow himself up in a public place needs only a few hundred dollars to make a backpack bomb; the economic damage from a successful attack would be in the millions of dollars; and emplacing scanners and dogs in every public place to stop suicide bombers like him early would cost billions of dollars. Harris is right that the law of entropy makes it easier to make a mess than to clean one up.

This leads me to flesh out my vision of a human-machine war more. As I wrote previously, 1) the two sides will not be purely humans or purely machines and 2) the human side will probably have an insurmountable advantage thanks to Andreessen’s thermodynamic objection (most resources, infrastructure, AIs, and robots will remain under human control). I now also believe that 3) a hostile AGI will nonetheless be able to cause major damage before it is defeated or driven into the figurative wilderness. Something on the scale of 9/11, a major natural disaster, or the COVID-19 pandemic is what I imagine.

Harris says Andreessen underestimates the odds of mass technological unemployment in his essay. Harris describes a scenario where automation raises the standard of living for everyone, as Andreessen believes will happen, but raises it far more for the richest humans than for everyone else, and where wealth inequality sharply increases because rich capitalists own all the machines. This state of affairs would probably lead to political upheaval and popular revolt.

Andreessen responds that Karl Marx predicted the same thing long ago, but was wrong. Harris responds that this time could be different because AIs would be able to replace human intelligence, which would leave us nowhere to go on the job skills ladder. If machines can do physical labor AND mental labor better than humans, then what is left for us to do?

I agree with Harris’ point. While it’s true that every past scare about technology rendering human workers obsolete has failed, that trend isn’t sure to continue forever. The existence of chronically unemployed people right now gives insights into how ALL humans could someday be out of work. Imagine you’re a frail, slow, 90-year-old who is confined to a wheelchair and has dementia. Even if you really wanted a job, you wouldn’t be able to find one in a market economy since younger, healthier people can perform physical AND mental labor better and faster than you. By the end of this century, I believe machines will hold physical and mental advantages over most humans of a similar magnitude. In that future, what jobs would it make sense for us to do? Yes, new types of jobs will be created as older jobs are automated, but, at a certain point, wouldn’t machines be able to retrain for the new jobs faster than humans and to also do them better than humans?

Andreessen returns to Harris’ earlier claim about AI increasing wealth inequality, which would translate into disparities in standards of living that would make the masses so jealous and mad that they would revolt. He says it’s unlikely since, as we can see today, having a billion dollars does not grant access to things that make one’s life 10,000 times better than the life of someone who only has $100,000. For example, Elon Musk’s smartphone is not better than a smartphone owned by an average person. Technology is a democratizing force because it always makes sense for the rich and smart people who make or discover it first to sell it to everyone else. The same is happening with AI now. The richest person can’t pay any amount of money to get access to something better than GPT-4, which is accessible for a fee that ordinary people can pay.

I agree with Andreessen’s point. A solid body of scientific data show that money’s effect on wellbeing is subject to the law of diminishing returns: If you have no job and make $0 per year, getting a job that pays $20,000 per year massively improves your life. However, going from a $100,000 salary to $120,000 isn’t felt nearly as much. And a billionaire doesn’t notice when his net worth increases by $20,000 at all. This relationship will hold true even in the distant future when people can get access to advanced technologies like AGI, space ships and life extension treatments.
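
One simple way to picture that diminishing-returns curve is to model wellbeing as growing with the logarithm of income. That is a common simplifying assumption, not a law, and the dollar figures below roughly follow the paragraph above (using $2,000 rather than $0 as the starting income, since the logarithm of zero is undefined).

# Sketch of diminishing returns: if wellbeing grows roughly with the logarithm of income
# (a simplifying assumption, not a law), the same $20,000 gain matters far less the richer
# you already are.
import math

def wellbeing_gain(income_before: float, income_after: float) -> float:
    return math.log(income_after) - math.log(income_before)

print(wellbeing_gain(2_000, 22_000))                 # near-zero income -> $20k job: huge gain
print(wellbeing_gain(100_000, 120_000))              # $100k -> $120k salary: much smaller gain
print(wellbeing_gain(1_000_000_000, 1_000_020_000))  # billionaire gaining $20k: negligible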

Speaking of life extension, Andreessen’s point about technology being a democratizing force is also something I noted in my review of Elysium. Contrary to the film’s depiction, it wouldn’t make sense for rich people to hoard life extension technology for themselves. At least one of them would defect from the group and sell it to the poor people on Earth so he could get even richer.

Harris asks whether Andreessen sees any potential for a sharp increase in wealth inequality in the U.S. over the next 10-20 years thanks to the rise of AI and the tribal motivations of our politicians and people. Andreessen says that government red tape and unions will prevent most humans from losing their jobs. AI will destroy categories of jobs that are non-government, non-unionized, and lack strong political backing, but everyone will still benefit from lower prices for goods and services. AI will make everything 10x to 100x cheaper, which will boost standards of living even if incomes stay flat.

Here and in his essay, Andreessen convinces me that mass technological unemployment and existential AI threats are farther in the future than I had assumed, but not that they can’t happen. Also, even if goods get 100x cheaper thanks to machines doing all the work, where would a human get even $1 to buy anything if he doesn’t have a job? The only possible answer is government-mandated wealth transfers from machines and the human capitalists that own them. In that scenario, the vast majority of the human race would be economic parasites that consumed resources while generating nothing of at least equal value in return, and some AGI or powerful human will inevitably conclude that the world would be better off if we were deleted from the equation. Also, what happens once AIs and robots gain the right to buy and own things, and get so numerous that they can replace humans as a customer base?

I agree with Andreessen that the U.S. should allow continued AI development, but shouldn’t let a few big tech companies lock in their power by persuading Washington to enact “AI safety laws” that give them regulatory capture. In fact, I agree with all his closing recommendations in the “What Is To Be Done?” section of his essay.

This debate between Harris and Andreessen was enlightening for me, even though Andreessen dodged some of his opponent’s questions. It was interesting to see how their different perspectives on the issue of AI safety were shaped by their different professional backgrounds. Andreessen is less threatened by AIs because he, as an engineer, has a better understanding of how LLMs work and how many technical problems an AI bent on destroying humans would face in the real world. Harris feels more threatened because he, as a philosopher, lives in a world of thought experiments and abstract logical deductions that lead to the inevitable supremacy of AIs over humans.

Links:

  1. The first half of the podcast (you have to be a subscriber to hear all two hours of it).
    https://youtu.be/QMnH6KYNuWg
  2. A website Andreessen mentioned that backs his claim that technological innovation has slowed down more than people realize.
    https://wtfhappenedin1971.com/