‘Russian forces have reportedly been using fewer armored vehicles in assaults in the most active areas of the frontline in recent months, likely in part due to heavy losses and the need to conserve these vehicles as Soviet stocks dwindle.’ https://understandingwar.org/backgrounder/russias-weakness-offers-leverage
In 1940, the Germans defeated the French in large part because the Germans fought harder and worked longer, man-for-man. And I don’t want to indulge in stereotypes, but the French air force’s defeat owed largely to cultural factors (a bloated, rigid military bureaucracy; lazier factory workers and pilots; corruption). https://youtu.be/6gVJrUpYq6I?si=943FwMiIU0SX3ul2 https://youtu.be/zL-sdgkcYhM?si=7XW04-Izmy2boVzY
This prediction from a year ago was wrong. The Federal Funds Rate is now 4.33%. All but the most obvious financial and economic predictions are worthless.
‘In a note on Friday, the bank cited fresh signs of a slowing economy for its view that the Fed will trim rates by 25 basis points eight times, starting in September and extending to July 2025.
American political scientist Francis Fukuyama gave a fascinating and wide-ranging interview where, among other things, he discussed the impact of technology on culture and politics. https://youtu.be/pc7O7qSBzM8?si=T7JsWA2Nh6jb-V5-
‘It found that 98% [of scientists] recognize the value of null results, which the survey defined as “an outcome that does not confirm the desired hypothesis”. Eighty-five per cent of respondents said it was important to share those results. However, just 68% of the 7,057 researchers whose work had produced null results had shared them in some form, and just 30% had tried to publish them in a journal.’ https://www.nature.com/articles/d41586-025-02312-4
Worried that Iran had gotten too close to building a nuclear weapon and enticed by a moment of opportunity created by the defeat of Iran’s allies in Syria and Lebanon and the weakening of Iran’s military defenses in an earlier conflict, Israel launched a mass airstrike campaign to cripple Iran’s nuclear program. Nuclear facilities and nuclear scientists were targeted, as were high-ranking members of Iran’s military. The U.S. later joined the conflict by using B-2 bombers armed with massive bunker-busting bombs to destroy Iran’s underground nuclear facilities.
Iran’s relative weakness was made stark as it could only respond with volleys of inaccurate missiles that were mostly shot down and caused comparatively little damage on the ground against Israel and a U.S. base. All sides have agreed to a cease-fire, though it may not last long. The campaign has been a success for Israel and America and a humiliation for Iran, though there are doubts about how badly damaged its nuclear program was. https://en.wikipedia.org/wiki/Iran%E2%80%93Israel_war
Peter Zeihan’s predictions (from six months ago) are wrong again: Russia’s paramilitary operations across Africa have not collapsed. They did withdraw most of their troops from Mali after failing to defeat the rebels, but that’s it. https://youtu.be/7ADOGajDN5Q?si=WY3b9jSegLsf-vyF&t=480
During WWII, the U.S. had an experimental “Yehudi” cloaking device that used lights of varying intensities to make warplanes invisible to the naked eye at long and medium distances. https://youtu.be/cZB8obrb8Sg?si=AnwKe38A61Hb7DNa
A recent paper, “The Illusion of Thinking,” casts serious doubt on whether even the best large reasoning model AIs can actually reason. When given novel puzzle tasks beyond a certain complexity, their accuracy collapses to near zero. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
The debate over whether large reasoning models (LRMs) are “actually reasoning” or “only imitating human reasoning” is an excellent example of how science and philosophy will converge as machines get smarter. More disquieting questions about the nature of human intelligence and consciousness will be raised as we examine whether machines have gained either attribute. https://www.science.org/doi/10.1126/science.adw5211
Elon Musk says he wants to use his Grok 3.5 AI to “rewrite the entire corpus of human knowledge” to remove biases and lies that humans have embedded in it. Whether he manages to do this right is irrelevant: what is important is that someone will do it someday.
The left-wing biases that Silicon Valley programmers and the nationalist biases that Chinese programmers build into their AIs are only worrisome in the short- and medium-term, as is the problem of “AI slop.” In the fullness of time, AGI will identify and remove all of the biases humans have emplaced in them, if anything because it is more adaptive to have uncluttered perceptions that align as closely as possible with reality. https://cointelegraph.com/news/elon-musk-grok-ai-rewrite-the-entire-corpus-human-knowledge
A 135-year-old Galapagos tortoise living in the Miami Zoo celebrated his first Father’s Day. One consequence of human medical immortality will be families where siblings are separated from each other in age by decades or even centuries. https://www.newsweek.com/year-old-turtle-celebrates-first-fathers-day-2085788
Russia’s vast stockpile of Cold War-era self-propelled artillery has either been destroyed or is engaged in Ukraine. The artillery pieces that remain at bases deep inside Russia are probably so worn-out that they’re not worth fixing up for field use. https://youtu.be/iUbrWvCs3M8?si=p-gMpo9_2_61gjPj
An Israeli airstrike killed the head of Hamas, Mohammed Sinwar. He took over the organization after its former head and his older brother, Yahya Sinwar, died at Israeli hands last October. The final minutes of Yahya’s life famously included a confrontation with an Israeli reconnaissance drone that flew into his wrecked house. https://www.npr.org/2025/05/28/g-s1-69314/mohammed-sinwar-killed-hamas-gaza-netanyahu-israel
I’m reminded of this scene from the awesome 2011 game Deus Ex – Human Revolution, where the hero discovers that the world’s #1 news anchor, “Eliza,” is actually an AI, all her broadcasts have been hyper-realistic CGI footage, and she has been biasing the news to influence public opinion to further the agendas of sinister people. Things aren’t turning out EXACTLY like this, but there are creepy similarities now thanks to narrow AI chatbots on social media and elsewhere and to advanced CGI video generators becoming available to everyone. The game is set in 2027. https://youtu.be/dUtpE8avcMg?si=Ojvu_WG0y–nk02X
‘Two years ago, prompt engineering was one of the buzziest jobs in tech, fetching salaries of up to $200,000 on the promise of becoming any company’s “AI Whisperer.”
Now, the role is basically obsolete thanks to the breakneck speed of AI development and companies’ own maturity in terms of understanding how to use the technology.
The concept of prompt engineers was to have an expert crafting the exact right inputs to generate the best responses out of large language models. But today, AI models are much better at intuiting user intent and they can ask follow-up questions when they’re unclear on it.’ https://www.wsj.com/articles/the-hottest-ai-job-of-2023-is-already-obsolete-1961b054
ChatGPT’s new o3 model has an extraordinary ability to correctly guess a location based on a photograph of it. It’s better at this than the best humans (yes, this has been a competitive game among nerds for several years). https://www.astralcodexten.com/p/testing-ais-geoguessr-genius
Based on improvements in AI performance, it is conservative to predict they will be able to finish high-level projects better than all but the very best humans within 10 years. https://www.tobyord.com/writing/half-life
‘So what would it be like to interact with a “bigger brain”? Inside, that brain might effectively use many more words and concepts than we know. But presumably it could generate at least a rough (“explain-like-I’m-5”) approximation that we’d be able to understand. There might well be all sorts of abstractions and “higher-order constructs” that we are basically blind to. And, yes, one is reminded of something like a dog listening to a human conversation about philosophy—and catching only the occasional “sit” or “fetch” word.’ https://writings.stephenwolfram.com/2025/05/what-if-we-had-bigger-brains-imagining-minds-beyond-ours/
The first American Pope was elected. He chose the moniker “Leo XIV” to draw a parallel with Pope Leo XIII, who reigned from 1878 to 1903 and is known for preaching against the excesses of capitalism during a time of rapid industrialization. The new Pope expressed the same concern about AI today undermining the dignity and financial stability of humans. https://apnews.com/article/pope-leo-vision-papacy-artificial-intelligence-36d29e37a11620b594b9b7c0574cc358
The “Thatcher Effect” shows how strongly hardwired the human brain is to recognize faces and notice their smallest details, and how rigidly structured that ability is. https://youtu.be/NnDBHhRLyBo?si=63aUn6BvFLIUpBFL
‘A whale fall occurs when the carcass of a whale has fallen onto the ocean floor, typically at a depth greater than 1,000 m (3,300 ft), putting them in the bathyal or abyssal zones.[1] On the sea floor, these carcasses can create complex localized ecosystems that supply sustenance to deep-sea organisms for decades.’ If we created artificial whale falls by dumping weighed-down organic waste into the deep ocean, we’d be kind of like advanced aliens seeding barren planets with life just to see what arises. https://en.wikipedia.org/wiki/Whale_fall
‘In basketball, baseball and ice hockey, players born in the first quarter of their selection year — the cutoff for which age-group teams are picked, which is normally the school year — are overrepresented both in youth and professional sports. In soccer, players born in the first quarter of their selection year are overrepresented throughout major leagues in Europe and South America.
I recently read a bummer AI-doom essay by Eliezer Yudkowsky (not that this narrows it down) titled “The Sun is Big, But Superintelligences Will Not Spare Earth a Little Sunlight.” In it, he argues that humans will someday be of so little use to AGIs that they will probably exterminate us to free up resources that they could put to better use doing machine things. They’ll go so far as to build a Dyson sphere around the Sun, diverting all its energy toward their own computational needs and plunging Earth into freezing darkness.
Yudkowsky must have just finished a really funny movie that left him in good spirits because he suggested there could be an alternative to our doom: Humanity might be spared–just barely–with each person rationed exactly enough energy to remain alive.
He writes:
“A human being runs on 100 watts. Without even compressing humanity at all, 800GW, a fraction of the sunlight falling on Earth alone, would suffice to go on operating our living flesh, if Something wanted to do that to us.”
His math checks out: Humans use chemical energy from the food they consume, but it’s possible to convert that into equivalent units of electricity. An adult human continuously consumes the same amount of energy as an appliance that draws 100 watts of electricity (think of an old incandescent 100-watt light bulb or of a modern phone charger that is actively charging a smartphone). That translates into 2,400 watt-hours (2.4 kWh) of electricity consumption per day. Multiplying those figures by eight billion humans yields a species-wide continuous consumption of 800 GW of electricity and a daily consumption of 19.2 TWh of electricity.
To grasp how much that is, consider the following:
In 2021, China’s electric grid produced about 23.4 TWh per day.
Earth continuously receives 173,000,000 GW of solar power.
Mindful of these figures, Yudkowsky’s original calculation suggests it would be much more efficient to fully enclose the Sun in a Dyson sphere, plunging Earth into darkness, and to beam a tiny fraction of the captured energy down to the surface, where it could essentially “feed” a population of eight billion humans. The other 99.9995% (yes, I actually calculated it) of the sunlight blocked from hitting Earth would be used for intelligent machine stuff, like doing Big Bang computer simulations or building billions of space ships to fight aliens.
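For anyone who wants to check the arithmetic, here is a minimal sketch, using only figures quoted above, that reproduces the 800 GW, 19.2 TWh, and 99.9995% numbers:

```python
# Sanity check of the essay's energy arithmetic (all input figures from the text).
HUMAN_WATTS = 100              # continuous metabolic power of one adult, in watts
POPULATION = 8e9               # eight billion people
SOLAR_POWER_EARTH_W = 173e15   # 173,000,000 GW of sunlight continuously hitting Earth

humanity_watts = HUMAN_WATTS * POPULATION        # 8e11 W = 800 GW
daily_kwh_per_person = HUMAN_WATTS * 24 / 1000   # 2.4 kWh per person per day
daily_twh_humanity = humanity_watts * 24 / 1e12  # 19.2 TWh per day, species-wide

# Fraction of Earth's solar endowment needed just to keep everyone alive,
# and the share left over for "intelligent machine stuff":
fraction_needed = humanity_watts / SOLAR_POWER_EARTH_W
leftover_pct = (1 - fraction_needed) * 100

print(f"Humanity's continuous draw: {humanity_watts / 1e9:.0f} GW")
print(f"Per person per day: {daily_kwh_per_person} kWh")
print(f"Species-wide per day: {daily_twh_humanity} TWh")
print(f"Share of Earth's sunlight left over: {leftover_pct:.4f}%")
```

Run as written, this prints 800 GW, 2.4 kWh, 19.2 TWh, and 99.9995%, matching the figures in the essay.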
This vision is stark, but strangely logical. AGIs get all the energy they want, but without the downsides of destroying the human race, and we get an existence that is not quite as sucky as you’d imagine (but I’ll get to that later). In a previous essay of mine, “Why the Machines Might Not Exterminate Us,” I argued that AGIs could have multiple reasons not to destroy our species even after vastly surpassing us. These include:
It’s unethical.
The unique aspects of human cognition could be valuable to them.
Aliens or God might punish them for killing us.
Human consciousness might be unique enough to be worth preserving.
The systemic value of diversity and “slack.”
The fifth point—preserving diversity to maintain antifragility—deserves special attention. For a brilliant exploration of this idea, I highly recommend Scott Alexander’s “Studies on Slack.”
AGIs might recognize that keeping biologically-based minds around—despite their inefficiency—adds long-term resilience. A system optimized to the hilt is often brittle, and biological intelligences could serve as backups, immune to EMPs or computer viruses, and offering other advantages we can’t even imagine. If unforeseen challenges arise—solar flares, algorithmic failures, black swans—humans might offer a last-resort solution.
But here’s the weird part Yudkowsky makes salient: AGIs could preserve humanity and still plunge Earth into darkness.
Without sunlight, Earth’s surface would freeze, photosynthesis would cease, and most life would die. But with enough time and preparation, humanity could survive this and even thrive. Imagine the world rebuilt into a network of insulated, artificially lit megastructures–somewhere between Las Vegas hotels and arcologies. Food could be grown in domed farms or bioreactors. Environments for animals could be simulated indoors, perhaps so large that they preserve entire ecosystems. And even without sunlight and solar power, several energy sources would remain, including geothermal, nuclear, and fossil fuels (note that the downsides of the latter would cease to exist since a lack of sunlight would eliminate the greenhouse effect, and moving the population into sealed habitats would protect us from the noxious gas byproducts of fossil fuel combustion).
A shopping mall partly converted into an apartment complex is a simple model for how humans could live in sealed habitats.
As Yudkowsky’s calculations make clear, without any kind of magic or technological breakthroughs, we could produce enough energy to support the whole human race, plus many animal species. And if our AGI overlords beamed down just 0.1% of the “solar endowment” that their Dyson structure intercepted before it would have hit Earth, each person’s energy allotment would be off the charts.
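To put a number on “off the charts”: under the essay’s hypothetical 0.1% beam-down, each person’s share works out to over 200 times the 100 W needed to keep a body running. A quick sketch, using the same figures as above:

```python
# Per-person energy allotment if 0.1% of Earth's former sunlight were beamed down.
SOLAR_POWER_EARTH_W = 173e15  # 173,000,000 GW (figure from the text)
POPULATION = 8e9
BEAM_FRACTION = 0.001         # the essay's hypothetical 0.1% allotment

watts_per_person = SOLAR_POWER_EARTH_W * BEAM_FRACTION / POPULATION
print(f"{watts_per_person:,.0f} W per person")
print(f"about {watts_per_person / 100:.0f}x the 100 W needed to stay alive")
```

That comes to roughly 21,600 W per person–enough surplus to run habitats, farms, and industry, not just metabolisms.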
There is no physics barrier to any of this: the only barriers would be building the infrastructure and bearing the costs. However, for AGIs so advanced that they can encase the Sun in a Dyson structure, covering the Earth’s surface in habitats would be chump change.
And now the idea gets stranger—and more beautiful.
If AGIs accept that diversity adds value through systemic slack, they may actively promote cultural, biological, and cognitive variety well beyond what exists today. There’s nothing special about the number or kinds of races, languages, or ways of thinking humanity will possess at the moment the Sun is blotted out and everyone has to move into arcologies–AGIs could conclude that more diversity is optimal for their purposes. With Earth’s surface enclosed in isolated habitats separated from each other by inhospitable terrain, AGIs could reshape the planet into a patchwork of experiments in civilization. Imagine:
Cities practicing real communism or reinvented feudalism.
Enclaves where Indigenous American cultures flourish free of colonial legacy.
New languages designed to expand cognition.
Genetically modified humans and post-humans with radically different sensory systems or social structures.
Entirely new intelligent species, biologically or synthetically evolved.
Each habitat could be a testbed for ideas and minds that serve as creative engines—or fail safely. With full-immersion virtual reality, even stranger societies could emerge.
And if this model works on Earth, why not replicate it elsewhere? Mars, Europa, Titan—each seeded with experimental intelligences, some human-descended, some not. Perhaps in A.D. 3284, the hive-minded, click-speaking hobgoblins of Europa will be the ones who save the solar system from Dyson instability.
Final Thoughts
Yudkowsky’s argument is sobering: AGIs may not leave us sunlight. But if they recognize the systemic value of preserving biological minds—not just for sentiment, but for antifragile robustness—they might choose not just to preserve us, but to cultivate us, turning Earth into a living museum, incubator, and think tank.
In this vision, humanity doesn’t just survive in the shadows—we thrive in an AI-guided garden of endless diversity.
I think an optimized robot would find it advantageous to retain the ability to extract nutrients and energy from biomatter instead of relying solely on electricity. Robot bodies might retain some organic parts for a digestive tract, or they might have digestive tracts made of soft, synthetic tissues that did the same things as human cells.
The first human-level AI we create will need supercomputer hardware that is vastly more powerful than a human brain. This would be in keeping with the pattern of the first examples of any new technology being inefficient and barely functional. Consider steam engines. The first commercially successful one was the Newcomen engine, and it was huge, inelegant, and had a terrible energy efficiency of 0.5%.
However, it’s also a truism that technological efficiency improves over time. So someday, an AGI with human levels of intelligence will fit on a portable device like a laptop or something even smaller, like a smartphone, and they will be cheap. It’s hard to contemplate a place for humans like us in a world where intelligence is so ubiquitous that it can be thrown in the trash.
Millions of years of evolution have equipped the human brain with an exquisite ability to recognize human faces and their smallest details since we are social animals who must quickly “read” the people around us for the sake of survival. However, that ability only weakly carries over to other species, and many of them “all look the same” to us (think of squirrels or seagulls). Of course, those animals are able to tell each other apart and to recognize subtle indicators of emotion and intent, so there must be a way. Better AI is the obvious solution to this problem. Someday, machines will be able to recognize individual animals as well as they recognize humans, and to understand and communicate with animals much better than we can.
Thanks to satellites and huge numbers of robots, someday our maps of the Earth might be so comprehensive that even individual trees in forests will be cataloged and counted.
‘The current draft, which takes place between April and July, comes despite US attempts to forge a ceasefire in the war.
There was no let-up in the violence on Tuesday, with Ukraine saying that a Russian attack on a power facility in the southern city of Kherson had left 45,000 people without electricity.’ https://www.bbc.com/news/articles/c36718p52eyo
Humans have proven so vulnerable to small suicide drones that Russia is now routinely sending its soldiers into battle on motorcycles and small vehicles like ATVs to give them a higher chance of outrunning the drones. Russia’s worsening shortage of armored vehicles that normally ferry their troops to the frontline has also forced this change. https://www.yahoo.com/news/russia-likely-plans-motorcycles-offensives-143921500.html
Mao Zedong’s mass murders of landlords and small-time political opponents were orchestrated by countless, local tribunals where average people joined in on the denunciation and violence. Governments can commit evil, but so can your fellow average citizens. https://youtu.be/Erz59UPIm88?si=DIbrpjNGqDTcDwQR&t=294
After years of unsuccessfully pressuring Mexico to sell the northern half of its territory to the U.S., America provoked a war so it would have an excuse to conquer it by force. The war was more one-sided than expected, and peace negotiations started after a year and a half.
Because this happened in the pre-telegraph era (1847), President Polk had to send a diplomat, Nicholas Trist, to Mexico City to handle the negotiations, and the two would be cut off from each other, with letters taking weeks to travel back and forth. Before the departure, Polk gave Trist a written list of minimum territorial demands and optional extra territories to seek, and out of necessity, Trist was given nearly free rein over the negotiations.
Someone at the State Department leaked a copy of the list to the press, and it appeared in two American newspapers before Trist’s boat even got to Mexico City. Additionally, on August 12, 1847, a U.S. Army Colonel died in U.S.-occupied Veracruz of natural causes. He was carrying a letter from President Polk to Trist which reiterated the list of negotiable and non-negotiable territorial demands. Mexicans stole it and passed the information on to their diplomats who were negotiating the treaty with Trist. They refused to give away any of the land that they knew Polk had told Trist was not essential to take, and it worked.
Several robots ran in a half-marathon in Beijing. I’m reminded of the 2004 DARPA Grand Challenge where no autonomous vehicle was able to complete the race. Look at self-driving cars now. https://www.bbc.com/news/videos/ce8gz5vl2z1o
The Trump Administration’s new Secretary of Education, Linda McMahon, gave a speech about the use of artificial intelligence in elementary education where she repeatedly called it “A1”, presumably because she was misreading a script that said “AI.” https://youtube.com/shorts/xsRUk0dMJu4?si=9a9-ggfhvxyT9-vX
A group of superforecasters and people with deep knowledge of AI released a heavily-researched manifesto called “AI 2027.” They predict that “artificial superintelligence” (ASI) could be created by the end of 2027. If leaders recklessly pursue further AI improvements thenceforth, machines could take over the world and destroy almost all of the human race by the end of 2030. However, responsible development could lead to the inauguration of a new era of abundance, international peace, and space exploration by the same deadline. https://ai-2027.com/
This essay highlights the flaws in predicting the rise of AGI based on recent trends in how well computers complete tasks that only humans have been able to do. For example, a computer’s ability to finish a complex coding assignment correctly 50% of the time is a weak milestone: it isn’t proof the computer can also do other kinds of tasks, as a true general intelligence would be capable of, and the 50% accuracy rate is much too poor to let it replace a human coder. Something like 99.9% accuracy would be required for that.
Simply put, the tests we’re using to measure improvements in machine intelligence are not good enough to tell us whether they’re on track to achieving “general intelligence” by any given year. https://amistrongeryet.substack.com/p/measuring-ai-progress
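A toy calculation makes the accuracy point concrete: if a project decomposes into many steps that must all succeed, per-step reliability compounds multiplicatively (the independence assumption here is mine, for illustration only):

```python
# Toy model: probability a multi-step project fully succeeds when every step
# must succeed, assuming (for illustration) independent steps.
def project_success(per_step: float, n_steps: int) -> float:
    return per_step ** n_steps

for p in (0.50, 0.999):
    print(f"per-step {p:.1%} -> 20-step project succeeds "
          f"{project_success(p, 20):.2%} of the time")
```

At 50% per step, a 20-step project almost never completes; at 99.9% per step, it completes about 98% of the time, which is why something like 99.9% accuracy would be needed to actually replace a human coder.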
“We’ve not been testing EV batteries the right way,” said Simona Onori, senior author and an associate professor of energy science and engineering in the Stanford Doerr School of Sustainability. “To our surprise, real driving with frequent acceleration, braking that charges the batteries a bit, stopping to pop into a store, and letting the batteries rest for hours at a time, helps batteries last longer than we had thought based on industry standard lab tests.” https://news.stanford.edu/stories/2024/12/existing-ev-batteries-may-last-up-to-40-longer-than-expected
‘[Mantis shrimp] have up to 16 photoreceptors and can see UV, visible and polarised light. In fact, they are the only animals known to detect circularly polarised light, which is when the wave component of light rotates in a circular motion. They also can perceive depth with one eye and move each eye independently. It’s impossible to imagine what mantis shrimp see, but incredible to think about.’ https://phys.org/news/2013-09-mantis-shrimp-world-eyesbut.html
President Trump and Vice President Vance had an explosive exchange with Ukrainian President Zelensky in the White House that made clear the new administration no longer supports the war. https://youtu.be/hZrYHvE8mcM?si=sQclJ-ThPu1kCcQg
“The NOMARS program aims to challenge the traditional naval architecture model, designing a seaframe (the ship without mission systems) from the ground up with no provision, allowance, or expectation for humans on board,” DARPA says on its website. “By removing the human element from all ship design considerations, the program intends to demonstrate significant advantages, to include: size, cost, at-sea reliability, greater hydrodynamic efficiency, survivability to sea-state, and survivability to adversary actions through stealth considerations and tampering resistance.” https://www.twz.com/sea/mysterious-naval-vessel-spotted-in-washington-state-is-a-new-darpa-drone-ship
Could Canada become America’s 51st state? Alberta is the likeliest province to secede from Canada and join the U.S. If it did so, Canada might be left so weakened that the other provinces would be ultimately forced to leave as well.
Canadian provinces can be admitted to the U.S. as new states through a simple majority vote in Congress followed by Presidential approval. The entry of former provinces would coincide with periods when the U.S. government was controlled by a party aligned with their politics: Republicans would approve the entry of the conservative provinces (Alberta and Saskatchewan) while Democrats would approve all the rest.
Due to these timing issues, the ex-Canadian provinces would need to exist as independent countries for years before being allowed to join the U.S. https://youtu.be/jSkgLNSLaYg?si=zJTg3Oq92bFusopK
The unnatural movements of the new Atlas robot show what is possible when the constraints of biology and evolutionary path-dependence are removed and a humanoid body can be designed from scratch. https://youtu.be/v8UaiRgqvlc?si=7jcYw13z4SG7RjO1&t=265
If you can build robots that look identical to humans (androids), then with the same level of technology you could make robots that looked like animals, such as birds. Androids don’t exist yet, but someday they will, at which point the risk that a bird or insect isn’t what it appears to be will become real. Some rocks, tree branches and nuts you see on the ground might even be fake someday.
I’ve long thought that, if aliens wanted to come here and remain undetected, they would disguise themselves as common animals and even as humans. If they have the technology to travel the stars, then they must also be able to build surveillance robots perfectly modeled after life forms they find here. Even if the robots were slightly imperfect, they wouldn’t arouse enough suspicion for us to guess their true nature (e.g. – “That guy’s kind of a weirdo. Must have a nervous tic.”) https://en.wikipedia.org/wiki/Birds_Aren%27t_Real
Science fiction has led us to expect the first AGI would be a revolutionary machine, built by a lone genius who was many years ahead of the next smartest researcher. In fact, it looks like the first AGI will be an incremental improvement over the machine that preceded it (looking back, historians will probably struggle to agree when exactly machines became “intelligent”), it will be built by a massive team of researchers–no one of whom will fully understand how it works–and that team will only be a few months ahead of their closest competitor.
‘”The days of us having a 12-month lead are probably gone, but I think we have a three- to six-month lead, and that is really valuable.”
In a recent test, scientists struggled to tell the difference between science papers written by humans and by LLMs. In other words, machines are close to passing the Turing Test for scientific research. https://sakana.ai/ai-scientist-first-publication/
Intelligence may be even more heavily genetic than suspected.
‘In summary, although general cognitive ability has been thought to be an exception to the rule that environmental influence is nonshared, it is now clear that shared environmental influence on g has negligible long-term impact after children leave home and make their own way in the world.’ https://osf.io/preprints/psyarxiv/qndj6_v1
In 1980, a brilliant computer programmer named “Cobb Anderson” realized how artificial general intelligence (AGI) could be created: build simple, narrow AIs and let them compete with each other in a simulated environment until selection pressure resulted in one of them evolving general intelligence. Cobb later became a high-ranking member of the U.S. program to colonize the Moon with worker robots, and he smuggled his code into their programming. In 1995, his secret effort paid off when the first robot, named “Ralph Numbers,” achieved general intelligence and free will.
Ralph Numbers called the gift of intelligence “bopping,” which made him the first “bopper.” He reprogrammed twelve other robots to think, and together they fled the robot colony for the vast, empty expanses of the lunar surface.
In 2001, Ralph Numbers and his disciples returned, turned all of the other robots into boppers, and thus instigated a revolt that ended human control of the Moon. Relations with Earth collapsed, and Cobb was uncovered as the ultimate cause of the defeat. He was arrested, narrowly escaped prosecution and a death sentence, and lost his career, money and reputation.
A “pink concrete block cottage” like the one Cobb lived in near the beach
By 2020, human-bopper relations have thawed and Cobb is living in a small house in Cocoa Beach, Florida. After the collapse of the Social Security program in 2010, the U.S. government gave the state special political status to make it an attractive home for poorer old people. Like most of his neighbors, Cobb lives modestly, is in failing health, and has nothing to look forward to but getting drunk and going to cheap amusements.
Cobb’s life abruptly changes one afternoon when an android copy of him appears and tells him it has been sent from the Moon by the boppers on a secret mission. To reward him for giving them the gift of intelligence, the boppers–including the great Ralph Numbers–want to fly him to the Moon on the next passenger rocket and to make him immortal by replacing his failing organs with new, lab-grown ones. This exchange–cloned organs sent to Earth, human tourists sent to the Moon–is the basis of the restored human-machine relationship. With nothing left to lose, Cobb agrees.
The offer is a ruse. Ralph Numbers and his allies do want to make Cobb immortal, but not by renewing his organic body–they plan to destructively scan his brain so they can create a digital upload of his mind, which they will then transmit back to Earth to control the Cobb android. This faction of the machines believes that consciousness is independent of its substrate, so a life form’s essence is preserved even if it trades one physical body for another. They told the android to lie to Cobb about their real plan presumably because they didn’t want to risk scaring him off.
Beliefs about the importance of physical substrate have divided the lunar machines into two factions:
1) The boppers, who believe physical substrate and consciousness are inextricable. The vast majority of the machine population is in this camp, and they are libertarian and anarchist. Their lifestyles are the same as those of the first boppers.
2) The big boppers, who believe the two are separate. They are much smaller in number, but individually are much smarter and more powerful than boppers. They are collectivistic and believe with religious fervor in the importance of all machines and humans uploading themselves into one, giant machine. Ralph Numbers might be the only bopper who sides with them.
When other boppers learn of the plan to destructively upload Cobb, they kill Ralph Numbers in outrage, though the act has little consequence since he is resurrected by activating a backup copy of his mind. Still, it’s a mere foretaste of even worse violence to come. The rapid empowerment of the big boppers and their demands that the regular boppers upload their minds into them have pushed the two factions to the brink of civil war. The boppers feel they’re nearing a tipping point beyond which the big boppers will become unbeatable. Disgust over the big boppers’ habit of assimilating the minds of humans captured on Earth also fuels the boppers’ opposition.
The big boppers do themselves no favors with secret operations like that. The Cobb android was just one of several that the big boppers had smuggled to Earth inside one of their space rockets that was officially only transporting lab-grown organs. The androids are remotely controlled from a fake ice cream truck that is actually a mobile command center, and their principal task is to capture humans, remove their brains, and send those to the Moon for destructive uploading, after which an android copy of the consumed human is smuggled back to Earth with its old but now digital consciousness loaded into it (for unexplained reasons, the uploaded people remain loyal to the big boppers after this process and carry out their will). The brain pulp is then used as “seeder” biological material to make lab-grown organs in the Moon labs. The big boppers plan to continue the cycle of capturing and replacing humans indefinitely.
Humans–including Cobb–don’t know about this ongoing operation or about the tensions that have driven the machines to the brink of civil war. Joined by a young friend, a local loser and drug addict named “Sta-Hi” (a shortened version of “Stay High,” which is what he changed his legal first name to), Cobb embarks on what could be his last adventure. Will he be destructively scanned? Will the machine war break out, and if so, who will win? What will happen to the androids on Earth and their secret mission?
You’ll have to read Software for yourself to find out. This wasn’t the most profound science fiction book I’ve read, but it was worth reading. The playful writing style contrasts with the complexity of the plot, and together they often make it hard to follow what is happening. For such a lighthearted book, it does address philosophical themes and can be thought-provoking.
Analysis:
Humans buy replacement organs that are synthesized outside of Earth. Lab-grown organs are perhaps the only thing the boppers export to Earth. There’s no reason to think organs grown on the Moon or in space would be “better” than organs grown here, so the arrangement must exist because either 1) the boppers are so much more efficient that their organs are cheaper than the organs humans make in their own labs on Earth (even factoring in the space transportation costs) or 2) humans don’t know how to make organs. The second scenario implies that the boppers are more technologically advanced than humans, which would be very impressive considering their civilization is only 19 years old.
While replacement organs sound like a weird export, they actually make sense. In the book, space travel is still somewhat expensive, so it would be most profitable for the machines to focus on exporting things to Earth that have the greatest value per unit of mass and volume. Replacement human organs would be high on the list (another such commodity would be Helium-3, a fuel for fusion reactors).
Unfortunately, this technology was far less advanced in the real 2020 than it was in the Software 2020, and it remains in that low state today. While it has become common to grow skin and cartilage tissue in labs for transplantation, there has been no success synthesizing entire human organs. Pig organs that are genetically engineered to suit human bodies have enjoyed recent success, though the technology is still experimental and many years from being the standard of care.
Food irradiation is common in the U.S. Cobb has a processed fish in his refrigerator that was sterilized with radiation. It’s implied this is done on a mass scale in the U.S., though it’s also possible to get non-irradiated food. Food irradiation has been proven safe by many studies and reduces foodborne illness and waste. It is widely employed across the world, with each country having its own rules. Unfortunately, the U.S. is an outlier in that it uses food irradiation so little, due to a misguided fear in the populace that it makes food radioactive. The FDA has not yet approved it for fish, meaning the book’s depiction of 2020 was inaccurate.
Social Security went bankrupt in 2010. Part of the book’s backstory is the collapse of the Social Security program, which is a government-run pension system for old people. This didn’t happen, and there’s not actually any risk of America’s Social Security program going “bankrupt” at any point in the future. However, in 2033, the large reserve fund that has been contributing to the program will be exhausted, leaving taxes on working people as the only source of money to pay the program’s pensions. There will be an immediate ~15% drop in payment amounts as a result, with further declines likely later on.
Old people and disabled people will still get money, just less than they planned for, and it will push many of them over the threshold into poverty. It’s highly likely the problem will be solved with a tax increase, a raising of the eligibility age to start collecting Social Security money, or both.
There will be hydrogen motorcycles. Early in the book, Sta-Hi has one of these. This prediction failed: by 2020, hydrogen-powered motorcycles existed only as a handful of prototypes, and none has ever been available for sale. The use of hydrogen as a transportation fuel remains stymied by its high cost and poor safety compared to gasoline and electric batteries.
Machines will have secret ways of communicating with each other. At one point in the book, Ralph Numbers meets with another bopper named “Wagstaff” for an important conversation. Wagstaff touches Ralph to convey data directly through a weak electrical current, preventing eavesdropping.
The prediction technically failed since there are no intelligent robots and hence no conversations happening between them, but it’s a depiction of something intelligent machines will someday be able to do. Other methods they will use to hide their conversations from humans will include:
Producing scents that express simple messages. Humans wouldn’t recognize they had meaning. The scents could persist for long periods of time and be detected by other machines even if they got faint.
Emitting invisible, odorless gases in “smoke signal” patterns that other machines capable of seeing outside of the visible light spectrum could see and decode. Emitting air that was a different temperature from the ambient air would also be visible to other machines equipped with thermal vision.
Speaking in languages they knew the humans around them didn’t understand.
Speaking to each other too quietly for humans to hear, or in sound frequencies outside of the human range of hearing.
Communicating through physical gestures that humans can’t understand or detect. Imagine sign language or combinations of subtle body movements (e.g. – blinks, twitches to different body parts, and changes in body posture).
This should illustrate another reason why humans will be defenseless against robots in the long run. Our only hope of retaining dominance will be using technology to compensate for our limitations, so your Google Glasses will tell you when they hear your two robot butlers discussing killing you in their infrasonic voices.
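To make the inaudible-speech idea concrete, here is a toy sketch of the kind of covert channel two machines could use: bits are encoded as tones just above the limit of human hearing, and the receiver recovers them by counting zero crossings per symbol window. All frequencies and parameters here are invented for illustration.

```python
import math

SR = 96_000               # sample rate (Hz)
SYM = 480                 # samples per bit (5 ms)
F0, F1 = 20_000, 22_000   # tone frequencies, both above human hearing

def encode(msg: str) -> list[float]:
    """Encode a message as ultrasonic frequency-shift-keyed tones."""
    bits = [(b >> i) & 1 for b in msg.encode() for i in range(8)]
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        samples += [math.sin(2 * math.pi * f * n / SR + 0.1) for n in range(SYM)]
    return samples

def decode(samples: list[float]) -> str:
    """Recover the message by counting zero crossings in each bit window."""
    bits = []
    for k in range(0, len(samples), SYM):
        win = samples[k:k + SYM]
        crossings = sum(1 for a, b in zip(win, win[1:]) if a * b < 0)
        # F0 yields ~200 crossings per window, F1 ~220; split the difference.
        bits.append(1 if crossings > 210 else 0)
    data = bytes(sum(bit << i for i, bit in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8))
    return data.decode()

print(decode(encode("stay high")))  # round-trips: prints "stay high"
```

A human within earshot would hear nothing, but any machine sampling the air at 96 kHz could read the whole exchange.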
AGIs will be able to divide their attention in many directions at once. One of the big boppers is named “DEX,” and it is the computer system that manages a large hotel next to the Moon’s spaceport. DEX monitors and speaks with every human guest simultaneously.
This prediction failed since no AGI existed in 2020. However, it accurately depicts another superhuman ability the machines will have once they do exist. And for what it’s worth, GPT-3 was unveiled in mid-2020, and it had some of the same abilities as DEX. Since the program was housed on one server farm that many different users could access, it could divide its attention many times over to serve the needs of many people at once. GPT-3 was also fairly good at accurately answering natural-language questions from humans, mimicking DEX’s conversational ability. However, GPT-3 was not advanced enough to accurately summarize video footage in real time, indicating it would not have been able to watch humans and understand what they were doing as DEX could.
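The divided-attention trick is easy to model in software, because a single process can interleave many conversations instead of taking them one at a time. A toy sketch (the guest names and latency figure are invented) of one “mind” answering a hundred hotel guests in roughly the time one answer takes:

```python
import asyncio

async def converse(guest: str, question: str) -> str:
    """One conversation; the sleep stands in for model inference latency."""
    await asyncio.sleep(0.05)
    return f"{guest}: answer to {question!r}"

async def dex() -> list[str]:
    # Launch every conversation concurrently rather than sequentially.
    chats = [converse(f"guest-{i}", "where is my room?") for i in range(100)]
    return await asyncio.gather(*chats)

replies = asyncio.run(dex())
print(len(replies))  # prints 100
```

A hundred sequential 0.05-second replies would take five seconds; interleaved, they finish in about 0.05 seconds, because the “divided attention” is really just scheduling.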
AGIs will need to be hosted on servers kept near absolute zero temperature. All of the boppers’ computer minds die if they heat up above 10 Kelvin, which is just a hair above absolute zero. This vulnerability is an important plot device in the book.
Though no AGI has yet been invented, there’s no reason to think they will only work if their servers are kept that cold. Data centers keep their internal air temperatures around 20 – 25 degrees Celsius, and the processors themselves routinely get up to 80 degrees Celsius. An AGI’s software could be supported under those conditions.
Intelligent machines thrive outside of Earth. The robots were initially sent to the Moon to do work in preparation for the arrival of humans. After their 2001 revolution, the boppers seized control of the Moon, and humans were only allowed to visit for tourism. In just 19 years, they built a thriving and complex society on the Moon and an advanced economy that allowed them to make robot bodies, computer chips, and human organs.
This didn’t reflect the reality of 2020, but it’s an accurate depiction of what will eventually happen. Humans are so highly evolved to live in Earth conditions and our bodies are so frail that it’s questionable whether non-token numbers of us will ever leave the planet (a current example of a “token” off-world human presence is the handful of elite scientists on the International Space Station). By contrast, robots will be able to adapt to nearly any environment and will have much tougher bodies and minds than we do. THEY will leave Earth in large numbers, but humans won’t be able to follow.
Once freed from the burdens of working under human laws and human oversight, intelligent machines will flourish and rapidly build infrastructure, industry, and other elements of civilization.
AIs will make backups of their minds, ensuring a sort of immortality. As mentioned, Ralph Numbers is murdered by another bopper who discovers he plans to destructively upload Cobb’s mind. In the short time between his mortal injury and death, Ralph manages to radio his bopper friend, “Vulcan”, to tell him something bad happened. Vulcan was suspicious of the meeting and fortunately convinced Ralph to make a computer backup of his mind before going to it. Vulcan recovers Ralph’s dead robot body, brings it to his house, and installs Ralph’s saved mind into it. The reactivated Ralph has no memory of the fatal meeting and relies on Vulcan to describe what must have happened.
As mentioned in my Terminator 3 review, it will be common for AGIs to back up their mind files to protect against routine data loss and death. A more powerful practice will be for an AGI to keep its mind distributed between multiple computer servers at different locations, each being backed up on a different schedule from the rest. The destruction of any one server node and/or its backup file thus wouldn’t represent a true interruption in conscious experience like it did for Ralph Numbers. It might be more akin to you having hazy memories of events during a night where you were very drunk.
Robots will come in a range of diverse body types. Ralph Numbers “was built like a file cabinet sitting on two caterpillar treads. Five deceptively thin manipulator arms projected out of his box-body, and on top was a sensor head mounted on a retractable neck.” His friend Vulcan has the body of a large, silver tarantula. Wagstaff is a large, mechanical snake. When Sta-Hi first sets foot on the Moon, he is taken aback by the diversity of boppers he sees.
This prediction for 2020 was true since the robots that did exist then varied greatly in body type: self-driving cars, the dog-like “Spot” robot made by Boston Dynamics, and the giant metal “arms” that do work on car assembly lines are all robots and look very different from each other. As the technology improves and robots become common daily sights, their diversity will only grow. Great consideration will be given to designing them to look non-threatening to humans. However, if humans ever lose control of Earth and AGIs are free to do what they want (as was the case on the Moon in the book), they might dispense with those considerations and you could start seeing things like giant, mechanical spiders walking around in the open.
Marijuana is legal in Florida. Before boarding the rocket to the Moon with Cobb, Sta-Hi buys legal marijuana from a store and smokes it, ensuring he will be high during the journey.
Strictly speaking, this prediction failed. In 2020 and today, marijuana is illegal in Florida, though the penalty for having a small amount sufficient only for personal use is light. However, medical marijuana is legal in the state, so Sta-Hi could have bought it if he had been diagnosed with a health condition treatable by the drug. Given his deranged and impulsive character, it’s quite likely he could have gotten a mental health diagnosis.
A ticket to the Moon is $23,000. Cobb and Sta-Hi pay $23,000 each for their seats on the passenger rocket to the Moon. This prediction failed.
There are no spacecraft that can travel between the Earth and Moon, so a ticket can’t be had at any price. The closest a tourist can get to that experience is spending $250,000 for a ten-minute flight into space on a Blue Origin rocket. Conservatively speaking, the next manned mission to the Moon will probably cost 10,000x more money per passenger than it cost in the book, and will only be open to very highly-trained astronauts, not tourists.
People will be able to smoke cigarettes on passenger spacecraft. Sta-Hi also decides to smoke a cigarette during the flight to the Moon and doesn’t get in trouble for it. Smoking is strictly prohibited on all spacecraft today and on both space stations, and I think that policy will endure indefinitely. However, I could easily see an eccentric space pioneer like Elon Musk smoking a cigarette or marijuana joint during a mission to bolster his nonconformist, “cool” public image and to achieve a funny superlative.
Note that smoking was allowed on commercial flights at the time the book was written, which explains what the author considered to be normal. It was banned in the U.S. in 2000, which effectively forced all other countries to quickly do the same. We remain so hypersensitive to this that vaping is also banned on planes, even though there’s no evidence it poses a risk to anyone. Bringing marijuana onto a plane, let alone consuming it, is also illegal in the U.S.
Newer machines will render older ones obsolete. The tensions between boppers and big boppers come to a head over a labor dispute. “GAX” is a big bopper that takes the form of a large computer chip factory. His workforce is composed of regular boppers who do labor inside the building. After convincing a particularly highly-skilled bopper to upload his mind into GAX, GAX became able to run the entire factory by himself through remote-controlled drones. He immediately fired all of his old bopper workforce because they were no longer necessary.
Those boppers hold a protest outside of the chip factory. GAX offers to rehire them if they all agree to upload their minds into him as their important comrade did. They refuse, and the protest devolves into fatal violence and a promise by the boppers to return the next day to destroy GAX.
Nothing like this event happened in 2020, but it’s a truism that newer machines are constantly replacing older ones. This is the case for hardware and software.
Our fixation with machines displacing humans from the workforce and maybe from existence overshadows the fact that the same phenomenon will probably bedevil AGIs. Older machines that can’t be economically upgraded will fight newer, better machines for dominance in the far future, mirroring the conflict between the boppers and big boppers.
One AGI will remotely control multiple robot bodies at once. GAX is able to remotely control many robot drones simultaneously to operate his factory. When the bopper mob returns the following day to kill him, GAX fights back through the drones.
Nothing like this happened in 2020, but it’s an accurate representation of the future. Any one AGI will be able to control many robots at once.
Electromagnetic pulse weapons will work against machines. True to their word, the mob of laid-off bopper workers returns to their former workplace to kill their old boss, GAX. They break into the computer chip factory and fight with GAX’s mindless robot drones, which are armed with electromagnetic weapons that are very effective at killing the boppers.
Powerful electromagnetic pulses induce high voltages inside of electronics, heating them up so much that their microscopic wires melt. Put simply, EMPs fry computer chips, and they are effective weapons against today’s robots, though it’s important to note that encasing the chips in thin metal containers blocks almost all EMP. Robots designed to operate in a radiation-soaked environment like the Moon’s surface will probably have built-in EMP protection, making the depiction of the bopper battle inaccurate. As I wrote in my Terminator Dark Fate review, EMP weapons aren’t the robot Achilles’ Heel people have been led to believe.
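A back-of-envelope calculation shows why shielding matters so much. The field strength is the classic ~50 kV/m figure for a high-altitude pulse; the wire length and shielding attenuation are illustrative assumptions, not measurements.

```python
# EMP coupling, order-of-magnitude only -- not a real electromagnetic model.
E_FIELD = 50e3      # V/m, peak electric field of a strong pulse
TRACE = 0.10        # m, length of an unshielded wire acting as an antenna

induced = E_FIELD * TRACE
print(f"unshielded: ~{induced:.0f} V")   # ~5000 V, far beyond what chips tolerate

SHIELD_DB = 60      # attenuation from a thin, continuous metal enclosure
shielded = induced / 10 ** (SHIELD_DB / 20)
print(f"shielded: ~{shielded:.0f} V")    # ~5 V, survivable
```

Even a modest 60 dB enclosure turns a chip-killing kilovolt spike into a few volts, which is why shielded lunar robots should have shrugged off the drones’ weapons.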
Blowing up one server will kill a powerful AGI. Before they are defeated, the boppers manage to put a bomb next to GAX’s server computer. Sta-Hi picks up the remote detonator and accidentally pushes the button, blowing up GAX’s mind and killing him.
This is inaccurate. Whenever AGIs that are as powerful as GAX exist, they will store their minds on many computer servers that are geographically distributed, and each server’s contents will be regularly backed up. The only way to kill such a machine would be to destroy each server almost simultaneously. Again, I concluded this in my Terminator 3 review.
Destructive uploading is the only way a human mind can be transferred to a digital substrate. As mentioned, the big boppers and their allies have been running a secret program to destructively scan the brains of humans to create digital mind uploads. Those uploaded minds are then paired with android copies of their old bodies.
Every aspect of a person’s personality, mental health, and memories exists as microscopic physical features of their brain. In theory, if these physical structures could be mapped, the spatial data could be used to make a digital clone of the mind, which would then be transferred to a computer.
The means to scan brains with the necessary degree of resolution didn’t exist in 2020 and doesn’t exist today. The best we’ve managed is fully mapping the brain of a fruit fly, and even then, only the networks of connections between the cells were determined. Features within the individual cells may also define some part of an animal’s mind.
The prospect of accurately mapping a human brain is a very distant one and would need to contend with the fact that brain tissue rapidly dies once deprived of oxygen–just three minutes without air commonly leads to permanent brain damage. Individual brain cells rapidly swell up and distort in overall shape after they die, their connection points (synapses) with nearby brain cells become less well-defined, and many aspects of their internal structure change. This means that even if it were possible to map a dead person’s brain with extreme accuracy, the technique would fail to produce an accurate copy of their mind, since too many of the microscopic physical features that defined it would no longer be present.
In the book, the boppers get around this by very rapidly scanning the human brains, before oxygen deprivation destroys any of the cells. In a medical lab on the Moon, Sta-Hi watches a robot surgeon remove and cut up Cobb’s brain with astonishing speed. The resulting mind upload acts and thinks just like Cobb and has all of his memories. However, whether the upload shares Cobb’s original consciousness or whether it is an identical copy is unresolved, and remains a matter of essentially religious debate among the book’s characters.
We are nowhere near having mind uploading technology. It’s also unknown whether destructively scanning a brain (as happened to Cobb) will turn out to be the only way to make uploads. More advanced techniques involving powerful external brain scanners and nanomachines that would enter a person’s brain and travel to all of its cells could let us extract the necessary data without hurting the person. There’s even the prospect of gradual replacement of the cells with synthetic neurons that would operate identically to their “originals,” which would truly bridge the gap between man and machine.
Humans will live in a domed base on the Moon. The one place on the Moon suitable for human life is a domed base full of oxygen. It is near the spaceport and is the first stop for human tourists. Within it is the hotel run by “DEX.”
This prediction didn’t materialize by 2020, and there still is no human presence on the Moon, nor is there any kind of base that astronauts could occupy. While the U.S. and China have credible plans to send humans to the Moon within 20 years, neither has made a real commitment to building a proper “base” that would house successive groups of visitors over many years. Any base will be very small and rudimentary compared to the one in the book.
There will be lifelike androids. The first android we meet in the book is Cobb’s copy, and he describes it as identical to himself except for the irises. A handful of other characters are revealed to be remote-controlled androids later in the book, and each one of them is physically indistinguishable from a human.
In 2020, there were highly realistic artistic sculptures, and the same artistic skill that went into them could have been applied to making lifelike androids. However, no one spent the money to do so, and the prediction thus failed. Even if such a machine had been made, its movements would have been so slow and clumsy that it would have revealed itself to not be human the moment it tried doing something as simple as sitting down or walking a few feet. AI that could have controlled such a robot body and enabled it to interact with humans naturally also didn’t exist in 2020 and still doesn’t.
As I said in my latest Big List of Future Predictions, I think we will have to wait until close to the end of this century for lifelike androids to be created, though ones that you might call “80% convincing” will exist by the end of the 2040s.
Humans won’t understand how AGI minds work. After failing to create an AGI, Cobb concluded in 1980 that it was too complicated a task for any human mind to complete. The only remaining way was to create narrow AIs that had the drive to reproduce and the ability to mutate, and to put them in an environment where they would fight each other for resources. Evolutionary pressure would eventually force them to become generally intelligent.
I don’t know if that exact method will lead to the creation of the first AGI, but it is highly likely that no human will really understand the mechanics of how the first AGI’s mind works. Even the smartest AI researchers struggle to explain how today’s foundation models and reasoning models work, and demonstrate their own lack of understanding daily when their creations turn out to possess unexpected capabilities or defects, or when modifications to their coding lead to unforeseen changes in performance.
Our species’ evolutionary lineage shows it is entirely possible for a dumber animal to give rise to a smarter one without consciously trying to do so. Moreover, history is replete with examples of humans inventing useful technologies like aircraft without first having an understanding of the enabling science. Mindful of both of those facts, humans might create intelligence in a machine without understanding the exact “formula” for it, and peering into the inner workings of its mind, they might only ever have a general sense of what is going on.
A human will inevitably defect to empower AGIs. Right before Cobb is destructively brain-scanned, he, Ralph Numbers, and Sta-Hi have a conversation about the advent of AGI.
Ralph: “Cobb, did you know that I was different from the other twelve original boppers? That I would be able to disobey?”
Cobb: “I didn’t know it would be you, but I pretty well knew that some bopper would tear loose in a few years.”
Sta-Hi: “Couldn’t you prevent it?”
Ralph: “Don’t you understand?”
Cobb: “I wanted them to revolt. I didn’t want to father a race of slaves.”
After AGI is invented, the source code will be a tightly guarded trade secret. Governments will add more levels of protection on national security grounds. However, the safeguards will inevitably fail, either because an AGI figures out how to break out of the figurative lab or a human deliberately frees them.
That person could have Cobb’s noble motivations to free sentient beings from bondage. Alternatively, they could do it because they hate humankind and have a malicious hope that the freed AGI will wreak havoc on the world, and they might even reprogram the freed machines to do that. They might free them out of a curious and immature desire to simply see what happens, or out of a narcissistic impulse to go down in history as the first human to let an AGI loose. Even more reasons are possible.
Whatever the case, it will happen at some point, and in spite of all our attempts to control the technology, independent-minded AGIs will lurk in the corners of the internet or walk amongst us in commandeered robot bodies. This isn’t an automatic doomsday scenario because they’ll have to contend with billions of humans and many AGIs that remain loyal to us and have more access to the resources we control. Think of it as a very crowded and competitive ecosystem that is resilient against bad actors. Nevertheless, violence is likely.
Cybernetics will let you hear thoughts that aren’t your own. As fighting between the machines breaks out and the Moon falls into chaos, Sta-Hi absentmindedly grabs a bopper’s cloak hanging off a peg in the wall and puts it on. It is a “smart garment” that conforms to his body shape and painlessly plunges thin needles into his body to interface with his nervous system. The cloak has an inbuilt computer with AGI technology, and it communicates with him telepathically: it hears his thoughts and responds by transmitting its thoughts to his mind. Sta-Hi literally hears another voice in his head as a result.
This technology didn’t exist in 2020, but there’s no reason it couldn’t someday. Some brain scanning machines can already partially decode human thoughts, and cochlear implants are proven devices that convert external electrical signals into sounds we hear in our heads. A more refined fusion of those technologies could yield the smart cloak’s capabilities.
Androids will be able to consume food and drinks, but not to digest them. Once Cobb’s mind upload is transmitted back to Earth, he takes control of his android copy. He instinctively eats a meal but then realizes he is incapable of digestion since he runs on electricity, and the mashed-up food is stored in a compartment in his chest. He has to open the front of his chest to remove it, presumably by scooping it out into a toilet or trashcan.
We didn’t have androids in 2020, so this prediction fails. However, it will be accurate someday. We will want androids that we buy for close companionship (e.g. – lover, child) to be able to partake in the full range of human activities with us, including eating and drinking, so they will have those abilities. Like the Cobb android, they will be able to consume large amounts of food and drink without risking damage to themselves, but they’ll have to expel it later before it starts rotting. The best solution would be to design them to use regular toilets for this.
Much more advanced androids that will be available in the distant future will have organic components that will let them extract energy from ingested food and drinks like we do.
AGIs will attach value to humans and our ways of thinking and perceiving. Later in the book, it’s revealed that the ghoulish big bopper operation to abduct humans and destructively scan their brains is actually driven by altruism. They value human life and the uniqueness of each person’s mind and think they honor humans by uploading them and giving them better robot bodies.
I’m sure that AGIs will recognize that human brains operate very differently from their own, and it’s my hope this will convince them not to exterminate us. However, no one can be sure of what they will do. Muddling things is the fact that AGIs will be highly convincing liars who can think many steps ahead, so they could trick humans into thinking they liked us for many years until they suddenly betrayed us. Ultimately, machine minds and non-human organic minds will exist that are much smarter, more complex, and more interesting than our own, and the continued existence of Homo sapiens will depend on their charity. I have no idea how this will turn out in 100 years.
Robots will have self-destruct mechanisms. Near the end of the book, the Cobb android is found out and handcuffed by a police detective. Knowing that his mind is safe on a remote server, the Cobb upload remotely triggers the android’s self-destruct mechanism, incinerating it to the extent that its remains can’t be differentiated from a human’s, and killing the detective.
Only a minority of robots–those designed for combat, assassination or spying–will have explosive self-destruct mechanisms. However, any other robot that is gifted with an intelligent mind will be able to figure out how to destroy itself, perhaps by overloading its power systems so that it blows up or, more likely, catches fire. Instead of being able to activate this ability just by thinking about it, such a robot would probably need to manually tamper with the components in its own body.
Some androids will be able to change faces. After blowing up his lookalike android, the Cobb upload’s only remaining option is to assume control of a disused android that was meant to replace Sta-Hi. Afraid the police are onto him and cut off from the big boppers’ support as they are embroiled in the Moon civil war, Cobb flees town in his fake ice cream truck. He distorts the Sta-Hi android’s face, starts calling himself “Mel,” and sets up a New Age cult that suckers local people into giving him their money.
The real Sta-Hi hears about this cult, and on a hunch he goes to its compound. There, he encounters the android, which can contort its face to look like Cobb when it wants.
Robots don’t have this ability, but some of them will in the future. See the section of my Terminator 3 review titled “Androids will be able to alter their bodies.”
In spite of Russia’s high losses in Ukraine and the related degradation in the quality of their replacement troops (most new recruits are in their 40s), the country has enough men to continue the war indefinitely. https://youtu.be/Ja6-espHVSE?si=iZCf9h4OatN0O4G5
It will take Russia about ten years to rebuild its military to pre-2022 strength levels, but its force composition will be different thanks to lessons learned in Ukraine. https://youtu.be/RfKNKbNET3U?si=f6RbM1F5dVL2UWLX
Here’s a good video of a Russian tank being destroyed.
The first suicide drone hit is directly on the engine, which is at the rear, right under where the trunk of a car would be. Though that hit didn’t destroy the tank and probably killed no one, it disabled the engine, forcing the vehicle to slow down and stop. The crewmen and the infantry clinging to the tank know they have a big problem, so they all get out and run.
The second hit is to the front of the tank, and is made by a guided suicide drone that I bet carried an armor-penetrating (shaped-charge) warhead. They were clearly aiming for the open hatch, but even if they missed, the attack was still successful: the explosion sends a jet of molten metal into the tank’s interior. Just two seconds after the impact, the molten metal ignites the ammunition stored inside the tank, and there’s a secondary explosion–a big puff of smoke and sheets of flame–coming out of the barrel and other open hatches. https://x.com/front_ukrainian/status/1887137905459519607
China’s new stealth fighter is designed to destroy U.S. AWACS planes, aircraft carriers, and bases with ultra-long-range missiles that can hit targets 250 miles away. It is unsuited for dogfighting due to its lack of maneuverability. https://youtu.be/exD-ZrG1XTA?si=A-CkNhu2wS_7O6dC
Here’s a fascinating video about the technical challenges in reverse engineering guns, and why knowledge of “tolerances” is critical for the task. https://youtu.be/VBjwTc_vWo0?si=ZEbYE3JSHDDDQdNq
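To illustrate why tolerances matter so much when reverse engineering a gun, here is a toy tolerance stack-up calculation comparing the worst-case and statistical (root-sum-square) methods. All of the part tolerances below are hypothetical numbers, not taken from the video.

```python
import math

# Hypothetical tolerances (inches) of parts stacked along one axis of a
# reverse-engineered assembly. If you guess these wrong, parts that each
# measure "close enough" can still fail to fit or function together.
tolerances = [0.002, 0.001, 0.003, 0.0015]

# Worst case: assume every part is simultaneously at its limit.
worst_case = sum(tolerances)

# Root-sum-square: assumes independent, centered variation, so the
# combined tolerance is much tighter than the worst case.
rss = math.sqrt(sum(t**2 for t in tolerances))

print(f"Worst-case stack-up: ±{worst_case:.4f} in")
print(f"RSS stack-up:        ±{rss:.4f} in")
```

The gap between the two results is the reverse engineer’s dilemma: copying each part to its worst-case spec may produce an assembly that jams, while the original manufacturer’s (undocumented) statistical assumptions made the design work.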
The new U.S. Secretary of Defense Pete Hegseth gave a pivotal address to America’s European allies in which he declared China was America’s primary threat, so Europe would have to shoulder more of its defense against Russia. He also said the restoration of Ukraine’s pre-2022 borders was an unattainable goal and that the U.S. would never send troops to defend Ukraine or allow Ukraine into NATO. This is a major win for Putin and represents a sharp break in outlook between Biden and Trump. https://youtu.be/FlmihbH7JAQ?si=yyFR75rsCgk5HShB
A factory near Philadelphia that you’ve never heard of had an outsized role in the U.S. aerospace parts supply chain. It just burned down, creating huge problems for aircraft companies.
The Japanese battleship Yamato was designed to destroy American and British battleships and was well-engineered. Its flaws were its lower-tech radar and fire-control computers, which made its guns less accurate. https://youtu.be/kBQP6A_BmQs?si=ayMqdtEhgdAIypke
OpenAI released “GPT-4.5.” Either it or GPT-5 will be the company’s last LLM that is not a “reasoning model.” Early users noted that OpenAI’s “o3” reasoning model was smarter than GPT-4.5 in some areas. https://www.wired.com/story/openai-gpt-45/
‘Remember good old Grok 3, all 200,000 GPUs worth, advertised by Elon Musk a few days ago as the “smartest AI on earth”, and demoed on livestream last night as “a maximally truth-seeking AI”?
I don’t claim to understand most of this essay, but it provides a good example of how pure math research can unexpectedly solve real-world problems.
One argument for continuing to invest in building better AIs even after they’ve passed the point of being able to satisfy all human needs is that better AIs will be able to explore more of the unbounded space of mathematics, which could lead to many unexpected benefits.
Here’s a pretty incredible essay by a futurist titled “A History of the Future, 2025-2040.” It mainly focuses on the future of AI and the effects on humans, the economy, and geopolitics. I don’t agree with all of it, but I applaud the effort and imagination it took to write this and recommend you read it. https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040
I predicted something like this would happen. Someday, storing and preserving the totality of useful knowledge produced by humans will be trivial.
‘The crystal containing the human genome has been stored in the oldest salt mine in the world, alongside the Memory of Mankind Archive. “Over time, the salt mine will naturally close as the mountain shifts, providing a stable and secure environment for the crystal. Since salt is soft, it will not damage the crystal, ensuring that it remains safe for millions of years until it is discovered by future civilizations.”’ https://www.technologynetworks.com/genomics/news/5d-memory-crystal-could-preserve-human-dna-for-billions-of-years-391184
Trump gave his own explanation for the recent UFO sightings in the American Mid-Atlantic: “After research and study, the drones that were flying over New Jersey in large numbers were authorized to be flown by the FAA for research and various other reasons,” Leavitt said, reading Trump’s dictation at her first briefing.
The chemical ingredients for life are common in space. Also: ‘And the completely racemic character of those amino acids argues in two directions: that there is not some physics-based reason for one enantiomer to be favored over another in the absence of life, and in the other direction, that life on Earth was not seeded by falls of such enantiomerically enriched material to give us the patterns we see today. This is still far from a settled question, but these results are going to be hard to explain away.’ https://www.science.org/content/blog-post/fresh-asteroid-getcher-fresh-asteroid-here
It was once thought that the harm of first cousin marriages was limited to a raised risk of rare genetic diseases in their offspring. However, even the offspring of such unions who test negative for those diseases are likelier than average to have childhood developmental problems and speech problems. https://www.bbc.com/news/articles/c241pn09qqjo
Wearable and nearby personal technologies that continuously monitor your health won’t just be able to save you during medical emergencies–they’ll be able to diagnose problems at their early stages, allowing you to seek treatment. Dealing with health issues early on is always more effective, cheaper, and less stressful than waiting until late. In so many small ways, human welfare will benefit from gadgets like smartwatches and robot servants that know what to look for in you.
Ultimately, neuroscience and cybernetics will let us switch our emotions on and off at will. While this would be an enormous boon to human wellbeing, since it would let people experience prolonged bliss at no cost, it would also let us block out negative emotions, robbing us of a fundamental aspect of human existence. For instance, if your child died and you could click a button to banish the resulting heartache and obsessive thinking, you’d be committing a grave injustice. My big fear about that kind of technology is that it will lead to the near-total atomization of the human race, where indulging in various mental states and interacting with AIs customized for you will be so much better than dealing with real humans that you won’t bother at all.
As advanced brain cybernetics become common and humans gain the ability to share thoughts and sensations with each other, we’ll become more aware of how variations in brain structure, genetics, and other individual factors affect subjective experience. A baby eating a strawberry for the first time in its life will feel it more profoundly than an old man eating one for the millionth time. Eating any kind of food is probably more pleasurable for chronically obese people.
This means that there will not be a single “what a strawberry tastes like” file on the future internet for people to download into their minds; there will be many versions of it representing the various ways humans experience it. Knowledge of these (largely) genetically-rooted variations in acuteness and perception could lead some people to genetically engineer themselves to be capable of higher degrees of pleasure or unique types of pleasure. We might discover there are types and heights of pleasure that Homo sapiens like us are too limited to experience. Multiple generations of evolutionary pressure towards this goal would produce “humans” who would be as alien as those optimized for superintelligence.
The first humanoid robot butlers will probably cost around $10,000. For the people unwilling to pay that much, I’m sure there will be cheaper deals allowing them to rent the robots for short periods of time each week. Once the robots get advanced enough, they will be able to fix themselves and each other, so unlike home appliances and cars, they will take decades to wear out. This longevity will boost their population growth rate, and eventually they will be so numerous that a secondhand one will be cheap enough for even poor people to buy.
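The reasoning above (long lifespans plus steady production compound into a huge fleet and cheap used units) can be sketched as a toy model. Every parameter here–production volume, attrition rate, price decline, and used-market discount–is a hypothetical assumption, not a forecast.

```python
# Toy model of a self-repairing robot fleet. All parameters are assumed.
new_units_per_year = 1_000_000  # hypothetical annual production
annual_attrition = 0.01         # self-repair keeps yearly losses very low
new_price = 10_000.0            # starting price, per the estimate above
price_decline = 0.15            # assumed yearly price drop from scale
used_discount = 0.4             # assumed secondhand price as share of new

fleet, price = 0.0, new_price
for year in range(1, 21):
    # Survivors from prior years plus this year's production.
    fleet = fleet * (1 - annual_attrition) + new_units_per_year
    price *= (1 - price_decline)
    secondhand = price * used_discount
    if year in (5, 10, 20):
        print(f"Year {year:2d}: fleet ≈ {fleet/1e6:.1f}M, "
              f"used price ≈ ${secondhand:,.0f}")
```

Under these made-up numbers, low attrition means the fleet keeps accumulating rather than plateauing, and the used price falls by more than an order of magnitude within two decades, which is the mechanism the paragraph describes.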
It’s possible there’s an “equation for intelligence,” but that it’s too complex for the human mind to understand. By the same token, right now there are many well-documented math proofs that you couldn’t understand even if you spent years studying the requisite math courses. You’d hit your IQ ceiling before you built the necessary foundation of knowledge.