Interesting articles, March 2026

The U.S. and Israel launched a massive bombing and missile strike campaign against Iran to destroy the latter’s military capacity and nuclear program. Iran has retaliated with waves of missile and drone strikes against Israel, U.S. bases, and Arab countries.

Iran’s navy has been practically destroyed.

As was long feared, Iran retaliated by attacking oil tankers in the Persian Gulf, practically closing the waterway to all traffic and spiking global oil prices. In what could be a sign of poor planning, the U.S. lacks enough warships in the region to protect shipping, so President Trump called on allies to send their own ships to do the job. All of them declined.
https://www.reuters.com/business/energy/trump-demands-others-help-secure-strait-hormuz-japan-australia-say-no-plans-send-2026-03-16/

Iran could cause disproportionate damage to the global oil market by mining the waters of the Persian Gulf.
https://www.twz.com/news-features/former-centcom-commanders-candid-take-on-the-situation-in-the-strait-of-hormuz

‘On March 12, a fire broke out in the laundry area of the USS Gerald R. Ford while underway in the Middle East, injuring two sailors. Though officials initially said the damage was minor, the vessel is now heading to Souda Bay in Crete for repairs, according to USNI, taking it out of war against Iran. On Monday, The New York Times reported that the fire took more than 30 hours to extinguish and left more than 600 sailors “bunking down on floors and tables.”’
https://www.twz.com/sea/navy-juggles-its-aircraft-carrier-plans-to-stay-afloat

Though the U.S. has inflicted heavy damage on Iran with very few losses of its own, the war has proven shockingly expensive in terms of money and munitions.

Trump may be preparing to attack Iran with ground troops. Instead of trying to take over the whole country, any invasion would have very limited objectives.

Though America and Israel have badly battered Iran, their objectives and the war’s outcome remain uncertain.

Iran could keep the Strait of Hormuz closed indefinitely if it’s willing to endure the pain of U.S. retaliation. A massive and very expensive U.S. Navy operation would be needed to protect oil tankers from Iranian attacks.
https://youtu.be/EXkwQOhg9OY?si=e8WmZQp6vcmn9Fvx

The U.S. has been highly effective at hunting down Iran’s ballistic missile launchers thanks to our frighteningly powerful surveillance network.
https://youtu.be/rcAZRtHyNVc?si=JPWCooagKoRVMM5p

A cheap Ukrainian kamikaze drone destroyed an expensive Russian attack helicopter in midair.
https://youtu.be/ot4TA9UR4oY?si=s1GBP8qrdrIN_L3b

3D printed guns keep getting better and cheaper.
https://reason.com/video/2026/03/19/3d-printed-guns-are-getting-good/

The “XM-8” rifle from 20 years ago is back.
https://www.twz.com/land/new-army-6-8mm-carbine-recycles-xm8-designation-from-failed-starship-troopers-rifle-program

Key points of this video:

  • Enormous amounts of AI-generated disinformation about the war have flooded social media and are even being cited by the Iranian government. Truly, we’ve entered the era where you can’t believe what you see.
  • Iran’s drone and missile attacks against so many of its neighbors suggest its command-and-control network has been disabled and that local commanders are using their own (questionable) judgment when picking targets.
  • The U.S. and its allies aren’t in danger of running out of missiles.
    https://youtu.be/mP_rr859r8w?si=6y6i5m8hmc6tVBLj

The Ukraine War has been defined by the rise of combat drones.
https://youtu.be/RXmQIkV3SzU?si=SlbnbcnXHm3qp8CP

The success of the book Shy Girl, which was partly written by an LLM, shows that machines have entered the human range of writing ability.
https://www.nytimes.com/2026/03/19/books/ai-fiction-shy-girl.html

In defense of data centers.
https://reason.com/2026/03/07/the-joys-of-data-centers/

One of the leading tests of machine intelligence, “ARC-AGI-2”, is being gamed by the big tech companies.

‘A simple analogy to understand how devastating this is: imagine you give a math exam to a student, and the format of the questions is red ink on white paper. The student gets a stellar score. But the moment you change it to black ink on white paper, the student freezes and doesn’t know what’s going on.

Wouldn’t that cause you to realize the student doesn’t actually understand the material, and is instead cheating in some way you cannot figure out?’
https://www.reddit.com/r/singularity/comments/1rbw97k/the_arcagi2_illusion_of_progress_if_changing_the/

Because the creators of that test recognized it was being gamed, they just released a new, harder version of it, “ARC-AGI-3.” The lead creator, François Chollet, believes this test will eventually be gamed as well and will lose relevance as a benchmark of general intelligence.
https://www.fastcompany.com/91515360/arc-prize-foundation-new-ai-benchmark

‘Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.’
https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit-cc46c5f7

‘In private, AI bosses fret about a “Chernobyl moment”, in which the technology is implicated in some sort of deadly or ruinous disaster. The conflict with the defence department heightens the risk: if going slowly and applying limits to the use of your product results in a corporate death sentence from the federal government, only the reckless will survive. The markets are another source of unhelpful pressure: investors are jittery about AI firms burning through cash to make vast investments.

The scenarios keeping AI bosses awake at night are no longer purely hypothetical. “Some of these risks are already materialising, with documented harms,” concluded a recent report on the perils of AI. It pointed to cyber-security and biological weapons as areas where AI’s baleful influence was already apparent.’
https://www.economist.com/briefing/2026/03/05/an-ai-disaster-is-getting-ever-closer

‘TL;DR: ChatGPT 5.4 is a real upgrade from 5.2. Stronger analytical work, better spreadsheets, and extended thinking that’s impressive under the hood. But the writing is still flat compared to Claude, the finished output doesn’t match the quality of its own reasoning, and you have to over-prompt to get what you want. If you’re productive with Claude or Gemini, don’t switch. If you’re on OpenAI, enjoy the upgrade.’
https://www.smithstephen.com/p/chatgpt-54-is-good-thats-not-the

OpenAI has discontinued its video generator “Sora” after just a few months, ostensibly because it was unprofitable.
https://www.cnn.com/2026/03/24/tech/openai-sora-video-app-shutting-down

Another take on why Sora was ended and why it means OpenAI is probably planning to go public and issue stock this year.
https://x.com/aakashgupta/status/2037380094596100364

After Anthropic refused to let the Department of War use its AI systems for mass surveillance and autonomous weapons, the Trump administration started trying to force the company to comply.
https://youtu.be/KBPOTklFTiU?si=qPiO0gzUVcNSxqHE

Here’s an interesting interview with tech industry analyst Dylan Patel. Some of his notable insights and predictions:

  1. A key bottleneck in the AI race is the supply of microchips. The high-end GPUs used in data centers are the cutting edge of computer hardware, and the fabrication plants (“fabs”) that make them are too few in number to accommodate the growth in demand. TSMC owns most of the fabs and has been reluctant to expand its capacity out of fear the AI bubble might burst, saddling it with a bunch of excess production capacity.
  2. China is significantly behind the U.S. and Taiwan with respect to cutting-edge GPU production. China can and will massively scale up the quality and quantity of its own GPU manufacturing but will still probably lag the West in 10 years.
  3. In defiance of what Moore’s Law has accustomed us to for decades, the prices of smartphones and personal computers will INCREASE for the next several years as data centers gobble up the world’s supply of RAM.
  4. Earth will eventually get so clogged with data centers that it will make financial sense to put them in space. However, it definitely won’t start this decade.

Remarkably, Elon Musk’s plan to start building a Dyson Swarm is not infeasible. The key hurdle to surmount will be sharply lowering space launch costs, which Musk’s Starship rocket might soon be able to do.
https://www.economist.com/science-and-technology/2026/03/02/data-centres-in-space-less-crazy-than-you-think

At a conference of physicists about the future applications of AI in their field, one speaker made two great points:

  1. Machines are getting smarter every day. By contrast, human intelligence is not changing. At some point, the two lines on the graph will cross.
  2. If the long-sought “Theory of Everything” exists, it might be too complex for the human mind to comprehend. However, to vastly smarter AIs, such a Theory would be simple and elegant. There’s no reason to assume the physical rules of our universe are tuned for easy human understanding.
    https://physicsworld.com/a/is-vibe-physics-the-future/

Boston Dynamics’ “Atlas” robot shows off its extreme dexterity. It’s weird seeing a human-like form capable of inhuman movements.
https://youtube.com/shorts/-EJXWjMjLYk?si=PWbHEIX2wq92I2yF

‘First Lady Melania Trump walks with robot to White House event on children’s technology’
https://youtu.be/7sHSBgU5p4Y?si=nCVy6UxkHDE1c4uc

One gigawatt of electricity can power about 700,000 American homes.
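That figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming (my number, not the article’s) an average continuous household draw of about 1.4 kW, or roughly 12,000 kWh per year:

```python
# Back-of-envelope: how many homes can one gigawatt power?
# Assumption (mine): an American home draws ~1.4 kW on average,
# i.e. about 12,000 kWh per year. Actual estimates vary.
GIGAWATT_W = 1_000_000_000   # one gigawatt, in watts
AVG_HOME_DRAW_W = 1_400      # assumed average household draw, in watts

homes = GIGAWATT_W // AVG_HOME_DRAW_W
print(f"{homes:,} homes")  # ~714,000, in line with the ~700,000 figure
```

Published homes-per-gigawatt numbers differ because average household consumption varies by region and season, but the order of magnitude checks out.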

Paul Ehrlich, an American scientist who became infamous for his failed prediction that overpopulation would lead to global famine in the 1970s, died of natural causes.
https://www.nytimes.com/2026/03/15/books/paul-r-ehrlich-dead.html

‘The AI scientists building these models are experts on machine learning and AI, but they are much less familiar (as evidenced by their own publications) with molecular biology and genetics. So it is likely that they believe their own claims, and they might not understand how implausible they are.’
https://stevensalzberg.substack.com/p/ai-is-starting-to-look-like-pseudoscience

‘Scientists revive activity in frozen mouse brains for the first time’
https://www.nature.com/articles/d41586-026-00756-w

A company’s claims that they accurately simulated a fly’s brain in a computer have come under fire.
https://x.com/KennethHayworth/status/2032604687212392562

Personal AIs and robots probably won’t make the world more equal

The head of OpenAI, Sam Altman, recently made this prediction:

Fundamentally our business, and I think the business of every other model provider, is going to look like selling tokens. You know, they may come from bigger or smaller models, which makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive. They may be running all the time in the background trying to help you out, they may run only when you need them if you want to pay less. They may work super hard, you know, spend tens of millions, hundreds of millions, of someday billions of dollars on a single problem, that’s really valuable.

But we see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for.

I agree with his prediction and want to share my thoughts on what it means for individuals, society and the economy, and to unpack some key assumptions behind Altman’s statement.

“Fundamentally our business, and I think the business of every other model provider, is going to look like selling tokens.”

The phrase “is going to look like” means he’s speaking about the future: Altman foresees most people getting AI services remotely from large tech companies like OpenAI, not in-house from personal computing devices they own and keep in their houses or pockets. I agree this will be true, though improvements to the price-performance of LLMs, and an eventual drop in the prices of GPUs and RAM as supply expands, will someday make it feasible for average people to host AIs on their own hardware.

However, it will remain more convenient and probably cheaper to access LLMs over the internet from companies like OpenAI. Additionally, thanks to having better hardware and more of it, and more advanced models, the companies will always have a monopoly over the fastest and smartest AIs. For those reasons, access to AI will stay under the central control of a handful of big companies and government agencies with the resources to build data centers and to do cutting-edge AI research.

“You know, they may come from bigger or smaller models, which makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive. They may be running all the time in the background trying to help you out, they may run only when you need them if you want to pay less.”

Access to AI will be tiered and principally based on how much money the user is willing to spend. As is the case today, different AI models will have different levels of capability, and it will cost more money to talk to better ones. As a result, though everyone in the future will have access to AIs, not all AIs will be equal, and the rich will have the best.
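Metered, tiered pricing of this kind can be sketched with a few lines of arithmetic. All of the tier names and prices below are invented for illustration; none come from OpenAI or any real provider:

```python
# Hypothetical tiered, metered token pricing (illustrative numbers only).
# The point: cost scales with both usage and the capability tier chosen,
# so the rich can buy access to better models than everyone else.
PRICE_PER_MILLION_TOKENS = {   # USD, all figures invented
    "small-fast": 0.50,
    "mid-tier": 5.00,
    "frontier-reasoning": 50.00,
}

def metered_cost(tokens: int, tier: str) -> float:
    """Cost in USD for a given token volume at a given tier."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS[tier]

# The same monthly usage costs 100x more on the top tier.
for tier in PRICE_PER_MILLION_TOKENS:
    print(f"{tier}: ${metered_cost(10_000_000, tier):,.2f}/month")
```

Under this (assumed) structure, two users with identical usage pay wildly different amounts depending on which tier of intelligence they can afford, which is exactly the inequality the essay below discusses.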

“They may work super hard, you know, spend tens of millions, hundreds of millions, of someday billions of dollars on a single problem, that’s really valuable.”

This is an important point. Not every question or task that humans put before an AI is equally hard for it to handle. Asking one to summarize a news article is easier than asking it to make a new discovery in particle physics. A “dumber” AI running on slower, cheaper hardware could handle the former perfectly well, whereas only a cutting-edge AI with a data center at its command could do the latter.

If true AI were available, the vast majority of people would not use it to attempt high-level problems that cost millions of dollars to solve; they would use it to meet their everyday needs: soliciting advice on new meal recipes, remembering chores, and making conversation. If endowed with a robot body, most people would have their AIs do mundane work like cooking or cleaning.

Today, the world’s best supercomputer can’t beat a smart nine-year-old kid at tic-tac-toe; the two will tie each other forever. This is because the game is so simple that beyond a certain threshold, adding more brainpower doesn’t help you come up with a better strategy for it. Likewise, we’ll reach a point where nearly all needs that average people have can be sated by AIs and robots that are not cutting-edge.
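The tic-tac-toe claim can be verified directly. A brute-force minimax search (a sketch of my own, not from any linked article) exhausts every line of play and confirms the game’s value under perfect play is a draw, so added brainpower past that threshold buys nothing:

```python
# Exhaustive minimax over tic-tac-toe: proves that with best play
# by both sides the game is always a draw.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),   # rows
             (0,3,6),(1,4,7),(2,5,8),   # columns
             (0,4,8),(2,4,6)]           # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in WIN_LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def best_value(board, player):
    """Game value for `player` to move: +1 forced win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if all(board):          # no empty squares left
        return 0
    opponent = 'O' if player == 'X' else 'X'
    # Try every legal move; the opponent's best reply is our worst case.
    return max(
        -best_value(board[:i] + [player] + board[i+1:], opponent)
        for i, cell in enumerate(board) if not cell
    )

print(best_value([None] * 9, 'X'))  # 0 -> perfect play always draws
```

The search visits every reachable position from the empty board, which is small enough to finish in well under a second, and the root value of 0 means neither side can force a win.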

There is a ceiling to the level of intelligence an AI must have to perform cognitive tasks as well as humans, and a ceiling to how well a robot must be engineered to perform physical tasks as well as humans. Once those respective ceilings are reached, the vast majority of human demands for cognitive and physical services will be met, yet the fields of AI and robotics will continue advancing. As the cost-performance of technology improves, the prices of AIs and robots designed to satisfy routine human needs will drop until everyone can easily afford them.

We will all have machine servants that can answer every question, provide us with true companionship, and do physical work for us around the house. Today, only the rich enjoy such services, but this century they will become common to everyone, standing as another example of technology raising the standard of living. As wonderful as this sounds, it will not be a utopia, nor will it necessarily make the world more equal for humans. Rich individuals, corporations, and governments will be able to afford more advanced AIs and robots, and more of them, than ordinary people. The disparity will grow when mass unemployment robs most people of their income, leaving them with only enough money (from UBI?) for machines calibrated to satisfy everyday needs (cognitive, physical, emotional) and nothing more.

For those reasons, AI technology will not be truly democratizing. Ordinary people will have access to tools of enormous power, allowing them to do things previously impossible, but that access will be controlled by large companies and entities. Additionally, the best AIs will be too expensive for ordinary people to afford, meaning richer people and organizations will enjoy major advantages as they do now.

Consider the realm of economics. Let’s say it’s 2046 and I have a robot butler that can do any physical task as well as a human. I am enterprising, so I decide to start renting my robot out to mow peoples’ lawns in my neighborhood. After all, it’s a capital good, and I’m a capitalist. My business plan isn’t as solid as it seems.

Problem #1: Since robots have gotten cheap enough for anyone to afford, most of my neighbors already have their own, so their lawns are already being mowed.

Problem #2: The RoboChop Lawn Care company offers grass-cutting services in my neighborhood, does it faster, and charges less money than I do. That’s because they have robots that are specialized for cutting grass and nothing else. One of their wedge-shaped, coffee-table-sized robots with four wheels and an inbuilt rotating blade can zip along four times faster than my robot butler can walk while pushing a traditional lawn mower. And since RoboChop ONLY mows lawns and its robot crew does multiple jobs per day, the company can offer volume discounts for its services that I can’t.

Even though I have a robot and access to AI, I’m still so outclassed in the market by bigger competitors that there’s no niche where I can succeed. At first glance, this clashes with my essay “Will future technologies end capitalism? No.”, but the key ideas are in fact in harmony: yes, the future economy will still be capitalist, and arguably more so than now, since most humans will have easy-to-use capital goods (AIs and robots), but the resource disparity between average people and big businesses will be so large that the former won’t be able to compete in that capitalist economy. A rising tide will lift all boats, but some much more than others.