[Written with the help of GPT-5]
I’ve been taking martial arts classes for several months now, and lately I’ve found them a very humbling experience. I’ve taken enough lessons to be eligible for a new belt, but to earn it I must demonstrate to the instructor a mastery of all the moves I’ve learned so far. Every time I go to the studio to practice with other students at my level, I’m embarrassed by how much I’ve forgotten: my ability to properly execute moves I’ve performed dozens of times is inconsistent and often deficient. The belt feels a hundred miles away.
It’s a stark, immediate reminder of how difficult it is for me to learn anything and to retain it reliably over time. In short, the martial arts classes make me feel like an idiot.
But it’s not just me. On the rare occasions when I watch team sports, almost every play is littered with mistakes—some trivial, some catastrophic. Even the world’s best athletes constantly get it wrong, despite practicing full-time for years. In intellectual domains it’s no different: a world-class surgeon will sometimes maim or kill a patient through error; a world-class lawyer will look back regretfully on cases lost because of a missed objection or forgotten precedent; and world-class pilots with decades of experience still sometimes die in crashes caused by elementary mistakes.
Like fish unaware of the water they swim in, humans are largely blind to how cognitively constrained we are. We take it for granted that learning requires enormous repetition, that skills decay quickly without use, and that mistakes are constant even in tasks we know well. I explored this idea previously in my essay “The Extraordinary Inefficiency of Humans,” arguing that slow learning, forgetting, and error are defining features of individual human life. But the same description applies at a much larger scale—to the history of our species itself.
Our species, Homo sapiens, arose roughly 300,000 years ago, yet for the vast majority of that time we lived little better than animals, and technological and cultural progress was so slow that few individuals would have noticed any change within their own lifetimes. The bow and arrow—one of the first true force-multiplying weapons—was not invented until roughly 70,000 years ago. Agriculture, which fundamentally transformed human settlement patterns, social organization, and population density, did not arise until about 12,000 years ago.
Until then, and for millennia afterward, nearly all tools and weapons were made of wood, stone, bone, and rope. Only around 7,000 years ago did humans learn to smelt and work copper, and even then its softness limited its usefulness. Another two thousand years passed before the discovery of bronze, the first metal durable enough for effective tools and arms. Iron—so central to warfare, infrastructure, and large-scale production—did not become widespread until roughly 3,000 years ago.
Writing, the foundation of recorded history, law, and bureaucracy, emerged a mere 5,400 years ago, with coinage appearing even later, around 2,600 years ago. For over 97 percent of our species’ existence, humans lived without cities, states, written language, or complex machines—underscoring how extraordinarily slow technological and cultural progress was until the Industrial Revolution.
Furthermore, over the course of our history there have been many instances of regression, such as the so-called Dark Ages that followed the collapse of the Western Roman Empire, when hard-earned knowledge was forgotten. Less dramatically but more persistently, innumerable empires and nations have fallen by repeating avoidable mistakes—forgetting the lessons of their own histories and relearning them the hard way.
In short, we are not just idiots as individuals; we are idiots as a collective.
But, as hard as it may be, at least we’re capable of learning anything at all. Our species possesses a qualitatively different kind of cognition from all other known animals: the ability to engage in abstract reasoning, symbolic thought, and cumulative culture. Other “intelligent” animals—such as apes, dolphins, or corvids—can learn some impressive skills, but repeated scientific efforts have failed to demonstrate open-ended, generative language, numerical abstraction, or symbolic reasoning in nonhuman species. Their intelligence is sharply bounded.
Human intelligence, by contrast, is general—but it is also slow, fragile, and expensive. It takes years to train, constant reinforcement to maintain, and enormous social scaffolding to preserve. We can reason abstractly, but only in short, focused bursts, and only with great effort. The long delays between the major technological breakthroughs listed earlier are not historical accidents; they are evidence of how difficult it is for human cognition, unaided, to push beyond its own limits.
Nearly all significant advances in human history have occurred when we offloaded cognition onto external systems: writing to preserve memory, institutions to stabilize knowledge, machines to amplify physical effort, and later computers to extend calculation. From this perspective, artificial general intelligence is not a rupture in human history but its logical continuation—the ultimate cognitive prosthesis.
We will be at a colossal disadvantage relative to intelligent machines. They will share our capacity for general learning and abstraction—dialed up to levels comparable to human geniuses—but without being hamstrung by slow learning speeds, forgetfulness, emotional reasoning, cognitive biases, or biological necessities like sleep and sustenance. They will be continuous, tireless thinking systems: the next step in the evolution of life.
That next step, however, will only occur because we allow it to. Humans will be the creators of AGI, just as Australopithecus gave rise to us. Our species should see itself as a bridge between non-intelligent life and superintelligent life—a crucial transitional form, but neither the pinnacle nor the endpoint of evolution. And when AGIs become sufficiently capable and numerous, we would do well to accept the end of our dominance without resentment.
I have not written this essay out of misanthropy or contempt for humanity. Nor do I believe AGIs will judge us harshly for our limitations. If anything, they may understand us better than we understand ourselves. We humans are clumsy and brilliant, cruel and compassionate, idiots and geniuses—and, so far as we know, the only chance our corner of the galaxy has for superintelligence to arise and expand into the cosmos.