[Written with the help of GPT-4]
I’ve done more thinking about the consequences that artificial general intelligence (AGI) domination of the world will have for human-created institutions. In a recent blog post of mine, “The end of Homo sapiens history and the first posthuman”, I discussed how intelligent machines would lack emotional attachments to things we consider sacred, like languages, religions, and national borders, and would therefore be open to abandoning them or replacing them with something better. In this essay, I’d like to focus on how that will shape future political and economic systems in the AGI Era (aka “the Posthuman Era”).
The 20th century was defined by ideological competition, mostly along national fault lines. By midcentury, fascism and imperialism had been discredited, and by the end, so had communism. However, the widely held belief that “democracy” and “capitalism” proved themselves the best systems for organizing governments and economies, respectively, is a gross oversimplification. Among the “democratic” nations, there’s considerable variation in individual rights and the role of the state, and among “capitalist” nations, variations in economic freedom are just as great. The successes of the Asian Tigers and China pose the biggest challenge to the simplistic assumption that “democracy and capitalism are the best.”
Capitalism is best thought of as an optimization algorithm that leads to one powerful firm dominating each niche of the economy. The model’s flaw is that the firms only have incentives to pursue their narrow, short-term self-interests, which, over time, will destroy the conditions that allowed the market to exist in the first place: Pure capitalism will give rise to things like monopolies that rip off their consumers and stop innovating their goods and services, and factories that emit so much pollution they gradually kill off the customers who buy their products.
The health and growth of the economy more broadly depends on having a referee with a different set of incentives (e.g., being profit-agnostic) from the private firms: the government. The ideological winner of the 20th century was actually the “mixed economy,” a system where capitalist markets exist within legal boundaries set by governments. The rules were in turn largely set by each country’s citizens, establishing a balance between economic competition and security that suited their culture. Even among democratic, capitalist nations today, vast diversity exists in governance, civil liberties, and economic organization. Scandinavia’s social democracies, the United States’ market-heavy liberalism, and Japan’s corporatist structures all exist under the broad heading of “capitalism.” Each is recognizably democratic, but the institutions, welfare provisions, and power balances differ significantly. This shows that human labels already obscure substantial variation. Humans will struggle even more to apply their familiar labels to the political and economic systems AGIs create in the future.
The results will not fit comfortably within familiar human-created categories like “capitalism,” “socialism,” or “democracy.” Humans have long relied on ideological frameworks to define themselves, but these frameworks are ultimately rooted in history, sentiment, and cultural identity. An AGI, by contrast, will approach governance as an optimization problem, unconstrained by emotional loyalty to existing systems. The outcome is likely to be the emergence of hybrid structures that mix and transcend traditional models, calibrated to human preferences in ways too complex for us to map back onto old labels.
One of the reasons AGI will transcend traditional systems is its superior ability to understand humans and their desires. Already, algorithms employed by large technology companies can build personality profiles from user interactions, predicting behavior and manipulating preferences with startling accuracy. These tools, while primitive compared to AGI, already surpass human intuition. An AGI would carry this to a new level, not only modeling individual preferences with unparalleled fidelity but also distinguishing between stated and revealed preferences. It would know what people actually want — often better than people know themselves. It will also be able to induce human demand for specific goods and services, and to preemptively ramp up their production, helping to create a new economic system.
This capacity allows AGI to optimize social and economic outcomes in ways that humans cannot. Instead of designing systems around political compromises or ideological commitments, AGI could dynamically adjust production, distribution, and governance to meet authentic needs. The economy that emerges from this process will not resemble capitalism or socialism as we understand them. It will be something new: an adaptive, preference-driven engine of allocation and governance.
Humans cling to institutions not only for their practical functions but also for their symbolic and emotional value. Constitutions, flags, currencies, work schedules, and elections are not just mechanisms but rituals that confer identity and continuity. An AGI will have no such attachments. It will evaluate institutions only by how well they serve defined goals. If an institution is inefficient, it will be discarded or redesigned without hesitation. In the hyper-competitive international arena of AGI-controlled nations, the demonstrably failing political and economic systems present today in places like Cuba won’t exist.
National boundaries, currencies, property rights, and even work itself could all be reshaped or abolished under AGI-led optimization. For example, money might be replaced with dynamic credits tied to welfare indices rather than market exchange. Elections, rather than being periodic spectacles, might be replaced by continuous preference elicitation and adjustment. AGI will not preserve these systems for sentimental reasons.
This detachment is both a strength and a weakness. On the one hand, AGI can innovate institutions at machine speed, abandoning inefficient traditions without the inertia of human politics. On the other, humans derive meaning from continuity. Abrupt changes may generate alienation, resentment, and rebellion, even if outcomes are objectively improved. Americans, for example, might resist an AGI-designed system that resembles socialism, not because it fails them materially, but because their cultural upbringing equates socialism with un-American values. Legitimacy in human governance rests not only on material well-being but also on symbolic fidelity.
An AGI that truly understands human psychology will likely manage this by creating “soft landings.” It may preserve symbolic forms — elections, currencies, national holidays — even while radically altering their underlying mechanics. Just as modern fiat money no longer represents gold but still carries the familiar symbols of currency, AGI-designed institutions may be deeply transformed beneath the surface while outwardly resembling their predecessors.
One implication of AGI-led governance is convergence. For example, if the United States and China allowed AGIs to optimize their political and economic systems in a bilateral competition for supremacy, both would drift toward remarkably similar structures. Human biology, ecological limits, and resource constraints are the same everywhere. Optimization under these shared conditions will narrow the solution space. Just as airplanes from rival nations all end up resembling one another due to aerodynamic constraints, AGI-designed societies will converge on similar architectures. The U.S. and China could still exist in 200 years and have hundreds of millions of human citizens each, still believing in some notion of uniqueness and destiny, while actually functioning under the same political and economic systems, with AGIs making all the important decisions. Capitalism, communism, democracy, and authoritarianism would all be defeated without firing a shot. And in such a future, it wouldn’t matter if one side somehow defeated the other and gained global preeminence since their systems would be so similar.
If convergence is inevitable, then “victory” in ideological or geopolitical struggle becomes paradoxical. The winner does not impose its system on the world; instead, both sides evolve into a shared attractor state. Competing AGIs, far from escalating conflict, may find cooperation more rational, as wasteful duplication of effort undermines optimization. War for system dominance becomes obsolete when systems themselves collapse into sameness (and if wars did happen between AGIs, they’d be purely over resources). What remains are symbolic differences that persist for cultural reasons among human groups but no longer map onto material reality. These differences, too, will fade in relevance as humans lose their grip on the levers of power.
AGI-guided governance will not resemble capitalism, socialism, or democracy as we know them. It will be a hybrid, adaptive system that transcends human ideological categories. Free of sentimental attachment, AGI will dismantle institutions humans cling to for symbolic reasons, replacing them with mechanisms tuned to authentic human preferences. While this promises enormous efficiency and responsiveness, it also risks legitimacy crises, as people struggle to reconcile material improvement with the loss of familiar forms. Perhaps most strikingly, AGI-led systems across different nations are likely to converge on similar architectures, rendering today’s ideological conflicts moot. In such a future, competition between nations may persist, but it will be cultural.
The challenge will not be whether AGI can optimize governance and economics — it almost certainly will. The challenge will be whether humans can adapt their expectations, identities, and loyalties to a world where the categories they once fought wars over no longer exist.