The AI Economy Debate Has a Collective Action Problem
In February 2026, two pieces about AI and the economy went massively viral. A finance writer published a fictional dispatch from June 2028, describing an economy in freefall: 10% unemployment, a stock market down 38%, white-collar workers driving Uber, and a mortgage market built on incomes that no longer exist. An AI founder wrote a warning to his friends and family that we're in the "seems overblown" phase of something much bigger than COVID. Both pieces spread because they were vivid, specific, and frightening.
The counterarguments followed quickly. A veteran tech analyst argued that humans have always found new work after automation, and that humans will always prefer other humans — creating an economy for labor precisely because it's human. A writer made the historical case that catastrophists have the best stories and the worst track record: agricultural employment fell from 81% to 1% of the workforce and the world didn't end; it just changed.
Both sides have real points. Both sides are also missing something important.
The optimists are probably right about the destination and almost silent on the journey.
The historical argument — that new jobs always emerge after automation — is genuinely compelling, and the pessimists haven't refuted it. But the examples the optimists reach for span centuries. Agricultural displacement played out over two hundred years. The industrial revolution took generations. The internet reshaped the economy over several decades.
The pessimists' scenario plays out in two years.
These are not the same argument. You can simultaneously believe that humans will find new sources of value in the long run and that the transition will be brutal for a generation of workers in the middle. The optimists have essentially proven that the destination is probably fine, while declining to engage seriously with the suffering en route. "History says it works out" was cold comfort to the handloom weavers of 1820 — and one of the optimists actually acknowledges that Dickens documented their squalor, without quite noticing that this concession weakens his own case. If you're going to invoke the agricultural revolution, you have to own all of it, including the part where it was genuinely catastrophic for the people living through it.
The pessimists ignore that AI is deflationary — but which things get cheaper matters enormously.
The doom scenario focuses entirely on the income side of the equation: AI replaces workers, workers stop spending, the consumer economy hollows out. What it doesn't model is the cost side. AI doesn't just destroy jobs — it also makes things dramatically cheaper to produce. If a product that used to require ten engineers now requires one plus AI tools, that product gets cheaper. Real purchasing power can rise even when nominal wages fall.
The 20th century is full of examples of this. Mechanized agriculture collapsed farm employment and made food cheap enough that people could spend their income on entirely new things. The question isn't whether AI will be deflationary — it almost certainly will be — but what it makes cheaper.
If AI drives down the cost of software, financial advice, legal services, and healthcare administration, people who were previously priced out of those things will benefit enormously. But the biggest cost burdens on most households — housing, healthcare, education, childcare — are expensive primarily because of regulatory constraints and supply shortages, not because of labor costs. AI probably won't build more housing if zoning prevents it. It won't make healthcare cheaper if the incentive structures stay broken. If AI deflation mostly hits things that were already cheap, and the expensive things stay expensive, then the productivity gains flow to people who were already comfortable, and everyone else is left with the income loss and none of the relief.
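To make that arithmetic concrete, here is a toy household-budget sketch. Every number in it is an illustrative assumption, not data: a made-up basket, a made-up pay cut, made-up price drops.

```python
# Toy model: a 10% nominal pay cut, with AI deflation hitting different
# spending categories. All numbers are illustrative assumptions, not data.

budget = {                    # monthly spend on a fixed basket
    "housing": 2000,
    "healthcare": 800,
    "software_services": 200,
    "legal_financial": 100,
}
income_before, income_after = 5000.0, 4500.0  # assumed 10% nominal pay cut

def purchasing_power(income: float, price_cuts: dict[str, float]) -> float:
    """Income divided by the cost of the same basket after price cuts."""
    cost = sum(spend * (1 - price_cuts.get(cat, 0.0))
               for cat, spend in budget.items())
    return income / cost

baseline = purchasing_power(income_before, {})

# Scenario A: deflation only in the already-cheap, AI-exposed services.
cheap_only = purchasing_power(
    income_after, {"software_services": 0.5, "legal_financial": 0.5})

# Scenario B: deflation also reaches housing and healthcare.
broad = purchasing_power(
    income_after, {"software_services": 0.5, "legal_financial": 0.5,
                   "housing": 0.2, "healthcare": 0.2})

print(f"cheap-only deflation: {cheap_only / baseline - 1:+.1%}")  # -5.4%
print(f"broad deflation:      {broad / baseline - 1:+.1%}")       # +16.7%
```

Under these invented numbers, deflation confined to services that were already a small share of the budget leaves the household worse off after the pay cut, while deflation that reaches housing and healthcare more than compensates. The point isn't the specific figures; it's that the sign of the outcome flips depending on which categories get cheaper.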
Neither side in this debate models that distribution carefully. The pessimists assume deflation benefits only corporations. The optimists assume abundance is broadly distributed. The actual outcome depends on specific policy choices about housing, healthcare, and education that have nothing to do with AI.
AI deployment is a collective action problem — and nobody has solved one at this scale, this fast.
The natural response to the doom scenario is: couldn't the companies developing this technology just… go more carefully? Couldn't the AI labs choose to deploy more slowly, or design tools that augment workers rather than replace them? The suggestion is more naïve than it sounds.
In a competitive market, the cautious actor loses. A company that chooses slower AI adoption faces a competitor that doesn't — and loses margin, talent, and eventually market share. The pessimists' own feedback loop applies here too: each individual company's rational choice produces a collectively damaging outcome, and no single actor can unilaterally exit that logic.
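The dynamic described above is the textbook prisoner's dilemma. A minimal sketch, with invented payoffs chosen only to reproduce the incentive structure:

```python
# The deployment race as a two-player prisoner's dilemma.
# Payoffs are made-up illustrative numbers; higher = better for that firm.

# (my_move, rival_move) -> my payoff
PAYOFF = {
    ("cautious", "cautious"): 3,   # both deploy carefully: good joint outcome
    ("cautious", "fast"):     0,   # I hold back, my rival takes my market
    ("fast",     "cautious"): 4,   # I race ahead and win share
    ("fast",     "fast"):     1,   # everyone races: collectively damaging
}

def best_response(rival_move: str) -> str:
    """The individually rational move, given what the rival does."""
    return max(("cautious", "fast"), key=lambda m: PAYOFF[(m, rival_move)])

# "fast" is the best reply to either rival strategy, so both firms race,
# even though (cautious, cautious) beats (fast, fast) for both of them.
assert best_response("cautious") == "fast"
assert best_response("fast") == "fast"
```

With any payoffs of this shape, racing is a dominant strategy: no firm can improve its position by unilaterally choosing caution, which is exactly why exhortations to individual companies miss the structure of the problem.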
What makes this truly intractable is national security. Even the AI labs most committed to careful development face government pressure running in the opposite direction. The underlying argument is simple: if the United States slows down, China doesn't. This logic is bipartisan, largely correct on its own terms, and functions as a permanent override of any safety consideration. AI development has been conscripted into great power competition in a way that forecloses individual restraint — as illustrated this week when the Pentagon threatened to blacklist Anthropic for maintaining restrictions on military use of its models.
The only mechanism that could address a collective action problem at this scale is international coordination. And here history offers something genuinely useful: we have solved collective action problems involving dangerous, dual-use technology before. The Montreal Protocol phased out ozone-destroying chemicals across 197 countries. Nuclear non-proliferation, imperfect as it is, kept a technology capable of ending civilization from spreading to every state that wanted it. The Chemical Weapons Convention got nearly every nation on earth to verifiably destroy stockpiles. These weren't easy, and none of them worked perfectly. But they worked well enough to matter, and they were negotiated under serious geopolitical pressure between adversaries who trusted each other very little. The question for AI isn't whether international coordination is possible in principle. It's whether it's possible at the pace this technology is moving — and that's a genuinely open question that neither the doomers nor the optimists are asking.
The debate between doom and optimism is real and the stakes are high. But both sides are largely arguing about whether the destination is good or bad, while the more important question — what happens during the transition, who bears the cost, and whether anyone has the power to shape the path — is going mostly unasked.