The Stakes

ARTIFICIAL INTELLIGENCE, THE DISTRIBUTION OF PRODUCTIVITY, AND THE WINDOW THAT IS CLOSING NOW.

Three years ago I built Fide — an app to keep women safer on dates — at a personal cost of $80,000. It took a team of five developers about nine months. Last month I built Carl, an AI thinking partner with persistent memory and proactive check-ins, in five weeks, alone, for $200 in AI subscription fees. That delta is not a productivity story. It is an economic displacement story. And it is happening everywhere, simultaneously, right now.

THE IMPACT NOBODY IS TALKING ABOUT

Those five developers on the Fide project didn't get hired for Carl. That's the first link in a chain spanning the entire economy: five developers with less income eat out less often. The restaurant they used to visit five times a week sees them once. The restaurant's margins compress. The line cook's hours get cut. The food distributor's order volume drops. The delivery driver makes fewer runs. The landlord has a harder time filling the retail space when the restaurant closes.

This is not a hypothetical. This is the standard macroeconomic transmission mechanism of productivity-driven labour displacement — the same mechanism that hollowed out manufacturing communities over thirty years — operating now across every sector of the knowledge economy simultaneously, on a timescale measured in months rather than decades.

The optimistic response — that displaced workers will retrain, adapt, become AI-augmented entrepreneurs — requires the same suspension of disbelief as telling the laid-off factory worker of 1985 to become a software developer. Some did. Most didn't. Building policy around the assumption that most will is not optimism. It is a choice to let the people least positioned to absorb the transition bear the cost of it alone.

WHY TECH WORKERS FELT IT FIRST — AND WHO’S NEXT

The layoffs concentrated in the technology sector so far have a simple explanation: technology workers are the ones who know how to deploy AI effectively. That technical literacy has been the last meaningful moat between AI capability and workforce substitution. The models can already do the work. The barrier is friction — knowing how to prompt, how to structure a workflow, which tool to use for which task.

That moat is being actively demolished. OpenAI and Anthropic have each announced approximately $100 billion in enterprise embedding initiatives — not selling AI to technology companies, which already have it, but pre-integrating AI workflows into the Fortune 500 companies that employ the majority of white-collar workers. The paralegal at a law firm, the junior analyst at a bank, the account manager at an insurance company — their employers are about to have AI embedded in their existing tools by the vendor's implementation team, without requiring any technical literacy from the organisation at all.

Anthropic's CEO has suggested a 1-5 year timeline for this transition. That estimate almost certainly overstates the length of the runway — the companies spending $100 billion on enterprise embedding do not behave as though displacement is five years away. The more honest read is that the friction barrier disappears on the timescale of product release cycles, not economic generations. The UX problem that currently separates AI capability from mass deployment is the problem that hundreds of millions of dollars in engineering talent are being paid specifically to solve.

THE HONEST BULL CASE

Suppose Carl works out beautifully for me. Suppose I reach a million users, generating $25 million a month on $7 million in costs. That is an $18 million monthly surplus generated by one person, enabled by infrastructure I didn't build, trained on data produced by labour I didn't pay for, creating efficiency gains that came at the direct cost of employment that would otherwise have existed.

The question of what happens to that surplus is not a personal ethics question. It is a policy question. The answer cannot simply be that the founder keeps it, because for the first time in history, technologically enabled efficiency — that is to say, the cheap automation of job functions — touches every aspect of society. It will mean the end of the middle class and the further stratification of the two classes that remain. That is not an argument against building. It is an argument for building the redistribution mechanisms now, not after the concentration has already occurred.

I am not an opponent of AI. I think it’s the most incredible technology in human history this side of fire. I’m not a doomer; I’m one of its most direct beneficiaries. That is precisely why I can say without equivocation: the current trajectory — in which the gains from the largest labour displacement in human history flow almost entirely to capital — is not sustainable economically, socially, or politically.

THE POLITICAL CONSEQUENCE

Populations that experience rapid economic dislocation without visible institutional response do not wait patiently for equilibrium. They turn toward authoritarian politics, scapegoating, and zero-sum nationalism. This is the documented political economy of every previous wave of mass displacement. What AI is producing is faster, broader, and more structurally comprehensive than anything that preceded it — and the institutional apparatus designed to absorb it was built for a world where human labour was the primary input to production and where change happened over generations, not years.

The political realignment of the 2010s was substantially driven by communities that experienced deindustrialisation without adequate transition support over three decades. That was slow-motion displacement. The current wave is not slow-motion. The institutions have less time to respond and more people to absorb. 

The CEO of OpenAI recently told a BlackRock infrastructure summit that "intelligence is a utility, like electricity or water, and people buy it from us on a meter." He is right that intelligence is becoming infrastructure — the same inevitability that justified public water systems, regulated electricity, and universal broadband. The question his framing deliberately obscures is: who operates it, who sets the price, whose data flows through it, and where the surplus goes. A privately owned intelligence utility extracting margin from every cognitive act of every citizen is not a public good. It is a monopoly on thought — operated by a company projecting a $14 billion loss, speaking at a BlackRock summit about metering human cognition. The window to establish a different model is open now. It will not stay open.

WHAT NEEDS TO HAPPEN

01.   Tax the surplus at the point of generation. The productivity gains from AI-driven labour displacement must be captured through taxation structured to distinguish AI efficiency gains from human-labour-driven ones. The revenue funds the transition — not as charity, but as the return on the implicit social license that made the technology possible.

02.   Build the income floor before the crisis, not during it. Structural displacement from AI will last years, not weeks. The existing unemployment insurance architecture was designed for temporary layoffs in a full-employment economy. A genuine income floor adequate for multi-year structural transition is not a welfare programme. It is the social infrastructure of a functional market economy during a period of rapid technological change.

03.   Assert democratic ownership of cognitive infrastructure now. Cities and governments must establish publicly operated AI infrastructure — at regulated pricing, under data sovereignty requirements, with surplus flowing locally — to establish the institutional precedent that this infrastructure belongs to the public. The window to make that choice is 18 to 24 months. After that, embedded commercial dependency makes unwinding it prohibitively costly.

If the choice is not made deliberately, it will be made by inaction.

This is not a brief about whether AI is good or bad. It is a brief about whether democratic institutions act before the distribution of its gains is determined — or after. Before is tractable. After is crisis management.


LUKE@DAEDALUSVENTURES.CO  ·  APRIL 2026