The most important insight in The 2028 Global Intelligence Crisis is that artificial intelligence may trigger not merely technological disruption, but a structural economic regime change. The paper argues that the modern economy—especially in advanced nations—was built upon one foundational assumption: human intelligence is scarce and valuable. Once that assumption breaks, many of the institutions built upon it begin to destabilize simultaneously.
Rather than presenting a conventional recession scenario, the essay explores a future where AI success itself becomes the trigger for systemic economic stress. It is not a story of technology failing. It is a story of technology working too well.
The crisis described emerges from several interlocking forces: labor displacement, collapsing intermediation, financial system fragility, mortgage instability, and fiscal pressure on governments. These forces combine into a self-reinforcing feedback loop that the authors call the Intelligence Displacement Spiral.
For most of modern history, intelligence was the limiting input in economic production.
Capital could be replicated. Machines could be built. Natural resources could be substituted. But the number of people capable of analyzing information, writing software, negotiating deals, or managing systems was limited.
As the paper notes, entire economic institutions—from labor markets to tax systems—are built around the scarcity of human cognition.
Artificial intelligence challenges that scarcity directly.
In the scenario presented, AI agents become capable of performing tasks across coding, research, management, customer service, and decision-making. These systems continuously improve while their cost collapses. Instead of intelligence being expensive and scarce, it becomes cheap and effectively infinite.
When the most valuable economic input suddenly becomes abundant, the premium historically attached to human intelligence begins to disappear.
This is the central structural shock.
The first major mechanism described in the paper is a negative economic feedback loop triggered by automation.
Companies adopt AI to reduce labor costs and improve margins. Layoffs follow, particularly in white-collar professions such as software engineering, consulting, finance, and knowledge work.
Initially, this appears positive for corporations. Profits rise because payroll shrinks.
But the broader macroeconomic consequences quickly become destabilizing.
Workers who lose high-income positions reduce spending. Lower consumption weakens businesses that rely on discretionary demand. Those firms respond by cutting more workers and investing further in automation.
The loop becomes self-reinforcing:
AI improves → Companies cut workers → Consumer spending declines → Companies adopt more AI to protect margins → AI improves further.
The paper describes this as a feedback loop with no natural brake, unlike normal recessions that eventually self-correct.
In traditional downturns, falling demand leads to lower interest rates and eventual recovery. But when the cause is structural automation rather than cyclical imbalance, the usual recovery mechanisms do not apply.
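The spiral above can be sketched as a toy difference-equation model. All parameters here are illustrative assumptions, not figures from the paper; the point is only the qualitative shape—automation ratchets up each round while household consumption drifts down, with nothing in the loop to reverse either trend.

```python
# Toy model of the Intelligence Displacement Spiral (illustrative numbers).
# Each round: cheaper AI -> firms automate more tasks -> fewer wages paid ->
# lower consumption -> firms automate further to protect margins.

def simulate_spiral(rounds=10, automation=0.10, consumption=1.00,
                    adoption_rate=0.5, spend_inertia=0.9):
    """Return per-round (automation share, consumption index)."""
    history = []
    for _ in range(rounds):
        # Firms automate a fraction of the remaining human-performed tasks.
        automation += adoption_rate * (1 - automation) * 0.2
        # Household consumption slowly tracks the shrinking human work share.
        consumption = spend_inertia * consumption + (1 - spend_inertia) * (1 - automation)
        history.append((round(automation, 3), round(consumption, 3)))
    return history

path = simulate_spiral()
# Automation rises monotonically and consumption falls monotonically:
# the loop has no built-in brake.
```

Unlike a cyclical model, there is no term here that strengthens demand as prices or rates fall—which is exactly the "no natural brake" property the paper describes.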
One of the most striking concepts introduced in the document is “Ghost GDP.”
In this scenario, productivity rises dramatically because AI systems produce more output with fewer workers. National statistics show strong productivity and stable GDP growth.
Yet the real economy deteriorates.
This occurs because machines do not consume.
Human wages circulate through the economy, creating demand for housing, travel, retail, education, and services. But if production increasingly occurs without human labor, income shifts from workers to owners of compute infrastructure.
The result is economic output that exists on paper but does not circulate through households.
The document describes this paradox succinctly: machines produce wealth, but they do not buy anything.
Since consumer spending accounts for roughly 70% of GDP in advanced economies, this shift threatens the fundamental engine of growth.
Another profound insight in the essay concerns economic intermediation.
Over the past half-century, large portions of the economy have been built around managing friction in human decision-making.
People tolerate inefficiencies because searching for the best option takes time. Consumers accept higher prices because switching services is inconvenient. Many industries extract rents by navigating complexity on behalf of customers.
AI agents remove these frictions.
In the scenario described, AI assistants continuously scan markets, negotiate prices, cancel subscriptions, optimize purchases, and route transactions to the cheapest option.
This hollows out entire categories of economic middlemen—comparison services, brokers, subscription managers, and other intermediaries whose business is navigating complexity for customers.
These industries rely heavily on human cognitive limitations. When machines remove those limitations, their economic value collapses.
The essay argues that trillions of dollars in enterprise value depended on those inefficiencies remaining intact.
Closely related is the destruction of habit-based economic moats.
Many digital platforms dominate markets because users repeatedly choose the familiar option rather than the optimal one. Food delivery apps, online retailers, and subscription platforms benefit from inertia.
AI agents do not experience inertia.
Instead of opening the same app every time, an AI assistant checks dozens of providers simultaneously and selects the best price.
The result is a sudden collapse of brand-based loyalty and pricing power.
Industries that once relied on habitual consumer behavior find themselves competing in pure price markets.
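The collapse of habit-based moats reduces to a very short decision rule. The provider names and prices below are invented for illustration; the point is that an agent's choice function contains no term for familiarity.

```python
# Minimal sketch of agent-driven purchasing: with no inertia, brand
# loyalty reduces to a price comparison (names and prices are made up).

def choose_provider(quotes):
    """An agent with no habit simply takes the cheapest quote."""
    return min(quotes, key=quotes.get)

quotes = {"FamiliarApp": 14.99, "RivalA": 12.49, "RivalB": 12.47}
best = choose_provider(quotes)
# "FamiliarApp" loses the sale; habit never enters the decision.
```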
The disruption extends into financial infrastructure as well.
AI agents executing transactions autonomously begin routing payments through the most efficient settlement networks. This often means bypassing traditional card networks and using digital payment rails such as stablecoins.
The paper highlights how this could erode the 2–3% interchange fees that underpin the profitability of card issuers and payment processors.
In a world of machine-to-machine transactions, expensive payment rails appear inefficient and are rapidly replaced.
Once again, friction disappears.
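The scale of the fee erosion is easy to see in rough numbers. The 2–3% interchange range comes from the text; the near-zero stablecoin fee and the payment volume are illustrative assumptions.

```python
# Rough arithmetic on interchange erosion: the 2-3% fee cited in the
# text vs. an assumed near-zero settlement rail.

def annual_fees(volume, fee_rate):
    """Fees paid on a given payment volume at a flat rate."""
    return volume * fee_rate

volume = 1_000_000.0                   # illustrative machine-to-machine volume
card = annual_fees(volume, 0.025)      # mid-range 2.5% interchange
stable = annual_fees(volume, 0.001)    # assumed 0.1% stablecoin rail
savings = card - stable
# Routing $1M of payments off the card network keeps ~$24,000 with the
# buyer -- a gap large enough that agents switch rails immediately.
```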
Financial markets amplify the crisis.
The essay highlights the vulnerability of private credit markets, particularly loans tied to software companies whose valuations assumed continuous revenue growth.
When AI automation undermines SaaS business models, those revenues decline.
Loans that were structured on the assumption of recurring subscription income begin to default. Because private credit expanded rapidly during the 2010s and 2020s, the losses ripple through large portions of the financial system.
The complexity of these structures—often involving insurance companies, offshore reinsurers, and private equity sponsors—makes it difficult to determine where the losses ultimately reside.
Opacity increases panic.
Perhaps the most unsettling implication of the paper involves housing markets.
Unlike the 2008 crisis, where mortgages were poorly underwritten, the loans in this scenario were sound when issued.
Borrowers had strong credit scores, steady employment, and verified income.
The problem is that the future income assumptions embedded in those loans become invalid.
As white-collar wages decline due to AI displacement, households struggle to maintain mortgage payments. The borrowers themselves are still creditworthy—but the economic structure that supported their income has shifted.
This creates stress in the $13 trillion U.S. mortgage market.
The loans were good when written, but the world changed afterward.
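The mechanism can be shown with a toy payment-to-income check. All figures are illustrative assumptions, not data from the paper: the loan clears a conventional affordability threshold at origination, and the same loan fails it once the income assumption breaks.

```python
# Toy underwriting check: the loan is sound when written, then the
# income assumption behind it breaks (all figures are illustrative).

def payment_to_income(annual_payment, annual_income):
    """Share of income consumed by the mortgage payment."""
    return annual_payment / annual_income

payment = 36_000.0
at_origination = payment_to_income(payment, 120_000.0)     # 0.30 -- sound
after_displacement = payment_to_income(payment, 80_000.0)  # 0.45 -- stressed
# Nothing about the underwriting was wrong; the borrower's income path
# changed after the loan was issued.
```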
The crisis also challenges government finances.
Modern fiscal systems depend heavily on taxes derived from labor income—payroll taxes, income taxes, and consumption taxes.
If labor’s share of GDP declines sharply because machines perform more work, government revenue falls even while economic output remains high.
The document notes that labor’s share of GDP could fall dramatically as AI productivity rises.
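The fiscal squeeze follows from simple arithmetic. The tax rate and labor shares below are illustrative assumptions; only the structure—revenue tied to the wage bill, not to output—comes from the text.

```python
# Toy fiscal arithmetic: receipts tied to labor income fall even as
# GDP holds steady (tax rate and labor shares are assumptions).

def labor_tax_revenue(gdp, labor_share, tax_rate=0.30):
    """Revenue from payroll/income-style taxes on the wage bill."""
    return gdp * labor_share * tax_rate

gdp = 100.0
before = labor_tax_revenue(gdp, labor_share=0.60)  # 18.0
after = labor_tax_revenue(gdp, labor_share=0.35)   # 10.5
# Same measured output, roughly 40% less labor-based revenue -- the gap
# that motivates AI taxes or compute-based sovereign wealth funds.
```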
Governments therefore face a paradox: measured output and productivity remain high, yet the labor-based tax base that funds public services steadily erodes.
This creates pressure for new policy frameworks such as AI taxes, sovereign wealth funds based on compute infrastructure, or direct transfers to households.
Beyond economics, the scenario anticipates rising social tension.
If AI wealth accrues primarily to the owners of computing infrastructure and early investors, inequality could accelerate dramatically.
Public resentment may shift from financial elites—who were blamed after the 2008 crisis—to technology companies and AI labs.
Political polarization intensifies as governments struggle to respond.
Ultimately, the essay frames the crisis as the unwinding of the intelligence premium.
For centuries, human intelligence commanded high wages because it was rare.
Now machines replicate that intelligence at scale.
The transition to this new equilibrium may be painful because institutions—from labor markets to mortgage underwriting—were built on the assumption that human cognitive labor would always remain scarce.
When that assumption disappears, the system must reprice nearly everything.
Despite its dramatic scenario, the paper ends on a note of cautious optimism.
The crisis described is not inevitable, nor is it necessarily catastrophic. It represents a thought experiment designed to highlight the risks that arise when technological change outpaces institutional adaptation.
The authors emphasize that economies can eventually find a new equilibrium.
But reaching that equilibrium will require entirely new frameworks for distributing the gains from machine intelligence.
The real question is not whether AI will transform the economy. That transformation is already underway.
The question is whether society can redesign its institutions fast enough to keep pace with the technology now transforming them.