[In Perspective]
Shanghai

The 2028 AI Crisis We Fear Versus the One We Should

March 9, 2026

Systems & Signals

Business landscapes are shaped not just by headline disruptions, but by the structural shifts beneath them. This series decodes the forces transforming China and global markets, from AI adoption and innovation ecosystems to trade recalibration and institutional strategy. Drawing on on-the-ground reporting and strategic analysis, the column offers executives and policymakers timely insights into how technology, governance and human capability coevolve in an age of rapid change.

Caption: This AI-generated image shows the dilemma and crisis ahead of us all.

I recently helped my wife review her application for national research funding on AI-assisted echocardiography. She's a doctor leading a clinical team using machine learning to improve the detection and prediction of heart disease.

The proposal didn't promise automation of physicians or dramatic breakthroughs. It focused on reducing diagnostic error, integrating fragmented data and redesigning workflow so doctors could concentrate on interpretation rather than measurement.

The aim was not replacement but restructuring. Artificial intelligence does not remove clinicians from care. It changes how they allocate their time and attention. Routine quantification moves to the model. Judgment and oversight remain human.

That experience shaped how I read "The 2028 Global Intelligence Crisis," a thought experiment published by Citrini Research. The essay imagines a near future in which artificial intelligence becomes deeply embedded in financial infrastructure, corporate management and professional services. In its scenario, model failures trigger panic.

Automated trading amplifies volatility. Credit tightens and a contraction follows. White-collar employment falls. Asset prices reset.

The narrative is fictional but the response was not. Investors debated its assumptions, and AI-linked equities moved. A hypothetical crisis briefly entered real valuations.

Caption: Chinese social media posts show hundreds of people lining up for free installation of OpenClaw in front of the Tencent Towers in Shenzhen, Guangdong Province.

Why would a scenario move markets?

Because markets price expectations. They convert stories about the future into present capital allocation. When a narrative presents a coherent chain of rapid adoption, concentrated infrastructure, rising leverage and eventual failure, investors respond.

The 2028 scenario is not about machines rebelling. It is about concentration risk, governance gaps and mispriced dependency.

In that respect, it resembles earlier technological expansions. Railroads, electrification and the internet all attracted heavy initial investment before their use and effectiveness matured. Each produced over-investment and then correction. Institutions adapted only after stress exposed weaknesses.

The resonance of the 2028 narrative reflects concerns already visible in advanced economies.

Professionals see AI models drafting contracts, generating code and analyzing data. They question the durability of their expertise. Investors recognize that pensions and household portfolios are heavily exposed to technology firms. They understand that a repricing would not be contained. Organizations increasingly rely on AI for lending, hiring and compliance decisions. As judgment shifts from individuals to systems, many sense a loss of agency.

These concerns are rational. But the leap from transition to collapse assumes that AI will scale while human capability stands still. History suggests otherwise.

The steam engine reorganized labor. Electrification redesigned production. The internet reshaped media and finance. Each transformation displaced roles and reallocated capital. None eliminated human relevance. They changed which skills carried value.

AI is likely to follow that pattern: reorganization rather than rupture.

Routine analysis and pattern recognition will move to models. Human work will concentrate on framing problems, defining constraints and auditing outputs.

Advantage will belong to those who understand a system's assumptions and limitations. The economic divide will widen along capability lines. Integrating machine output into institutional processes, evaluating uncertainty and managing model risk will become core competencies. Those who treat AI as a substitute for thought will lose leverage. Those who treat it as a tool that requires judgment will gain it.

If a crisis emerges in the coming decade, it is unlikely to stem from intelligence exceeding human capacity. It will stem from inadequate adaptation. Firms that build dependence without redundancy will face stress. Investors who ignore correlation risk in AI-linked assets will confront repricing. Workers who resist skill expansion will face displacement. Regulators who overlook systemic exposure will encounter instability.

Adaptation requires deliberate investment.

Individuals need literacy in probabilistic reasoning, data integrity and model structure. They do not need to become engineers. They need to understand how outputs are generated and where systems can fail.

Firms must redesign governance to address concentration and dependency risk with the same discipline applied to credit and liquidity. Capital allocators must price systemic exposure, not only growth potential. Policymakers should prioritize mobility and retraining over preserving tasks that technology will inevitably transform.

Technology acts as an amplifier. It increases productivity where institutions are disciplined. It increases fragility where oversight is weak. It widens capability gaps and accelerates feedback.

The 2028 scenario resonated because it identified vulnerability. But vulnerability is not destiny. Markets adjust. Institutions reform. Professionals retrain when incentives require it.

The defining divide of the next decade will not separate humans from machines. It will separate those who build capability from those who defend inertia.

Artificial intelligence will continue to scale across medicine, finance and industry. The outcome will depend less on model performance than on human response.

In her research proposal, my wife does not present AI as a replacement for physicians. She presents it as a clinical instrument that requires supervision, training and institutional accountability. The system expands diagnostic reach. It increases the demand for disciplined judgment.

That is a more realistic template for the AI age. The danger is not that machines will outthink us. It is that we will underinvest in thinking ourselves. The future will hinge not on whether artificial intelligence advances, but on whether we do.

(The author is an adjunct research fellow at the Research Center for Global Public Opinion of China, Shanghai International Studies University, and founding partner of 3am Consulting, a consultancy that specializes in global communications.)

Editor: Liu Qi

