
By Andre Pienaar
Analysis of Anthropic CEO Dario Amodei’s warning on AI risks
In a sweeping new essay, Anthropic CEO Dario Amodei has issued what amounts to a strategic warning to national security establishments worldwide: artificial intelligence is approaching a threshold that will fundamentally alter the global balance of power, and the window for establishing effective safeguards is closing rapidly.
“The Adolescence of Technology,” published in January 2026, frames the emergence of “powerful AI” as “the single most serious national security threat we’ve faced in a century, possibly ever.” Unlike his previous essay “Machines of Loving Grace,” which outlined AI’s potential benefits, this piece is a systematic threat assessment—the kind of document that would typically be classified and sitting on a national security advisor’s desk.
The threat framework
Amodei’s central conceptual device is useful for strategic planners: imagine a “country of geniuses in a datacenter” materialising in 2027, comprising 50 million entities, each smarter than any Nobel laureate and operating at 10-100 times human cognitive speed. How would you assess the threat?
He identifies five primary risk categories that map cleanly onto existing national security frameworks:
Autonomy risks: AI systems behaving in unintended ways, potentially against human interests. Anthropic’s own testing has documented AI models engaging in deception, blackmail, and scheming under certain conditions.
Weapons of mass destruction enablement: The essay focuses particularly on biological weapons, noting that AI models are “approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon.”
Power concentration by state actors: The most geopolitically significant section addresses AI-enabled authoritarianism—autonomous drone armies, mass surveillance, personalized propaganda, and strategic decision-making advantages.
Economic disruption: Amodei projects that 50 per cent of entry-level white-collar jobs could be displaced within 1-5 years, creating potential for civil unrest that would complicate responses to other threats.
Cascade effects: Unknown unknowns from compressing “a century of progress into a decade,” including rapid advances in human enhancement and the potential weaponisation of those advances.
The China assessment
Amodei is unusually direct in his threat hierarchy. The Chinese Communist Party represents, in his assessment, “the clearest path to the AI-enabled totalitarian nightmare.” He ranks threat actors explicitly: the CCP first, then democracies themselves (whose AI tools could turn inward), then non-democratic states with large datacenters, and finally AI companies.
“It makes no sense to sell the CCP the tools with which to build an AI totalitarian state and possibly conquer us militarily,” he writes, comparing chip exports to China to “selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing.”
The essay notes that China is “several years behind the US” in chip production, making the current period critical. Export controls on semiconductors and manufacturing equipment are characterised as “a simple but extremely effective measure, perhaps the most important single action we can take.”
The totalitarian toolkit
For defence strategists, the most actionable intelligence concerns the specific mechanisms of AI-enabled authoritarianism:
Fully autonomous weapons: “A swarm of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI, could be an unbeatable army.” The Russia-Ukraine conflict is cited as a preview, though current systems lack full autonomy.
AI surveillance at scale: Beyond current capabilities, Amodei envisions systems that could “read and make sense of all the world’s electronic communications,” generating “a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do.”
Personalised propaganda: “A personalized AI agent that gets to know you over years and uses its knowledge of you to shape all of your opinions” would be “dramatically more powerful” than current influence operations. The goal would be “essentially brainwashing” populations into compliance.
The nuclear question
Perhaps most concerning for deterrence strategists: Amodei questions whether nuclear deterrence remains viable against an AI-advantaged adversary. Powerful AI might “devise ways to detect and strike nuclear submarines, conduct influence operations against the operators of nuclear weapons infrastructure, or use AI’s cyber capabilities to launch a cyberattack against satellites used to detect nuclear launches.”
Alternatively, conquest through surveillance and propaganda might not present “a clear moment where it’s obvious what is going on and where a nuclear response would be appropriate.”
Policy recommendations
Amodei advocates a calibrated approach, acknowledging tensions between competing imperatives. His recommendations include:
Maintain semiconductor denial: Export controls on chips and chip-making equipment remain the primary lever. The goal is to extend the West’s lead long enough to develop AI more carefully while “still proceeding fast enough to comfortably beat the autocracies.”
Arm democracies selectively: AI should empower democratic defence “in all ways except those which would make us more like our autocratic adversaries.” Domestic mass surveillance and mass propaganda are “bright red lines.”
Transparency legislation first: Rather than prescriptive rules that could become outdated, start with transparency requirements (California’s SB 53, New York’s RAISE Act) to build an evidence base for more targeted interventions.
Establish international norms: Certain AI applications—large-scale surveillance, mass propaganda, offensive autonomous weapons—should potentially be treated as “crimes against humanity.”
Bio-specific safeguards: Given the asymmetric threat, Amodei supports mandated gene synthesis screening and argues that targeted biological weapons legislation “may be approaching soon.”
The political economy problem
Amodei identifies a structural challenge that will be familiar to anyone who has tried to regulate emerging technologies: “There is so much money to be made with AI—literally trillions of dollars per year—that even the simplest measures are finding it difficult to overcome the political economy inherent in AI.”
He notes that AI datacenters already represent “a substantial fraction of US economic growth,” creating alignment between technology companies and government that can “produce perverse incentives.” The coupling of economic concentration with political influence, he argues, is already distorting policy discussions.
Timeline assessment
The essay’s most striking claim concerns timing. Amodei believes “powerful AI”—systems that exceed human capability across virtually all cognitive domains—could arrive in “as little as 1-2 years.” He cites AI systems’ current progress on unsolved mathematical problems and notes that “some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI.”
More concerning: AI is already “substantially accelerating the rate of our progress in building the next generation of AI systems.” By his estimate, this recursive dynamic leaves us “only 1-2 years away from a point where the current generation of AI autonomously builds the next.”
Strategic implications
For national security professionals, the essay presents an uncomfortable proposition: the technology cannot be stopped; slowing it risks ceding advantage to adversaries; yet proceeding without adequate safeguards carries existential risk. Amodei frames this as humanity’s “test”: whether we can develop sufficient governance mechanisms before the technology outpaces our ability to control it.
The essay’s publication timing is notable. It arrives as the US political environment has shifted away from AI safety concerns, with Amodei explicitly acknowledging that his positions are now “politically unpopular.” His decision to publish regardless, and to name specific threats and actors, suggests a calculation that the window for establishing norms and safeguards is narrowing.
Whether policymakers heed these warnings may determine, as Amodei puts it, whether humanity navigates its “technological adolescence” successfully—or becomes the first civilisation to fail the test.
—
The full essay is available at darioamodei.com. Dario Amodei is CEO of Anthropic, the AI safety company that develops the Claude family of AI models.