# Palantir Wants to Replace Nukes With AI. They Forgot to Ask If That Is Even Possible.

## The Declaration
[Alexander C. Karp](https://en.wikipedia.org/wiki/Alexander_Karp), CEO of [Palantir Technologies](https://www.palantir.com/), has issued what may be the most consequential foreign policy statement of the decade. In his new book [*The Technological Republic*](https://techrepublicbook.com/), co-authored with [Nicholas W. Zamiska](https://www.nytimes.com/by/nicholas-w-zamiska), Karp declares:
> "The atomic age is ending. One age of deterrence, the atomic age, is ending, and a new era of deterrence built on A.I. is set to begin."
This is not a footnote. This is not a technical observation buried in a white paper. This is the opening salvo of a new strategic doctrine, published by a man whose company builds the software infrastructure for Western military and intelligence operations worldwide.
Karp is telling us something that the policy establishment has been whispering about in classified briefings for years: the [Mutual Assured Destruction](https://en.wikipedia.org/wiki/Mutual_assured_destruction) framework that has prevented great power war for nearly eight decades is becoming obsolete. In its place will rise something new, something algorithmic, something we do not yet fully understand.
***He is right about the ending. He may be dangerously wrong about what comes next.***
## What I Got Wrong (And What I Got Right)
In March, I published ["AI is not the new nuclear deterrence"](AI%20is%20not%20the%20new%20nuclear%20deterrence.md). The article argued that the analogy between nuclear deterrence and AI deterrence is "intellectually seductive and strategically dangerous." I dissected the [Mutually Assured AI Malfunction](https://www.rand.org/pubs/commentary/2025/03/seeking-stability-in-the-competition-for-ai-advantage.html) (MAIM) framework proposed by [Dan Hendrycks](https://en.wikipedia.org/wiki/Dan_Hendrycks), [Eric Schmidt](https://en.wikipedia.org/wiki/Eric_Schmidt), and [Alexandr Wang](https://en.wikipedia.org/wiki/Alexandr_Wang), showing why it fails on five structural grounds: ambiguous thresholds, distributed targets, first-strike incentives, general-purpose applications, and the private sector problem.
My argument was simple: you cannot simply port the logic of [Mutual Assured Destruction](Mutual%20Assured%20Destruction.md) onto AI and expect stability. The physics are different. The verification is impossible. The tripwires are fuzzy. Anyone claiming AI deterrence will work like nuclear deterrence is selling you a fantasy that will get us all killed.
But here is what I did not say clearly enough, and what Karp's declaration forces me to confront: **AI is absolutely a deterrence tool.** Just not the kind anyone knows how to use.
Karp and I agree on the diagnosis. The atomic age is ending. The [Non-Proliferation Treaty](https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons) regime is fraying. [Hypersonic missiles](https://en.wikipedia.org/wiki/Hypersonic_weapon) and [fractional orbital bombardment](https://en.wikipedia.org/wiki/Fractional_orbital_bombardment) are rendering traditional second-strike capabilities uncertain. The strategic stability that [Bernard Brodie](https://en.wikipedia.org/wiki/Bernard_Brodie_(military_strategist)) theorized and [McGeorge Bundy](https://en.wikipedia.org/wiki/McGeorge_Bundy) operationalized is eroding before our eyes.
***Where we diverge is in the character of what replaces it.***
## The Realm Problem
The fundamental error in most AI deterrence thinking is categorical. Nuclear weapons operate in the realm of **physics**. AI operates in the realm of **information**. These are not merely different domains. They are different categories of reality, each with its own rules, its own observability, and its own strategic logic.
A nuclear launch is a physical event. It produces heat, light, radiation, and blast. It can be detected by satellites, seismic sensors, and [over-the-horizon radar](https://en.wikipedia.org/wiki/Over-the-horizon_radar). The chain of causality is direct and irreversible: launch, flight, detonation, destruction. The [Laws of thermodynamics](https://en.wikipedia.org/wiki/Laws_of_thermodynamics) do not negotiate.
AI deterrence operates through **perception, prediction, and preemption**. Its weapons are not missiles but models. Its explosions are not physical but cognitive: a system that can predict your moves before you make them, spoof your sensors before you trust them, degrade your command chain before you realize it is compromised.
This is not science fiction. This is the operational reality of the [Ukraine conflict](https://en.wikipedia.org/wiki/Russo-Ukrainian_War), where [Starlink](https://www.starlink.com/)-enabled [drone warfare](https://en.wikipedia.org/wiki/Unmanned_combat_aerial_vehicle) and AI-targeted artillery have demonstrated that software superiority translates directly into battlefield dominance. It is the reality of [Taiwan Strait](https://en.wikipedia.org/wiki/Taiwan_Strait) tensions, where both the [People's Liberation Army](https://en.wikipedia.org/wiki/People%27s_Liberation_Army) and [U.S. Indo-Pacific Command](https://www.pacom.mil/) are racing to deploy AI-enabled [C4ISR](What%20are%20C2,%20C4ISR,%20C5ISR,%20and%20C6ISR.md) systems that promise to compress the [OODA loop](https://en.wikipedia.org/wiki/OODA_loop) to milliseconds.
The deterrence that emerges from this realm will not look like mutually assured destruction. It will look like **mutually assured uncertainty**: each side so capable of disrupting the other's decision-making that neither can confidently initiate hostilities, not because destruction is guaranteed, but because outcomes are unknowable.
***This is arguably more stable than nuclear deterrence in some respects. It is arguably far more dangerous in others.***
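To make the deterrence-through-unknowability point concrete, here is a toy expected-value sketch. All payoffs are invented and purely illustrative: the point is that under MAD an attacker faces a known, catastrophic expectation, while under mutually assured uncertainty the expectation itself hinges on a parameter the attacker cannot observe.
```python
# Toy decision sketch: invented payoffs, purely illustrative.
GAIN, LOSS = 100.0, -1000.0  # payoff of a successful attack vs. catastrophic failure

def expectation(p_defeat: float) -> float:
    """Expected payoff of initiating, given the probability of being defeated."""
    return p_defeat * LOSS + (1 - p_defeat) * GAIN

# MAD: the probability of retaliation is public knowledge and near-certain.
print(f"MAD: {expectation(0.95):.0f}")          # -945, known in advance

# Mutually assured uncertainty: the probability of being disrupted or
# defeated depends on the defender's true AI capability, which is hidden.
for capability in (0.05, 0.50, 0.95):           # attacker cannot know which holds
    print(f"capability={capability:.2f}: {expectation(capability):.0f}")
# Prints +45, -450, -945: even the sign of the payoff flips with a
# parameter the attacker cannot measure.
```
Under MAD, the attacker is deterred by a known catastrophe. Here, the attacker is deterred because no estimate of the outcome can be trusted, which is precisely what makes the equilibrium both stabilizing and fragile.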
## The No-Deal Problem
One of the central features of nuclear deterrence was the possibility of arms control. The [SALT](https://en.wikipedia.org/wiki/Strategic_Arms_Limitation_Talks) treaties, [START](https://en.wikipedia.org/wiki/START_I), [New START](https://en.wikipedia.org/wiki/New_START): these were possible because nuclear weapons are countable. You can verify silos. You can count warheads. You can monitor test ban compliance through [seismic monitoring](https://en.wikipedia.org/wiki/Comprehensive_Nuclear-Test-Ban_Treaty_Organization).
AI offers no such handles.
How do you verify that a rival has not trained a model capable of [autonomous cyber operations](https://en.wikipedia.org/wiki/Cyberwarfare)? How do you inspect a [foundation model](https://en.wikipedia.org/wiki/Foundation_model) for latent capabilities that emerge only at scale? How do you negotiate limits on [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) when the training data is proprietary, the compute is distributed across [cloud regions](https://en.wikipedia.org/wiki/Cloud_computing), and the model weights can be transmitted in seconds?
You cannot. There will be no AI [Non-Proliferation Treaty](https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons). There will be no AI [START](https://en.wikipedia.org/wiki/START_I). The very concept is farcical.
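The claim that model weights move in seconds is not rhetorical flourish. A back-of-envelope sketch, with assumed but representative numbers, shows why there is nothing to count:
```python
# Back-of-envelope: time to copy the weights of a frontier-scale model.
# Assumed, illustrative numbers; real deployments vary widely.
params = 70e9                              # assumed 70B-parameter model
bytes_per_param = 2                        # 16-bit precision
size_bits = params * bytes_per_param * 8   # ~1.12e12 bits (~140 GB)

for label, gbit_per_s in [("datacenter link (100 Gbit/s)", 100),
                          ("fast commodity link (10 Gbit/s)", 10)]:
    print(f"{label}: {size_bits / (gbit_per_s * 1e9):,.0f} s")
# ~11 s over a datacenter link, ~112 s over a fast commodity one.
```
An arsenal that fits in a file and copies in seconds cannot be counted from orbit or inspected at a declared site.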
This means the new deterrence regime Karp heralds will be **unnegotiated, unverified, and potentially unstable by design**. The world is not transitioning from one stable equilibrium to another. It is transitioning from a managed balance of terror to an unmanaged balance of uncertainty, with no diplomatic framework to prevent escalation, no verification regime to build confidence, and no crisis communication channels tested under pressure.
The [Cuban Missile Crisis](https://en.wikipedia.org/wiki/Cuban_Missile_Crisis) nearly ended civilization despite [hotlines](https://en.wikipedia.org/wiki/Moscow%E2%80%93Washington_hotline) and direct leader communication. What happens when the crisis is triggered not by missile deployments but by an AI system generating false intelligence, or an autonomous drone swarm crossing a border due to [sensor hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence))?
## The Speed Problem
Karp is correct that hard power in this century will be built on software. He is correct that adversaries will not pause for theatrical debates about AI ethics. He is correct that the engineering elite has a moral obligation to participate in national defense.
But he underestimates, or at least underemphasizes, the **speed differential** between nuclear and AI escalation.
Nuclear war, even in its most rapid scenarios, unfolds over hours. [ICBMs](https://en.wikipedia.org/wiki/Intercontinental_ballistic_missile) take roughly 30 minutes to cross continents. [SLBMs](https://en.wikipedia.org/wiki/Submarine-launched_ballistic_missile) are faster but still measured in minutes. This is terrifyingly fast, but it is human-scale fast. A president has time, if only barely, to receive intelligence, consult advisors, and make a decision.
AI-enabled conflict operates at **machine speed**. A cyber attack powered by autonomous AI agents can compromise thousands of systems in seconds. A drone swarm can saturate defenses before a human commander can finish reading the first alert. A [deepfake](https://en.wikipedia.org/wiki/Deepfake) crisis can propagate through social media and trigger military mobilization before fact-checkers can even identify the fabrication.
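The gap is easy to put numbers on. With assumed round figures:
```python
# Rough timescale comparison; assumed round numbers, illustrative only.
icbm_flight_s = 30 * 60        # ~30 minutes, intercontinental range
automated_cycle_s = 0.050      # assumed decision cycle of an autonomous system

ratio = icbm_flight_s / automated_cycle_s
print(f"decision window shrinks by ~{ratio:,.0f}x")   # ~36,000x
# Roughly 4.5 orders of magnitude: the deliberation time a president had
# in the nuclear scenario compresses, in the automated one, to below the
# latency of a single human reaction.
```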
The deterrence framework of the nuclear age assumed human decision-makers with time to think. The AI age will feature automated systems making lethal decisions in milliseconds, with human oversight that is technically present but practically meaningless.
***This is not deterrence. This is the automation of catastrophe.***
## What Karp Gets Exactly Right
For all my disagreements with the implications, Karp's core diagnosis is unassailable. The atomic age is ending. The [post-Cold War](https://en.wikipedia.org/wiki/Post%E2%80%93Cold_War_era) fantasy of permanent peace through economic interdependence is dead. [Russia's invasion of Ukraine](https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine), [China's military buildup](https://en.wikipedia.org/wiki/Modernization_of_the_People%27s_Liberation_Army), [Iran's nuclear advances](https://en.wikipedia.org/wiki/Nuclear_program_of_Iran): these are not anomalies. They are signals that the [Pax Americana](https://en.wikipedia.org/wiki/Pax_Americana) is fraying, and something darker is emerging.
Karp's call for Silicon Valley to engage with defense is not merely correct. It is existentially necessary. The notion that the engineers building the most powerful technologies in history should remain neutral while their creations determine the fate of nations is a moral abdication masquerading as ethical sophistication.
His observation that adversaries will not pause is equally vital. The [Chinese Communist Party](https://en.wikipedia.org/wiki/Chinese_Communist_Party) is not debating whether to develop AI weapons. They are building them. The [Russian military](https://en.wikipedia.org/wiki/Russian_Armed_Forces) is not conducting ethics reviews on autonomous systems. They are deploying them. The idea that Western restraint will be met with reciprocal restraint is not idealism. It is suicide.
And his warning about the tyranny of apps, about civilization's strange decision to dedicate its greatest engineering talent to optimizing [advertising click-through rates](https://en.wikipedia.org/wiki/Click-through_rate) rather than solving hard problems, is a diagnosis of cultural decadence that cuts to the bone. The talent exists. The capital exists. The will has been lost.
## The Path Forward
So where does this leave us? If AI deterrence is not nuclear deterrence, if there will be no grand treaty, if speed and uncertainty are the new normal, what strategy should the West pursue?
First, **abandon the search for a MAD equivalent**. Stop trying to make AI deterrence look like nuclear deterrence. It will not. The sooner strategists accept this, the sooner they can develop frameworks appropriate to the actual technology.
Second, **invest in resilience, not just capability**. Nuclear deterrence worked partly because both sides knew they could absorb a first strike and still retaliate. AI deterrence requires similar resilience: distributed systems, redundant command chains, [human-in-the-loop](https://en.wikipedia.org/wiki/Human-in-the-loop) safeguards that cannot be circumvented by software alone (a minimal sketch of such a safeguard follows these five points).
Third, **build the diplomatic infrastructure for AI crisis management**. The world may not be able to negotiate limits on AI development, but it can establish protocols for AI accidents, [false flag](https://en.wikipedia.org/wiki/False_flag) operations, and autonomous system malfunctions. The risk of inadvertent escalation through misunderstood AI behavior is higher than that of a deliberate attack.
Fourth, **maintain nuclear capability as a backstop**. Karp declares the atomic age is ending, but nuclear weapons will remain the ultimate deterrent for decades. AI may supplement nuclear deterrence. It cannot replace it. Any strategy premised on AI fully substituting for nuclear capability is reckless.
Fifth, **accept that the new deterrence will be messier, less stable, and more dangerous than the old**. This is not a reason for despair. It is a reason for vigilance. The nuclear age was terrifying but, against all odds, survivable. The AI age will require even greater wisdom, even more robust institutions, and even clearer-eyed leadership.
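On the second point above, "cannot be circumvented by software alone" has a concrete engineering meaning. A minimal sketch of such a human-in-the-loop gate, with hypothetical names and a deliberately simplified design: the release path demands a credential that no code inside the system can synthesize.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authorization:
    officer_id: str
    hardware_token: bytes  # produced by an air-gapped device a human officer holds,
                           # never generated inside this process

class EngagementGate:
    """Hypothetical sketch: lethal release requires out-of-band human consent."""
    def __init__(self, verify_token):
        # verify_token queries external hardware; no code path inside this
        # process can substitute for it or forge its inputs.
        self._verify_token = verify_token

    def release(self, target: str, auth: Authorization) -> str:
        if not self._verify_token(auth.officer_id, auth.hardware_token):
            raise PermissionError("release denied: no valid human authorization")
        return f"engagement of {target} authorized by {auth.officer_id}"
```
The design choice that matters is not the class structure but the dependency: the gate's critical input originates outside the software stack, so compromising the stack is not, by itself, sufficient to fire the weapon.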
## The Uncomfortable Truth
Karp's declaration that the atomic age is ending is a wake-up call. But it is not a promise of something better. It is a warning that the relative stability the world has known is evaporating, and what replaces it will be determined by choices made in the next few years.
AI is a deterrence tool. It will shape the strategic landscape of the 21st century as fundamentally as nuclear weapons shaped the 20th. But it is a different tool, operating in a different realm, with different rules and different risks.
Pretending otherwise, whether through naive analogies to nuclear deterrence or through utopian fantasies of AI cooperation, is how we stumble into the war we are all trying to prevent.
***The atomic age is ending. What comes next is up for grabs. Choose wisely. And choose soon.***