## The Analogy Everyone Reaches For
Whenever AI and national security collide in the same conversation, someone inevitably pulls out the nuclear deterrence card. The logic feels airtight: two superpowers, one transformative technology, existential stakes. We navigated the Cold War with Mutual Assured Destruction. Surely we can design an equivalent framework for AI.
I am convinced this analogy is not just incomplete. It is actively misleading. And if policymakers build doctrine on top of it, we may end up with something far more unstable than the Cold War ever was.
A [recent RAND commentary](https://www.rand.org/pubs/commentary/2025/03/seeking-stability-in-the-competition-for-ai-advantage.html) by Iskander Rehman, Karl Mueller, and Michael Mazarr puts this into sharp relief. Their analysis of the *Superintelligence Strategy* report by Dan Hendrycks, Eric Schmidt, and Alexandr Wang is the clearest account I have read of exactly where the nuclear-AI parallel breaks down. Let me walk through it.
## What MAIM Is, and Why It Sounds Convincing
The *Superintelligence Strategy* report proposes a concept called "[Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md)" (MAIM). The idea: any state racing toward a superintelligence monopoly would face preventive sabotage from rivals, creating a deterrence equilibrium analogous to nuclear [Mutual Assured Destruction](Mutual%20Assured%20Destruction.md).
It is a clever framework. It has the right acronym. It has Schmidt's credibility behind it. And it is built on a fundamentally broken premise.
Here is what actually differs.
### Difference 1: What You Are Trying to Deter
Nuclear [Mutual Assured Destruction](Mutual%20Assured%20Destruction.md) deterred the *use* of a weapon. The threshold was brutally clear: a missile leaves the silo, you respond. The trigger was observable, binary, and unambiguous.
[Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md) tries to deter the *development* of a general-purpose technology. And as the RAND authors point out, the triggering conditions are hopelessly vague. What exactly constitutes an "aggressive bid for unilateral AI dominance"? Is the U.S. Stargate project one? Is China's state-backed compute buildout one? From Beijing's perspective, you could argue Washington crossed that line two years ago.
There is no equivalent of a missile launch here. There is only a fog of algorithmic opacity and contested technical interpretations. That is not a deterrence regime. That is a permanent state of strategic paranoia.
### Difference 2: You Cannot Aim a Missile at an Algorithm
Nuclear deterrence worked in part because the targets were identifiable. Silos. Submarines. Launch facilities. Horrible to destroy, but locatable.
AI development does not work that way. It is distributed across cloud infrastructure, spread across private data centers on multiple continents, embedded in open-weight models that any competent team can download and fine-tune. As the RAND authors note, decentralized training and distributed cloud computing make AI systems inherently more resilient to the kind of targeted sabotage [Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md) envisions.
And as AI powers get closer to superintelligence, they will harden their infrastructure further, build redundancy, and push the most sensitive research underground. The more credible the [Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md) threat becomes, the more invisible the target gets.
### Difference 3: MAD Worked Because Nobody Could Strike First
This is the one that people consistently get backwards. [Mutual Assured Destruction](Mutual%20Assured%20Destruction.md) was not stable because both sides *could* strike. It was stable because neither side could strike first and survive the response. The mutuality was enforced by the physics of nuclear destruction.
[Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md) inverts this entirely. It is premised on the ability to strike first, which creates first-strike incentives rather than eliminating them. Both sides would be leaning forward, terrified of waiting too long, with national security establishments pressing for preemptive action based on intelligence assessments that are, by definition, incomplete and contestable.
That is not deterrence. That is a hair-trigger.
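To see why this inversion matters, it helps to strip the situation down to a toy two-player game. The sketch below is entirely my own stylization, not something from the RAND commentary or the *Superintelligence Strategy* report, and the payoff numbers are arbitrary. All they encode is the structural claim above: under MAD, a first strike guarantees your own destruction; under MAIM, striking first (sabotaging the rival's program) looks better than waiting.

```python
# Toy 2x2 game: each side chooses to "wait" or "strike" (preemptively sabotage).
# Payoff numbers are illustrative assumptions only; they encode the structural
# difference described above, not any real-world estimate.

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2x2 game.

    payoffs[(a, b)] = (payoff to Row, payoff to Column) when Row plays a
    and Column plays b.
    """
    actions = ("wait", "strike")
    equilibria = []
    for a in actions:
        for b in actions:
            # a is a best response for Row given b, and b for Column given a
            row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in actions)
            col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in actions)
            if row_ok and col_ok:
                equilibria.append((a, b))
    return equilibria

# MAD-like structure: a secure second strike means the striker is destroyed
# along with the victim, so striking first is never worth it.
mad = {
    ("wait", "wait"):     (0, 0),
    ("wait", "strike"):   (-9, -10),   # victim devastated; striker devastated by retaliation
    ("strike", "wait"):   (-10, -9),
    ("strike", "strike"): (-10, -10),
}

# MAIM-like structure: preemptive sabotage sets the rival back at modest cost,
# while waiting as the rival strikes leaves you behind.
maim = {
    ("wait", "wait"):     (0, 0),
    ("wait", "strike"):   (-5, 3),
    ("strike", "wait"):   (3, -5),
    ("strike", "strike"): (-2, -2),
}

print("MAD equilibria: ", pure_nash_equilibria(mad))   # [('wait', 'wait')]
print("MAIM equilibria:", pure_nash_equilibria(maim))  # [('strike', 'strike')]
```

Under the MAD-style payoffs the only equilibrium is mutual restraint; under the MAIM-style payoffs it is mutual preemption. Change the numbers however you like: as long as striking first beats waiting, restraint unravels.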
### Difference 4: AI Is Not Just a Weapon
Nuclear weapons have one primary use case. You do not build a warhead to improve your logistics network or accelerate your science.
AI is a general-purpose technology with transformative social, economic, and scientific applications. Declaring willingness to go to war to prevent a rival from acquiring it is a categorically different posture than deterring a weapons program. It would be perceived globally as coercive, unilateral, and hostile to human progress. Especially if China responds by open-sourcing its models and positioning itself as the generous alternative.
### Difference 5: The Private Sector Blows Up the Whole Model
Here is the structural problem nobody wants to say out loud: the Cold War nuclear balance was managed between states, by states, with full government control over the relevant technology.
AI in 2026 is being developed primarily by private companies answerable to shareholders, not defense ministries. OpenAI, Anthropic, Google DeepMind, Mistral, Meta. None of them has a hotline to the Pentagon or the Kremlin. None of them could be nationalized without catastrophic consequences for innovation.
If MAIM-style existential logic were taken seriously, the only coherent response would be full government takeover of AI research and development. The RAND authors acknowledge this directly. I don't think that's the outcome the *Superintelligence Strategy* authors intended to recommend.
## So What Do We Actually Do?
I am not arguing for passivity. The geopolitical stakes are real, and the absence of any framework is its own form of danger.
The RAND authors suggest a more grounded starting point: multilateral dialogue on AI stability risks, military-to-military communication channels regarding destabilizing AI applications, and selective public commitments to reject specific uses, such as interference with nuclear command-and-control systems. These are modest steps. They are also achievable without triggering the escalatory dynamics that [Mutually Assured AI Malfunction](Mutually%20Assured%20AI%20Malfunction.md) would create.
I would add one thing the RAND piece does not emphasize enough: the private sector needs to be formally in the room when these frameworks are designed. Not as a lobbying afterthought. As a structural participant. The companies building the most capable AI systems are not peripheral to this conversation. They *are* the conversation.
## The Uncomfortable Truth
We are reaching for the nuclear analogy because it is the only historical precedent we have for a technology with this level of strategic consequence. I understand the impulse. But the analogy flatters us into thinking we already know how to navigate this, when the reality is that we are in genuinely uncharted territory.
The differences between AI and nuclear deterrence are not cosmetic. They are structural. The targets are invisible. The thresholds are undefined. The actors include private companies with no state accountability. And the technology has too much legitimate human value to be treated purely as a weapons program.
We need new strategic concepts, not retrofitted Cold War ones.
The question I keep coming back to: if our best available deterrence framework would make the situation *more* unstable rather than less, are we honest enough to admit we do not yet have the right answer?