Nuclear Non-Proliferation Is the Wrong Framework for AI Governance
Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.
Michael C. Horowitz and Lauren A. Kahn — June 27, 2025
The views in this article are those of the authors alone and do not represent those of the Department of Defense, its components, or any part of the US government.
In a recent interview, Demis Hassabis — co-founder and CEO of Google DeepMind, a leading AI lab — was asked if he worried about ending up like Robert Oppenheimer, the scientist who unleashed the atomic bomb and was later haunted by his creation. While Hassabis didn’t explicitly endorse the comparison, he responded by advocating for an international institution to govern AI, holding up the International Atomic Energy Agency (IAEA) as a guiding example.
Hassabis isn’t alone in comparing AI and nuclear technology. Sam Altman and others at OpenAI have also argued that artificial intelligence is so impactful globally that it requires an international regulatory agency on the scale of the IAEA. Back in 2019, Bill Gates likened AI to nuclear technology, describing both as rare technologies “that are both promising and dangerous.” Researchers, too, have made similar comparisons, looking to the IAEA or Eisenhower’s Atoms for Peace initiative as potential models for AI regulation.
No analogy is perfect, but especially as a general-purpose technology, AI differs so fundamentally from nuclear technology that basing AI policy around the nuclear analogy is conceptually flawed and risks inflating expectations about the international community’s ability to control model proliferation. It also places undue emphasis on AI use in weapons, rather than on its potential to drive economic prosperity and enhance national security.
How AI Differs from Nuclear Technology
The desire to analogize AI with nuclear weapons stems from the strong views that many hold about the potential for AI to be used for both good and ill. Warnings that AI threatens human survival appear alongside hopes that it can rapidly transform the global economy in unprecedented ways.
However, the comparison breaks down for three reasons: AI’s far-reaching scope, its lack of excludable inputs, and its graduated strategic impact.
AI is much more widely applicable than nuclear technology
The first key difference between AI and nuclear technology lies in their generality. The splitting of the atom is not inherently a weapon, but its most geopolitically significant application is certainly nuclear weapons. Less than 10% of the world's energy comes from nuclear power, and debates about nuclear power are deeply entangled with concerns about nuclear weapons.
AI, by contrast, is a truly general-purpose technology, with broad applicability across use cases and sectors. This generality is magnified by a comparatively low barrier to entry: many advanced models are at least partially open source, allowing them to be easily downloaded, copied, and modified.
Some worry that this generality might lead to what are called information hazards, potentially giving low-skilled ne'er-do-wells the knowledge they need to develop nuclear, chemical, or biological weapons of mass destruction (WMDs). These risks deserve to be taken seriously, but there is still uncertainty about whether they are real or merely hypothetical.
Regardless, AI is just one of several factors, such as having more physics PhDs, that can make it easier for countries to acquire WMDs. And we don’t regulate advanced degrees like we do weapons.
Similarly, while AI is being incorporated into military systems and could be used in autonomous weapon systems, it is not itself a weapon. The most common applications of AI are likely to come from commercial use cases and non-weapons-based military applications, like decision-support tools.
AI is less excludable than nuclear technology
The second key difference between nuclear technology and AI is excludability, or the extent to which access to a resource can be restricted or controlled. A nuclear chain reaction, critical for both nuclear weapons and nuclear energy, requires access to specialized materials — plutonium and enriched uranium — that are not widely available. That limited availability creates opportunities to control who acquires them.
Artificial intelligence has no equivalent excludable input — no counterpart to plutonium or enriched uranium. The closest analog is computing power — often referred to simply as compute — which is provided by semiconductors (“chips”).
The production of the most advanced chips is currently dominated by a single company, Taiwan Semiconductor Manufacturing Company (TSMC). TSMC is itself critically reliant on lithography machines manufactured by a single Dutch company, ASML. Another company, NVIDIA, similarly dominates chip design.

Proponents of compute governance — the control of access to computational resources — mirroring the IAEA point to this highly concentrated supply chain as evidence that chips are sufficiently excludable to be regulated. However, even tight controls on chips have struggled to prevent the most advanced components from spreading, as evidenced by reports of large-scale smuggling of advanced NVIDIA chips to China.
Additionally, the current dominance of a small number of firms in Taiwan, the Netherlands, Japan, and the United States over chip supply chains is temporary. In time, Chinese companies could replicate these technical capabilities, particularly given the economic incentives to catch up. And even this temporary lead is partial. While the most advanced expertise is currently concentrated in a few hands, slightly less sophisticated — but still highly effective — expertise is quickly proliferating.
And compute is just one piece of the AI puzzle. Increasing efficiencies in algorithm design, resource management, and model architecture can also dramatically improve performance. High-end chips may not even represent a long-term constraint on the most advanced AI models if the compute-heavy design paradigm fails to deliver rapid breakthroughs, as OpenAI co-founder Ilya Sutskever fears.
This distinguishes AI from nuclear technology in a crucial way: technical knowledge can be copied and shared indefinitely, whereas nuclear technology depends on finite, geographically concentrated raw materials. Similarly, trained models are software, not hardware, and companies have incentives to make them widely available to ensure profitability, which makes preventing their spread even more challenging.
AI’s strategic value is continuous, not binary
Finally, the analogy between AI and nuclear weapons overlooks the fact that nuclear technology, specifically its application as a weapon, is unique in human history. The initial acquisition of nuclear weapons possesses a clear and demonstrated binary strategic value: you either have them or you don’t. Possessing even the oldest and least sophisticated nuclear weapon has monumental implications for international politics—a fact underscored by the recent Israeli and US strikes on Iran’s Natanz and Fordow facilities, driven by long-standing concerns over Iran’s nuclear ambitions.

This binary effect from old versions of the technology is certainly not the case for aircraft carriers, machine guns, combustion engines, or electricity. Nor is it the case for AI, where the utility of the technology is more continuous, scaling with performance rather than with the mere fact of possession.
It’s possible that a future breakthrough in superintelligence could introduce a binary effect to AI by providing its creator with a lasting first-mover advantage. However, superintelligence is an ambiguous and unproven concept, and advanced AI, prior to the emergence of superintelligence, still has strategic value, making the acquisition of AI qualitatively distinct from the acquisition of nuclear weapons.
Nuclear Non-Proliferation is the Wrong Framework for AI Governance
Belief in the flawed analogy between AI and nuclear technology has given rise to a now familiar strain of thought: that today’s AI moment resembles the period following the invention of nuclear weapons, and that the governance of AI should thus draw on the international nuclear non-proliferation frameworks developed after 1945.
The policy apparatus built up to stop the proliferation of nuclear weapons and regulate their potential deployment and use is unprecedented in human history, relying on a unique global consensus on nuclear dangers due to the specific and destructive use of the technology.
This regime, of which the IAEA is a central pillar, is essentially an export cartel on nuclear materials that allows access to some countries for nuclear energy uses and prevents access to others bent on developing weapons. Export cartels only work when there is something the cartel members can control that nobody else can access without them, and as explained above, there is no excludable AI input equivalent to plutonium or enriched uranium.
Additionally, because export controls create economic incentives for substitution, they are not a panacea for preventing the proliferation of general-purpose technologies like AI. Export controls give the “controlling” country a window of time before competitors catch up, and the policy question then becomes what to do with that time, e.g., invest to extend a technology lead, capture more market share, or something else. But that is different from the notion of using export controls to completely lock down technology diffusion.
Moreover, if fast-follower open-source models are sufficient substitutes to closed models for the bulk of business applications, illicit activities, and core military functions (e.g., autonomous systems, intelligence, predictive maintenance, logistics), states will have far less reason to amass large stockpiles of export-controlled chips. In that case, the task of a generalized AI non-proliferation regime — based on the nuclear technology analogy — becomes even harder. There isn't a small set of companies or hardware at the cutting edge to be controlled if one wants to limit consequential uses of AI.
Approaches to AI Governance that Are More Likely to Succeed
The fundamental flaws in the analogy between AI and nuclear technology do not mean AI governance is doomed to failure. The differences between AI and nuclear technology just mean that basing attempts at AI governance on nuclear frameworks is unlikely to succeed. Rather, AI governance is more likely to succeed if it is grounded in the characteristics of AI technology.
There are already international institutions and national policies that govern almost every area where companies and governments might consider applying AI, from consumer product safety to bank loans to testing and evaluation of weapon systems. There are opportunities for governance that involve working through existing institutions and upgrading them for the age of AI, rather than creating new frameworks.
The most appropriate models for AI governance are international and national standards organizations and agreements that manage the assured, safe, and coordinated use of capabilities widely available to states and individuals. Examples include organizations like the US Food and Drug Administration, the International Civil Aviation Organization, and the International Telecommunication Union, and treaties such as the United Nations Convention on the Law of the Sea.
Governance that establishes clear, AI-specific standards and best practices can foster international trust in the acquisition and deployment of AI models. Artificial intelligence is not a weapon of mass destruction. It’s not a weapon at all. It’s a general-purpose technology. Time and energy invested in nuclear-style approaches like an “IAEA for AI” is time and energy not invested in more practical and promising solutions.
See things differently? AI Frontiers welcomes expert insights, thoughtful critiques, and fresh perspectives. Send us your pitch.
Michael C. Horowitz is the Richard Perry Professor and Director of Perry World House at the University of Pennsylvania and Senior Fellow at the Council on Foreign Relations.
Lauren A. Kahn is a Senior Research Analyst at the Center for Security and Emerging Technology at Georgetown University.
Image: SWInsider / iStock
As one of the team members who contributed to establishing the "non-proliferation" agreements in the 1970s and '80s, I agree with the higher-level objectives of this article. What's clearly missing, however, is the perspective of someone who actually knows what nuclear weapons are, how they're used, and what the drivers of the treaties really were. For example, in the section titled "How AI Differs from Nuclear Technology," the article uses the criterion of similar social "potential" for both, and the threat each poses to "human survival." It then claims the comparison "breaks down for three reasons: AI's far-reaching scope, its lack of excludable inputs, and its graduated strategic impact." In fact, the comparison breaks down for different and much simpler reasons.
At the time the non-proliferation treaties were established, the nuclear "landscape" was very precisely defined. The U.S. had a stockpile of about 20,000 warheads; the U.S.S.R. had about 70,000. We knew exactly how theirs were built. They knew how ours were built. We both knew, in general terms, where they were stored, the command and control each side had in place to launch them, and how they would be delivered. We both also knew, with high reliability, that no other country either had or was capable of rapidly building such stockpiles. This "landscape" is entirely absent from the AI universe.
The "less excludable" claim is also misleading because it is too focused on the exotic parts like "plutonium and uranium", which it views as able to be "restricted or controlled". That factor has long been lost. Right now, NINE countries have nuclear weapons. They have been sold or provided to others. Just a single use will radically change world politics. Beyond that, the creation of "dirty" nuclear bombs is within reach of any country that has nuclear power reactors. Past governments knew these issues. Each required a different type of overview and control "agency".
The third element, that nuclear weapons have a "binary ... strategic" influence, overlooks the factors throughout human history that have led to the dominance of nations within their practical range of influence. Egypt, Persia, and Rome each could claim "world dominance" in their time. But that dominance was not based on some super weapon; it was a result of the level of transportation technology. The "mast and sail" technology of the tall ships of Portugal, Spain, France, and England was as impactful as their gunpowder weapons.