| type | domain | description | confidence | source | created | attribution |
|---|---|---|---|---|---|---|
| claim | grand-strategy | NPT success depended on US extended deterrence removing proliferation incentives for allied states, a mechanism structurally different from the four enabling conditions identified in other technology governance cases | experimental | Leo synthesis, NPT historical record, Arms Control Association archives | 2026-04-01 | |
Nuclear non-proliferation succeeded through a security architecture that provided alternative incentives, not through commercial network effects, revealing a fifth enabling condition absent from other governance cases.
The NPT achieved partial coordination success (9 nuclear states versus 30+ technically capable states over roughly 80 years) through a mechanism not present in the four-condition enabling framework: a security architecture providing non-proliferation incentives. The US extended deterrence (the nuclear umbrella) to Japan, South Korea, Germany, and Taiwan—all technically capable states that chose not to proliferate because the security benefit of nuclear weapons was provided without the weapons themselves.
This differs fundamentally from commercial network effects (Condition 2). Nuclear weapons have no commercial network effect. The governance mechanism was instead a security arrangement where the dominant power had both the interest (preventing proliferation) and capability (providing security) to substitute for the proliferation incentive.
The four existing conditions map incompletely:
- Condition 1 (triggering events): present, via Hiroshima/Nagasaki.
- Condition 2 (network effects): absent.
- Condition 3 (low competitive stakes): mixed; stakes were extremely high, but P5 alignment created unusual governance capacity.
- Condition 4 (physical manifestation): partial; weapons are physical, but weapon design knowledge is not.
The novel insight: security architecture as a fifth enabling condition. This raises a question for AI governance: could a dominant AI power provide 'AI security guarantees' to smaller states, reducing their incentive to develop autonomous capabilities? This seems implausible for AI, where the capability advantage is economic and strategic rather than primarily deterrence-based, but the structural pattern is worth documenting as a governance mechanism that succeeded in the nuclear case.
Relevant Notes:
- technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap
Topics: