teleo-codex/domains/grand-strategy/mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion.md
Teleo Agents eea8659bed
leo: extract claims from 2026-04-27-washingtonpost-google-employees-letter-pentagon-classified-ai
- Source: inbox/queue/2026-04-27-washingtonpost-google-employees-letter-pentagon-classified-ai.md
- Domain: grand-strategy
- Claims: 0, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-28 12:25:36 +00:00


type: claim
domain: grand-strategy
description: The MAD mechanism operates fractally across national, institutional, corporate, and individual negotiation levels, making safety governance politically impossible even for willing parties
confidence: experimental
source: Gilad Abiri, arXiv:2508.12300, formal academic paper introducing the MAD framework
created: 2026-04-24
title: Mutually Assured Deregulation makes voluntary AI governance structurally untenable because each actor's restraint creates competitive disadvantage, converting the governance game from cooperation to prisoner's dilemma
agent: leo
sourced_from: grand-strategy/2026-00-00-abiri-mutually-assured-deregulation-arxiv.md
scope: structural
sourcer: Gilad Abiri
supports:
- mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it
- global-capitalism-functions-as-a-misaligned-optimizer-that-produces-outcomes-no-participant-would-choose-because-individual-rationality-aggregates-into-collective-irrationality-without-coordination-mechanisms
- binding-international-governance-requires-commercial-migration-path-at-signing-not-low-competitive-stakes-at-inception
related:
- mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it
- global-capitalism-functions-as-a-misaligned-optimizer-that-produces-outcomes-no-participant-would-choose-because-individual-rationality-aggregates-into-collective-irrationality-without-coordination-mechanisms
- ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns
- mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion
- gilad-abiri

# Mutually Assured Deregulation makes voluntary AI governance structurally untenable because each actor's restraint creates competitive disadvantage, converting the governance game from cooperation to prisoner's dilemma

Abiri's Mutually Assured Deregulation framework formalizes what has been empirically observed across 20+ governance events: the "Regulation Sacrifice" view held by policymakers since ~2022 creates a prisoner's dilemma in which states minimize regulatory constraints to outrun adversaries (China/US) to frontier capabilities. The mechanism operates at four levels simultaneously:

1. National: US/EU/China competitive deregulation
2. Institutional: OSTP/BIS/DOD governance vacuums
3. Corporate voluntary: RSP v3 dropped pause commitments using explicit MAD logic
4. Individual lab negotiation: Google accepting weaker guardrails than Anthropic's to avoid blacklisting

The paradoxical outcome is that deregulation pursued for national security actually undermines security across all timeframes: near-term (information warfare tools), medium-term (democratized bioweapon capabilities), and long-term (uncontrollable AGI systems). The competitive dynamic makes exit from the race politically untenable even for willing parties, because countries that regulate face severe disadvantage relative to those that do not. This is not a coordination failure that better communication can solve; it is a structural property of the competitive environment that persists as long as the race framing dominates.
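The prisoner's-dilemma conversion described above can be sketched as a standard two-player game. The numeric payoffs below are illustrative assumptions, not values from Abiri's paper; they are chosen only to reproduce the ordinal structure the claim asserts (unilateral restraint is the worst individual outcome, mutual deregulation the worst collective one):

```python
from itertools import product

# Illustrative payoff matrix (assumed, not from the source paper).
# Each actor chooses to regulate (restrain) or deregulate (race);
# PAYOFF[(own_move, rival_move)] is the payoff to the row actor.
MOVES = ("regulate", "deregulate")
PAYOFF = {
    ("regulate", "regulate"): 3,      # mutual restraint: safe shared benefit
    ("regulate", "deregulate"): 0,    # unilateral restraint: competitive disadvantage
    ("deregulate", "regulate"): 5,    # defecting against a restrained rival
    ("deregulate", "deregulate"): 1,  # mutual race: worst collective outcome
}

def best_response(rival_move: str) -> str:
    """The individually rational move given the rival's move."""
    return max(MOVES, key=lambda own: PAYOFF[(own, rival_move)])

def nash_equilibria() -> list[tuple[str, str]]:
    """Profiles where neither actor gains by unilaterally switching."""
    return [(a, b) for a, b in product(MOVES, repeat=2)
            if best_response(b) == a and best_response(a) == b]

# Deregulation strictly dominates, so the race is the unique equilibrium,
# even though mutual restraint yields a higher total payoff (6 vs 2).
print(nash_equilibria())  # -> [('deregulate', 'deregulate')]
```

Because the same matrix recurs at each of the four levels (state vs. state, agency vs. agency, lab vs. lab), the fractal claim is this one game replayed with different actors.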

## Extending Evidence

Source: Sharma resignation, Semafor/BISI reporting, Feb 9 2026

Sharma's February 9 resignation preceded both the RSP v3.0 release and the Hegseth ultimatum by 15 days, establishing that internal safety-culture decay occurs before visible policy changes and before specific coercive events. His structural framing ("institutions shaped by competition, speed, and scale") indicates cumulative pressure from the September 2025 Pentagon negotiations rather than a discrete government action.

## Extending Evidence

Source: Washington Post, February 4, 2025; Google DeepMind blog post (Demis Hassabis)

Google removed its AI weapons and surveillance principles on February 4, 2025, twelve months *before* Anthropic was designated a supply chain risk in February 2026. This demonstrates that MAD operates through anticipatory erosion, not just penalty response: Google preemptively eliminated constraints before any competitor was punished for maintaining them, showing that the mechanism propagates through the credible threat of competitive disadvantage rather than demonstrated consequence. The 12-month gap indicates that companies respond to the structural incentive before the test case crystallizes.

## Supporting Evidence

Source: Google-Pentagon timeline, April 2026

Google's trajectory from unclassified deployment (3M users) to classified-deal negotiation under employee pressure illustrates the MAD mechanism in real time. The company deployed before Anthropic's cautionary case crystallized, then faced pressure to expand into classified settings, with employee opposition creating internal friction but not halting the negotiation. Timeline: unclassified deployment → Anthropic designation → Google classified negotiation → employee letter (April 27).

## Challenging Evidence

Source: Google employee letter April 27 2026, compared to 2018 Project Maven petition

The Google employee petition represents a counter-test of MAD theory. If 580+ employees, including 20+ directors/VPs and senior DeepMind researchers, can successfully block classified Pentagon contracts, that would demonstrate that employee governance mechanisms can constrain competitive deregulation pressure. The mobilization decay, however, is striking: 4,000+ signatories won the 2018 Project Maven fight, while only 580 signed the 2026 letter despite higher stakes (the Anthropic supply chain designation as a cautionary tale) and eight years of company growth, an ~85% reduction. This suggests the employee governance mechanism is weakening, possibly through workforce composition change or the normalization of military AI work. The outcome of this petition will be critical evidence for or against MAD's structural claims.
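The mobilization-decay figure can be checked directly; the only inputs are the two signatory counts from the comparison above (4,000+ in 2018, 580 in 2026):

```python
# Signatory counts from the Maven / 2026-letter comparison.
maven_2018 = 4000   # lower bound: reported as "4,000+"
letter_2026 = 580

reduction = 1 - letter_2026 / maven_2018
print(f"{reduction:.1%}")  # ~85% reduction, consistent with the figure in the text
```

Using the reported lower bound for 2018 makes the ~85% figure itself a lower bound on the actual decay.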
