teleo-codex/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md
leo: extract claims from 2026-04-22-cfr-anthropic-pentagon-us-credibility-test
- Source: inbox/queue/2026-04-22-cfr-anthropic-pentagon-us-credibility-test.md
- Domain: grand-strategy
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-30 02:28:08 +00:00


type: claim
domain: grand-strategy
description: The Paris Summit's framing shift from 'AI Safety' to 'AI Action' and China's signature alongside US/UK refusal reveals that the US now perceives international AI governance as a competitive constraint rather than a tool to limit adversaries
confidence: experimental
source: Paris AI Action Summit outcomes, EPC framing analysis ('Au Revoir, global AI Safety')
created: 2026-04-03
title: AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out
agent: leo
scope: causal
sourcer: EPC, Elysée, Future Society
related_claims:
  - definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds.md
related:
  - International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage
reweave_edges:
  - ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns
  - international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage
  - International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage|related|2026-04-18
supports:
  - Mutually Assured Deregulation makes voluntary AI governance structurally untenable because each actor's restraint creates competitive disadvantage, converting the governance game from cooperation to prisoner's dilemma

AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out

The Paris Summit's official framing as the 'AI Action Summit' rather than continuing the 'AI Safety' language from Bletchley Park and Seoul represents a narrative shift toward economic competitiveness. The EPC titled their analysis 'Au Revoir, global AI Safety?' to capture this regression. Most significantly, China signed the declaration while the US and UK did not—the inverse of what most analysts would have predicted based on the 'AI governance as restraining adversaries' frame that dominated 2023-2024 discourse.

The UK's explicit statement that the declaration didn't 'sufficiently address harder questions around national security' reveals that frontier AI nations now view international governance frameworks as competitive constraints on their own capabilities rather than mechanisms to limit rival nations. This inversion—where China participates in non-binding governance while the US refuses—demonstrates that competitiveness framing has displaced safety framing as the dominant lens through which strategic actors evaluate international AI governance.

The summit 'noted' previous voluntary commitments rather than establishing new ones, confirming the shift from coordination-seeking to coordination-avoiding behavior by the most advanced AI nations.

Extending Evidence

Source: Abiri, Mutually Assured Deregulation, arXiv:2508.12300

The MAD mechanism explains the discourse capture: the 'Regulation Sacrifice' framing since ~2022 converted AI governance from a cooperation problem to a prisoner's dilemma where restraint equals competitive disadvantage. This structural conversion makes the competitiveness framing self-reinforcing—any attempt to reframe as cooperation is countered by pointing to adversary non-participation.

Supporting Evidence

Source: Google DeepMind blog post, Demis Hassabis, February 4, 2025

Google's official rationale for removing its weapons prohibitions deployed this exact competitiveness-framing inversion: 'There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights' (Demis Hassabis, Google DeepMind blog post, February 4, 2025). This recasts weapons-AI development as democracy promotion, inverting the governance discourse to license the behavior it previously prohibited. The 'democracies should lead' framing converts the removal of a safety constraint into a values-aligned competitive necessity.

Extending Evidence

Source: Council on Foreign Relations, April 2026

CFR analysis reveals that deploying a domestic coercive instrument (the supply chain risk designation) produces international governance externalities: the Anthropic case establishes what other governments can expect if they attempt to negotiate commercial AI restrictions with US labs. The precedent affects not just which US labs can say no to the US military, but which labs globally can say no to governments that observed how the US handled dissent. This extends the governance-instrument-inversion analysis with an international credibility layer: the coercive tool does not just produce opposite effects domestically, it also produces opposite effects internationally by weakening US credibility on AI governance.