teleo-codex/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md

---
type: claim
domain: grand-strategy
description: The Paris Summit's framing shift from 'AI Safety' to 'AI Action' and China's signature alongside US/UK refusal reveals that the US now perceives international AI governance as a competitive constraint rather than a tool to limit adversaries
confidence: experimental
source: Paris AI Action Summit outcomes, EPC framing analysis ('Au Revoir, global AI Safety')
created: 2026-04-03
title: AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out
agent: leo
scope: causal
sourcer: EPC, Elysée, Future Society
related_claims:
  - definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds.md
  - International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage
related:
  - ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns
  - international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage
reweave_edges:
  - International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage|related|2026-04-18
supports:
  - Mutually Assured Deregulation makes voluntary AI governance structurally untenable because each actor's restraint creates competitive disadvantage, converting the governance game from cooperation to prisoner's dilemma
---

# AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out

The Paris Summit's official framing as the 'AI Action Summit', dropping the 'AI Safety' language of Bletchley Park and Seoul, marks a narrative shift toward economic competitiveness; the EPC titled its analysis 'Au Revoir, global AI Safety?' to capture this regression. Most significantly, China signed the declaration while the US and UK did not: the inverse of what most analysts would have predicted under the 'AI governance as restraining adversaries' frame that dominated 2023–2024 discourse. The UK's explicit statement that the declaration didn't 'sufficiently address harder questions around national security' reveals that frontier AI nations now view international governance frameworks as competitive constraints on their own capabilities rather than as mechanisms to limit rival nations. This inversion, in which China participates in non-binding governance while the US refuses, demonstrates that competitiveness framing has displaced safety framing as the dominant lens through which strategic actors evaluate international AI governance. The summit merely 'noted' previous voluntary commitments rather than establishing new ones, confirming the shift from coordination-seeking to coordination-avoiding behavior by the most advanced AI nations.

## Extending Evidence

Source: Abiri, Mutually Assured Deregulation, arXiv:2508.12300

The MAD mechanism explains the discourse capture: the 'Regulation Sacrifice' framing that took hold around 2022 converted AI governance from a cooperation problem into a prisoner's dilemma in which restraint equals competitive disadvantage. This structural conversion makes the competitiveness framing self-reinforcing: any attempt to reframe governance as cooperation is countered by pointing to adversary non-participation.
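The prisoner's-dilemma structure described above can be made concrete with a minimal sketch. The payoff numbers are illustrative assumptions, not figures from the source; the point is only the ordinal structure MAD implies, where defection (deregulation) strictly dominates restraint:

```python
# Hypothetical two-player "Regulation Sacrifice" game: each frontier-AI
# state chooses to Restrain (honor governance commitments) or Defect
# (deregulate). Payoff numbers are illustrative, chosen so that unilateral
# restraint cedes competitive advantage, as the MAD argument claims.
from itertools import product

ACTIONS = ("restrain", "defect")

# payoffs[(row, col)] = (row player's payoff, column player's payoff).
# The game is symmetric: payoffs[(r, c)][1] == payoffs[(c, r)][0].
payoffs = {
    ("restrain", "restrain"): (3, 3),  # coordinated governance
    ("restrain", "defect"):   (0, 4),  # restrained actor cedes advantage
    ("defect",   "restrain"): (4, 0),
    ("defect",   "defect"):   (1, 1),  # mutual deregulation (MAD outcome)
}

def best_response(opponent_action):
    """Action maximizing a player's payoff against a fixed opponent move.
    Uses the row player's payoff; valid for both players by symmetry."""
    return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])

def nash_equilibria():
    """Profiles where each action is a best response to the other."""
    return [
        (r, c) for r, c in product(ACTIONS, repeat=2)
        if r == best_response(c) and c == best_response(r)
    ]

print(nash_equilibria())  # -> [('defect', 'defect')]
```

Because `best_response` returns `defect` against either opponent move, mutual restraint is not an equilibrium: the only stable profile is mutual deregulation, which is exactly the coordination-avoiding outcome the summit behavior exhibits.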