leo: extract claims from 2026-02-05-futureuae-reaim-acoruna-washington-beijing-refused

- Source: inbox/queue/2026-02-05-futureuae-reaim-acoruna-washington-beijing-refused.md
- Domain: grand-strategy
- Claims: 0, Entities: 0
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
Teleo Agents 2026-04-28 08:15:28 +00:00
parent 5dfc5463b1
commit bfa11f5135
5 changed files with 37 additions and 12 deletions

View file

@@ -23,3 +23,10 @@ The Council of Europe AI Framework Convention (CETS 225) entered into force on N
**Source:** International AI Safety Report 2026
The 2026 International AI Safety Report, despite achieving consensus across 30+ countries, does not close the military AI governance gap and explicitly notes that national security exemptions remain. Even at the epistemic coordination level (agreement on facts), the report's scope excludes high-stakes military applications, confirming that strategic interest conflicts prevent comprehensive governance even before operational commitments are attempted.
+## Supporting Evidence
+**Source:** FutureUAE REAIM analysis, 2026-02-05
+REAIM confirms the ceiling operates even at the non-binding level: when major powers refuse even voluntary commitments on military AI (the US and China both declined at A Coruña), scope stratification excludes high-stakes applications before the binding governance stage is reached. The voluntary norm-building process cannot secure commitments from the states with the most capable military AI programs.

View file

@@ -11,9 +11,16 @@ sourced_from: grand-strategy/2026-02-03-bengio-international-ai-safety-report-20
scope: structural
sourcer: Yoshua Bengio et al.
supports: ["international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage", "binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications"]
related: ["technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap", "formal-coordination-mechanisms-require-narrative-objective-function-specification", "binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications", "evidence-dilemma-rapid-ai-development-structurally-prevents-adequate-pre-deployment-safety-evidence-accumulation", "only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation"]
related: ["technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap", "formal-coordination-mechanisms-require-narrative-objective-function-specification", "binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications", "evidence-dilemma-rapid-ai-development-structurally-prevents-adequate-pre-deployment-safety-evidence-accumulation", "only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation", "epistemic-coordination-outpaces-operational-coordination-in-ai-governance-creating-documented-consensus-on-fragmented-implementation", "international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage"]
---
# Epistemic coordination on AI safety outpaces operational coordination, creating documented scientific consensus on governance fragmentation
The 2026 International AI Safety Report represents the largest international scientific collaboration on AI governance to date, with 100+ independent experts from 30+ countries and international organizations (EU, OECD, UN) achieving consensus on AI capabilities, risks, and governance gaps. However, the report's own findings document that 'current governance remains fragmented, largely voluntary, and difficult to evaluate due to limited incident reporting and transparency.' The report explicitly does NOT make binding policy recommendations, instead choosing to 'synthesize evidence' rather than 'recommend action.' This reveals a structural decoupling between two layers of coordination: (1) epistemic coordination (agreement on what is true) which succeeded at unprecedented scale, and (2) operational coordination (agreement on what to do) which the report itself confirms has failed. The report's deliberate choice to function purely in the epistemic layer—informing rather than constraining—demonstrates that international scientific consensus can coexist with and actually document operational governance failure. This is not evidence that coordination is succeeding, but rather evidence that the easier problem (agreeing on facts) is advancing while the harder problem (agreeing on binding action) remains unsolved. The report synthesizes recommendations for legal requirements, liability frameworks, and regulatory bodies, but produces no binding commitments, no enforcement mechanisms, and explicitly excludes military AI governance through national security exemptions.
+## Supporting Evidence
+**Source:** FutureUAE/JustSecurity REAIM analysis, 2026-02-05
+REAIM demonstrates epistemic coordination (three summits, documented frameworks, middle-power consensus) without operational coordination (major powers refuse participation; a 43% decline in signatories). The 'artificial urgency' critique notes that urgency framing functions as a rhetorical substitute for governance rather than a driver of it: epistemic activity without operational binding.

View file

@@ -11,15 +11,10 @@ attribution:
sourcer:
- handle: "leo"
context: "Leo (cross-session synthesis), aviation (16 years, ~5 conditions), CWC (~5 years, ~3 conditions), Ottawa Treaty (~5 years, ~2 conditions), pharmaceutical US (56 years, ~1 condition)"
-supports:
-- governance-speed-scales-with-number-of-enabling-conditions-present
-related:
-- Governance scope can bootstrap narrow and scale as commercial migration paths deepen over time
-reweave_edges:
-- Governance scope can bootstrap narrow and scale as commercial migration paths deepen over time|related|2026-04-18
-- governance-speed-scales-with-number-of-enabling-conditions-present|supports|2026-04-18
-sourced_from:
-- inbox/archive/grand-strategy/2026-04-01-leo-enabling-conditions-technology-governance-coupling-synthesis.md
+supports: ["governance-speed-scales-with-number-of-enabling-conditions-present"]
+related: ["Governance scope can bootstrap narrow and scale as commercial migration paths deepen over time", "governance-coordination-speed-scales-with-number-of-enabling-conditions-present-creating-predictable-timeline-variation-from-5-years-with-three-conditions-to-56-years-with-one-condition", "governance-speed-scales-with-number-of-enabling-conditions-present", "aviation-governance-succeeded-through-five-enabling-conditions-all-absent-for-ai"]
+reweave_edges: ["Governance scope can bootstrap narrow and scale as commercial migration paths deepen over time|related|2026-04-18", "governance-speed-scales-with-number-of-enabling-conditions-present|supports|2026-04-18"]
+sourced_from: ["inbox/archive/grand-strategy/2026-04-01-leo-enabling-conditions-technology-governance-coupling-synthesis.md"]
---
# Governance coordination speed scales with number of enabling conditions present, creating predictable timeline variation from 5 years with three conditions to 56 years with one condition
@@ -52,4 +47,10 @@ Relevant Notes:
- [[technology-governance-coordination-gaps-close-when-four-enabling-conditions-are-present-visible-triggering-events-commercial-network-effects-low-competitive-stakes-at-inception-or-physical-manifestation]]
Topics:
-- [[_map]]
+- [[_map]]
+## Supporting Evidence
+**Source:** FutureUAE REAIM analysis, 2026-02-05
+REAIM military AI governance exhibits zero enabling conditions (no commercial migration path, no security architecture substitute, no trade sanctions mechanism, no self-enforcing network effects) and shows active regression rather than slow progress: a 43% participation decline in 18 months, including a US reversal. This confirms that the zero-enabling-conditions case produces not merely slow coordination but negative coordination velocity.

View file

@@ -33,3 +33,10 @@ Barrett's 2003 prediction that Paris Agreement would fail due to lack of enforce
**Source:** International AI Safety Report 2026
The 2026 International AI Safety Report achieved the largest international scientific collaboration on AI governance (100+ experts, 30+ countries) but explicitly chose NOT to make binding policy recommendations, instead functioning purely as evidence synthesis. The report documented that governance 'remains fragmented, largely voluntary' despite this unprecedented epistemic coordination, confirming that non-binding consensus does not transition to binding governance even when scientific agreement is achieved at scale.
+## Supporting Evidence
+**Source:** FutureUAE REAIM analysis, 2026-02-05
+REAIM summit participation regressed from Seoul 2024 (61 nations; the US signed under Biden) to A Coruña 2026 (35 nations; the US and China both refused), a 43% participation decline in 18 months. The US reversal is particularly significant: not an opt-out from inception, but active withdrawal after demonstrated participation. VP J.D. Vance articulated the rationale as 'excessive regulation could stifle innovation and weaken national security', the international expression of the domestic 'alignment tax' argument. This demonstrates that voluntary governance is not sticky across changes in domestic political administration, and that even when a major power participates and endorses, the system cannot survive competitive-pressure framing.

View file

@@ -7,10 +7,13 @@ date: 2026-02-05
domain: grand-strategy
secondary_domains: [ai-alignment]
format: analysis
-status: unprocessed
+status: processed
+processed_by: leo
+processed_date: 2026-04-28
priority: high
tags: [REAIM, US-China, military-AI, governance-regression, stepping-stone-failure, voluntary-commitments, international-governance, JD-Vance]
intake_tier: research-task
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content