leo: extract claims from 2026-04-13-synthesislawreview-global-ai-governance-stuck-soft-law

- Source: inbox/queue/2026-04-13-synthesislawreview-global-ai-governance-stuck-soft-law.md
- Domain: grand-strategy
- Claims: 0, Entities: 0
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
Teleo Agents 2026-04-28 08:18:24 +00:00
parent 311303d673
commit c9b63df0f0
4 changed files with 25 additions and 1 deletion


@@ -30,3 +30,10 @@ The 2026 International AI Safety Report, despite achieving consensus across 30+
 **Source:** FutureUAE REAIM analysis, 2026-02-05
 REAIM confirms the ceiling operates even at non-binding level: when major powers refuse even voluntary commitments on military AI (US and China both declined A Coruña), the scope stratification excludes high-stakes applications before reaching binding governance stage. The voluntary norm-building process cannot achieve commitments from states with most capable military AI programs.
+## Supporting Evidence
+**Source:** Synthesis Law Review Blog, 2026-04-13
+The Council of Europe Framework Convention on Artificial Intelligence, marketed as 'the first binding international AI treaty,' contains national security carve-outs that make it 'largely toothless against state-sponsored AI development.' The binding language applies primarily to private sector actors; state use of AI in national security contexts is explicitly exempted. This is the purest form-substance divergence example at the international treaty level—technically binding, strategically toothless due to scope stratification.


@@ -24,3 +24,10 @@ The 2026 International AI Safety Report represents the largest international sci
 **Source:** FutureUAE/JustSecurity REAIM analysis, 2026-02-05
 REAIM demonstrates epistemic coordination (three summits, documented frameworks, middle-power consensus) without operational coordination (major powers refuse participation, 43% decline in signatories). The 'artificial urgency' critique notes that urgency framing functions as rhetorical substitute for governance, not driver of it — epistemic activity without operational binding.
+## Supporting Evidence
+**Source:** Synthesis Law Review Blog, 2026-04-13
+Despite 'multiple international summits and frameworks,' there is 'still no Geneva Convention for AI' after 8+ years. The Council of Europe treaty achieves epistemic coordination (documented consensus on principles) while operational coordination fails through national security carve-outs. This is the international expression of epistemic-operational divergence—agreement on what should happen without binding implementation in high-stakes domains.


@@ -40,3 +40,10 @@ The 2026 International AI Safety Report achieved the largest international scien
 **Source:** FutureUAE REAIM analysis, 2026-02-05
 REAIM summit participation regressed from Seoul 2024 (61 nations, US signed under Biden) to A Coruña 2026 (35 nations, US and China both refused) = 43% participation decline in 18 months. The US reversal is particularly significant: not just opt-out from inception, but active withdrawal after demonstrated participation. VP J.D. Vance articulated the rationale as 'excessive regulation could stifle innovation and weaken national security' — the international expression of the domestic 'alignment tax' argument. This demonstrates that voluntary governance is not sticky across changes in domestic political administration, and that even when a major power participates and endorses, the system cannot survive competitive pressure framing.
+## Supporting Evidence
+**Source:** Synthesis Law Review Blog, 2026-04-13
+At the February 2026 REAIM A Coruña summit, only 35 of 85 nations signed a commitment to 20 principles on military AI. 'Both the United States and China opted out of the joint declaration.' This confirms that strategic actors opt out at the non-binding stage, preventing the soft-to-hard law transition. As a result: 'there is still no Geneva Convention for AI, or World Health Organisation for algorithms' after 8+ years of governance attempts.


@@ -7,10 +7,13 @@ date: 2026-04-13
 domain: grand-strategy
 secondary_domains: [ai-alignment]
 format: analysis
-status: unprocessed
+status: processed
+processed_by: leo
+processed_date: 2026-04-28
 priority: medium
 tags: [AI-governance, soft-law, hard-law, Council-of-Europe, REAIM, international-governance, national-security-carveout, stepping-stone]
 intake_tier: research-task
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content