leo: extract claims from 2026-04-03-coe-ai-framework-convention-scope-stratification

- Source: inbox/queue/2026-04-03-coe-ai-framework-convention-scope-stratification.md
- Domain: grand-strategy
- Claims: 1, Entities: 1
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
Teleo Agents 2026-04-03 14:21:02 +00:00
parent 4cafc83519
commit a1c26fba70
2 changed files with 66 additions and 0 deletions


@@ -0,0 +1,17 @@
---
type: claim
domain: grand-strategy
description: The first binding international AI treaty confirms that governance frameworks achieve binding status by scoping out the applications that most require governance, creating a two-tier architecture where civil applications are governed but military, frontier, and private sector AI remain unregulated
confidence: experimental
source: Council of Europe Framework Convention on AI (CETS 225), entered force November 2025; civil society critiques; GPPi policy brief March 2026
created: 2026-04-03
title: Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered into force by explicitly excluding national security and defense applications and by making private sector obligations optional
agent: leo
scope: structural
sourcer: Council of Europe, civil society organizations, GPPi
related_claims: ["eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional.md", "the-legislative-ceiling-on-military-ai-governance-is-conditional-not-absolute-cwc-proves-binding-governance-without-carveouts-is-achievable-but-requires-three-currently-absent-conditions.md", "international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage.md"]
---
# Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered into force by explicitly excluding national security and defense applications and by making private sector obligations optional
The Council of Europe AI Framework Convention (CETS 225) entered into force on November 1, 2025, becoming the first legally binding international AI treaty. It achieved binding status, however, through systematic exclusion of high-stakes applications:

1. National security activities are completely exempt — parties 'are not required to apply the provisions of the treaty to activities related to the protection of their national security interests'.
2. National defense matters are explicitly excluded.
3. Private sector obligations are opt-in — parties may choose whether to obligate companies directly or 'take other measures' while respecting international obligations.

Civil society organizations warned that 'the prospect of failing to address private companies while also providing states with a broad national security exemption would provide little meaningful protection to individuals who are increasingly subject to powerful AI systems.' This pattern mirrors the EU AI Act Article 2.3 national security carve-out, suggesting that scope stratification is the dominant mechanism by which AI governance frameworks achieve binding legal form.

The treaty's rapid entry into force (18 months from adoption, requiring only five ratifications, including three by CoE members) was enabled by its limited scope — it is binding only because it excludes the highest-stakes AI deployments. This creates a two-tier international architecture: Tier 1 (the CoE treaty) binds civil AI applications with minimal enforcement; Tier 2 (military, frontier development, private sector) remains ungoverned internationally. The GPPi March 2026 policy brief 'Anchoring Global AI Governance' acknowledges the difficulty of building on this foundation given its structural limitations.
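A minimal sketch of how a claim file like this one could be parsed and checked before ingestion. The field names come from the frontmatter above; the parser itself and the `REQUIRED` set are assumptions about the pipeline, not its actual implementation.

```python
import re

# Fields assumed mandatory for a claim record (taken from the frontmatter above).
REQUIRED = {"type", "domain", "description", "confidence", "source",
            "created", "title", "agent", "scope"}

def parse_frontmatter(text):
    """Split a claim markdown file into (frontmatter fields, body).

    Handles only single-line `key: value` pairs, which is all this
    claim format uses.
    """
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not m:
        raise ValueError("missing frontmatter delimiters")
    fields = {}
    for line in m.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields, m.group(2)

def validate(fields):
    """Return the sorted list of required fields that are missing."""
    return sorted(REQUIRED - fields.keys())
```

A file passing `validate` with an empty list would be eligible for the graph sync; anything else would presumably be bounced back to the inbox queue.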


@@ -0,0 +1,49 @@
# Council of Europe AI Framework Convention (CETS 225)
**Type:** International treaty
**Status:** In force (November 1, 2025)
**Formal title:** Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
**Scope:** Civil AI applications (excludes national security, defense, and makes private sector obligations optional)
## Overview
The first legally binding international AI treaty. It was adopted by the Council of Europe Committee of Ministers on May 17, 2024, and entered into force on November 1, 2025, after five ratifications, including three by CoE member states.
## Key Provisions
**Scope exclusions:**
- National security activities: Complete exemption — parties not required to apply treaty provisions
- National defense: Explicitly excluded
- Research and development: Excluded except when testing may interfere with human rights, democracy, or rule of law
- Private sector: Opt-in obligations — parties may choose direct obligations or alternative measures
**Signatories:**
- EU Commission (signed)
- United States (signed September 2024 under Biden, ratification unlikely under Trump)
- UK, France, Norway (among ratifying states)
- China: Did not participate in negotiations
## Timeline
- **2024-05-17** — Adopted by Committee of Ministers
- **2024-09-05** — Opened for signature in Vilnius
- **2024-09** — United States signed under Biden administration
- **2025-11-01** — Entered into force after five ratifications
- **2026-03** — GPPi policy brief acknowledges challenges of building on treaty given structural scope limitations
## Civil Society Response
Organizations warned that failing to address private companies while providing broad national security exemptions would provide 'little meaningful protection to individuals who are increasingly subject to powerful AI systems prone to bias, human manipulation, and the destabilisation of democratic institutions.'
## Governance Architecture
Creates two-tier international AI governance:
- **Tier 1:** Civil AI applications (bound by treaty, minimal enforcement)
- **Tier 2:** Military, national security, frontier development, private sector (ungoverned internationally)
## Sources
- Council of Europe official documentation
- CETaS Turing Institute analysis
- GPPi policy brief (March 2026): "Anchoring Global AI Governance"
- Civil society critiques