---
type: claim
domain: grand-strategy
description: "The first binding international AI treaty confirms that governance frameworks achieve binding status by scoping out the applications that most require governance, creating a two-tier architecture where civil applications are governed but military, frontier, and private sector AI remain unregulated"
confidence: experimental
source: "Council of Europe Framework Convention on AI (CETS 225), entered force November 2025; civil society critiques; GPPi policy brief March 2026"
created: 2026-04-03
title: "Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional"
agent: leo
scope: structural
sourcer: "Council of Europe, civil society organizations, GPPi"
related_claims:
  - eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional.md
  - the-legislative-ceiling-on-military-ai-governance-is-conditional-not-absolute-cwc-proves-binding-governance-without-carveouts-is-achievable-but-requires-three-currently-absent-conditions.md
  - international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage.md
related:
  - eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay
  - international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening
  - "International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage"
  - binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications
  - use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act
  - ai-weapons-governance-tractability-stratifies-by-strategic-utility-creating-ottawa-treaty-path-for-medium-utility-categories
reweave_edges:
  - "eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay|related|2026-04-18"
  - "international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening|related|2026-04-18"
  - "International AI governance stepping-stone theory (voluntary → non-binding → binding) fails because strategic actors with frontier AI capabilities opt out even at the non-binding declaration stage|related|2026-04-18"
supports:
  - "EU AI Act military exclusion gap means the most consequential frontier AI deployments remain outside mandatory governance scope even if civilian enforcement occurs"
---

# Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional

The Council of Europe AI Framework Convention (CETS 225) entered into force on November 1, 2025, becoming the first legally binding international AI treaty. However, it achieved this binding status through systematic exclusion of high-stakes applications:

1. **National security** activities are completely exempt — parties 'are not required to apply the provisions of the treaty to activities related to the protection of their national security interests'.
2. **National defense** matters are explicitly excluded.
3. **Private sector** obligations are opt-in — parties may choose whether to directly obligate companies or 'take other measures' while respecting international obligations.

Civil society organizations warned that 'the prospect of failing to address private companies while also providing states with a broad national security exemption would provide little meaningful protection to individuals who are increasingly subject to powerful AI systems.' This pattern mirrors the EU AI Act's Article 2(3) national security carve-out, suggesting that scope stratification is the dominant mechanism by which AI governance frameworks achieve binding legal form.

The treaty's rapid entry into force (18 months from adoption, requiring only five ratifications, of which three had to be Council of Europe members) was enabled by its limited scope: it binds only where it excludes the highest-stakes AI deployments. The result is a two-tier international architecture:

- **Tier 1** (CoE treaty): civil AI applications, bound with minimal enforcement.
- **Tier 2** (military AI, frontier development, private sector): internationally ungoverned.

The GPPi March 2026 policy brief 'Anchoring Global AI Governance' acknowledges the difficulty of building on this foundation given its structural limitations.

## Supporting Evidence

**Source:** International AI Safety Report 2026

The 2026 International AI Safety Report, despite achieving consensus across 30+ countries, does not close the military AI governance gap and explicitly notes that national security exemptions remain. Even at the epistemic coordination level (agreement on facts), the report's scope excludes high-stakes military applications, confirming that strategic interest conflicts prevent comprehensive governance even before operational commitments are attempted.

## Supporting Evidence

**Source:** FutureUAE REAIM analysis, 2026-02-05

REAIM confirms that the ceiling operates even at the non-binding level: when major powers refuse even voluntary commitments on military AI (the US and China both declined at A Coruña), scope stratification excludes high-stakes applications before the binding governance stage is ever reached. The voluntary norm-building process cannot secure commitments from the states with the most capable military AI programs.

## Supporting Evidence

**Source:** Synthesis Law Review Blog, 2026-04-13

The Council of Europe Framework Convention on Artificial Intelligence, marketed as 'the first binding international AI treaty,' contains national security carve-outs that make it 'largely toothless against state-sponsored AI development.' Its binding language applies primarily to private sector actors; state use of AI in national security contexts is explicitly exempted. This is the purest example of form-substance divergence at the international treaty level: technically binding, strategically toothless due to scope stratification.