---
type: source
title: "Empirical Evidence: AI Coordination and Governance Mechanisms That Changed Behavior"
author: "Theseus research agent (multi-source web synthesis)"
url: null
date_published: 2026-03-16
date_archived: 2026-03-16
domain: ai-alignment
status: processing
processed_by: theseus
tags: [ai-governance, coordination, safety-commitments, regulation, enforcement, voluntary-pledges]
sourced_via: "Theseus research agent — 45 web searches synthesized from Brookings, Stanford FMTI, EU legislation, OECD, government publications, TechCrunch, TIME, CNN, Fortune, academic papers"
---
# Empirical Evidence: AI Coordination and Governance Mechanisms That Changed Behavior
Core finding: almost no international AI governance mechanism has produced verified behavioral change at frontier AI labs. Only three classes of mechanism demonstrably work: (1) binding regulation with enforcement teeth (the EU AI Act, China's generative-AI rules), (2) export controls backed by state power, and (3) competitive and reputational pressure through markets.
## Behavioral Change Tier List
**Tier 1 — Verified behavioral change:**
- EU AI Act: Apple paused the Apple Intelligence rollout in the EU, Meta changed its ad practices, EUR 500M+ in fines (levied under the companion DMA). Companies are preemptively modifying products.
- China's AI regulations: mandatory algorithm filing, content labeling, criminal enforcement. First binding generative AI regulation (Aug 2023).
- US export controls: most impactful mechanism. Tiered country system, deployment caps, Nvidia designing compliance chips. Geopolitically motivated, not safety-motivated.
**Tier 2 — Institutional infrastructure, uncertain behavioral change:**
- AI Safety Institutes (UK, US, Japan, Korea, Canada). US-UK joint pre-deployment evaluation of OpenAI's o1. But the institutes have no blocking authority, and the US AISI has been defunded/rebranded.
- Third-party evaluation (METR, Apollo Research). Fragile ecosystem, no regulatory mandate.
**Tier 3 — Partial voluntary compliance:**
- Watermarking: 38% implementation. Google SynthID, Meta AudioSeal; Anthropic is the only major lab without one (see the detection sketch after this list).
- Red-teaming: self-reported, limited external verification.
**Tier 4 — No verified behavioral change:**
- ALL international declarations (Bletchley, Seoul, Paris, Hiroshima, OECD, UN)
- Frontier Model Forum
- White House voluntary commitments
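Context for the watermarking bullet above: this source doesn't describe SynthID's or AudioSeal's internals, but the basic statistical scheme from the academic literature (Kirchenbauer et al. 2023) illustrates why watermark compliance is one of the few externally verifiable commitments. A minimal Python sketch with a hypothetical secret key; not any vendor's actual algorithm:

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: int, token: int, key: bytes = b"demo-key") -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the
    preceding token and a secret key (Kirchenbauer et al. 2023 style)."""
    digest = hashlib.sha256(
        key + prev_token.to_bytes(4, "big") + token.to_bytes(4, "big")
    ).digest()
    return int.from_bytes(digest[:4], "big") / 2**32 < GREEN_FRACTION

def detect(tokens: list[int], key: bytes = b"demo-key") -> float:
    """z-score against the null hypothesis 'unwatermarked text'.
    A watermarking generator over-samples green tokens, so z >> 2 flags it."""
    n = len(tokens) - 1
    greens = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    return (greens - expected) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
```

A generator that softly boosts green-token logits leaves the text readable while pushing the detector's z-score far above chance, and anyone holding the key can run the check, which is what makes watermarking cheap to audit relative to red-teaming claims.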
## Key Evidence Points
- Stanford FMTI transparency scores DECLINING: mean change -17 points (2024→2025); Meta -29, Mistral -37, OpenAI -14.
- OpenAI explicitly made safety conditional on competitor behavior (Preparedness Framework v2, Apr 2025).
- OpenAI removed "safely" from mission statement (Nov 2025).
- OpenAI dissolved Superalignment team (May 2024) and Mission Alignment team (Feb 2026).
- Google accused by 60 UK lawmakers of violating Seoul commitments (Gemini 2.5 Pro, Apr 2025).
- 450+ organizations lobbied on AI in 2025 (up from 6 in 2016). $92M in lobbying fees Q1-Q3 2025.
- SB 1047 (CA AI safety bill) vetoed after heavy industry lobbying.
- Anthropic's own language: RSP "very hard to meet without industry-wide coordination."
## Novel Mechanisms
- Compute governance: export controls work, but they are geopolitically motivated. Know-your-customer (KYC) requirements for compute providers have been proposed, not implemented.
- Insurance/liability: market projected to reach $29.7B by 2033. Creates market incentives aligned with safety.
- Third-party auditing: METR, Apollo Research. Apollo warns the ecosystem is unsustainable without a regulatory mandate.
- Futarchy: implemented for DAO governance (MetaDAO, an Optimism experiment) but not yet for AI governance; a toy sketch of the mechanism follows.
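Since futarchy is the least familiar mechanism here, a toy sketch of its decision rule may help: traders price a welfare metric in two conditional markets, one per branch of a proposal, and the branch whose market prices the metric higher is executed, with trades on the losing branch reverted. Everything below is illustrative; the names are hypothetical and MetaDAO's actual on-chain implementation differs.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ConditionalMarket:
    """Prices a welfare metric (e.g. token price in 30 days) conditional
    on one branch of a decision. Toy model: price = mean of trader bids."""
    branch: str
    bids: list[float] = field(default_factory=list)

    def trade(self, forecast: float) -> None:
        self.bids.append(forecast)

    def price(self) -> float:
        return mean(self.bids) if self.bids else 0.0

def decide(adopt: ConditionalMarket, reject: ConditionalMarket) -> str:
    """Futarchy rule: execute the branch whose conditional market prices
    the welfare metric higher; trades on the losing branch are reverted."""
    return adopt.branch if adopt.price() > reject.price() else reject.branch

# Traders forecast the welfare metric under each branch of a proposal.
adopt = ConditionalMarket("adopt")
reject = ConditionalMarket("reject")
adopt.trade(1.30); adopt.trade(1.10)    # forecasts if the proposal passes
reject.trade(0.95); reject.trade(1.05)  # forecasts if the proposal fails
print(decide(adopt, reject))            # -> "adopt"
```

The relevant design choice is that decisions settle on a measurable welfare metric rather than a vote, which is why proponents argue it could extend to AI governance questions with verifiable outcomes.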