| claim | grand-strategy | International scientific bodies can achieve agreement on facts (epistemic layer) while simultaneously documenting failure to achieve agreement on action (operational layer), as demonstrated by 30+ countries coordinating on AI risk evidence while confirming that governance remains voluntary and fragmented | experimental | International AI Safety Report 2026 (Bengio et al., 100+ experts, 30+ countries) | 2026-04-25 | Epistemic coordination on AI safety outpaces operational coordination, creating documented scientific consensus on governance fragmentation | leo | grand-strategy/2026-02-03-bengio-international-ai-safety-report-2026.md | structural | Yoshua Bengio et al. |
| international-ai-governance-stepping-stone-theory-fails-because-strategic-actors-opt-out-at-non-binding-stage | binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications |
| technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap | formal-coordination-mechanisms-require-narrative-objective-function-specification | binding-international-ai-governance-achieves-legal-form-through-scope-stratification-excluding-high-stakes-applications | evidence-dilemma-rapid-ai-development-structurally-prevents-adequate-pre-deployment-safety-evidence-accumulation | only binding regulation with enforcement teeth changes frontier AI lab behavior, because every voluntary commitment has been eroded, abandoned, or made conditional on competitor behavior when commercially inconvenient | AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation |