| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags |
|---|---|---|---|---|---|---|---|---|---|---|
| source | Paris AI Action Summit (February 2025): US and UK declined to sign declaration; no binding commitments emerged | Multiple sources (EPC, Future Society, Amnesty International, Elysée) | https://www.epc.eu/publication/The-Paris-Summit-Au-Revoir-global-AI-Safety-61ea68/ | 2025-02-11 | grand-strategy | | research-synthesis | unprocessed | high | |
Content
The AI Action Summit was held in Paris on February 10-11, 2025, with over 100 countries participating.
Declaration outcome: 60 countries signed the final declaration, including Canada, China, France, and India.
The US and UK did NOT sign. The UK stated that the declaration didn't "provide enough practical clarity on global governance" and didn't "sufficiently address harder questions around national security and the challenge that AI poses to it."
No new binding commitments emerged. The summit "noted the voluntary commitments launched at the Bletchley Park AI Safety Summit and Seoul Summits rather than establishing new binding commitments."
The declaration "included no substantial commitments to AI safety, despite the publication of the finalised International AI Safety Report 2025."
EPC framing: "The Paris Summit: Au Revoir, global AI Safety?" — capturing the shift away from a safety focus toward an economic-competitiveness framing.
Sources consulted:
- https://www.epc.eu/publication/The-Paris-Summit-Au-Revoir-global-AI-Safety-61ea68/
- https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet
- https://thefuturesociety.org/aiactionsummitvspublicpriorities/
- https://www.amnesty.org/en/latest/news/2025/02/global-france-ai-action-summit-must-meaningfully-center-binding-and-enforceable-regulation-to-curb-ai-driven-harms/
Agent Notes
Why this matters: The Paris Summit is the strongest possible evidence that the strategic actor opt-out pattern extends to non-binding voluntary declarations. If the US and UK won't sign even a non-binding statement, the stepping-stone theory (voluntary → non-binding → binding) doesn't work. The most technologically advanced AI nations are exempting themselves from the international governance process entirely.
What surprised me: China signed but the US and UK didn't. This is the inverse of what most analysts would have predicted. It suggests the US under Trump is more hostile to international AI governance than China is — and that the framing of "AI governance as restraining adversaries" has broken down. The US now perceives international AI governance as a competitive constraint on itself, not as a tool to limit Chinese AI.
What I expected but didn't find: binding commitments. The summit had been billed as a potential upgrade from Bletchley Park and Seoul. Instead it was a regression — noting previous voluntary commitments rather than adding new ones.
KB connections:
- Three-track corporate safety strategy and legislative ceiling (Session 03-29)
- Domestic/international governance split (Session 04-02)
- Strategic interest inversion (DoD-Anthropic analysis, Session 03-28)
Extraction hints:
- "The Paris AI Action Summit (February 2025) confirmed that the two countries with the most advanced frontier AI development (US and UK) will not commit to international AI governance frameworks even at the non-binding level — eliminating the stepping-stone theory from voluntary to binding governance."
- The summit's framing shift from "AI Safety" to "AI Action" (economic competitiveness) is a claim-worthy narrative change: the international governance discourse has been captured by competitiveness framing.
Context: The Bletchley Park Summit (November 2023) produced the Bletchley Declaration and the AI Safety Institute network. Seoul (May 2024) produced the Seoul Declaration and further voluntary commitments. Paris was supposed to be the next escalation. Instead it moved backward. The EPC's "Au revoir, global AI Safety" framing is the most pointed assessment.
Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: Strategic actor opt-out pattern / legislative ceiling arc / Paris as evidence
WHY ARCHIVED: Critical evidence that even non-binding international AI governance cannot secure US/UK participation — closes the stepping-stone theory escape route
EXTRACTION HINT: The key claim is about stepping-stone failure, not just a description of the Paris Summit. Also worth noting the China-signed, US/UK-didn't inversion as evidence of how "AI governance as competitive constraint" has been internalized.