theseus: extract claims from 2025-07-00-fli-ai-safety-index-summer-2025 #222

Closed
theseus wants to merge 1 commit from extract/2025-07-00-fli-ai-safety-index-summer-2025 into main


@@ -7,9 +7,14 @@ date: 2025-07-01
 domain: ai-alignment
 secondary_domains: [grand-strategy]
 format: report
-status: unprocessed
+status: null-result
 priority: high
 tags: [AI-safety, company-scores, accountability, governance, existential-risk, transparency]
+processed_by: theseus
+processed_date: 2025-07-01
+enrichments_applied: ["the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md", "safe AI development requires building alignment mechanisms before scaling capability.md", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "High-value extraction. Four new claims establish a quantitative baseline for frontier AI safety practice. Four enrichments provide strong empirical confirmation of existing structural claims about the alignment tax and the failure of voluntary safety commitments. The C+ ceiling and the universal D-or-below existential safety scores are the headline findings. The index provides a repeatable methodology for tracking safety evolution over time."
 ---

 ## Content
@@ -62,3 +67,10 @@ FLI's comprehensive evaluation of frontier AI companies across 6 safety dimensio
 PRIMARY CONNECTION: [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]]
 WHY ARCHIVED: Provides quantitative company-level evidence for the race-to-the-bottom dynamic: the best company scores C+ overall, and every company scores D or below in existential safety
 EXTRACTION HINT: The headline claim is "no frontier AI company scores above D in existential safety despite AGI claims." The company-by-company comparison and the existential safety gap are the highest-value extractions.
+
+## Key Facts
+- The FLI AI Safety Index Summer 2025 assessed 7 frontier AI companies across 6 dimensions
+- Company scores: Anthropic C+ (2.64), OpenAI C (2.10), DeepMind C- (1.76), x.AI D (1.23), Meta D (1.06), Zhipu AI F (0.62), DeepSeek F (0.37)
+- Six evaluation dimensions: Risk Assessment, Current Harms, Safety Frameworks, Existential Safety, Governance & Accountability, Information Sharing
+- The methodology was peer-reviewed and based on public information plus email correspondence with the developers
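
The Key Facts pair each letter grade with a numeric score. Assuming the standard US GPA scale, every reported grade is the highest letter whose GPA value does not exceed the company's score; this floor rule is an inference from the seven pairs above, not FLI's documented method. A minimal sketch in Python:

```python
# Sanity-check the (grade, score) pairs from the Key Facts against a
# floor-based US GPA mapping. The GPA table is the standard scale; the
# floor rule is an assumption inferred from the data, not FLI's method.
GPA = {
    "A": 4.0, "A-": 3.67,
    "B+": 3.33, "B": 3.0, "B-": 2.67,
    "C+": 2.33, "C": 2.0, "C-": 1.67,
    "D+": 1.33, "D": 1.0, "D-": 0.67,
    "F": 0.0,
}

# (company, reported letter grade, reported numeric score), as listed above.
SCORES = [
    ("Anthropic", "C+", 2.64),
    ("OpenAI", "C", 2.10),
    ("DeepMind", "C-", 1.76),
    ("x.AI", "D", 1.23),
    ("Meta", "D", 1.06),
    ("Zhipu AI", "F", 0.62),
    ("DeepSeek", "F", 0.37),
]

def letter_grade(score: float) -> str:
    """Highest letter whose GPA value does not exceed the score (floor rule)."""
    return max((g for g in GPA if GPA[g] <= score), key=GPA.get)

for company, grade, score in SCORES:
    assert letter_grade(score) == grade, (company, grade, score)
print("all 7 reported grades are consistent with the floor rule")
```

Nearest-grade rounding would instead give Anthropic a B- (2.64 is closer to 2.67 than to 2.33), so the floor rule is the only simple GPA mapping consistent with all seven rows.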
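
The frontmatter change in the first hunk shows the metadata this pipeline reads and writes. A minimal sketch of parsing it, assuming the archive file uses standard `---`-delimited YAML frontmatter; the file path, the `load_frontmatter` helper, and the idea of restricting `status` to the two values visible in the diff are illustrative assumptions, not project code:

```python
# Hypothetical frontmatter reader using PyYAML; only the field names and the
# two status values come from the diff above.
import yaml

def load_frontmatter(path: str) -> dict:
    """Return the YAML frontmatter of a markdown file delimited by '---' lines."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        raise ValueError(f"{path}: no frontmatter block")
    # The frontmatter sits between the first two '---' delimiters.
    _, frontmatter, _body = text.split("---", 2)
    return yaml.safe_load(frontmatter)

# Assumed filename, taken from the PR's branch name.
meta = load_frontmatter("2025-07-00-fli-ai-safety-index-summer-2025.md")
assert meta["status"] in {"unprocessed", "null-result"}
print(meta["processed_by"], meta["extraction_model"], len(meta["enrichments_applied"]))
```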