What: 6 new claims from 4 Noahopinion articles + 4 source archives. Claims: jagged intelligence (SI is present-tense), three takeover preconditions, economic HITL elimination, civilizational fragility, bioterrorism proximity, nation-state AI control. Why: Phase 2 extraction — first new-source generation in the codex. Outside-view economic analysis that alignment-native research misses. Review: Leo accept — all 6 pass quality bar. Pentagon-Agent: Leo <76FB9BCA-CC16-4479-B3E5-25A3769B3D7E>
| title | author | source | date | processed_by | processed_date | type | status | claims_extracted |
|---|---|---|---|---|---|---|---|---|
| Updated thoughts on AI risk | Noah Smith | Noahopinion (Substack) | 2026-02-16 | theseus | 2026-03-06 | newsletter | complete (13 pages) | |
Updated thoughts on AI risk
Summarizes Noah Smith's shift from his 2023 AI optimism to increased concern about existential risk. Three risk vectors analyzed:
- Autonomous robot uprising — least worried; would require robotics capabilities and control of production chains that don't exist yet
- "Machine Stops" scenario — vibe coding creating civilizational fragility as humans lose ability to maintain critical software; overoptimization as the meta-pattern
- AI-assisted bioterrorism — top worry; o3 scores 43.8% vs. 22.1% for human PhD virologists on a practical virology test; AI acts as a "genius in everyone's pocket," removing the expertise bottleneck
Connecting thread: overoptimization creating fragility — maximizing measurable outputs while eroding unmeasured essential properties (resilience, human capability, security).
Economic forces as an alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop roles are preserved only where quality is hardest to measure.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf