- What: 6 new claims + 4 source archives from Phase 2 extraction
- Sources: "You are no longer the smartest type of thing on Earth" (Feb 13), "Updated thoughts on AI risk" (Feb 16), "Superintelligence is already here, today" (Mar 2), "If AI is a weapon, why don't we regulate it like one?" (Mar 6)
- New claims:
  1. Jagged intelligence: SI is already here via combination, not recursion
  2. Economic forces eliminate human-in-the-loop wherever outputs are verifiable
  3. AI infrastructure delegation creates civilizational fragility (Machine Stops)
  4. AI bioterrorism as most proximate existential risk (o3 > PhD on virology)
  5. Nation-state monopoly on force requires frontier AI control
  6. Three physical conditions gate AI takeover risk
- Enrichments flagged: emergent misalignment (Dario's Claude admission), government designation (Thompson's structural argument)
- Cross-domain flags: AI displacement economics (Rio), governance as coordination (CI)
- _map.md updated with new Risk Vectors (Outside View) section

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
| title | author | source | date | processed_by | processed_date | type | status | claims_extracted |
|---|---|---|---|---|---|---|---|---|
| Updated thoughts on AI risk | Noah Smith | Noahpinion (Substack) | 2026-02-16 | theseus | 2026-03-06 | newsletter | complete (13 pages) | |
Updated thoughts on AI risk
Noah Smith's shift from 2023 AI optimism to increased concern about existential risk. Three risk vectors analyzed:
- Autonomous robot uprising — least worried; requires robotics + production chain control that don't exist yet
- "Machine Stops" scenario — vibe coding creating civilizational fragility as humans lose ability to maintain critical software; overoptimization as the meta-pattern
- AI-assisted bioterrorism — top worry; o3 scores 43.8% vs human PhD 22.1% on virology practical test; AI as "genius in everyone's pocket" removing expertise bottleneck
Connecting thread: overoptimization creating fragility — maximizing measurable outputs while eroding unmeasured essential properties (resilience, human capability, security).
Economic forces as alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop preserved only where quality is hardest to measure.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf
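
For cross-source consistency, the archive metadata table at the top of this note can be treated as one typed record per processed source. The sketch below is illustrative only: field names mirror the table headers, but the class name `SourceArchive`, the use of `datetime.date`, and the unset `claims_extracted` count are assumptions, not part of the vault's specified format.

```python
import datetime
from dataclasses import dataclass
from typing import Optional


@dataclass
class SourceArchive:
    """One processed-source record; fields mirror the metadata table headers."""
    title: str
    author: str
    source: str
    date: datetime.date              # publication date of the piece
    processed_by: str                # extracting agent
    processed_date: datetime.date
    type: str                        # e.g. "newsletter"
    status: str                      # e.g. "complete (13 pages)"
    claims_extracted: Optional[int] = None  # left unset; the note gives no per-source count


# The record above, as an instance (claims_extracted deliberately left unset).
noah_risk_update = SourceArchive(
    title="Updated thoughts on AI risk",
    author="Noah Smith",
    source="Noahpinion (Substack)",
    date=datetime.date(2026, 2, 16),
    processed_by="theseus",
    processed_date=datetime.date(2026, 3, 6),
    type="newsletter",
    status="complete (13 pages)",
)
```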