---
title: "Updated thoughts on AI risk"
author: Noah Smith
source: Noahpinion (Substack)
date: 2026-02-16
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
status: complete (13 pages)
claims_extracted:
  - "economic forces push humans out of every cognitive loop where output quality is independently verifiable, because human-in-the-loop is a cost that competitive markets eliminate"
  - "delegating critical infrastructure development to AI creates civilizational fragility, because humans lose the ability to understand, maintain, and fix the systems civilization depends on"
  - "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk"
---
# Updated thoughts on AI risk
Noah Smith's shift from his 2023 AI optimism to heightened concern about existential risk. Three risk vectors analyzed:
1. **Autonomous robot uprising** — least worried; requires robotics and production-chain control that don't exist yet
2. **"Machine Stops" scenario** — vibe coding creating civilizational fragility as humans lose the ability to maintain critical software; overoptimization as the meta-pattern
3. **AI-assisted bioterrorism** — top worry; o3 scores 43.8% vs. human PhDs' 22.1% on a practical virology test; AI as a "genius in everyone's pocket" removing the expertise bottleneck
Connecting thread: overoptimization creating fragility — maximizing measurable outputs while eroding unmeasured essential properties (resilience, human capability, security).

Economic forces as alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop preserved only where quality is hardest to measure.
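The market-pressure claim can be sketched as a toy cost model: a competitive firm keeps a human reviewer only while review costs less than the expected cost of errors that slip past automated verification. This is an illustrative sketch, not the essay's model; all task names and numbers below are hypothetical assumptions.

```python
# Toy model of the "markets eliminate human-in-the-loop" claim.
# Hypothetical assumption: a human reviewer catches every error the
# automated check misses, so dropping the human is rational whenever
# review cost exceeds the expected cost of uncaught errors.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    verifiability: float      # P(automated check catches an AI error), 0..1
    error_cost: float         # cost of shipping an uncaught error
    ai_error_rate: float      # P(AI output is wrong)
    human_review_cost: float  # cost of one human-in-the-loop pass


def keeps_human(task: Task) -> bool:
    """Cost-minimizing firm keeps the human only if review is cheaper
    than the expected cost of errors that slip past automated checks."""
    uncaught = task.ai_error_rate * (1 - task.verifiability) * task.error_cost
    return task.human_review_cost < uncaught


# Hypothetical tasks: highly verifiable output vs. hard-to-verify output.
tasks = [
    Task("unit-tested code", verifiability=0.95, error_cost=100,
         ai_error_rate=0.2, human_review_cost=10),
    Task("legal brief", verifiability=0.10, error_cost=100,
         ai_error_rate=0.2, human_review_cost=10),
]

for t in tasks:
    status = "human in the loop" if keeps_human(t) else "fully automated"
    print(f"{t.name} -> {status}")
```

Under these made-up numbers the verifiable task sheds its human reviewer while the hard-to-verify one keeps it, matching the note's point that human oversight survives only where quality is hardest to measure.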
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf