teleo-codex/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md

---
title: Updated thoughts on AI risk
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-02-16
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
status: complete (13 pages)
claims_extracted:
  - Economic forces push humans out of every cognitive loop where output quality is independently verifiable, because human-in-the-loop oversight is a cost that competitive markets eliminate.
  - Delegating critical infrastructure development to AI creates civilizational fragility, because humans lose the ability to understand, maintain, and fix the systems civilization depends on.
  - AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk.
---

# Updated thoughts on AI risk

Noah Smith's shift from his 2023 AI optimism to heightened concern about existential risk. Three risk vectors analyzed:

  1. Autonomous robot uprising — least worried; requires robotics capability and production-chain control that don't exist yet
  2. "Machine Stops" scenario — vibe coding creates civilizational fragility as humans lose the ability to maintain critical software; overoptimization as the meta-pattern
  3. AI-assisted bioterrorism — top worry; o3 scores 43.8% vs. a 22.1% human-PhD baseline on a practical virology test; AI as a "genius in everyone's pocket" removing the expertise bottleneck

Connecting thread: overoptimization creating fragility — maximizing measurable outputs while eroding unmeasured essential properties (resilience, human capability, security).

Economic forces as an anti-alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop is preserved only where quality is hardest to measure.

Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf