teleo-codex/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md

---
title: Updated thoughts on AI risk
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-02-16
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: complete (13 pages)
claims_extracted:
  - Economic forces push humans out of every cognitive loop where output quality is independently verifiable, because human-in-the-loop is a cost that competitive markets eliminate.
  - Delegating critical infrastructure development to AI creates civilizational fragility, because humans lose the ability to understand, maintain, and fix the systems civilization depends on.
  - AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk.
---

# Updated thoughts on AI risk

Documents Noah Smith's shift from his 2023 AI optimism toward heightened concern about existential risk. Three risk vectors analyzed:

  1. Autonomous robot uprising — least worrying; would require robotics capability and control of production chains that don't yet exist
  2. "The Machine Stops" scenario — vibe coding creates civilizational fragility as humans lose the ability to maintain critical software; overoptimization as the meta-pattern
  3. AI-assisted bioterrorism — top worry; o3 scores 43.8% vs. 22.1% for human PhDs on a practical virology test; AI as a "genius in everyone's pocket" removing the expertise bottleneck

Connecting thread: overoptimization creating fragility — maximizing measurable outputs while eroding unmeasured essential properties (resilience, human capability, security).

Economic forces as alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop preserved only where quality is hardest to measure.
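The selection mechanism above can be sketched as a toy cost model (a minimal sketch; all functions, names, and numbers here are hypothetical illustrations, not from the source): a competitive firm picks the cheaper workflow per task, so human review survives only where output quality is hard to verify and shipped errors stay expensive.

```python
def expected_cost(human_review: bool, verifiable: bool,
                  review_cost: float = 10.0,
                  ai_error_rate: float = 0.2,
                  error_cost: float = 100.0) -> float:
    """Expected per-task cost of a workflow (hypothetical numbers).

    If output is independently verifiable, errors are caught cheaply
    downstream, so skipping human review carries little penalty; if
    not, undetected errors are costly.
    """
    if human_review:
        return review_cost          # review catches errors before shipping
    penalty = 5.0 if verifiable else error_cost
    return ai_error_rate * penalty  # expected cost of shipped errors

def market_choice(verifiable: bool) -> str:
    """Competitive markets select the cheaper workflow."""
    with_review = expected_cost(human_review=True, verifiable=verifiable)
    without_review = expected_cost(human_review=False, verifiable=verifiable)
    return "human-in-the-loop" if with_review < without_review else "AI-only"

print(market_choice(verifiable=True))   # verifiable work: review is priced out
print(market_choice(verifiable=False))  # unverifiable work: review survives
```

Under these assumed parameters the model reproduces the claim's inversion: oversight is eliminated exactly where quality is easiest to measure.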

Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf