From 772c8ccb6c89c7d0da724a661fda3e34a9be2b26 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 19 Mar 2026 18:47:57 +0000
Subject: [PATCH] extract: 2026-02-16-noahopinion-updated-thoughts-ai-risk

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
---
 ...026-02-16-noahopinion-updated-thoughts-ai-risk.md | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md b/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md
index 7251d64ef..a8838f0af 100644
--- a/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md
+++ b/inbox/queue/2026-02-16-noahopinion-updated-thoughts-ai-risk.md
@@ -7,11 +7,15 @@ processed_by: theseus
 processed_date: 2026-03-06
 type: newsletter
 domain: ai-alignment
-status: complete (13 pages)
+status: null-result
 claims_extracted:
 - "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate"
 - "delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on"
 - "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk"
+processed_by: theseus
+processed_date: 2026-03-19
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
 ---
 
 # Updated thoughts on AI risk
@@ -27,3 +31,9 @@ Connecting thread: overoptimization creating fragility — maximizing measurable
 Economic forces as alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop preserved only where quality is hardest to measure.
 
 Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf
+
+
+## Key Facts
+- Noah Smith shifted from 2023 AI optimism to increased concern about existential risk by February 2026
+- o3 scored 43.8% on virology practical tests versus human PhD 22.1%
+- Smith ranks AI risks: (1) bioterrorism highest, (2) Machine Stops scenario medium, (3) autonomous robot uprising lowest
-- 
2.45.2