extract: 2026-02-16-noahopinion-updated-thoughts-ai-risk
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent 82ea2d4942
commit 541766ac73
1 changed file with 11 additions and 1 deletion
@ -7,11 +7,15 @@ processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: complete (13 pages)
status: null-result
claims_extracted:
- "economic forces push humans out of every cognitive loop where output quality is independently verifiable, because human-in-the-loop is a cost that competitive markets eliminate"
- "delegating critical infrastructure development to AI creates civilizational fragility, because humans lose the ability to understand, maintain, and fix the systems civilization depends on"
- "AI lowers the expertise barrier for engineering biological weapons from PhD level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk"
processed_by: theseus
processed_date: 2026-03-20
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---
# Updated thoughts on AI risk
@ -27,3 +31,9 @@ Connecting thread: overoptimization creating fragility — maximizing measurable
Economic forces as alignment mechanism: wherever AI output quality is verifiable, markets eliminate human oversight. Human-in-the-loop preserved only where quality is hardest to measure.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Updated thoughts on AI risk.pdf
## Key Facts
- Noah Smith shifted from AI optimism in 2023 to increased concern about existential risk by 2026
- o3 scored 43.8% on practical virology tests, versus 22.1% for PhD-level human experts
- Smith identifies three AI risk vectors: autonomous robot uprising (least worried), a "Machine Stops" scenario (moderate concern), and AI-assisted bioterrorism (top concern)