pipeline: archive 2 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent
680ea74614
commit
f47f250631
2 changed files with 55 additions and 0 deletions
@@ -0,0 +1,36 @@
---
title: "Superintelligence is already here, today"
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-03-02
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
  - "three conditions gate AI takeover risk (autonomy, robotics, and production-chain control), and current AI satisfies none of them, which bounds near-term catastrophic risk despite superhuman cognitive capabilities"
enrichments:
  - target: "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
    contribution: "jagged intelligence counterargument — SI arrived via combination not recursion (converted from standalone by Leo PR #27)"
---
# Superintelligence is already here, today

Noah Smith's argument that AI is already superintelligent via "jagged intelligence" — superhuman in aggregate but uneven across dimensions.

Key evidence:

- METR capability curve: steady climb across cognitive benchmarks, no plateau
- Erdős problems: ~100 moved from conjecture to solved
- Terence Tao: describes AI as a complementary research tool that changed his workflow
- Ginkgo Bioworks + GPT-5: 150 years of protein engineering compressed to weeks
- "Jagged intelligence": human-level language/reasoning + superhuman speed/memory/tirelessness = superintelligence without recursive self-improvement

Three conditions for AI planetary control (none currently met):

1. Full autonomy (not just task execution)
2. Robotics (physical manipulation at scale)
3. Production-chain control (self-sustaining hardware/energy/infrastructure)

Key insight: AI may never exceed humans at intuition or judgment, but it doesn't need to. The combination of human-level reasoning with superhuman computation is already transformative.

Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Superintelligence is already here, today.pdf
19
inbox/archive/general/2026-03-06-time-anthropic-drops-rsp.md
Normal file
@@ -0,0 +1,19 @@
---
title: "Exclusive: Anthropic Drops Flagship Safety Pledge"
author: TIME staff
source: TIME
date: 2026-03-06
url: https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
processed_by: theseus
processed_date: 2026-03-07
type: news article
domain: ai-alignment
status: processed
enrichments:
  - target: "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"
    contribution: "Conditional RSP structure, Kaplan quotes, $30B/$380B financials, METR frog-boiling warning"
---
# Exclusive: Anthropic Drops Flagship Safety Pledge
TIME exclusive on Anthropic overhauling its Responsible Scaling Policy. Original RSP: never train without advance safety guarantees in place. New RSP: delay only if Anthropic leads the field AND catastrophic risks are significant. Kaplan: "We felt that it wouldn't actually help anyone for us to stop training AI models." Context: $30B raise, ~$380B valuation, 10x annual revenue growth. METR's Chris Painter warns of a "frog-boiling" effect from removing binary thresholds.