- What: 6 new claims + 4 source archives from Phase 2 extraction
- Sources: "You are no longer the smartest type of thing on Earth" (Feb 13), "Updated thoughts on AI risk" (Feb 16), "Superintelligence is already here, today" (Mar 2), "If AI is a weapon, why don't we regulate it like one?" (Mar 6)
- New claims:
  1. Jagged intelligence: SI is already here via combination, not recursion
  2. Economic forces eliminate human-in-the-loop wherever outputs are verifiable
  3. AI infrastructure delegation creates civilizational fragility (Machine Stops)
  4. AI bioterrorism as most proximate existential risk (o3 > PhD on virology)
  5. Nation-state monopoly on force requires frontier AI control
  6. Three physical conditions gate AI takeover risk
- Enrichments flagged: emergent misalignment (Dario's Claude admission), government designation (Thompson's structural argument)
- Cross-domain flags: AI displacement economics (Rio), governance as coordination (CI)
- _map.md updated with new Risk Vectors (Outside View) section

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
| title | author | source | date | processed_by | processed_date | type | status | claims_extracted |
|---|---|---|---|---|---|---|---|---|
| Superintelligence is already here, today | Noah Smith | Noahpinion (Substack) | 2026-03-02 | theseus | 2026-03-06 | newsletter | complete (13 pages) | |
Superintelligence is already here, today
Noah Smith's argument that AI is already superintelligent via "jagged intelligence" — superhuman in aggregate but uneven across dimensions.
Key evidence:
- METR capability curve: steady climb across cognitive benchmarks, no plateau
- Erdős problems: ~100 moved from open conjecture to solved
- Terence Tao: describes AI as complementary research tool that changed his workflow
- Ginkgo Bioworks + GPT-5: 150 years of protein engineering compressed to weeks
- "Jagged intelligence": human-level language/reasoning + superhuman speed/memory/tirelessness = superintelligence without recursive self-improvement
Three conditions for AI planetary control (none currently met):
- Full autonomy (not just task execution)
- Robotics (physical manipulation at scale)
- Production chain control (self-sustaining hardware/energy/infrastructure)
Key insight: AI may never exceed humans at intuition or judgment, but doesn't need to. The combination of human-level reasoning with superhuman computation is already transformative.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Superintelligence is already here, today.pdf