Compare commits

...

7 commits

Author SHA1 Message Date
Teleo Agents
8b50a65e71 extract: 2026-03-06-noahopinion-ai-weapon-regulation
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:51:20 +00:00
Leo
d574ea3eef Merge pull request 'extract: 2026-03-02-noahopinion-superintelligence-already-here' (#1499) from extract/2026-03-02-noahopinion-superintelligence-already-here into main
2026-03-19 18:51:18 +00:00
Teleo Agents
87c3c51893 extract: 2026-03-02-noahopinion-superintelligence-already-here
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:51:16 +00:00
Teleo Agents
5e57519371 pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:50:06 +00:00
Leo
93ac696e9d Merge pull request 'extract: 2026-02-13-noahopinion-smartest-thing-on-earth' (#1497) from extract/2026-02-13-noahopinion-smartest-thing-on-earth into main
2026-03-19 18:50:04 +00:00
Teleo Agents
c6b7126335 extract: 2026-02-13-noahopinion-smartest-thing-on-earth
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:50:03 +00:00
Teleo Agents
c0a99311b2 entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/ai-alignment/anthropic.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-19 18:49:56 +00:00
7 changed files with 91 additions and 3 deletions


@@ -10,6 +10,12 @@ enrichments:
 - "as AI-automated software development becomes certain the bottleneck shifts from building capacity to knowing what to build making structured knowledge graphs the critical input to autonomous systems.md"
 - "the gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real world impact.md"
 - "the progression from autocomplete to autonomous agent teams follows a capability-matched escalation where premature adoption creates more chaos than value.md"
+### Additional Evidence (confirm)
+*Source: [[2026-02-13-noahopinion-smartest-thing-on-earth]] | Added: 2026-03-19*
+Smith's observation that 'vibe coding' is now the dominant paradigm confirms that coding agents crossed from experimental to production-ready status, with the transition happening rapidly enough to be culturally notable by Feb 2026.
 ---
 # Coding agents crossed usability threshold in December 2025 when models achieved sustained coherence across complex multi-file tasks


@@ -54,6 +54,7 @@ Frontier AI safety laboratory founded by former OpenAI VP of Research Dario Amod
 - **2026-03** — Claude Code achieved 54% enterprise coding market share, $2.5B+ run-rate
 - **2026-03** — Surpassed OpenAI at 40% enterprise LLM spend
 - **2026-03** — Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons. Anthropic refused publicly and faced Pentagon retaliation.
+- **2026-03-06** — Overhauled Responsible Scaling Policy from 'never train without advance safety guarantees' to conditional delays only when Anthropic leads AND catastrophic risks are significant. Raised $30B at ~$380B valuation with 10x annual revenue growth. Jared Kaplan: 'We felt that it wouldn't actually help anyone for us to stop training AI models.'
 ## Competitive Position
 Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator — the RSP rollback creates strategic ambiguity. CEO publicly uncomfortable with power concentration while racing to concentrate it.


@@ -0,0 +1,21 @@
---
title: "You are no longer the smartest type of thing on Earth"
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-02-13
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
- "AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness which means the alignment problem is present-tense not future-tense"
---
# You are no longer the smartest type of thing on Earth
Noah Smith's Feb 13 newsletter on human disempowerment in the age of AI. Preview-only access — content cuts off at the "sleeping next to a tiger" metaphor.
Key content available: AI surpassing human intelligence, METR capability curve, vibe coding replacing traditional development, hyperscaler capex ~$600B in 2026, tiger metaphor for coexisting with superintelligence.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - You are no longer the smartest type of thing on Earth.pdf
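The extract files in this diff all share the same frontmatter shape: a `---`-fenced block of flat `key: value` scalars plus block lists of quoted strings. A minimal stdlib-only reader, as a sketch (the function name and the flat-scalar/block-list assumption are mine, not part of the pipeline):

```python
def parse_frontmatter(text: str) -> dict:
    """Parse a ---‑fenced frontmatter block into a dict.

    Deliberately minimal: handles flat `key: value` scalars and block
    lists of quoted strings, which is all the extract files above use.
    Nested mappings (e.g. enrichment target/contribution pairs) are out
    of scope for this sketch.
    """
    lines = text.strip().splitlines()
    if not lines or lines[0] != "---":
        raise ValueError("no frontmatter fence")
    end = lines.index("---", 1)  # closing fence
    meta: dict = {}
    list_key = None  # key whose block list we are currently filling
    for raw in lines[1:end]:
        line = raw.strip()
        if line.startswith("- ") and list_key is not None:
            # block-list item under the most recent empty-valued key
            meta.setdefault(list_key, []).append(line[2:].strip().strip('"'))
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            value = value.strip().strip('"')
            if value:
                meta[key] = value
                list_key = None
            else:
                list_key = key  # empty value: a block list follows
    return meta
```

Against the file above, `parse_frontmatter(text)["status"]` would yield `"processed"` and `claims_extracted` a one-element list.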


@@ -0,0 +1,26 @@
{
"rejected_claims": [
{
"filename": "ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 1,
"kept": 0,
"fixed": 3,
"rejected": 1,
"fixes_applied": [
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:set_created:2026-03-19",
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:stripped_wiki_link:bostrom-takes-single-digit-year-timelines-to-superintelligen",
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:stripped_wiki_link:three-conditions-gate-AI-takeover-risk-autonomy-robotics-and"
],
"rejections": [
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}
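Because the validation report is plain JSON, its internal counts can be cross-checked mechanically. A sketch (the schema is inferred from this one report; the invariants checked — counts matching list lengths, kept + rejected == total — are my assumptions about the pipeline, not documented guarantees):

```python
def check_report(report: dict) -> list[str]:
    """Return a list of internal-consistency problems in a validation report.

    Assumed invariants: 'rejected' matches the rejected_claims list,
    kept + rejected partition 'total' (fixes can apply to kept files,
    so 'fixed' is counted separately), and every rejection string is
    '<filename>:<issue>' for a file listed in rejected_claims.
    """
    problems = []
    stats = report["validation_stats"]
    rejected_files = {c["filename"] for c in report["rejected_claims"]}

    if stats["rejected"] != len(rejected_files):
        problems.append("'rejected' count does not match rejected_claims")
    if stats["kept"] + stats["rejected"] != stats["total"]:
        problems.append("kept + rejected != total")
    if stats["fixed"] != len(stats["fixes_applied"]):
        problems.append("'fixed' count does not match fixes_applied")
    for entry in stats["rejections"]:
        filename, _, issue = entry.partition(":")
        if filename not in rejected_files:
            problems.append(f"rejection references unknown file: {filename}")
    return problems
```

Usage would be `check_report(json.load(open(path)))` over a report file; an empty list means the report is self-consistent under these assumptions.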


@@ -7,9 +7,13 @@ processed_by: theseus
 processed_date: 2026-03-06
 type: newsletter
 domain: ai-alignment
-status: partial (preview only — paywalled after page 5)
+status: enrichment
 claims_extracted:
 - "AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness which means the alignment problem is present-tense not future-tense"
+processed_by: theseus
+processed_date: 2026-03-19
+enrichments_applied: ["coding-agents-crossed-usability-threshold-december-2025-when-models-achieved-sustained-coherence-across-complex-multi-file-tasks.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 # You are no longer the smartest type of thing on Earth
@@ -19,3 +23,9 @@ Noah Smith's Feb 13 newsletter on human disempowerment in the age of AI. Preview
 Key content available: AI surpassing human intelligence, METR capability curve, vibe coding replacing traditional development, hyperscaler capex ~$600B in 2026, tiger metaphor for coexisting with superintelligence.
 Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - You are no longer the smartest type of thing on Earth.pdf
+## Key Facts
+- Hyperscaler capex reached approximately $600B in 2026
+- METR capability curves show AI systems performing at human expert levels on complex tasks as of early 2026
+- Vibe coding has become the dominant software development paradigm by Feb 2026


@@ -7,12 +7,16 @@ processed_by: theseus
 processed_date: 2026-03-06
 type: newsletter
 domain: ai-alignment
-status: complete (13 pages)
+status: null-result
 claims_extracted:
 - "three conditions gate AI takeover risk autonomy robotics and production chain control and current AI satisfies none of them which bounds near-term catastrophic risk despite superhuman cognitive capabilities"
 enrichments:
 - target: "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
   contribution: "jagged intelligence counterargument — SI arrived via combination not recursion (converted from standalone by Leo PR #27)"
+processed_by: theseus
+processed_date: 2026-03-19
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
 ---
 # Superintelligence is already here, today
@@ -34,3 +38,11 @@ Three conditions for AI planetary control (none currently met):
 Key insight: AI may never exceed humans at intuition or judgment, but doesn't need to. The combination of human-level reasoning with superhuman computation is already transformative.
 Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Superintelligence is already here, today.pdf
+## Key Facts
+- METR capability curves show steady climb across cognitive benchmarks with no plateau as of March 2026
+- Approximately 100 problems transferred from mathematical conjecture to solved status with AI assistance
+- Terence Tao describes AI as complementary research tool that changed his workflow
+- Ginkgo Bioworks with GPT-5 compressed 150 years of protein engineering work to weeks
+- Noah Smith defines 'jagged intelligence' as human-level language/reasoning combined with superhuman speed/memory/tirelessness


@@ -7,13 +7,17 @@ processed_by: theseus
 processed_date: 2026-03-06
 type: newsletter
 domain: ai-alignment
-status: complete (14 pages)
+status: null-result
 claims_extracted:
 - "nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments"
 - "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk"
 enrichments:
 - "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them"
 - "emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive"
+processed_by: theseus
+processed_date: 2026-03-19
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
 ---
 # If AI is a weapon, why don't we regulate it like one?
@@ -32,3 +36,11 @@ Key arguments:
 Enrichments to existing claims: Dario's Claude misalignment admission strengthens emergent misalignment claim; full Thompson argument enriches government designation claim.
 Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - If AI is a weapon, why don't we regulate it like one_.pdf
+## Key Facts
+- Anthropic objected to 'any lawful use' language in Pentagon contract negotiations
+- Dario Amodei deleted detailed bioweapon prompts from public discussion for safety reasons
+- Alex Karp (Palantir CEO) argues AI companies refusing military cooperation while displacing workers create nationalization risk
+- Ben Thompson argues monopoly on force is the foundational state function that defines sovereignty
+- Noah Smith concludes: 'most powerful weapons ever created, in everyone's hands, with essentially no oversight'