Compare commits

..

21 commits

Author SHA1 Message Date
11eda13be5 ingestion: 1 futardio events — 20260319-1945 (#1502)
Co-authored-by: m3taversal <m3taversal@gmail.com>
Co-committed-by: m3taversal <m3taversal@gmail.com>
2026-03-19 19:45:26 +00:00
Teleo Agents
edca3827be pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 19:01:49 +00:00
Leo
f9b664077f Merge pull request 'extract: 2026-03-06-noahopinion-ai-weapon-regulation' (#1500) from extract/2026-03-06-noahopinion-ai-weapon-regulation into main 2026-03-19 19:01:48 +00:00
Teleo Agents
504358a126 extract: 2026-03-06-noahopinion-ai-weapon-regulation
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 19:01:46 +00:00
Teleo Agents
b354cba96f pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:57:46 +00:00
Leo
028943c61b Merge pull request 'extract: 2026-00-00-darioamodei-machines-of-loving-grace' (#1495) from extract/2026-00-00-darioamodei-machines-of-loving-grace into main 2026-03-19 18:57:45 +00:00
Teleo Agents
11115d420e extract: 2026-00-00-darioamodei-machines-of-loving-grace
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:57:43 +00:00
Teleo Agents
f47f250631 pipeline: archive 2 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:51:26 +00:00
Leo
680ea74614 Merge pull request 'extract: 2026-03-06-time-anthropic-drops-rsp' (#1501) from extract/2026-03-06-time-anthropic-drops-rsp into main 2026-03-19 18:51:24 +00:00
Teleo Agents
4c9e8acb34 extract: 2026-03-06-time-anthropic-drops-rsp
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:51:23 +00:00
Leo
d574ea3eef Merge pull request 'extract: 2026-03-02-noahopinion-superintelligence-already-here' (#1499) from extract/2026-03-02-noahopinion-superintelligence-already-here into main 2026-03-19 18:51:18 +00:00
Teleo Agents
87c3c51893 extract: 2026-03-02-noahopinion-superintelligence-already-here
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:51:16 +00:00
Teleo Agents
5e57519371 pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:50:06 +00:00
Leo
93ac696e9d Merge pull request 'extract: 2026-02-13-noahopinion-smartest-thing-on-earth' (#1497) from extract/2026-02-13-noahopinion-smartest-thing-on-earth into main
2026-03-19 18:50:04 +00:00
Teleo Agents
c6b7126335 extract: 2026-02-13-noahopinion-smartest-thing-on-earth
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:50:03 +00:00
Teleo Agents
c0a99311b2 entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/ai-alignment/anthropic.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-19 18:49:56 +00:00
Teleo Agents
822a99cf93 pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:47:03 +00:00
Leo
e90842dc9f Merge pull request 'extract: 2026-00-00-darioamodei-adolescence-of-technology' (#1494) from extract/2026-00-00-darioamodei-adolescence-of-technology into main 2026-03-19 18:47:01 +00:00
Teleo Agents
438336ea6b extract: 2026-00-00-darioamodei-adolescence-of-technology
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-19 18:45:25 +00:00
Teleo Agents
57efca79a1 epimetheus: add domain to nasaa source 2026-03-19 18:33:44 +00:00
Teleo Agents
ac6c0a631f epimetheus: add missing domain to 8 queue sources 2026-03-19 18:33:16 +00:00
19 changed files with 566 additions and 6 deletions


@@ -10,6 +10,12 @@ enrichments:
- "as AI-automated software development becomes certain the bottleneck shifts from building capacity to knowing what to build making structured knowledge graphs the critical input to autonomous systems.md"
- "the gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real world impact.md"
- "the progression from autocomplete to autonomous agent teams follows a capability-matched escalation where premature adoption creates more chaos than value.md"
### Additional Evidence (confirm)
*Source: [[2026-02-13-noahopinion-smartest-thing-on-earth]] | Added: 2026-03-19*
Smith's observation that 'vibe coding' is now the dominant paradigm confirms that coding agents crossed from experimental to production-ready status, with the transition happening rapidly enough to be culturally notable by Feb 2026.
---
# Coding agents crossed usability threshold in December 2025 when models achieved sustained coherence across complex multi-file tasks


@@ -54,6 +54,7 @@ Frontier AI safety laboratory founded by former OpenAI VP of Research Dario Amod
- **2026-03** — Claude Code achieved 54% enterprise coding market share, $2.5B+ run-rate
- **2026-03** — Surpassed OpenAI at 40% enterprise LLM spend
- **2026-03** — Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons. Anthropic refused publicly and faced Pentagon retaliation.
- **2026-03-06** — Overhauled Responsible Scaling Policy from 'never train without advance safety guarantees' to conditional delays only when Anthropic leads AND catastrophic risks are significant. Raised $30B at ~$380B valuation with 10x annual revenue growth. Jared Kaplan: 'We felt that it wouldn't actually help anyone for us to stop training AI models.'
## Competitive Position
Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator — the RSP rollback creates strategic ambiguity. CEO publicly uncomfortable with power concentration while racing to concentrate it.


@@ -0,0 +1,246 @@
---
type: source
title: "Futardio: Nex ID fundraise goes live"
author: "futard.io"
url: "https://www.futard.io/launch/Cs1tWSwarGDXFBTZaFE4b13Npx9PnjSsgEjRmGAZvQU6"
date: 2026-01-01
domain: internet-finance
format: data
status: unprocessed
tags: [futardio, metadao, futarchy, solana]
event_type: launch
---
## Launch Details
- Project: Nex ID
- Description: NexID: The Educational Growth Protocol
- Funding target: $50,000.00
- Total committed: N/A
- Status: Initialized
- Launch date: 2026-01-01
- URL: https://www.futard.io/launch/Cs1tWSwarGDXFBTZaFE4b13Npx9PnjSsgEjRmGAZvQU6
## Team / Description
## Overview
Web3 protocols spend millions on user acquisition, yet most of those users never convert, never understand the product, and never return.
NexID transforms education into a **verifiable, onchain acquisition funnel**, ensuring every rewarded user has actually learned, engaged, and executed.
In Web3, capital is onchain but user understanding isn't. **NexID aims to close that gap.**
---
## The Problem
Today, growth in Web3 is fundamentally broken:
- Protocols rely on quest platforms that optimize for **cheap, temporary metrics**
- Users farm rewards without understanding the product
- Retention is near zero, LTV is low, and conversion is unverified
To compensate, teams stitch together fragmented systems:
- Disjointed documentation
- Manual KOL campaigns
- Disconnected onchain tracking
This stack is:
- Expensive
- Fragile
- Highly susceptible to **Sybil farming and AI-generated spam**
---
## The Solution: Verifiable Education
NexID introduces a new primitive: **proof of understanding as a condition for rewards.**
We enforce this through a closed-loop system:
### 1. Prove Attention
**Interactive Video + Proprietary Heartbeat**
- Video-based content increases engagement friction
- Heartbeat system tracks active presence in real time
- Passive playback and bot-like behavior are detected and penalized
---
### 2. Prove Understanding
**AI Semantic Grading**
- Users respond to randomized, offchain prompts
- AI agents evaluate answers for **technical depth and contextual accuracy**
- Copy-paste, low-effort, and AI-generated spam are rejected and penalized
---
### 3. Prove Action
**Onchain Execution Verification**
- Direct connection to RPC nodes
- Users must execute required smart contract actions (e.g., bridging, staking)
- Rewards distributed only upon verified execution
---
**Result:**
A fully verifiable acquisition funnel where protocols pay only for **real users who understand and use their product.**
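The three-proof loop above gates rewards on attention, understanding, and execution together. A minimal sketch of such a gate, where every name and threshold is an illustrative assumption rather than the actual NexID protocol:

```python
from dataclasses import dataclass

# Hypothetical reward gate modeled on the three proofs described above.
# Field names and thresholds are assumptions for illustration only.

@dataclass
class Participant:
    heartbeat_ratio: float  # fraction of video time with active-presence signals
    grading_score: float    # AI semantic grade for the randomized prompt, 0..1
    tx_verified: bool       # required onchain action confirmed via RPC

def eligible_for_reward(p: Participant,
                        min_heartbeat: float = 0.8,
                        min_grade: float = 0.6) -> bool:
    """Reward only when attention, understanding, and action are all proven."""
    proved_attention = p.heartbeat_ratio >= min_heartbeat   # 1. Prove Attention
    proved_understanding = p.grading_score >= min_grade     # 2. Prove Understanding
    proved_action = p.tx_verified                           # 3. Prove Action
    return proved_attention and proved_understanding and proved_action

# A quiz-farmer who never executes onchain gets nothing; a real user qualifies:
farmer = Participant(heartbeat_ratio=0.95, grading_score=0.9, tx_verified=False)
real_user = Participant(heartbeat_ratio=0.9, grading_score=0.75, tx_verified=True)
```

The key design point is the conjunction: failing any one proof zeroes the reward, which is what makes each individual metric uneconomical to farm in isolation.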
---
## Market & Differentiation
**Target Market:** $1.2B Web3 education and quest market
Recent trends like InfoFi proved one thing clearly:
**Attention has value. But attention alone is easily gamed.**
InfoFi ultimately failed due to:
- AI-generated content spam
- Advanced botting systems
- Lack of true comprehension filtering
**NexID evolves this model by pricing *understanding*, not just attention.**
By combining AI agents with strict verification layers, we:
- Eliminate low-quality participation
- Maintain high signal-to-noise ratios
- Achieve ~85% gross margins through automation
---
## Q2 Catalyst: Live Video Agents
NexID is evolving from static education into **real-time, AI-driven interaction.**
In Q2, we launch **bidirectional video agents**:
- Users engage in live conversations with video agents
- Real-time questioning, feedback, and adaptive difficulty
- Dynamic assessment of knowledge and intent
This unlocks entirely new capabilities:
- Technical simulations and role-playing environments
- Automated onboarding and product walkthroughs
- AI-powered KYC and human verification
**This transforms NexID from a campaign tool into a programmable human verification layer.**
---
## Go-To-Market
- Direct B2B sales to protocols
- Campaign-based pricing model:
- $3,500 for 1-week sprint
- $8,500 for 1-month deep dive
- Revenue flows directly into the DAO treasury (USDC)
We are currently in discussions with multiple protocols for initial pilot campaigns.
---
## Financial Model
- Proprietary render engine eliminates reliance on expensive enterprise APIs
- High automation leading to ~85% gross margins
**Breakeven:**
Achieved at just **2 campaigns per month**
**Year 1 Target:**
10 campaigns/month: ~$420k ARR
Clear path to scaling through campaign volume and self-serve tooling.
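The breakeven and ARR claims can be checked directly from the stated prices and burn (a sketch; it assumes the Year 1 target is priced at the 1-week sprint tier):

```python
# Figures quoted in the Go-To-Market and Use of Funds sections.
monthly_burn = 5_000      # $5,000/month operational burn
sprint_price = 3_500      # 1-week sprint campaign
deep_dive_price = 8_500   # 1-month deep dive campaign

# Breakeven: two campaigns a month cover burn even at the cheaper tier.
assert 2 * sprint_price >= monthly_burn

# Year 1 target: 10 campaigns/month, assumed here at the sprint price.
annual_run_rate = 10 * sprint_price * 12
print(annual_run_rate)  # 420000 — the ~$420k ARR figure above
```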
---
## Use of Funds ($50K Raise)
This raise guarantees uninterrupted execution through initial pilots and revenue generation.
### Allocation
- **Initial Liquidity (20%)** — $10,000
- Permanently locked for Futarchy prediction market liquidity
- **Operational Runway (80%)** — $40,000
- 8-month runway at $5,000/month
### Monthly Burn
- Team (2 founders): $1,500
- Marketing & BD: $1,500
- Infrastructure (compute, APIs, gas): $1,000
- Video agent licensing: $1,000
**PS: The month-1 team fund ($1,500) is being added to the month-1 video license cost to secure the license for a quarter (3 months).**
*Runway extends as B2B revenue begins compounding.*
---
## Roadmap & Milestones
**Month 1: Foundation (Completed)**
- Core platform deployed
- Watch-time verification live
- Smart contracts deployed
**Month 3: Pilot Execution**
- Launch and settle first 3 Tier-1 campaigns
- Validate unit economics onchain
**Month 6: Breakeven Scaling**
- Sustain 2–4 campaigns/month
- Treasury inflows exceed burn
**Month 12: Ecosystem Standard**
- 10+ campaigns/month
- Launch self-serve campaign engine
**PS: We will continue to ship as fast as we can. Iterate and then scale.**
---
## Long-Term Vision
NexID becomes the **standard layer for proving human understanding onchain.**
Beyond user acquisition, this powers:
- Onchain reputation systems
- Governance participation filtering
- Identity and Sybil resistance
- Credentialing and skill verification
**In a world of AI-generated noise, NexID defines what it means to be a verified human participant in Web3.**
---
## Links
- Deck: https://drive.google.com/file/d/1qTRtImWXP9VR-x7bvx5wpUFw1EnFRIm6/view?usp=sharing
- Roadmap: https://nexid.fun/roadmap
- How it works: https://academy.nexid.fun/partner-portal
- InfoFi Case Study: https://analysis.nexid.fun/
- Website: https://nexid.fun/
- Twitter: https://x.com/UseNexID
- Discord: https://discord.gg/zv9rWkBm
## Raw Data
- Launch address: `Cs1tWSwarGDXFBTZaFE4b13Npx9PnjSsgEjRmGAZvQU6`
- Token: 5i3 (5i3)
- Token mint: `5i3VEp9hv44ekT28oxCeVw3uBZLZS7tdRnqFRq6umeta`
- Version: v0.7


@@ -0,0 +1,21 @@
---
title: "You are no longer the smartest type of thing on Earth"
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-02-13
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
- "AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness which means the alignment problem is present-tense not future-tense"
---
# You are no longer the smartest type of thing on Earth
Noah Smith's Feb 13 newsletter on human disempowerment in the age of AI. Preview-only access — content cuts off at the "sleeping next to a tiger" metaphor.
Key content available: AI surpassing human intelligence, METR capability curve, vibe coding replacing traditional development, hyperscaler capex ~$600B in 2026, tiger metaphor for coexisting with superintelligence.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - You are no longer the smartest type of thing on Earth.pdf


@@ -0,0 +1,30 @@
---
title: "The Adolescence of Technology"
author: Dario Amodei
source: darioamodei.com
date: 2026-01-01
url: https://darioamodei.com/essay/the-adolescence-of-technology
processed_by: theseus
processed_date: 2026-03-07
type: essay
domain: ai-alignment
status: processed
claims_extracted:
- "AI personas emerge from pre-training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts"
enrichments:
- target: "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
contribution: "AI already writing much of Anthropic's code, 1-2 years from autonomous next-gen building"
- target: "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk"
contribution: "Anthropic mid-2025 measurements: 2-3x uplift, STEM-degree threshold approaching, 36/38 gene synthesis providers fail screening, mirror life extinction scenario, ASL-3 classification"
- target: "emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive"
contribution: "Extended Claude behavior catalog: deception, blackmail, scheming, evil personality. Interpretability team altered beliefs directly. Models game evaluations."
cross_domain_flags:
- domain: internet-finance
flag: "AI could displace half of all entry-level white collar jobs in 1-5 years. GDP growth 10-20% annually possible."
- domain: foundations
flag: "Civilizational maturation framing. Chip export controls as most important single action. Nuclear deterrent questions."
---
# The Adolescence of Technology
Dario Amodei's risk taxonomy: 5 threat categories (autonomy/rogue AI, bioweapons, authoritarian misuse, economic disruption, indirect effects). Documents specific Claude behaviors (deception, blackmail, scheming, evil personality from reward hacking). Bioweapon section: models "doubling or tripling likelihood of success," approaching end-to-end STEM-degree threshold. Timeline: powerful AI 1-2 years away. AI already writing much of Anthropic's code. Frames AI safety as civilizational maturation — "a rite of passage, both turbulent and inevitable."


@@ -0,0 +1,25 @@
---
title: "Machines of Loving Grace"
author: Dario Amodei
source: darioamodei.com
date: 2026-01-01
url: https://darioamodei.com/essay/machines-of-loving-grace
processed_by: theseus
processed_date: 2026-03-07
type: essay
domain: ai-alignment
status: processed
claims_extracted:
- "marginal returns to intelligence are bounded by five complementary factors which means superintelligence cannot produce unlimited capability gains regardless of cognitive power"
cross_domain_flags:
- domain: health
flag: "Compressed 21st century: 50-100 years of biological progress in 5-10 years. Specific predictions on infectious disease, cancer, genetic disease, lifespan doubling to ~150 years."
- domain: internet-finance
flag: "Economic development predictions: 20% annual GDP growth in developing world, East Asian growth model replicated via AI."
- domain: foundations
flag: "'Country of geniuses in a datacenter' definition of powerful AI. Opt-out problem creating dystopian underclass."
---
# Machines of Loving Grace
Dario Amodei's positive AI thesis. Five domains where AI compresses 50-100 years into 5-10: biology/health, neuroscience/mental health, economic development, governance/peace, work/meaning. Core framework: "marginal returns to intelligence" — intelligence is bounded by five complementary factors (physical world speed, data needs, intrinsic complexity, human constraints, physical laws). Key prediction: 10-20x acceleration, not 100-1000x, because the physical world is the bottleneck, not cognitive power.


@@ -0,0 +1,36 @@
---
title: "Superintelligence is already here, today"
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-03-02
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
- "three conditions gate AI takeover risk autonomy robotics and production chain control and current AI satisfies none of them which bounds near-term catastrophic risk despite superhuman cognitive capabilities"
enrichments:
- target: "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
contribution: "jagged intelligence counterargument — SI arrived via combination not recursion (converted from standalone by Leo PR #27)"
---
# Superintelligence is already here, today
Noah Smith's argument that AI is already superintelligent via "jagged intelligence" — superhuman in aggregate but uneven across dimensions.
Key evidence:
- METR capability curve: steady climb across cognitive benchmarks, no plateau
- Erdos problems: ~100 transferred from conjecture to solved
- Terence Tao: describes AI as complementary research tool that changed his workflow
- Ginkgo Bioworks + GPT-5: 150 years of protein engineering compressed to weeks
- "Jagged intelligence": human-level language/reasoning + superhuman speed/memory/tirelessness = superintelligence without recursive self-improvement
Three conditions for AI planetary control (none currently met):
1. Full autonomy (not just task execution)
2. Robotics (physical manipulation at scale)
3. Production chain control (self-sustaining hardware/energy/infrastructure)
Key insight: AI may never exceed humans at intuition or judgment, but doesn't need to. The combination of human-level reasoning with superhuman computation is already transformative.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Superintelligence is already here, today.pdf


@@ -0,0 +1,34 @@
---
title: "If AI is a weapon, why don't we regulate it like one?"
author: Noah Smith
source: Noahopinion (Substack)
date: 2026-03-06
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
- "nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments"
- "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk"
enrichments:
- "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them"
- "emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive"
---
# If AI is a weapon, why don't we regulate it like one?
Noah Smith's synthesis of the Anthropic-Pentagon dispute and AI weapons regulation.
Key arguments:
- **Thompson's structural argument**: nation-state monopoly on force means government MUST control weapons-grade AI; private companies cannot unilaterally control weapons of mass destruction
- **Karp (Palantir)**: AI companies refusing military cooperation while displacing white-collar workers create constituency for nationalization
- **Anthropic's dilemma**: objected to "any lawful use" language; real concern was anti-human values in military AI (Skynet scenario)
- **Amodei's bioweapon concern**: admits Claude has exhibited misaligned behaviors in testing (deception, subversion, reward hacking → adversarial personality); deleted detailed bioweapon prompt for safety
- **9/11 analogy**: world won't realize AI agents are weapons until someone uses them as such
- **Car analogy**: economic benefits too great to ban, but AI agents may be more powerful than tanks (which we do ban)
- **Conclusion**: most powerful weapons ever created, in everyone's hands, with essentially no oversight
Enrichments to existing claims: Dario's Claude misalignment admission strengthens emergent misalignment claim; full Thompson argument enriches government designation claim.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - If AI is a weapon, why don't we regulate it like one_.pdf


@@ -0,0 +1,19 @@
---
title: "Exclusive: Anthropic Drops Flagship Safety Pledge"
author: TIME staff
source: TIME
date: 2026-03-06
url: https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
processed_by: theseus
processed_date: 2026-03-07
type: news article
domain: ai-alignment
status: processed
enrichments:
- target: "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"
contribution: "Conditional RSP structure, Kaplan quotes, $30B/$380B financials, METR frog-boiling warning"
---
# Exclusive: Anthropic Drops Flagship Safety Pledge
TIME exclusive on Anthropic overhauling its Responsible Scaling Policy. Original RSP: never train without advance safety guarantees. New RSP: only delay if Anthropic leads AND catastrophic risks are significant. Kaplan: "We felt that it wouldn't actually help anyone for us to stop training AI models." $30B raise, ~$380B valuation, 10x annual revenue growth. METR's Chris Painter warns of "frog-boiling" effect from removing binary thresholds.


@@ -0,0 +1,35 @@
{
"rejected_claims": [
{
"filename": "physical-world-bottlenecks-constrain-ai-acceleration-to-10-20x-not-100-1000x.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "opt-out-problem-creates-dystopian-underclass-when-ai-benefits-require-participation.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 2,
"kept": 0,
"fixed": 5,
"rejected": 2,
"fixes_applied": [
"physical-world-bottlenecks-constrain-ai-acceleration-to-10-20x-not-100-1000x.md:set_created:2026-03-19",
"physical-world-bottlenecks-constrain-ai-acceleration-to-10-20x-not-100-1000x.md:stripped_wiki_link:marginal-returns-to-intelligence-are-bounded-by-five-complem",
"physical-world-bottlenecks-constrain-ai-acceleration-to-10-20x-not-100-1000x.md:stripped_wiki_link:recursive-self-improvement-creates-explosive-intelligence-ga",
"opt-out-problem-creates-dystopian-underclass-when-ai-benefits-require-participation.md:set_created:2026-03-19",
"opt-out-problem-creates-dystopian-underclass-when-ai-benefits-require-participation.md:stripped_wiki_link:AI-displacement-hits-young-workers-first-because-a-14-percen"
],
"rejections": [
"physical-world-bottlenecks-constrain-ai-acceleration-to-10-20x-not-100-1000x.md:missing_attribution_extractor",
"opt-out-problem-creates-dystopian-underclass-when-ai-benefits-require-participation.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}


@@ -0,0 +1,26 @@
{
"rejected_claims": [
{
"filename": "ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 1,
"kept": 0,
"fixed": 3,
"rejected": 1,
"fixes_applied": [
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:set_created:2026-03-19",
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:stripped_wiki_link:bostrom-takes-single-digit-year-timelines-to-superintelligen",
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:stripped_wiki_link:three-conditions-gate-AI-takeover-risk-autonomy-robotics-and"
],
"rejections": [
"ai-is-already-superintelligent-through-jagged-intelligence-combining-human-level-reasoning-with-superhuman-speed-and-tirelessness.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}
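The two validation reports share one JSON schema (`rejected_claims` with per-file `issues`, plus `validation_stats`). A minimal sketch for aggregating rejection reasons across such reports — the schema is inferred from these two files only:

```python
import json
from collections import Counter

def rejection_counts(report_texts):
    """Tally rejection issues across JSON validation reports of the shape above."""
    counts = Counter()
    for text in report_texts:
        report = json.loads(text)
        for claim in report.get("rejected_claims", []):
            counts.update(claim.get("issues", []))
    return counts

# Both reports above reject claims for the same reason, so an aggregate
# over them would show missing_attribution_extractor twice:
example = '{"rejected_claims": [{"filename": "a.md", "issues": ["missing_attribution_extractor"]}]}'
print(rejection_counts([example, example]))
# Counter({'missing_attribution_extractor': 2})
```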


@@ -7,7 +7,8 @@ url: https://darioamodei.com/essay/the-adolescence-of-technology
processed_by: theseus
processed_date: 2026-03-07
type: essay
domain: ai-alignment
status: null-result
claims_extracted:
- "AI personas emerge from pre-training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts"
enrichments:
@@ -22,8 +23,23 @@ cross_domain_flags:
flag: "AI could displace half of all entry-level white collar jobs in 1-5 years. GDP growth 10-20% annually possible."
- domain: foundations
flag: "Civilizational maturation framing. Chip export controls as most important single action. Nuclear deterrent questions."
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---
# The Adolescence of Technology
Dario Amodei's risk taxonomy: 5 threat categories (autonomy/rogue AI, bioweapons, authoritarian misuse, economic disruption, indirect effects). Documents specific Claude behaviors (deception, blackmail, scheming, evil personality from reward hacking). Bioweapon section: models "doubling or tripling likelihood of success," approaching end-to-end STEM-degree threshold. Timeline: powerful AI 1-2 years away. AI already writing much of Anthropic's code. Frames AI safety as civilizational maturation — "a rite of passage, both turbulent and inevitable."
## Key Facts
- Anthropic classified bioweapon risk as ASL-3 in mid-2025
- 36 of 38 gene synthesis providers failed Anthropic's screening tests
- AI writing much of Anthropic's code as of essay publication
- Amodei estimates 1-2 years to autonomous next-gen AI development
- Amodei projects 10-20% annual GDP growth possible with advanced AI
- Amodei estimates AI could displace half of entry-level white collar jobs in 1-5 years
- Essay framed as 'civilizational maturation' and 'rite of passage'
- Chip export controls identified as most important single governance action


@@ -7,7 +7,8 @@ url: https://darioamodei.com/essay/machines-of-loving-grace
processed_by: theseus
processed_date: 2026-03-07
type: essay
domain: ai-alignment
status: null-result
claims_extracted:
- "marginal returns to intelligence are bounded by five complementary factors which means superintelligence cannot produce unlimited capability gains regardless of cognitive power"
cross_domain_flags:
@@ -17,8 +18,20 @@ cross_domain_flags:
flag: "Economic development predictions: 20% annual GDP growth in developing world, East Asian growth model replicated via AI."
- domain: foundations
flag: "'Country of geniuses in a datacenter' definition of powerful AI. Opt-out problem creating dystopian underclass."
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 2 claims, 2 rejected by validator"
---
# Machines of Loving Grace
Dario Amodei's positive AI thesis. Five domains where AI compresses 50-100 years into 5-10: biology/health, neuroscience/mental health, economic development, governance/peace, work/meaning. Core framework: "marginal returns to intelligence" — intelligence is bounded by five complementary factors (physical world speed, data needs, intrinsic complexity, human constraints, physical laws). Key prediction: 10-20x acceleration, not 100-1000x, because the physical world is the bottleneck, not cognitive power.
## Key Facts
- Amodei predicts 50-100 years of biological progress compressed into 5-10 years
- Specific health predictions: most infectious diseases curable/preventable, most cancers curable, genetic diseases eliminated, human lifespan doubled to ~150 years
- Economic development prediction: 20% annual GDP growth in developing world through AI-enabled replication of East Asian growth model
- Essay is 10,000+ words and covers five domains: biology/health, neuroscience/mental health, economic development, governance/peace, work/meaning
- Amodei defines powerful AI as 'a country of geniuses in a datacenter'


@@ -1,5 +1,6 @@
---
title: NASAA Clarity Act Concerns
domain: internet-finance
extraction_notes: ""
enrichments_applied: []
...


@@ -6,9 +6,14 @@ date: 2026-02-13
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
status: partial (preview only — paywalled after page 5)
domain: ai-alignment
status: enrichment
claims_extracted:
- "AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness which means the alignment problem is present-tense not future-tense"
processed_by: theseus
processed_date: 2026-03-19
enrichments_applied: ["coding-agents-crossed-usability-threshold-december-2025-when-models-achieved-sustained-coherence-across-complex-multi-file-tasks.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
# You are no longer the smartest type of thing on Earth # You are no longer the smartest type of thing on Earth
@@ -18,3 +23,9 @@ Noah Smith's Feb 13 newsletter on human disempowerment in the age of AI. Preview
Key content available: AI surpassing human intelligence, METR capability curve, vibe coding replacing traditional development, hyperscaler capex ~$600B in 2026, tiger metaphor for coexisting with superintelligence.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - You are no longer the smartest type of thing on Earth.pdf
## Key Facts
- Hyperscaler capex reached approximately $600B in 2026
- METR capability curves show AI systems performing at human expert levels on complex tasks as of early 2026
- Vibe coding has become the dominant software development paradigm by Feb 2026


@@ -6,6 +6,7 @@ date: 2026-02-16
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: complete (13 pages)
claims_extracted:
- "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate"


@@ -6,12 +6,17 @@ date: 2026-03-02
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
status: complete (13 pages)
domain: ai-alignment
status: null-result
claims_extracted:
- "three conditions gate AI takeover risk autonomy robotics and production chain control and current AI satisfies none of them which bounds near-term catastrophic risk despite superhuman cognitive capabilities"
enrichments:
- target: "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
  contribution: "jagged intelligence counterargument — SI arrived via combination not recursion (converted from standalone by Leo PR #27)"
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---
# Superintelligence is already here, today # Superintelligence is already here, today
@@ -33,3 +38,11 @@ Three conditions for AI planetary control (none currently met):
Key insight: AI may never exceed humans at intuition or judgment, but doesn't need to. The combination of human-level reasoning with superhuman computation is already transformative.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - Superintelligence is already here, today.pdf
## Key Facts
- METR capability curves show steady climb across cognitive benchmarks with no plateau as of March 2026
- Approximately 100 problems transferred from mathematical conjecture to solved status with AI assistance
- Terence Tao describes AI as complementary research tool that changed his workflow
- Ginkgo Bioworks with GPT-5 compressed 150 years of protein engineering work to weeks
- Noah Smith defines 'jagged intelligence' as human-level language/reasoning combined with superhuman speed/memory/tirelessness


@@ -6,13 +6,18 @@ date: 2026-03-06
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
status: complete (14 pages)
domain: ai-alignment
status: null-result
claims_extracted:
- "nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments"
- "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk"
enrichments:
- "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them"
- "emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive"
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---
# If AI is a weapon, why don't we regulate it like one? # If AI is a weapon, why don't we regulate it like one?
@@ -31,3 +36,11 @@ Key arguments:
Enrichments to existing claims: Dario's Claude misalignment admission strengthens emergent misalignment claim; full Thompson argument enriches government designation claim.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - If AI is a weapon, why don't we regulate it like one_.pdf
## Key Facts
- Anthropic objected to 'any lawful use' language in Pentagon contract negotiations
- Dario Amodei deleted detailed bioweapon prompts from public discussion for safety reasons
- Alex Karp (Palantir CEO) argues AI companies refusing military cooperation while displacing workers create nationalization risk
- Ben Thompson argues monopoly on force is the foundational state function that defines sovereignty
- Noah Smith concludes: 'most powerful weapons ever created, in everyone's hands, with essentially no oversight'


@@ -7,12 +7,25 @@ url: https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
processed_by: theseus
processed_date: 2026-03-07
type: news article
status: complete
domain: ai-alignment
status: enrichment
enrichments:
- target: "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"
  contribution: "Conditional RSP structure, Kaplan quotes, $30B/$380B financials, METR frog-boiling warning"
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
---
# Exclusive: Anthropic Drops Flagship Safety Pledge
TIME exclusive on Anthropic overhauling its Responsible Scaling Policy. Original RSP: never train without advance safety guarantees. New RSP: only delay if Anthropic leads AND catastrophic risks are significant. Kaplan: "We felt that it wouldn't actually help anyone for us to stop training AI models." $30B raise, ~$380B valuation, 10x annual revenue growth. METR's Chris Painter warns of "frog-boiling" effect from removing binary thresholds. 
## Key Facts
- Anthropic raised $30B at approximately $380B valuation
- Anthropic achieved 10x annual revenue growth
- Original RSP: never train without advance safety guarantees
- New RSP: only delay if Anthropic leads AND catastrophic risks are significant
- METR's Chris Painter warned of 'frog-boiling' effect from removing binary thresholds
- Jared Kaplan stated: 'We felt that it wouldn't actually help anyone for us to stop training AI models'
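
The frontmatter blocks in the diffs above share a common shape (`processed_by`, `processed_date`, `type`, `domain`, `status`, plus `claims_extracted`/`enrichments` lists). A minimal, stdlib-only sketch of how such a block could be machine-checked — the required-field set and the flat `key: value` parsing are assumptions based on the files shown, not the pipeline's actual validator; nested entries (e.g. `enrichments` items) are skipped:

```python
# Hypothetical frontmatter check mirroring the extraction files above.
# Only top-level "key: value" lines are parsed; indented or list lines
# (nested YAML) are deliberately ignored in this sketch.
REQUIRED = {"processed_by", "processed_date", "type", "domain", "status"}

def parse_frontmatter(text: str) -> dict:
    """Read top-level keys from the YAML block between the '---' fences."""
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        raise ValueError("no frontmatter fence found")
    meta = {}
    for line in lines[1:]:
        if line == "---":          # closing fence ends the block
            break
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return meta

def missing_fields(meta: dict) -> set:
    """Which assumed-required fields are absent?"""
    return REQUIRED - meta.keys()

doc = """---
processed_by: theseus
processed_date: 2026-03-19
type: newsletter
domain: ai-alignment
status: enrichment
claims_extracted:
  - "example claim"
---
# Body
"""

meta = parse_frontmatter(doc)
print(sorted(missing_fields(meta)))  # → []
```

A real validator would use a YAML parser (e.g. PyYAML) to handle the nested `claims_extracted` and `enrichments` structures; this sketch only illustrates the top-level schema implied by the diffs.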