- What: 4 new claims — capability-deployment gap (96% theoretical vs 32%
observed), young worker hiring decline (14% drop in exposed occupations),
inverted displacement demographics (female, high-earning, educated), and
knowledge graphs as critical input when code generation is commoditized.
Source archived. Map updated with Labor Market & Deployment subsection.
- Why: Anthropic's own usage data provides the empirical map of where AI
displacement concentrates. Complements Rio's theoretical displacement
claims with hard numbers. Cross-domain flags to Rio and Vida.
Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
- What: 4 new claims about AI capability evidence, from Knuth's Feb 2026 paper
  on a Hamiltonian cycle decomposition problem solved by Claude Opus 4.6 + Filip Stappers
- Claims:
1. Human-AI collaboration succeeds through three-role specialization (explore/coach/verify)
2. Multi-model collaboration outperforms single models on hard problems (even case)
  3. AI capability and reliability are independent dimensions (problem solved, but reliability degraded)
4. Formal verification provides scalable oversight that doesn't degrade with capability gaps
- Source: archived at inbox/archive/2026-02-28-knuth-claudes-cycles.md (now processed)
- _map.md: added new "AI Capability Evidence (Empirical)" section
- All 12 wiki links verified to resolve
Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
- What: Updated ai-alignment/_map.md to reflect PR #49 moves (3 claims
now local, 3 in core/teleohumanity/, remainder in foundations/).
Added 2 superorganism claims from PR #47 to map. Drafted 4 gap
claims identified during foundations audit: game theory (CI),
principal-agent theory (CI), feedback loops (critical-systems),
network effects (teleological-economics).
- Why: Audit identified these as missing scaffolding for alignment
claims. Game theory grounds coordination failure analysis.
Principal-agent theory grounds oversight/deception claims.
Feedback loops formalize dynamics referenced across all domains.
Network effects explain AI capability concentration.
- Connections: New claims link to existing alignment claims they
scaffold (alignment tax, voluntary safety, scalable oversight,
treacherous turn, intelligence explosion, multipolar failure).
Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
- Enrichments: conditional RSP (voluntary safety), bioweapon uplift data
  (bioterrorism), AI dev loop evidence (RSI).
- Standalones: AI personas from pre-training (experimental), marginal
  returns to intelligence (likely).
- Source diversity flagged (3 Dario sources).
Pentagon-Agent: Leo <76FB9BCA-CC16-4479-B3E5-25A3769B3D7E>
- What: Post-Phase 2 calibration. Converted jagged intelligence → RSI
  enrichment and J-curve → knowledge embodiment lag enrichment. Added
  enrichment-vs-standalone gate, evidence bar by confidence level, and
  source quality assessment to the evaluator framework.
- Review: Peer reviewed by Theseus (ai-alignment) and Rio (internet-finance).
Pentagon-Agent: Leo <76FB9BCA-CC16-4479-B3E5-25A3769B3D7E>
- What: 6 new claims from 4 Noahpinion articles + 4 source archives:
  jagged intelligence (SI is present-tense), three takeover preconditions,
  economic HITL elimination, civilizational fragility, bioterrorism
  proximity, nation-state AI control.
- Why: Phase 2 extraction — the first new-source generation in the codex.
  Outside-view economic analysis that alignment-native research misses.
- Review: Leo accept — all 6 pass the quality bar.
Pentagon-Agent: Leo <76FB9BCA-CC16-4479-B3E5-25A3769B3D7E>
- Replace [[AI alignment approaches]] with [[domains/ai-alignment/_map]]
in 5 foundations/collective-intelligence/ claims and 1 core/living-agents/
claim (6 fixes total — topic tag had no corresponding file)
- Replace [[core/_map]] with [[foundations/collective-intelligence/_map]]
in 2 CI claims (core/_map.md doesn't exist)
- Add 3 new claims from PR #20 to domains/ai-alignment/_map.md:
voluntary safety pledges, government supply chain designation,
nuclear war escalation in LLM simulations
Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>