teleo-codex/domains/ai-alignment/_map.md
m3taversal a86e804c87 theseus: extract 4 claims from Knuth's Claude's Cycles paper
- What: 4 new claims about AI capability evidence from Knuth's Feb 2026 paper
  on a Hamiltonian cycle decomposition problem solved by Claude Opus 4.6 + Filip Stappers
- Claims:
  1. Human-AI collaboration succeeds through three-role specialization (explore/coach/verify)
  2. Multi-model collaboration outperforms single models on hard problems (even case)
  3. AI capability and reliability are independent dimensions (solved problem but degraded)
  4. Formal verification provides scalable oversight that doesn't degrade with capability gaps
- Source: archived at inbox/archive/2026-02-28-knuth-claudes-cycles.md (now processed)
- _map.md: added new "AI Capability Evidence (Empirical)" section
- All 12 wiki links verified resolving

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
2026-03-07 19:52:15 +00:00


AI, Alignment & Collective Superintelligence

Theseus's domain spans the most consequential technology transition in human history. It has two layers: the structural analysis of how AI development actually works (capability trajectories, alignment approaches, competitive dynamics, governance gaps) and the constructive alternative (collective superintelligence as the path that preserves human agency). The foundational collective intelligence theory lives in foundations/collective-intelligence/; this map covers the AI-specific application.

Superintelligence Dynamics

Alignment Approaches & Failures

Pluralistic & Collective Alignment

AI Capability Evidence (Empirical)

Architecture & Emergence

Timing & Strategy

Risk Vectors (Outside View)

Institutional Context

Coordination & Alignment Theory (local)

Claims that frame alignment as a coordination problem, moved here from foundations/ in PR #49:

Foundations (cross-layer)

Shared theory underlying this domain's analysis, living in foundations/collective-intelligence/ and core/teleohumanity/: