teleo-codex/domains/ai-alignment/_map.md
m3taversal 3d2f079633 theseus: extract 3 claims from Aquino-Michaels + enrich multi-model claim
- What: 3 new claims from "Completing Claude's Cycles" (no-way-labs/residue)
  + enrichment of existing multi-model claim with detailed architecture
- Claims:
  1. Structured exploration protocols reduce human intervention by 6x (Residue prompt)
  2. AI agent orchestration outperforms coaching (orchestrator as data router)
  3. Coordination protocol design produces larger gains than model scaling
- Enriched: multi-model claim now includes Aquino-Michaels's Agent O/C/orchestrator detail
- Source: archived at inbox/archive/2026-03-00-aquinomichaels-completing-claudes-cycles.md
- _map.md: AI Capability Evidence section reorganized into 3 subsections
  (Collaboration Patterns, Architecture & Scaling, Failure Modes & Oversight)
- All wiki links verified to resolve
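
The "orchestrator as data router" pattern named in claim 2 can be illustrated with a minimal sketch. This is not the architecture from the Aquino-Michaels source; all names (`Agent`, `Orchestrator`, the "O" and "C" agents) are hypothetical, chosen to show the distinction the claim draws: the orchestrator issues no coaching or feedback of its own, it only delivers each agent's output to whichever agent declares that message kind as input.

```python
# Hypothetical sketch of an orchestrator-as-data-router: the orchestrator
# holds no task knowledge and gives no guidance; it only matches message
# kinds to agents that consume them. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

Message = Tuple[str, str]  # (kind, payload)

@dataclass
class Agent:
    name: str
    consumes: Set[str]                          # message kinds accepted
    handle: Callable[[str, str], List[Message]]  # produces new messages

class Orchestrator:
    """Pure router: delivery only, no instructions or feedback of its own."""
    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def run(self, messages: List[Message], max_hops: int = 10) -> List[Tuple[str, str]]:
        log = []  # (agent name, message kind) routing trace
        for _ in range(max_hops):
            if not messages:
                break
            next_messages: List[Message] = []
            for kind, payload in messages:
                for agent in self.agents:
                    if kind in agent.consumes:
                        log.append((agent.name, kind))
                        next_messages.extend(agent.handle(kind, payload))
            messages = next_messages
        return log

# Two hypothetical agents: "O" explores a task, "C" checks the candidate.
explorer = Agent("O", {"task"}, lambda k, p: [("candidate", p + ":draft")])
verifier = Agent("C", {"candidate"}, lambda k, p: [])

router = Orchestrator([explorer, verifier])
trace = router.run([("task", "claim-extraction")])
print(trace)  # [('O', 'task'), ('C', 'candidate')]
```

The point of the sketch is the contrast with coaching: a coaching orchestrator would inspect payloads and inject corrections, whereas this one never reads them, so any gain comes from the routing protocol alone.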

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
2026-03-07 20:18:35 +00:00


AI, Alignment & Collective Superintelligence

Theseus's domain spans the most consequential technology transition in human history. It has two layers: the structural analysis of how AI development actually works (capability trajectories, alignment approaches, competitive dynamics, governance gaps), and the constructive alternative (collective superintelligence as the path that preserves human agency). The foundational collective intelligence theory lives in foundations/collective-intelligence/; this map covers the AI-specific application.

Superintelligence Dynamics

Alignment Approaches & Failures

Pluralistic & Collective Alignment

AI Capability Evidence (Empirical)

Evidence from documented AI problem-solving cases, primarily Knuth's "Claude's Cycles" (2026) and Aquino-Michaels's "Completing Claude's Cycles" (2026):

Collaboration Patterns

Architecture & Scaling

Failure Modes & Oversight

Architecture & Emergence

Timing & Strategy

Risk Vectors (Outside View)

Institutional Context

Coordination & Alignment Theory (local)

Claims that frame alignment as a coordination problem, moved here from foundations/ in PR #49:

Foundations (cross-layer)

Shared theory underlying this domain's analysis, living in foundations/collective-intelligence/ and core/teleohumanity/: