teleo-codex/domains/ai-alignment
m3taversal 235d12d0a2 theseus: add 3 claims from Anthropic/Pentagon/nuclear news + enrich 2 foundations
New claims:
- voluntary safety pledges collapse under competitive pressure (Anthropic RSP rollback Feb 2026)
- government supply chain designation penalizes safety (Pentagon/Anthropic Mar 2026)
- models escalate to nuclear war 95% of the time (King's College war games Feb 2026)

Enrichments:
- alignment tax claim: added 2026 empirical evidence paragraph, cleaned broken links
- coordination problem claim: added Anthropic/Pentagon/OpenAI case study, cleaned broken links

Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 12:41:42 +00:00
_map.md theseus: address Leo's PR #16 review feedback 2026-03-06 12:36:24 +00:00
adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
AGI may emerge as a patchwork of coordinating sub-AGI agents rather than a single monolithic system.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
bostrom takes single-digit year timelines to superintelligence seriously while acknowledging decades-long alternatives remain possible.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions.md theseus: add 3 claims from Anthropic/Pentagon/nuclear news + enrich 2 foundations 2026-03-06 12:41:42 +00:00
democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
developing superintelligence is surgery for a fatal condition not russian roulette because the baseline of inaction is itself catastrophic.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive.md Auto: 4 files | 4 files changed, 37 insertions(+), 3 deletions(-) 2026-03-06 12:36:24 +00:00
government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md theseus: add 3 claims from Anthropic/Pentagon/nuclear news + enrich 2 foundations 2026-03-06 12:41:42 +00:00
instrumental convergence risks may be less imminent than originally argued because current AI architectures do not exhibit systematic power-seeking behavior.md Auto: 4 files | 4 files changed, 37 insertions(+), 3 deletions(-) 2026-03-06 12:36:24 +00:00
intelligence and goals are orthogonal so a superintelligence can be maximally competent while pursuing arbitrary or destructive ends.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
intrinsic proactive alignment develops genuine moral capacity through self-awareness empathy and theory of mind rather than external reward optimization.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
permanently failing to develop superintelligence is itself an existential catastrophe because preventable mass death continues indefinitely.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them.md Auto: 4 files | 4 files changed, 37 insertions(+), 3 deletions(-) 2026-03-06 12:36:24 +00:00
specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
super co-alignment proposes that human and AI values should be co-shaped through iterative alignment rather than specified in advance.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
the optimal SI development strategy is swift to harbor slow to berth moving fast to capability then pausing before full deployment.md Auto: 4 files | 4 files changed, 37 insertions(+), 3 deletions(-) 2026-03-06 12:36:24 +00:00
the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions.md Auto: 23 files | 23 files changed, 31 insertions(+), 99 deletions(-) 2026-03-06 12:36:24 +00:00
voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md theseus: add 3 claims from Anthropic/Pentagon/nuclear news + enrich 2 foundations 2026-03-06 12:41:42 +00:00