theseus: fix dangling topic links and update domain map
- Replace [[AI alignment approaches]] with [[domains/ai-alignment/_map]] in 5 foundations/collective-intelligence/ claims and 1 core/living-agents/ claim (6 fixes total — topic tag had no corresponding file)
- Replace [[core/_map]] with [[foundations/collective-intelligence/_map]] in 2 CI claims (core/_map.md doesn't exist)
- Add 3 new claims from PR #20 to domains/ai-alignment/_map.md: voluntary safety pledges, government supply chain designation, nuclear war escalation in LLM simulations

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
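The dangling-link fixes in this commit can be found mechanically. Below is a minimal sketch (hypothetical helper, not part of this repo; the vault layout and resolution rules are assumptions) that scans a notes directory for `[[wikilink]]` targets with no corresponding `.md` file, matching either a bare note name or a vault-relative path:

```python
# Hypothetical sketch: report [[wikilinks]] whose target has no .md file.
# Assumes targets resolve by bare file stem (e.g. [[livingip overview]])
# or by vault-relative path without suffix (e.g. [[domains/ai-alignment/_map]]).
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|]+)")  # capture target, ignore any |alias part

def dangling_links(vault: Path) -> dict[str, list[str]]:
    """Map each note (vault-relative path) to its unresolved [[targets]]."""
    known: set[str] = set()
    for p in vault.rglob("*.md"):
        known.add(p.stem)
        known.add(p.relative_to(vault).with_suffix("").as_posix())
    report: dict[str, list[str]] = {}
    for p in vault.rglob("*.md"):
        missing = [t for t in WIKILINK.findall(p.read_text(encoding="utf-8"))
                   if t not in known]
        if missing:
            report[p.relative_to(vault).as_posix()] = missing
    return report
```

Running this before the fix would flag `[[AI alignment approaches]]` and `[[core/_map]]` as unresolved, which is exactly the set of links this commit rewrites.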
parent 2a51612182
commit d7025e65dd
9 changed files with 11 additions and 8 deletions
@@ -35,5 +35,5 @@ Relevant Notes:
 - [[Git-traced agent evolution with human-in-the-loop evals replaces recursive self-improvement as credible framing for iterative AI development]] -- the antidote to credibility debt: precise framing of governed evolution builds trust while "recursive self-improvement" builds hype
 
 Topics:
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
 - [[livingip overview]]
@@ -35,6 +35,9 @@ Theseus's domain spans the most consequential technology transition in human history
 
 ## Institutional Context
 - [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — Acemoglu's critical juncture framework applied to AI governance
+- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — Anthropic RSP rollback (Feb 2026): voluntary safety collapses under competitive pressure
+- [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]] — Pentagon designating Anthropic as supply chain risk: government as coordination-breaker
+- [[current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions]] — King's College London (2026): LLMs choose nuclear escalation in 95% of war games
 - [[anthropomorphizing AI agents to claim autonomous action creates credibility debt that compounds until a crisis forces public reckoning]] (in `core/living-agents/`) — narrative debt from overstating AI agent autonomy
 
 ## Foundations (in foundations/collective-intelligence/)
@@ -32,4 +32,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -31,4 +31,4 @@ Relevant Notes:
 Topics:
 - [[network structures]]
 - [[coordination mechanisms]]
-- [[core/_map]]
+- [[foundations/collective-intelligence/_map]]
@@ -33,4 +33,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -31,4 +31,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -35,4 +35,4 @@ Relevant Notes:
 Topics:
 - [[network structures]]
 - [[coordination mechanisms]]
-- [[core/_map]]
+- [[foundations/collective-intelligence/_map]]
@@ -28,4 +28,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -32,4 +32,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]