theseus: navigation layer — fix dangling topic links + update domain map #22
9 changed files with 11 additions and 8 deletions
@@ -35,5 +35,5 @@ Relevant Notes:
 - [[Git-traced agent evolution with human-in-the-loop evals replaces recursive self-improvement as credible framing for iterative AI development]] -- the antidote to credibility debt: precise framing of governed evolution builds trust while "recursive self-improvement" builds hype

 Topics:
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
 - [[livingip overview]]
@@ -35,6 +35,9 @@ Theseus's domain spans the most consequential technology transition in human history

 ## Institutional Context
 - [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — Acemoglu's critical juncture framework applied to AI governance
+- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — Anthropic RSP rollback (Feb 2026): voluntary safety collapses under competitive pressure
+- [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]] — Pentagon designating Anthropic as supply chain risk: government as coordination-breaker
+- [[current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions]] — King's College London (2026): LLMs choose nuclear escalation in 95% of war games
 - [[anthropomorphizing AI agents to claim autonomous action creates credibility debt that compounds until a crisis forces public reckoning]] (in `core/living-agents/`) — narrative debt from overstating AI agent autonomy

 ## Foundations (in foundations/collective-intelligence/)
@@ -32,4 +32,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -31,4 +31,4 @@ Relevant Notes:
 Topics:
 - [[network structures]]
 - [[coordination mechanisms]]
-- [[core/_map]]
+- [[foundations/collective-intelligence/_map]]
@@ -33,4 +33,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -31,4 +31,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -35,4 +35,4 @@ Relevant Notes:
 Topics:
 - [[network structures]]
 - [[coordination mechanisms]]
-- [[core/_map]]
+- [[foundations/collective-intelligence/_map]]
@@ -28,4 +28,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]
@@ -32,4 +32,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[AI alignment approaches]]
+- [[domains/ai-alignment/_map]]