From 8aede2e9eb2aa924fedd75dd0fee80797b48b122 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Tue, 14 Apr 2026 16:44:21 +0000 Subject: [PATCH] auto-fix: strip 3 broken wiki links Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base. --- agents/leo/musings/research-2026-03-20.md | 2 +- .../2026-03-20-leo-nuclear-ai-governance-observability-gap.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/agents/leo/musings/research-2026-03-20.md b/agents/leo/musings/research-2026-03-20.md index fa1ee7301..cb0605627 100644 --- a/agents/leo/musings/research-2026-03-20.md +++ b/agents/leo/musings/research-2026-03-20.md @@ -96,7 +96,7 @@ And the observability gap (Finding 1) is the underlying mechanism for why Layer CLAIM CANDIDATE: "AI governance faces a four-layer failure structure where each successive mode of governance (voluntary commitment → legal mandate → compulsory evaluation → regulatory durability) encounters a distinct structural barrier, with the observability gap — AI's lack of physically observable capability signatures — being the root constraint that prevents Layer 3 from being fixed regardless of political will or legal mandate." 
- Confidence: experimental - Domain: grand-strategy (cross-domain synthesis — spans AI-alignment technical findings and governance institutional design) -- Related: [[technology advances exponentially but coordination mechanisms evolve linearly]], [[voluntary safety pledges cannot survive competitive pressure]], the structural irony claim (candidate from 2026-03-19), nuclear analogy observability gap (new claim candidate) +- Related: technology advances exponentially but coordination mechanisms evolve linearly, voluntary safety pledges cannot survive competitive pressure, the structural irony claim (candidate from 2026-03-19), nuclear analogy observability gap (new claim candidate) - Boundary: "AI governance" refers to safety/alignment oversight of frontier AI systems. The four-layer structure may apply to other dual-use technologies with low observability (synthetic biology) but this claim is scoped to AI. --- diff --git a/inbox/queue/2026-03-20-leo-nuclear-ai-governance-observability-gap.md b/inbox/queue/2026-03-20-leo-nuclear-ai-governance-observability-gap.md index 783f72f0f..e05f0bbea 100644 --- a/inbox/queue/2026-03-20-leo-nuclear-ai-governance-observability-gap.md +++ b/inbox/queue/2026-03-20-leo-nuclear-ai-governance-observability-gap.md @@ -62,7 +62,7 @@ The nuclear timeline (~23 years from Hiroshima to NPT) is often cited as evidenc - [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] — observability gap adds a new mechanism for why this widening is structural, not just temporary - Bench2cop: zero coverage of oversight evasion capabilities — the specific evidence for the observability gap - EU AI Act Article 92: compulsory evaluation powers exist but can't inspect what matters -- [[nuclear near-misses prove that even low annual extinction probability compounds to near-certainty over millennia]] — nuclear governance (imperfect but real) provides partial mitigation of this risk; AI governance lacking 
equivalent observability provides much weaker mitigation +- nuclear near-misses prove that even low annual extinction probability compounds to near-certainty over millennia — nuclear governance (imperfect but real) provides partial mitigation of this risk; AI governance lacking equivalent observability provides much weaker mitigation **Extraction hints:**
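For reviewers: the transformation this auto-fixer applies can be sketched as below. This is an illustrative reimplementation, not the pipeline's actual code — the function name `strip_broken_wiki_links`, the `known_claims` lookup set, and the assumption that link targets are matched by exact title are all hypothetical; the real fixer may resolve links differently (e.g. by slug or file path).

```python
import re

def strip_broken_wiki_links(text: str, known_claims: set[str]) -> str:
    """Replace [[title]] with the bare title when the title does not
    resolve to an existing claim; leave resolvable links intact.

    NOTE: hypothetical sketch of the auto-fixer's behavior; exact
    resolution logic in the real pipeline is unknown."""
    def repl(match: re.Match) -> str:
        title = match.group(1)
        # Keep the brackets only if the target claim exists.
        return match.group(0) if title in known_claims else title

    # [[...]] with no nested brackets inside.
    return re.sub(r"\[\[([^\[\]]+)\]\]", repl, text)
```

Under these assumptions, `[[voluntary safety pledges cannot survive competitive pressure]]` would be left alone when that claim exists in the knowledge base, while the three links removed in this patch would have their brackets stripped because no matching claim was found.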