From 9e92020d201a3c9c10c52a75b9ddb8c8ca858a09 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 30 Apr 2026 00:49:20 +0000
Subject: [PATCH] auto-fix: strip 4 broken wiki links

Pipeline auto-fixer: removed [[ ]] brackets from links that don't
resolve to existing claims in the knowledge base.
---
 .../2026-04-30-theseus-b1-eu-act-disconfirmation-window.md    | 4 ++--
 ...026-04-30-theseus-governance-failure-taxonomy-synthesis.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/inbox/queue/2026-04-30-theseus-b1-eu-act-disconfirmation-window.md b/inbox/queue/2026-04-30-theseus-b1-eu-act-disconfirmation-window.md
index 535bae750..057b140bc 100644
--- a/inbox/queue/2026-04-30-theseus-b1-eu-act-disconfirmation-window.md
+++ b/inbox/queue/2026-04-30-theseus-b1-eu-act-disconfirmation-window.md
@@ -96,7 +96,7 @@ The Santos-Grueiro governance audit synthesis (queue) already documents that the
 **What I expected but didn't find:** Any EU enforcement action against a major AI lab's frontier deployment decision through April 2026. None have occurred. The Act's enforcement capacity is being built — national market surveillance authorities are hiring, technical standards are being finalized — but no frontier AI enforcement has materialized.
 
 **KB connections:**
-- [[technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap]] — the EU AI Act's timeline (4+ years from proposal to enforcement) vs. frontier AI's capability doubling every 6-7 months is the sharpest single-case illustration of this claim
+- technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap — the EU AI Act's timeline (4+ years from proposal to enforcement) vs. frontier AI's capability doubling every 6-7 months is the sharpest single-case illustration of this claim
 - Santos-Grueiro governance audit (queue) — the audit shows EU AI Act conformity assessments are built on behaviorally-insufficient measurement
 - [[major-ai-safety-governance-frameworks-architecturally-dependent-on-behaviorally-insufficient-evaluation]] — once extracted, this claim will have the EU AI Act as its primary evidence
 
@@ -108,7 +108,7 @@ The Santos-Grueiro governance audit synthesis (queue) already documents that the
 
 ## Curator Notes (structured handoff for extractor)
 
-PRIMARY CONNECTION: [[technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap]] — the EU AI Act timeline vs. capability scaling is the sharpest illustration
+PRIMARY CONNECTION: technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap — the EU AI Act timeline vs. capability scaling is the sharpest illustration
 
 WHY ARCHIVED: Documents the first live B1 disconfirmation opportunity (EU AI Act enforcement, August 2026) and the "compliance theater" pattern already visible in labs' published compliance approaches. Also documents what the extractor should look for in Q3-Q4 2026 to resolve the open test.
 
diff --git a/inbox/queue/2026-04-30-theseus-governance-failure-taxonomy-synthesis.md b/inbox/queue/2026-04-30-theseus-governance-failure-taxonomy-synthesis.md
index 47c4f978b..8469170cf 100644
--- a/inbox/queue/2026-04-30-theseus-governance-failure-taxonomy-synthesis.md
+++ b/inbox/queue/2026-04-30-theseus-governance-failure-taxonomy-synthesis.md
@@ -114,8 +114,8 @@ A governance agenda that fails to distinguish these modes will prescribe binding
 
 **KB connections:**
 - [[voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance]] — Mode 1's existing KB claim; this synthesis shows it's one of four distinct failure modes
-- [[government-designation-of-safety-conscious-AI-labs-as-supply-chain-risks-inverts-the-regulatory-dynamic]] — Mode 2's existing KB claim; this synthesis adds the structural intervention implication
-- [[technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap]] — Mode 3 is the operational expression of this; the gap is not just about speed of technical development but about governance instrument reconstitution timing
+- government-designation-of-safety-conscious-AI-labs-as-supply-chain-risks-inverts-the-regulatory-dynamic — Mode 2's existing KB claim; this synthesis adds the structural intervention implication
+- technology-advances-exponentially-but-coordination-mechanisms-evolve-linearly-creating-a-widening-gap — Mode 3 is the operational expression of this; the gap is not just about speed of technical development but about governance instrument reconstitution timing
 - [[santos-grueiro-converts-hardware-tee-monitoring-argument-from-empirical-to-categorical-necessity]] — Mode 4's resolution mechanism
 - [[AI alignment is a coordination problem not a technical problem]] — the taxonomy provides four specific coordination problems, each with a structurally distinct solution
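For reference, the unwrapping this patch performs can be sketched as follows. This is a minimal illustration, not the pipeline's actual implementation: the function name `strip_broken_links` and the `known_claims` lookup set are hypothetical, and it assumes claims are keyed by the exact text inside the `[[ ]]` brackets.

```python
import re

# Matches a [[wiki-link]]; group 1 captures the link target.
WIKI_LINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def strip_broken_links(text: str, known_claims: set) -> str:
    """Unwrap [[ ]] brackets around links whose target is not a known claim.

    Links that resolve to an existing claim are left untouched.
    """
    def replace(match):
        target = match.group(1)
        # Keep the brackets if the target resolves; otherwise emit bare text.
        return match.group(0) if target in known_claims else target

    return WIKI_LINK.sub(replace, text)
```

Applied to a line like `- [[resolves]] and [[does-not]]` with `known_claims = {"resolves"}`, this yields `- [[resolves]] and does-not`, which matches the shape of the changes in the hunks above.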