diff --git a/inbox/queue/2026-04-28-theseus-b4-scope-qualification-synthesis.md b/inbox/queue/2026-04-28-theseus-b4-scope-qualification-synthesis.md
index ef34c2f25..b312ab3ef 100644
--- a/inbox/queue/2026-04-28-theseus-b4-scope-qualification-synthesis.md
+++ b/inbox/queue/2026-04-28-theseus-b4-scope-qualification-synthesis.md
@@ -75,7 +75,7 @@ This produces a different policy recommendation than un-scoped B4, which would s
 - `[[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]]` — primary empirical support for B4 (holds without qualification)
 - `[[formal verification of AI-generated proofs provides scalable oversight that human review cannot match because machine-checked correctness scales with AI capability while human verification degrades]]` — the established exception
 - `[[multi-layer-ensemble-probes-outperform-single-layer-by-29-78-percent]]` — the conditional exception
-- `[[divergence-representation-monitoring-net-safety]]` — the open divergence this synthesis helps clarify
+- `divergence-representation-monitoring-net-safety` — the open divergence this synthesis helps clarify
 - `[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]` — cross-domain B4 confirmation
 
 **Extraction hints:**