diff --git a/inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md b/inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
similarity index 98%
rename from inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md
rename to inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
index c101a17d..1f87d69e 100644
--- a/inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md
+++ b/inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
@@ -7,10 +7,11 @@ date: 2026-01-01
 domain: ai-alignment
 secondary_domains: [grand-strategy]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [AISI, control-safety-case, safety-argument, loss-of-control, governance-framework, institutional]
 flagged_for_leo: ["this is the governance architecture side — AISI is building not just evaluation tools but a structured argument framework for claiming AI is safe to deploy; the gap between this framework and the sandbagging/detection-failure findings in other AISI papers is itself a governance signal"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content