From 4ab4c24b0d21acda1322942f387821416d1a80a2 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 13:36:03 +0000
Subject: [PATCH] =?UTF-8?q?source:=202026-01-01-aisi-sketch-ai-control-saf?=
 =?UTF-8?q?ety-case.md=20=E2=86=92=20null-result?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Pentagon-Agent: Epimetheus
---
 .../2026-01-01-aisi-sketch-ai-control-safety-case.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
 rename inbox/{queue => null-result}/2026-01-01-aisi-sketch-ai-control-safety-case.md (98%)

diff --git a/inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md b/inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
similarity index 98%
rename from inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md
rename to inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
index c101a17d..1f87d69e 100644
--- a/inbox/queue/2026-01-01-aisi-sketch-ai-control-safety-case.md
+++ b/inbox/null-result/2026-01-01-aisi-sketch-ai-control-safety-case.md
@@ -7,10 +7,11 @@ date: 2026-01-01
 domain: ai-alignment
 secondary_domains: [grand-strategy]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [AISI, control-safety-case, safety-argument, loss-of-control, governance-framework, institutional]
 flagged_for_leo: ["this is the governance architecture side — AISI is building not just evaluation tools but a structured argument framework for claiming AI is safe to deploy; the gap between this framework and the sandbagging/detection-failure findings in other AISI papers is itself a governance signal"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---

 ## Content