---
type: claim
domain: ai-alignment
description: AuditBench demonstrates that white-box interpretability tools work on easier targets, but the gap between tool performance in isolation and agent performance suggests task complexity, not capability, is the bottleneck
confidence: experimental
source: Abhay Sheshadri et al., Anthropic Fellows Program, AuditBench benchmark with 56 models across 13 tool configurations
created: 2026-03-30
attribution:
  extractor:
    - handle: "theseus"
  sourcer:
    - handle: "anthropic-fellows-program"
      context: "Abhay Sheshadri et al., Anthropic Fellows Program, AuditBench benchmark with 56 models across 13 tool configurations"
---

# Alignment auditing shows a structural tool-to-agent gap where interpretability tools that accurately surface evidence in isolation fail when used by investigator agents because agents underuse tools, struggle to separate signal from noise, and fail to convert evidence into correct hypotheses

AuditBench evaluated 56 LLMs with implanted hidden behaviors, using investigator agents given access to configurable tool sets across 13 configurations. The key finding is a structural tool-to-agent gap: tools that surface accurate evidence when used in isolation fail to improve agent performance in practice. Investigator agents systematically underuse available tools, struggle to separate signal from noise in tool outputs, and fail to convert evidence into correct hypotheses about hidden behaviors. The gap persists even when the tools themselves work correctly: white-box interpretability tools help on easier targets (lightly adversarially trained models) but fail on models with more robust adversarial training. The paper explicitly frames this as a task-complexity issue rather than an engineering problem, suggesting the gap is fundamental rather than fixable through better prompting or agent architecture.
This matters because Anthropic's RSP v3 (released the same month, February 2026) commits to "systematic alignment assessments incorporating mechanistic interpretability." Yet this research from Anthropic-funded fellows shows that interpretability tools may only catch naive misalignment, not sophisticated embedded misalignment, and that even when the tools work, agents can't use them effectively.

---

Relevant Notes:

- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]]
- AI capability and reliability are independent dimensions because Claude solved a 30 year open mathematical problem while simultaneously degrading at basic program execution during the same session

Topics:

- [[_map]]