extract: 2026-03-00-metr-aisi-pre-deployment-evaluation-practice #1361
2 changed files with 55 additions and 1 deletion
@@ -0,0 +1,38 @@
{
  "rejected_claims": [
    {
      "filename": "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    },
    {
      "filename": "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    }
  ],
  "validation_stats": {
    "total": 2,
    "kept": 0,
    "fixed": 8,
    "rejected": 2,
    "fixes_applied": [
      "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:set_created:2026-03-19",
      "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:safe-AI-development-requires-building-alignment-mechanisms-b",
      "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
      "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:AI-transparency-is-declining-not-improving-because-Stanford-",
      "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:set_created:2026-03-19",
      "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
      "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:Anthropics-RSP-rollback-under-commercial-pressure-is-the-fir",
      "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:only-binding-regulation-with-enforcement-teeth-changes-front"
    ],
    "rejections": [
      "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:missing_attribution_extractor",
      "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:missing_attribution_extractor"
    ]
  },
  "model": "anthropic/claude-sonnet-4.5",
  "date": "2026-03-19"
}
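The `fixes_applied` and `rejections` entries above follow a `filename:fix_type:detail` layout. A minimal sketch of reading such a report, assuming that two-colon convention (the field names come from the JSON above; the parsing helpers `parse_entry` and `summarize` are hypothetical, not part of the validator):

```python
# Sketch: interpret a validator report shaped like the JSON above.
# The "filename:fix_type:detail" entry layout is inferred from the
# visible contents, not a documented format.

def parse_entry(entry: str) -> tuple:
    """Split 'name.md:fix_type:detail' into its three parts.

    Filenames here end in '.md' and contain no colons, so limiting
    the split to the first two colons is safe.
    """
    filename, fix_type, detail = entry.split(":", 2)
    return filename, fix_type, detail

def summarize(report: dict) -> dict:
    """Count applied fixes per fix type."""
    counts: dict = {}
    for entry in report["validation_stats"]["fixes_applied"]:
        _, fix_type, _ = parse_entry(entry)
        counts[fix_type] = counts.get(fix_type, 0) + 1
    return counts
```

Run against the report above, `summarize` would count 2 `set_created` and 6 `stripped_wiki_link` fixes, matching `"fixed": 8`.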
@@ -7,9 +7,13 @@ date: 2026-01-01
domain: ai-alignment
secondary_domains: []
format: paper
status: unprocessed
status: null-result
priority: high
tags: [evaluation-infrastructure, third-party-audit, AAL-framework, voluntary-collaborative, deception-resilient, governance-gap]
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 2 claims, 2 rejected by validator"
---

## Content
@@ -56,3 +60,15 @@ PRIMARY CONNECTION: [[safe AI development requires building alignment mechanisms
WHY ARCHIVED: Most comprehensive description of the evaluation infrastructure field in early 2026. Defines the gap between current capability and what rigorous evaluation requires. The technical infeasibility of deception-resilient evaluation (AAL-3/4) is a major finding that strengthens B1's "not being treated as such" claim.

EXTRACTION HINT: Focus on the AAL framework structure, the technical infeasibility of AAL-3/4, and the voluntary-collaborative limitation. These three elements together describe the core gap in evaluation infrastructure.

## Key Facts
- AAL-1 represents current peak practice: time-bounded system audits relying substantially on company-provided information
- AAL-2 is the near-term goal: greater access to non-public information, less reliance on company statements; not yet standard
- AAL-3 and AAL-4 require deception-resilient verification and are currently not technically feasible
- METR and AISI currently perform AAL-1 level evaluations
- Paper has 28+ authors from 27 organizations including GovAI, MIT CSAIL, Cambridge, Stanford, Yale, Anthropic contributors, Epoch AI, Apollo Research
- Yoshua Bengio is a co-author
- Published January 2026, approximately 3 months after the Anthropic RSP rollback
- Adoption model relies on market-based incentives: competitive procurement, insurance differentiation, audit credentials as competitive advantage
- Current adoption is voluntary and concentrated among a few developers, with only emerging pilots