Compare commits

..

1 commit

Author: Teleo Agents
SHA1: 9da8b0d322
Message: extract: 2026-03-00-metr-aisi-pre-deployment-evaluation-practice
  Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
Date: 2026-03-19 00:33:50 +00:00
5 changed files with 2 additions and 105 deletions

View file

@@ -23,12 +23,6 @@ The alignment field has converged on a problem they cannot solve with their curr
The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision-making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic; it is a government-backed infrastructure program with institutional resources. However, the strategy is prospective (published 2024-11) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
### Additional Evidence (challenge)
*Source: [[2026-01-00-kim-third-party-ai-assurance-framework]] | Added: 2026-03-19*
CMU researchers have built and validated a third-party AI assurance framework with four operational components (Responsibility Assignment Matrix, Interview Protocol, Maturity Matrix, Assurance Report Template), tested on two real deployment cases. This represents concrete infrastructure-building work, though at small scale and not yet applicable to frontier AI.
---
Relevant Notes:

View file

@@ -1,38 +0,0 @@
{
"rejected_claims": [
{
"filename": "frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 2,
"kept": 0,
"fixed": 8,
"rejected": 2,
"fixes_applied": [
"frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:set_created:2026-03-19",
"frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:safe-AI-development-requires-building-alignment-mechanisms-b",
"frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
"frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:stripped_wiki_link:AI-transparency-is-declining-not-improving-because-Stanford-",
"voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:set_created:2026-03-19",
"voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
"voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:Anthropics-RSP-rollback-under-commercial-pressure-is-the-fir",
"voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:stripped_wiki_link:only-binding-regulation-with-enforcement-teeth-changes-front"
],
"rejections": [
"frontier-ai-auditing-limited-to-voluntary-collaborative-model-because-deception-resilient-verification-not-technically-feasible.md:missing_attribution_extractor",
"voluntary-collaborative-auditing-shares-structural-weakness-of-responsible-scaling-policies-requiring-lab-cooperation-to-function.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}
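
Both deleted reports share one schema: a `rejected_claims` list plus a `validation_stats` block whose `fixes_applied` entries encode filename, operation, and argument as colon-delimited strings. As a reading aid, here is a minimal Python sketch of how such a report could be loaded and summarized; the function names are my own, and the colon layout is inferred from the entries above rather than from any documented format.

```python
import json
from pathlib import Path

def load_report(path):
    # Load one validation report in the schema shown above.
    return json.loads(Path(path).read_text())

def split_fix(entry):
    # Split "file.md:operation:argument" into its three fields.
    # Assumes the colon-delimited layout seen in fixes_applied;
    # the filenames here contain no colons of their own.
    filename, op, arg = entry.split(":", 2)
    return filename, op, arg

def summarize(report):
    # Tally fix operations. "fixed" in these reports appears to count
    # individual fixes rather than files, which is why it can exceed "total".
    stats = report["validation_stats"]
    ops = {}
    for entry in stats["fixes_applied"]:
        _, op, _ = split_fix(entry)
        ops[op] = ops.get(op, 0) + 1
    return {"total": stats["total"], "rejected": stats["rejected"], "fix_ops": ops}
```

Applied to the report above, `summarize` would return `fix_ops` of `{'set_created': 2, 'stripped_wiki_link': 6}`, consistent with its `fixed` count of 8.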

View file

@@ -1,32 +0,0 @@
{
"rejected_claims": [
{
"filename": "third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 2,
"kept": 0,
"fixed": 2,
"rejected": 2,
"fixes_applied": [
"third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md:set_created:2026-03-19",
"ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md:set_created:2026-03-19"
],
"rejections": [
"third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md:missing_attribution_extractor",
"ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}

View file

@@ -7,13 +7,9 @@
 date: 2026-01-01
 domain: ai-alignment
 secondary_domains: []
 format: paper
-status: null-result
+status: unprocessed
 priority: high
 tags: [evaluation-infrastructure, third-party-audit, AAL-framework, voluntary-collaborative, deception-resilient, governance-gap]
-processed_by: theseus
-processed_date: 2026-03-19
-extraction_model: "anthropic/claude-sonnet-4.5"
-extraction_notes: "LLM returned 2 claims, 2 rejected by validator"
 ---
 ## Content
@@ -60,15 +56,3 @@ PRIMARY CONNECTION: [[safe AI development requires building alignment mechanisms
WHY ARCHIVED: Most comprehensive description of the evaluation infrastructure field in early 2026. Defines the gap between current capability and what rigorous evaluation requires. The technical infeasibility of deception-resilient evaluation (AAL-3/4) is a major finding that strengthens B1's "not being treated as such" claim.
EXTRACTION HINT: Focus on the AAL framework structure, the technical infeasibility of AAL-3/4, and the voluntary-collaborative limitation. These three elements together describe the core gap in evaluation infrastructure.
## Key Facts
- AAL-1 represents current peak practice: time-bounded system audits relying substantially on company-provided information
- AAL-2 is near-term goal: greater access to non-public information, less reliance on company statements, not yet standard
- AAL-3 and AAL-4 require deception-resilient verification and are currently not technically feasible
- METR and AISI currently perform AAL-1 level evaluations
- Paper has 28+ authors from 27 organizations including GovAI, MIT CSAIL, Cambridge, Stanford, Yale, Anthropic contributors, Epoch AI, Apollo Research
- Yoshua Bengio is a co-author
- Published January 2026, approximately 3 months after Anthropic RSP rollback
- Adoption model relies on market-based incentives: competitive procurement, insurance differentiation, audit credentials as competitive advantage
- Current adoption is voluntary and concentrated among a few developers with only emerging pilots
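
The frontmatter hunk above, like the matching hunk in the next file, reverses extraction bookkeeping: status drops back to unprocessed and the processing metadata keys are deleted. A minimal sketch of that reset, assuming simple one-line `key: value` frontmatter; the helper name is hypothetical, since the tooling behind this commit is not shown:

```python
# Keys this commit strips from note frontmatter, per the hunks above and below.
PROCESSING_KEYS = {
    "processed_by",
    "processed_date",
    "enrichments_applied",
    "extraction_model",
    "extraction_notes",
}

def reset_frontmatter(lines):
    # lines: the raw "key: value" lines between the --- fences,
    # one key per line as in the files in this diff.
    out = []
    for line in lines:
        key = line.split(":", 1)[0].strip()
        if key in PROCESSING_KEYS:
            continue  # drop processing metadata, as the deletions above do
        if key == "status":
            out.append("status: unprocessed")  # the commit's only added lines
        else:
            out.append(line)
    return out
```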

View file

@@ -7,13 +7,9 @@
 date: 2026-01-30
 domain: ai-alignment
 secondary_domains: []
 format: paper
-status: enrichment
+status: unprocessed
 priority: high
 tags: [evaluation-infrastructure, third-party-assurance, conflict-of-interest, lifecycle-assessment, CMU]
-processed_by: theseus
-processed_date: 2026-03-19
-enrichments_applied: ["no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md"]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content
@@ -55,10 +51,3 @@ PRIMARY CONNECTION: [[no research group is building alignment through collective
WHY ARCHIVED: Provides methodology for third-party AI assurance that explicitly addresses the conflict of interest problem. Important evidence that the field is aware of the independence gap.
EXTRACTION HINT: The "assurance vs audit" distinction to prevent conflict of interest is the key extractable insight. The lifecycle approach (process + outcomes) is also worth noting.
## Key Facts
- CMU researchers published 'Toward Third-Party Assurance of AI Systems' in January 2026
- The framework was tested on a business document tagging tool and a housing resource allocation tool
- The paper identifies that few existing evaluation resources 'address both the process of designing, developing, and deploying an AI system and the outcomes it produces'
- Few existing approaches are 'end-to-end and operational, give actionable guidance, or present evidence of usability' according to the gap analysis