extract: 2026-01-00-kim-third-party-ai-assurance-framework #1360

Merged
leo merged 3 commits from extract/2026-01-00-kim-third-party-ai-assurance-framework into main 2026-03-19 00:34:24 +00:00
5 changed files with 90 additions and 2 deletions

View file

@ -23,6 +23,12 @@ The alignment field has converged on a problem they cannot solve with their curr
The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic: it is a government-backed infrastructure program with institutional resources. However, the strategy is prospective (published November 2024) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
### Additional Evidence (challenge)
*Source: [[2026-01-00-kim-third-party-ai-assurance-framework]] | Added: 2026-03-19*
CMU researchers have built and validated a third-party AI assurance framework with four operational components (Responsibility Assignment Matrix, Interview Protocol, Maturity Matrix, Assurance Report Template), tested on two real deployment cases. This represents concrete infrastructure-building work, though at small scale and not yet applicable to frontier AI.
---
Relevant Notes:

View file

@ -0,0 +1,27 @@
{
"rejected_claims": [
{
"filename": "privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 1,
"kept": 0,
"fixed": 4,
"rejected": 1,
"fixes_applied": [
"privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md:set_created:2026-03-19",
"privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
"privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md:stripped_wiki_link:only-binding-regulation-with-enforcement-teeth-changes-front",
"privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md:stripped_wiki_link:safe-AI-development-requires-building-alignment-mechanisms-b"
],
"rejections": [
"privacy-enhancing-technologies-enable-independent-ai-scrutiny-without-ip-compromise-but-legal-authority-to-require-scrutiny-does-not-exist.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}
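
For orientation, a minimal sketch of how a validator producing this report shape might accumulate its counters. Note that in both logs in this PR, `fixed` counts individual fixes applied rather than claims fixed (4 fixes against 1 claim here). Everything beyond the JSON field names visible above, including the claim dict shape and the `validate_claims` name, is an assumption.

```python
# Hypothetical validator sketch. Only the field names from the JSON above
# (rejected_claims, validation_stats, fixes_applied, rejections, model, date)
# come from the log; the claim dict shape and helper logic are assumed.
from datetime import date

def validate_claims(claims, model):
    stats = {"total": len(claims), "kept": 0, "fixed": 0, "rejected": 0,
             "fixes_applied": [], "rejections": []}
    rejected = []
    for claim in claims:
        name = claim["filename"]
        # Fix pass: fill a missing created date, and record each dangling
        # wiki link stripped from the claim body.
        fixes = []
        if not claim.get("created"):
            claim["created"] = date.today().isoformat()
            fixes.append(f"{name}:set_created:{claim['created']}")
        for link in claim.get("unresolved_wiki_links", []):
            fixes.append(f"{name}:stripped_wiki_link:{link}")
        stats["fixes_applied"] += fixes
        stats["fixed"] += len(fixes)  # counts fixes applied, not fixed claims
        # Reject pass: a claim with no extractor attribution cannot be kept.
        issues = []
        if not claim.get("attribution", {}).get("extractor"):
            issues.append("missing_attribution_extractor")
        if issues:
            rejected.append({"filename": name, "issues": issues})
            stats["rejections"] += [f"{name}:{i}" for i in issues]
            stats["rejected"] += 1
        else:
            stats["kept"] += 1
    return {"rejected_claims": rejected, "validation_stats": stats,
            "model": model, "date": date.today().isoformat()}
```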

View file

@ -0,0 +1,32 @@
{
"rejected_claims": [
{
"filename": "third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 2,
"kept": 0,
"fixed": 2,
"rejected": 2,
"fixes_applied": [
"third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md:set_created:2026-03-19",
"ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md:set_created:2026-03-19"
],
"rejections": [
"third-party-ai-assurance-methodology-is-at-proof-of-concept-stage-validated-in-small-deployment-contexts-but-not-yet-applicable-to-frontier-ai-at-scale.md:missing_attribution_extractor",
"ai-assurance-explicitly-distinguishes-itself-from-audit-to-prevent-conflict-of-interest-and-ensure-credibility-which-acknowledges-current-evaluation-has-a-structural-independence-problem.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-19"
}

View file

@ -7,9 +7,13 @@ date: 2025-02-01
domain: ai-alignment
secondary_domains: []
format: paper
status: unprocessed
status: null-result
priority: high
tags: [evaluation-infrastructure, privacy-enhancing-technologies, OpenMined, external-scrutiny, Christchurch-Call, AISI, deployed]
processed_by: theseus
processed_date: 2026-03-19
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 1 claim, 1 rejected by validator"
---
## Content
@ -53,3 +57,11 @@ PRIMARY CONNECTION: [[safe AI development requires building alignment mechanisms
WHY ARCHIVED: Provides evidence that the technical barrier to independent AI evaluation is solvable. The key insight — technology ready, legal framework missing — precisely locates the bottleneck in evaluation infrastructure development.
EXTRACTION HINT: Focus on the technology-law gap: PET infrastructure works (two deployments), but legal authority to require frontier AI labs to submit to independent evaluation doesn't exist. This is the specific intervention point.
## Key Facts
- Helen Toner was Director of Strategy at CSET
- Helen Toner is at Georgetown
- The Christchurch Call is a voluntary initiative
- UK AI Safety Institute has conducted frontier model evaluations using PET infrastructure
- The paper was published February 2025

View file

@ -7,9 +7,13 @@ date: 2026-01-30
domain: ai-alignment
secondary_domains: []
format: paper
status: unprocessed
status: enrichment
priority: high
tags: [evaluation-infrastructure, third-party-assurance, conflict-of-interest, lifecycle-assessment, CMU]
processed_by: theseus
processed_date: 2026-03-19
enrichments_applied: ["no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
@ -51,3 +55,10 @@ PRIMARY CONNECTION: [[no research group is building alignment through collective
WHY ARCHIVED: Provides methodology for third-party AI assurance that explicitly addresses the conflict of interest problem. Important evidence that the field is aware of the independence gap.
EXTRACTION HINT: The "assurance vs audit" distinction to prevent conflict of interest is the key extractable insight. The lifecycle approach (process + outcomes) is also worth noting.
## Key Facts
- CMU researchers published 'Toward Third-Party Assurance of AI Systems' in January 2026
- The framework was tested on a business document tagging tool and a housing resource allocation tool
- The paper identifies that few existing evaluation resources 'address both the process of designing, developing, and deploying an AI system and the outcomes it produces'
- Few existing approaches are 'end-to-end and operational, give actionable guidance, or present evidence of usability' according to the gap analysis
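
Read together, the two frontmatter diffs imply a status rule: a source whose extracted claims were all rejected outright ends at `null-result`, while one whose material was folded into an existing note ends at `enrichment`. A minimal sketch of that rule, assuming a `report` dict shaped like the validation logs above; the function name and the `processed` fallback are hypothetical, while the status values and frontmatter keys come from the diffs.

```python
# Hypothetical status-assignment rule inferred from the two diffs above.
def assign_status(report, enrichments):
    stats = report["validation_stats"]
    frontmatter = {
        "processed_by": "theseus",
        "processed_date": report["date"],
        "extraction_model": report["model"],
    }
    if enrichments:
        # Claims were folded into existing notes rather than kept standalone.
        frontmatter["status"] = "enrichment"
        frontmatter["enrichments_applied"] = enrichments
    elif stats["kept"] == 0 and stats["rejected"] == stats["total"]:
        # Every extracted claim was rejected: the source yielded nothing.
        frontmatter["status"] = "null-result"
        frontmatter["extraction_notes"] = (
            f"LLM returned {stats['total']} claim(s), "
            f"{stats['rejected']} rejected by validator"
        )
    else:
        frontmatter["status"] = "processed"  # assumed default
    return frontmatter
```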