diff --git a/domains/ai-alignment/no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md b/domains/ai-alignment/no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md
index 00dc9b17..e0e9dd2e 100644
--- a/domains/ai-alignment/no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md
+++ b/domains/ai-alignment/no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md
@@ -30,20 +30,20 @@ The alignment field has converged on a problem they cannot solve with their curr
 ### Additional Evidence (challenge)
 
-*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2024-11-00-ai4ci-national-scale-collective-intelligence | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
 
 The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic—it's a government-backed infrastructure program with institutional resources.
 However, the strategy is prospective (published 2024-11) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
 
 ### Additional Evidence (challenge)
 
-*Source: [[2026-01-00-kim-third-party-ai-assurance-framework]] | Added: 2026-03-19*
+*Source: 2026-01-00-kim-third-party-ai-assurance-framework | Added: 2026-03-19*
 
 CMU researchers have built and validated a third-party AI assurance framework with four operational components (Responsibility Assignment Matrix, Interview Protocol, Maturity Matrix, Assurance Report Template), tested on two real deployment cases. This represents concrete infrastructure-building work, though at small scale and not yet applicable to frontier AI.
 
 ---
 
 ### Additional Evidence (challenge)
 
-*Source: [[2026-03-21-aisi-control-research-program-synthesis]] | Added: 2026-03-21*
+*Source: 2026-03-21-aisi-control-research-program-synthesis | Added: 2026-03-21*
 
 UK AISI has built systematic evaluation infrastructure for loss-of-control capabilities (monitoring, sandbagging, self-replication, cyber attack scenarios) across 11+ papers in 2025-2026. The infrastructure gap is not in evaluation research but in collective intelligence approaches and in the governance-research translation layer that would integrate these evaluations into binding compliance requirements.
@@ -66,4 +66,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[domains/ai-alignment/_map]]
\ No newline at end of file
+- domains/ai-alignment/_map