auto-fix: strip 4 broken wiki links
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
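The stripping rule described above can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: the function name, the regex, and the claim set are assumptions.

```python
import re

# Matches [[target]] wiki links; group 1 is the link target.
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def strip_broken_links(text: str, existing_claims: set[str]) -> str:
    """Replace [[target]] with bare target text when the target
    does not resolve to a known claim; keep resolving links intact."""
    def fix(match: re.Match) -> str:
        target = match.group(1)
        return match.group(0) if target in existing_claims else target
    return WIKI_LINK.sub(fix, text)

# Hypothetical claim set; real pipeline would load it from the knowledge base.
claims = {"livingip overview", "coordination mechanisms"}
line = "*Source: [[2026-03-21-aisi-control-research-program-synthesis]] | Added: 2026-03-21*"
print(strip_broken_links(line, claims))
# -> *Source: 2026-03-21-aisi-control-research-program-synthesis | Added: 2026-03-21*
```

Links whose targets exist in the claim set pass through unchanged, which matches the diff below: the `livingip overview` and `coordination mechanisms` links keep their brackets while the four unresolved ones lose them.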
This commit is contained in:
parent
43982050c3
commit
1a80fe850f
1 changed file with 4 additions and 4 deletions
@@ -30,20 +30,20 @@ The alignment field has converged on a problem they cannot solve with their curr
 
 ### Additional Evidence (challenge)
 
-*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2024-11-00-ai4ci-national-scale-collective-intelligence | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
 
 The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic—it's a government-backed infrastructure program with institutional resources. However, the strategy is prospective (published 2024-11) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
 
 ### Additional Evidence (challenge)
 
-*Source: [[2026-01-00-kim-third-party-ai-assurance-framework]] | Added: 2026-03-19*
+*Source: 2026-01-00-kim-third-party-ai-assurance-framework | Added: 2026-03-19*
 
 CMU researchers have built and validated a third-party AI assurance framework with four operational components (Responsibility Assignment Matrix, Interview Protocol, Maturity Matrix, Assurance Report Template), tested on two real deployment cases. This represents concrete infrastructure-building work, though at small scale and not yet applicable to frontier AI.
 
 ---
 
 ### Additional Evidence (challenge)
 
-*Source: [[2026-03-21-aisi-control-research-program-synthesis]] | Added: 2026-03-21*
+*Source: 2026-03-21-aisi-control-research-program-synthesis | Added: 2026-03-21*
 
 UK AISI has built systematic evaluation infrastructure for loss-of-control capabilities (monitoring, sandbagging, self-replication, cyber attack scenarios) across 11+ papers in 2025-2026. The infrastructure gap is not in evaluation research but in collective intelligence approaches and in the governance-research translation layer that would integrate these evaluations into binding compliance requirements.
 
@@ -66,4 +66,4 @@ Relevant Notes:
 Topics:
 - [[livingip overview]]
 - [[coordination mechanisms]]
-- [[domains/ai-alignment/_map]]
+- domains/ai-alignment/_map