---
type: claim
domain: ai-alignment
description: "UK research strategy identifies human agency, security, privacy, transparency, fairness, value alignment, and accountability as necessary trust conditions"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
related:
  - "ai enhanced collective intelligence requires federated learning architectures to preserve data sovereignty at scale"
reweave_edges:
  - "ai enhanced collective intelligence requires federated learning architectures to preserve data sovereignty at scale|related|2026-03-28"
---

# National-scale collective intelligence infrastructure requires seven trust properties to achieve legitimacy

The UK AI for Collective Intelligence (AI4CI) research strategy proposes that collective intelligence systems operating at national scale must satisfy seven trust properties to achieve public legitimacy and effective governance:

1. **Human agency** — individuals retain meaningful control over their participation
2. **Security** — infrastructure resists attack and manipulation
3. **Privacy** — personal data is protected from misuse
4. **Transparency** — system operation is interpretable and auditable
5. **Fairness** — outcomes don't systematically disadvantage groups
6. **Value alignment** — systems incorporate user values rather than imposing predetermined priorities
7. **Accountability** — clear responsibility for system behavior and outcomes

This is not a theoretical framework—it's a proposed design requirement for actual infrastructure being built with UK government backing (UKRI/EPSRC funding). The strategy treats these seven properties as necessary conditions for trustworthiness at scale, not as optional enhancements.
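The "necessary conditions" framing has a concrete logical shape: trustworthiness is a conjunction over all seven properties, not a weighted score where strength on one property offsets weakness on another. A minimal sketch of that distinction (the property names come from the strategy; the function and data structure are hypothetical, not part of AI4CI):

```python
# Hypothetical sketch: the seven AI4CI trust properties treated as
# necessary conditions (a conjunction), not a weighted trade-off.
TRUST_PROPERTIES = [
    "human agency", "security", "privacy", "transparency",
    "fairness", "value alignment", "accountability",
]

def is_trustworthy(assessment: dict[str, bool]) -> bool:
    """Legitimacy requires every property to hold; a single failure is disqualifying."""
    return all(assessment.get(prop, False) for prop in TRUST_PROPERTIES)

# A system strong on six properties but failing one is still untrustworthy:
partial = {prop: True for prop in TRUST_PROPERTIES}
partial["privacy"] = False
print(is_trustworthy(partial))  # False
```

The design choice worth noticing is `all(...)` rather than a sum or average: under the strategy's framing, no amount of transparency compensates for a privacy failure.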

The framing is significant: trust is treated as a structural property of the system architecture, not as a communication or adoption challenge. The research agenda focuses on "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable."

## Evidence

From the UK AI4CI national research strategy:

- Seven trust properties explicitly listed as requirements
- Governance infrastructure includes "trustworthiness assessment" as a core component
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- Systems must incorporate "user values" rather than imposing predetermined priorities

## Relationship to Existing Work

This connects to [[safe AI development requires building alignment mechanisms before scaling capability]]—the UK strategy treats trust infrastructure as a prerequisite for deployment, not a post-hoc addition.

It also relates to [[collective intelligence requires diversity as a structural precondition not a moral preference]]—fairness appears in the trust properties list as a structural requirement, not just a normative goal.

---

Relevant Notes:
- [[safe AI development requires building alignment mechanisms before scaling capability]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[AI alignment is a coordination problem not a technical problem]]

Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map