---
type: claim
domain: ai-alignment
description: "UK research strategy identifies human agency, security, privacy, transparency, fairness, value alignment, and accountability as necessary trust conditions"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
---

# National-scale collective intelligence infrastructure requires seven trust properties to achieve legitimacy
The UK AI4CI research strategy proposes that collective intelligence systems operating at national scale must satisfy seven trust properties to achieve public legitimacy and effective governance:
1. **Human agency** — individuals retain meaningful control over their participation
2. **Security** — infrastructure resists attack and manipulation
3. **Privacy** — personal data is protected from misuse
4. **Transparency** — system operation is interpretable and auditable
5. **Fairness** — outcomes don't systematically disadvantage groups
6. **Value alignment** — systems incorporate user values rather than imposing predetermined priorities
7. **Accountability** — clear responsibility for system behavior and outcomes

This is not a theoretical framework—it's a proposed design requirement for actual infrastructure being built with UK government backing (UKRI/EPSRC funding). The strategy treats these seven properties as necessary conditions for trustworthiness at scale, not as optional enhancements.
The framing is significant: trust is treated as a structural property of the system architecture, not as a communication or adoption challenge. The research agenda focuses on "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable."
## Evidence
From the UK AI4CI national research strategy:

- Seven trust properties explicitly listed as requirements
- Governance infrastructure includes "trustworthiness assessment" as a core component
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- Systems must incorporate "user values" rather than imposing predetermined priorities
## Relationship to Existing Work
This connects to [[safe AI development requires building alignment mechanisms before scaling capability]]—the UK strategy treats trust infrastructure as a prerequisite for deployment, not a post-hoc addition.

It also relates to [[collective intelligence requires diversity as a structural precondition not a moral preference]]—fairness appears in the trust properties list as a structural requirement, not just a normative goal.

---
Relevant Notes:

- [[safe AI development requires building alignment mechanisms before scaling capability]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[AI alignment is a coordination problem not a technical problem]]

Topics:

- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map