| type | domain | description | confidence | source | created | secondary_domains |
|---|---|---|---|---|---|---|
| claim | ai-alignment | UK research strategy identifies human agency, security, privacy, transparency, fairness, value alignment, and accountability as necessary trust conditions | experimental | UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024) | 2026-03-11 | |
National-scale collective intelligence infrastructure requires seven trust properties to achieve legitimacy
The UK AI4CI research strategy proposes that collective intelligence systems operating at national scale must satisfy seven trust properties to achieve public legitimacy and effective governance:
- Human agency — individuals retain meaningful control over their participation
- Security — infrastructure resists attack and manipulation
- Privacy — personal data is protected from misuse
- Transparency — system operation is interpretable and auditable
- Fairness — outcomes don't systematically disadvantage groups
- Value alignment — systems incorporate user values rather than imposing predetermined priorities
- Accountability — clear responsibility for system behavior and outcomes
This is not a theoretical framework: it is a proposed design requirement for actual infrastructure being built with UK government backing (UKRI/EPSRC funding). The strategy treats these seven properties as necessary conditions for trustworthiness at scale, not as optional enhancements.
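The "necessary conditions, not optional enhancements" framing can be sketched as a simple conjunction check. This is an illustrative model only; the property names below are assumptions for readability, not identifiers taken from the strategy document:

```python
# Illustrative sketch: if all seven trust properties are jointly necessary,
# trustworthiness behaves like a logical conjunction, not a weighted score.
TRUST_PROPERTIES = (
    "human_agency",
    "security",
    "privacy",
    "transparency",
    "fairness",
    "value_alignment",
    "accountability",
)

def is_trustworthy(assessment: dict[str, bool]) -> bool:
    # A single failed property is disqualifying: strong privacy cannot
    # compensate for weak accountability under this framing.
    return all(assessment.get(prop, False) for prop in TRUST_PROPERTIES)
```

The design choice worth noting is the absence of any trade-off or averaging: under the strategy's framing, a trustworthiness assessment fails on any single missing property.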
The framing is significant: trust is treated as a structural property of the system architecture, not as a communication or adoption challenge. The research agenda focuses on "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable."
Evidence
From the UK AI4CI national research strategy:
- Seven trust properties explicitly listed as requirements
- Governance infrastructure includes "trustworthiness assessment" as a core component
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- Systems must incorporate "user values" rather than imposing predetermined priorities
Relationship to Existing Work
This connects to "safe AI development requires building alignment mechanisms before scaling capability": the UK strategy treats trust infrastructure as a prerequisite for deployment, not a post-hoc addition.
It also relates to "collective intelligence requires diversity as a structural precondition not a moral preference": fairness appears in the trust properties list as a structural requirement, not just a normative goal.
Relevant Notes:
- safe AI development requires building alignment mechanisms before scaling capability
- collective intelligence requires diversity as a structural precondition not a moral preference
- AI alignment is a coordination problem not a technical problem
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map