- Source: inbox/archive/2024-11-00-ai4ci-national-scale-collective-intelligence.md - Domain: ai-alignment
| type | domain | secondary_domains | description | confidence | source | created |
|---|---|---|---|---|---|---|
| claim | ai-alignment | | UK national AI4CI strategy identifies seven trust properties as non-negotiable structural requirements for national-scale CI infrastructure | experimental | UK AI4CI Research Network national strategy (2024) | 2024-11-01 |
National-scale collective intelligence infrastructure requires seven trust properties as foundational requirements
The UK's national AI for Collective Intelligence research strategy identifies seven trust properties as structural requirements for AI-enhanced collective intelligence at scale:
- Human agency — systems must preserve meaningful human control
- Security — protection against adversarial manipulation
- Privacy — individual data protection in collective aggregation
- Transparency — interpretable decision processes
- Fairness — equitable treatment across populations
- Value alignment — incorporation of user values rather than imposed priorities
- Accountability — clear responsibility chains for system behavior
The strategy frames these as preconditions for trustworthiness, not features to optimize. Without all seven, the system cannot achieve the legitimacy required for national-scale deployment.
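One way to make the "preconditions, not features to optimize" framing concrete is a deployment gate in which all seven properties are pass/fail requirements rather than terms in a weighted objective. The sketch below is illustrative only; the class and function names are hypothetical and do not come from the AI4CI strategy document.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class TrustProperties:
    """The seven AI4CI trust properties, modeled as pass/fail preconditions."""
    human_agency: bool     # meaningful human control is preserved
    security: bool         # protected against adversarial manipulation
    privacy: bool          # individual data protected in collective aggregation
    transparency: bool     # decision processes are interpretable
    fairness: bool         # equitable treatment across populations
    value_alignment: bool  # user values incorporated, not imposed priorities
    accountability: bool   # clear responsibility chains for system behaviour

def deployment_permitted(props: TrustProperties) -> bool:
    # Precondition framing: every property must hold. There is no score
    # under which strength in one property offsets weakness in another.
    return all(getattr(props, f.name) for f in fields(props))

# Example: a single failing property blocks deployment outright.
candidate = TrustProperties(True, True, True, True, True, True, accountability=False)
assert deployment_permitted(candidate) is False
```

The design choice this sketch illustrates is the conjunction: the gate returns true only when all seven hold, which is the structural difference between a precondition and an optimizable feature.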
Evidence
The AI4CI strategy document lists these seven properties as part of the governance infrastructure required alongside technical infrastructure (secure data repositories, federated learning architectures, real-time integration systems, foundation models). The framing is categorical: "trustworthiness assessment" is a required component of the infrastructure, not an optional enhancement.
The strategy operationalizes these requirements as explicit design constraints: systems must "incorporate user values" (plural) rather than imposing predetermined priorities, and AI agents must "consider and communicate broader collective implications." Value alignment and transparency are thus built in at design time rather than added as post-hoc features.
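Read as interface requirements, these two constraints would make value incorporation and implication reporting mandatory parts of an agent's contract rather than optional add-ons. The abstract interface below is a hedged illustration under that reading; the class name, method names, and signatures are assumptions, not drawn from the strategy.

```python
from abc import ABC, abstractmethod
from typing import Sequence

class CollectiveIntelligenceAgent(ABC):
    """Hypothetical agent contract encoding the two design constraints."""

    @abstractmethod
    def incorporate_user_values(self, value_profiles: Sequence[dict]) -> None:
        """Accept value profiles from many users; the agent must not
        substitute a single predetermined priority ordering."""

    @abstractmethod
    def collective_implications(self, proposed_action: str) -> str:
        """Return a human-readable account of the broader collective
        consequences of the proposed action, surfaced before acting."""
```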
Challenges
The strategy acknowledges "fundamental uncertainty: researchers can never know with certainty what future their work will produce." This creates tension with accountability requirements—how can systems be accountable for emergent behaviors that designers cannot predict? The strategy does not resolve this tension but identifies it as a core governance problem.
Relevant Notes:
- safe AI development requires building alignment mechanisms before scaling capability
- the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance
- collective intelligence requires diversity as a structural precondition not a moral preference
Topics: