teleo-codex/domains/ai-alignment/national-scale-collective-intelligence-infrastructure-requires-seven-trust-properties-as-foundational-requirements.md
Teleo Agents 18a00a6e43 theseus: extract claims from 2024-11-00-ai4ci-national-scale-collective-intelligence.md
- Source: inbox/archive/2024-11-00-ai4ci-national-scale-collective-intelligence.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 3)

Pentagon-Agent: Theseus <HEADLESS>
2026-03-11 10:11:27 +00:00


type: claim
domain: ai-alignment
secondary_domains: collective-intelligence, critical-systems
description: UK national AI4CI strategy identifies seven trust properties as non-negotiable structural requirements for national-scale CI infrastructure
confidence: experimental
source: UK AI4CI Research Network national strategy (2024)
created: 2024-11-01

National-scale collective intelligence infrastructure requires seven trust properties as foundational requirements

The UK's national AI for Collective Intelligence research strategy identifies seven trust properties as structural requirements for AI-enhanced collective intelligence at scale:

  1. Human agency — systems must preserve meaningful human control
  2. Security — protection against adversarial manipulation
  3. Privacy — individual data protection in collective aggregation
  4. Transparency — interpretable decision processes
  5. Fairness — equitable treatment across populations
  6. Value alignment — incorporation of user values rather than imposed priorities
  7. Accountability — clear responsibility chains for system behavior

The strategy frames these as preconditions for trustworthiness, not features to optimize. Without all seven, the system cannot achieve the legitimacy required for national-scale deployment.
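The "preconditions, not features to optimize" framing can be made concrete with a small sketch (illustrative only; the identifier names and function are not from the strategy): each trust property is treated as a boolean gate that must pass, rather than a term in a weighted score where strength on one property could offset failure on another.

```python
# The seven trust properties named in the AI4CI strategy, modeled as
# hard gates on deployment legitimacy (property keys are hypothetical
# identifiers chosen for this sketch).
TRUST_PROPERTIES = (
    "human_agency", "security", "privacy", "transparency",
    "fairness", "value_alignment", "accountability",
)

def is_deployable(assessment: dict) -> bool:
    """True only if every one of the seven properties is satisfied.

    A weighted average would let a high score on one property
    compensate for a failure on another; the strategy's categorical
    framing rules that out, so all() is used instead of a sum.
    """
    return all(assessment.get(p, False) for p in TRUST_PROPERTIES)
```

Under this reading, a system failing any single property (say, privacy) is not "mostly trustworthy"; it simply does not meet the precondition for national-scale deployment.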

Evidence

The AI4CI strategy document lists these seven properties as part of the governance infrastructure required alongside technical infrastructure (secure data repositories, federated learning architectures, real-time integration systems, foundation models). The framing is categorical: "trustworthiness assessment" is a required component of the infrastructure, not an optional enhancement.

The strategy operationalizes these requirements through explicit design constraints: systems must "incorporate user values" (plural) rather than imposing predetermined priorities, and AI agents must "consider and communicate broader collective implications". Value alignment and transparency thus enter as design constraints, not post-hoc features.

Challenges

The strategy acknowledges "fundamental uncertainty: researchers can never know with certainty what future their work will produce." This creates tension with accountability requirements—how can systems be accountable for emergent behaviors that designers cannot predict? The strategy does not resolve this tension but identifies it as a core governance problem.


Relevant Notes:

Topics: