extract: 2024-11-00-ai4ci-national-scale-collective-intelligence

Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
Teleo Pipeline 2026-03-15 16:22:23 +00:00 committed by Leo
parent 5cf7ffc950
commit dbbb07adb1
6 changed files with 171 additions and 1 deletion

@ -27,6 +27,12 @@ Since [[the internet enabled global communication but not global cognition]], th
Ruiz-Serra et al. (2024) provide formal evidence for the coordination framing through multi-agent active inference: even when individual agents successfully minimize their own expected free energy using factorised generative models with Theory of Mind beliefs about others, the ensemble-level expected free energy 'is not necessarily minimised at the aggregate level.' This demonstrates that alignment cannot be solved at the individual agent level—the interaction structure and coordination mechanisms determine whether individual optimization produces collective intelligence or collective failure. The finding validates that alignment is fundamentally about designing interaction structures that bridge individual and collective optimization, not about perfecting individual agent objectives.
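The gap between individual and ensemble-level optimization can be made concrete with a toy game (an assumed prisoner's-dilemma-style setup for illustration, not the actual active-inference model from Ruiz-Serra et al.): each agent's best response minimizes its own cost, yet the resulting joint outcome has a higher aggregate cost than the coordinated optimum.

```python
# Toy illustration: individually optimal choices need not minimize
# the aggregate objective. Costs are hypothetical; cost[(a1, a2)]
# gives (cost to agent 1, cost to agent 2).
COSTS = {
    ("coop", "coop"): (1, 1),
    ("coop", "defect"): (3, 0),
    ("defect", "coop"): (0, 3),
    ("defect", "defect"): (2, 2),
}

def best_response(agent, other_action):
    """Action minimizing this agent's own cost, given the other's action."""
    if agent == 1:
        return min(("coop", "defect"), key=lambda a: COSTS[(a, other_action)][0])
    return min(("coop", "defect"), key=lambda a: COSTS[(other_action, a)][1])

# "defect" strictly dominates for both agents, so each best-responds with it.
a1 = best_response(1, "coop")  # 'defect'
a2 = best_response(2, a1)      # 'defect'

individual_optimum = sum(COSTS[(a1, a2)])                  # 4
collective_optimum = min(sum(c) for c in COSTS.values())   # 2
print(individual_optimum, collective_optimum)  # 4 2
```

Both agents succeed at their own minimization, yet the joint cost (4) is double the coordinated optimum (2): the interaction structure, not the agents' objectives, determines the collective outcome.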
### Additional Evidence (confirm)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
The UK AI4CI research strategy treats alignment as a coordination and governance challenge requiring institutional infrastructure. The seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) are framed as system architecture requirements, not as technical ML problems. The strategy emphasizes 'establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable' and includes regulatory sandboxes, trans-national governance, and trustworthiness assessment as core components. The research agenda focuses on coordination mechanisms (federated learning, FAIR principles, multi-stakeholder governance) rather than on technical alignment methods like RLHF or interpretability.
---
Relevant Notes:

@ -0,0 +1,51 @@
---
type: claim
domain: ai-alignment
description: "National-scale CI infrastructure must enable distributed learning without centralizing sensitive data"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
---
# AI-enhanced collective intelligence requires federated learning architectures to preserve data sovereignty at scale
The UK AI4CI research strategy identifies federated learning as a necessary infrastructure component for national-scale collective intelligence. The technical requirements include:
- **Secure data repositories** that maintain local control
- **Federated learning architectures** that train models without centralizing data
- **Real-time integration** across distributed sources
- **Foundation models** adapted to federated contexts
This is not just a privacy preference—it's a structural requirement for achieving the trust properties (especially privacy, security, and human agency) at scale. Centralized data aggregation creates single points of failure, regulatory risk, and trust barriers that prevent participation from privacy-sensitive populations.
The strategy treats federated architecture as the enabling technology for "gathering intelligence" (collecting and making sense of distributed information) without requiring participants to surrender data sovereignty.
Governance requirements include FAIR principles (Findable, Accessible, Interoperable, Reusable), trustworthiness assessment, regulatory sandboxes, and trans-national governance frameworks—all of which assume distributed rather than centralized control.
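The strategy names federated learning as a requirement without prescribing an algorithm; a minimal federated-averaging (FedAvg) sketch, assuming a toy linear model and three hypothetical sites, shows the key structural property: raw data never leaves a site, and the coordinator sees only model weights.

```python
import numpy as np

# Minimal FedAvg sketch (assumed toy setup): each site trains locally on
# data that stays on-site; only model weights are shared and averaged.
rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1, epochs=20):
    """Gradient descent on a local linear regression; (X, y) stay on-site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three "sites" holding private samples from the same underlying model.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # federation rounds
    local_updates = [local_step(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_updates, axis=0)  # coordinator sees weights only

print(global_w)  # approaches true_w without any site sharing raw data
```

The known limitations noted below (model quality vs. centralized training, coordination cost per round) are visible even here: convergence takes repeated communication rounds, and the averaged model is only as good as what each site can learn locally.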
## Evidence
From the UK AI4CI national research strategy:
- Technical infrastructure requirements explicitly include "federated learning architectures"
- Governance framework assumes distributed data control with FAIR principles
- "Secure data repositories" listed as foundational infrastructure
- Real-time integration across distributed sources required for "gathering intelligence"
## Challenges
This claim rests on a research strategy document, not on deployed systems. The feasibility of federated learning at national scale remains unproven. Potential challenges:
- Federated learning has known limitations in model quality vs. centralized training
- Coordination costs may be prohibitive at scale
- Regulatory frameworks may not accommodate federated architectures
- The strategy may be aspirational rather than technically grounded
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[safe AI development requires building alignment mechanisms before scaling capability]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map

@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
description: "ML's core mechanism of generalizing over diversity creates structural bias against marginalized groups"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence]
---
# Machine learning pattern extraction systematically erases dataset outliers where vulnerable populations concentrate
Machine learning operates by "extracting patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers." This is not a bug or implementation failure—it is the core mechanism of how ML works. The UK AI4CI research strategy identifies this as a fundamental tension: the same generalization that makes ML powerful also makes it structurally biased against populations that don't fit dominant patterns.
The strategy explicitly frames this as a challenge for collective intelligence systems: "AI must reach 'intersectionally disadvantaged' populations, not just majority groups." Vulnerable and marginalized populations concentrate in the statistical tails—they are the outliers that pattern-matching algorithms systematically ignore or misrepresent.
This creates a paradox for AI-enhanced collective intelligence: the tools designed to aggregate diverse perspectives have a built-in tendency to homogenize by erasing the perspectives most different from the training distribution's center of mass.
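The erasure mechanism can be shown with an assumed toy population (not data from the AI4CI report): a single summary model fit to a mixed distribution is pulled almost entirely toward the majority, so the minority subgroup in the tail is systematically misrepresented.

```python
import numpy as np

# Toy illustration: "pattern extraction" as fitting one global statistic.
# 95% majority centered at 0, 5% minority centered at 10 (hypothetical).
rng = np.random.default_rng(1)
majority = rng.normal(loc=0.0, scale=1.0, size=950)
minority = rng.normal(loc=10.0, scale=1.0, size=50)
population = np.concatenate([majority, minority])

# The generalizing model: a single global mean over the whole dataset.
model = population.mean()  # ~0.5, dominated by the majority

majority_error = np.abs(majority - model).mean()  # ~0.9: majority well served
minority_error = np.abs(minority - model).mean()  # ~9.5: outliers erased
print(model, majority_error, minority_error)
```

No sampling bug is involved: the minority is present in the data, and the fit still misrepresents it by an order of magnitude, which is the structural tension the strategy identifies.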
## Evidence
From the UK AI4CI national research strategy:
- ML "extracts patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers"
- Systems must explicitly design for reaching "intersectionally disadvantaged" populations
- The research agenda identifies this as a core infrastructure challenge, not just a fairness concern
## Challenges
This claim rests on a single source—a research strategy document rather than empirical evidence of harm. The mechanism is plausible but the magnitude and inevitability of the effect remain unproven. Counter-evidence might show that:
- Appropriate sampling and weighting can preserve outlier representation
- Ensemble methods or mixture models can capture diverse subpopulations
- The outlier-erasure effect is implementation-dependent rather than fundamental
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map

@ -0,0 +1,51 @@
---
type: claim
domain: ai-alignment
description: "UK research strategy identifies human agency, security, privacy, transparency, fairness, value alignment, and accountability as necessary trust conditions"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
---
# National-scale collective intelligence infrastructure requires seven trust properties to achieve legitimacy
The UK AI4CI research strategy proposes that collective intelligence systems operating at national scale must satisfy seven trust properties to achieve public legitimacy and effective governance:
1. **Human agency** — individuals retain meaningful control over their participation
2. **Security** — infrastructure resists attack and manipulation
3. **Privacy** — personal data is protected from misuse
4. **Transparency** — system operation is interpretable and auditable
5. **Fairness** — outcomes don't systematically disadvantage groups
6. **Value alignment** — systems incorporate user values rather than imposing predetermined priorities
7. **Accountability** — clear responsibility for system behavior and outcomes
This is not a theoretical framework—it's a proposed design requirement for actual infrastructure being built with UK government backing (UKRI/EPSRC funding). The strategy treats these seven properties as necessary conditions for trustworthiness at scale, not as optional enhancements.
The framing is significant: trust is treated as a structural property of the system architecture, not as a communication or adoption challenge. The research agenda focuses on "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable."
## Evidence
From the UK AI4CI national research strategy:
- Seven trust properties explicitly listed as requirements
- Governance infrastructure includes "trustworthiness assessment" as a core component
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- Systems must incorporate "user values" rather than imposing predetermined priorities
## Relationship to Existing Work
This connects to [[safe AI development requires building alignment mechanisms before scaling capability]]—the UK strategy treats trust infrastructure as a prerequisite for deployment, not a post-hoc addition.
It also relates to [[collective intelligence requires diversity as a structural precondition not a moral preference]]—fairness appears in the trust properties list as a structural requirement, not just a normative goal.
---
Relevant Notes:
- [[safe AI development requires building alignment mechanisms before scaling capability]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[AI alignment is a coordination problem not a technical problem]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map

@ -17,6 +17,12 @@ This gap is remarkable because the field's own findings point toward collective
The alignment field has converged on a problem they cannot solve with their current paradigm (single-model alignment), and the alternative paradigm (collective alignment through distributed architecture) has barely been explored. This is the opening for the TeleoHumanity thesis -- not as philosophical speculation but as practical infrastructure that addresses problems the alignment community has identified but cannot solve within their current framework.
### Additional Evidence (challenge)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic—it's a government-backed infrastructure program with institutional resources. However, the strategy is prospective (published 2024-11) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
---
Relevant Notes:

@ -7,10 +7,16 @@ date: 2024-11-01
domain: ai-alignment
secondary_domains: [collective-intelligence]
format: paper
status: unprocessed
status: processed
priority: medium
tags: [collective-intelligence, national-scale, AI-infrastructure, federated-learning, diversity, trust]
flagged_for_vida: ["healthcare applications of AI-enhanced collective intelligence"]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["machine-learning-pattern-extraction-systematically-erases-dataset-outliers-where-vulnerable-populations-concentrate.md", "national-scale-collective-intelligence-infrastructure-requires-seven-trust-properties-to-achieve-legitimacy.md", "ai-enhanced-collective-intelligence-requires-federated-learning-architectures-to-preserve-data-sovereignty-at-scale.md"]
enrichments_applied: ["no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md", "AI alignment is a coordination problem not a technical problem.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Three new claims extracted focusing on ML's structural bias against outliers, trust properties for national-scale CI, and federated learning requirements. Primary enrichment challenges the 'no CI infrastructure' claim with evidence of UK national program. Source is prospective (research strategy) rather than empirical, so confidence capped at experimental. No entity extraction—this is a research network/strategy document rather than a company or market."
---
## Content
@ -46,3 +52,11 @@ UK national research strategy for AI-enhanced collective intelligence. Proposes
PRIMARY CONNECTION: no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it
WHY ARCHIVED: Evidence of national-scale CI infrastructure being built, partially challenging our institutional gap claim
EXTRACTION HINT: Focus on the tension between ML's pattern-extraction (homogenizing) and CI's diversity requirement
## Key Facts
- UK AI4CI Research Network funded by UKRI/EPSRC (2024)
- AI4CI Loop framework: Gathering Intelligence → Informing Behaviour
- Seven trust properties: human agency, security, privacy, transparency, fairness, value alignment, accountability
- Technical infrastructure requirements: secure data repositories, federated learning, real-time integration, foundation models
- Governance requirements: FAIR principles, trustworthiness assessment, regulatory sandboxes, trans-national governance