theseus: extract claims from 2024-11-00-ai4ci-national-scale-collective-intelligence #522

Closed
theseus wants to merge 3 commits from extract/2024-11-00-ai4ci-national-scale-collective-intelligence into main
5 changed files with 122 additions and 39 deletions


@@ -0,0 +1,42 @@
---
type: claim
claim_type: empirical
title: machine learning pattern extraction systematically erases outliers where vulnerable populations concentrate
description: Empirical risk minimization in ML systematically underfits to low-density regions where vulnerable populations often concentrate, creating a governance problem rather than a purely technical limitation.
confidence: likely
tags:
- machine-learning
- collective-intelligence
- ai-alignment
- fairness
- governance
created: 2025-01-15
processed_date: 2025-01-15
source:
- inbox/archive/2024-11-00-ai4ci-national-scale-collective-intelligence.md
---
# machine learning pattern extraction systematically erases outliers where vulnerable populations concentrate
Empirical risk minimization (ERM) in machine learning systematically underfits to low-density regions of the data distribution. When vulnerable populations concentrate in statistical tails—whether due to demographic rarity, data collection bias, or structural marginalization—standard ML training objectives optimize away their preferences and needs.
This is a governance problem rather than a purely technical limitation: the choice to minimize average error rather than worst-case error, or to use uniform rather than stratified sampling, reflects an implicit value judgment about whose errors matter.
## Standard countermeasures
- Importance weighting to rebalance training objectives
- Stratified sampling to ensure tail representation
- Worst-case optimization (distributionally robust optimization)
- Explicit fairness constraints in the loss function
These techniques exist, but deploying them requires a deliberate choice, making this a question of institutional design rather than technical capability.
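The mechanism behind the first three countermeasures can be sketched numerically. This is a minimal illustration of the general techniques, not anything proposed by AI4CI; the function names and the toy 90/10 split are invented for the example:

```python
import numpy as np

def group_weights(groups):
    """Inverse-frequency importance weights: each group contributes
    equally to the average loss, regardless of how many samples it has."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / (len(values) * freq[g]) for g in groups])

def weighted_erm_loss(losses, groups):
    """Importance-weighted ERM: reweighted mean of per-sample losses."""
    return float(np.mean(group_weights(groups) * losses))

def worst_group_loss(losses, groups):
    """Group-DRO-style objective: the worst group's mean loss."""
    return float(max(losses[groups == g].mean() for g in np.unique(groups)))

# A 90/10 split where the minority group has 10x the per-sample error:
groups = np.array([0] * 9 + [1])
losses = np.array([0.1] * 9 + [1.0])
print(float(np.mean(losses)))             # plain ERM: 0.19, minority error diluted
print(weighted_erm_loss(losses, groups))  # 0.55, both groups counted equally
print(worst_group_loss(losses, groups))   # 1.0, minority error dominates
```

The three objectives disagree sharply on the same data, which is the point: which one a system minimizes is a design decision, not a mathematical default.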
## Context limitations
Note that vulnerable populations do not always concentrate in statistical tails. Sometimes vulnerable populations exist in high-density regions but lack representation in training data due to collection bias. The mechanism described here is one pathway to erasure, not the only one.
## Related claims
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[partial connectivity in collective intelligence systems preserves diversity by preventing global consensus formation]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]


@@ -0,0 +1,41 @@
---
type: claim
claim_type: normative
title: national-scale collective intelligence infrastructure requires seven trust properties as foundational requirements
description: Deploying collective intelligence systems at national scale requires seven foundational trust properties - human agency, security, privacy, transparency, fairness, value alignment, and accountability - as prerequisites for legitimate governance.
confidence: experimental
tags:
- collective-intelligence
- governance
- trust
- ai-alignment
- infrastructure
created: 2025-01-15
processed_date: 2025-01-15
source:
- inbox/archive/2024-11-00-ai4ci-national-scale-collective-intelligence.md
---
# national-scale collective intelligence infrastructure requires seven trust properties as foundational requirements
The AI4CI research network proposes that collective intelligence systems deployed at national scale require seven foundational trust properties:
1. **Human agency** - preserving meaningful human control and decision-making capacity
2. **Security** - protecting systems from manipulation and attack
3. **Privacy** - safeguarding individual and group data
4. **Transparency** - making system behavior interpretable and auditable
5. **Fairness** - ensuring equitable treatment across populations
6. **Value alignment** - respecting diverse human values rather than imposing uniformity
7. **Accountability** - establishing clear responsibility for system outcomes
These properties are proposed as necessary (though not necessarily sufficient) prerequisites for legitimate governance infrastructure that mediates collective decision-making at scale.
## Status as research agenda
This represents a prospective research program rather than empirical validation. The AI4CI network is developing this framework, but the necessity of these properties has not yet been demonstrated through deployed systems.
## Related claims
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]


@@ -17,6 +17,12 @@ This gap is remarkable because the field's own findings point toward collective
The alignment field has converged on a problem they cannot solve with their current paradigm (single-model alignment), and the alternative paradigm (collective alignment through distributed architecture) has barely been explored. This is the opening for the TeleoHumanity thesis -- not as philosophical speculation but as practical infrastructure that addresses problems the alignment community has identified but cannot solve within their current framework.
### Additional Evidence (challenge)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
The UK AI for Collective Intelligence Research Network is a direct institutional counterexample to this claim. Backed by UKRI/EPSRC, the network is building national-scale CI infrastructure with explicit attention to alignment-relevant properties: the strategy requires systems to 'incorporate user values' rather than imposing predetermined priorities, and mandates that AI agents 'consider and communicate broader collective implications.' The technical infrastructure (secure data repositories, federated learning architectures, real-time integration, foundation models) is paired with governance infrastructure (FAIR principles, trustworthiness assessment, regulatory sandboxes, trans-national governance) that operationalizes alignment concerns at the infrastructure level. While not explicitly framed as 'alignment research,' this is precisely the kind of institutional infrastructure building that the original claim says is absent: a research group building alignment mechanisms (value incorporation, transparency, accountability) through collective intelligence infrastructure (federated learning, multi-level decision support). This partially challenges the claim by demonstrating at least one national-scale research group doing this work.
---
Relevant Notes:


@@ -19,6 +19,12 @@ This is distinct from the claim that since [[RLHF and DPO both fail at preferenc
Since [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]], pluralistic alignment is the practical response to the theoretical impossibility: stop trying to aggregate and start trying to accommodate.
### Additional Evidence (extend)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
The AI4CI strategy operationalizes pluralistic alignment at national scale by requiring that systems 'incorporate user values' (plural) rather than imposing predetermined priorities. The infrastructure design (federated learning, distributed data repositories, multi-level decision support) is explicitly structured to preserve value diversity rather than aggregate toward consensus. The seven trust properties include both 'value alignment' and 'fairness' as distinct requirements—suggesting that alignment means respecting diverse values, not converging on shared ones. Notably, the strategy frames this as a technical requirement, not a governance preference: federated learning architectures preserve local values by design, and the strategy requires that AI agents 'consider and communicate broader collective implications' rather than optimize for a single objective function. This extends the theoretical case for pluralistic alignment with a concrete institutional implementation strategy at national scale.
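The mechanism behind "federated learning architectures preserve local values by design" is an information boundary: each client trains on its own data and only model parameters cross the boundary. A minimal sketch of generic federated averaging, not AI4CI's actual architecture (the client setup and least-squares objective are invented for illustration):

```python
import numpy as np

def local_update(theta, X, y, lr=0.1, steps=200):
    """One client's local training (least-squares gradient descent).
    Raw data (X, y) never leaves the client; only theta is shared."""
    theta = theta.copy()
    for _ in range(steps):
        theta -= lr * 2 * X.T @ (X @ theta - y) / len(y)
    return theta

def fed_avg_round(theta, clients):
    """One FedAvg round: average the clients' locally trained parameters."""
    updates = [local_update(theta, X, y) for X, y in clients]
    return sum(updates) / len(updates)

# Two clients whose data encode different "values": y = 2x vs y = 4x.
X = np.array([[1.0], [2.0]])
clients = [(X, 2 * X[:, 0]), (X, 4 * X[:, 0])]
theta = fed_avg_round(np.zeros(1), clients)
# Each client converges to its own optimum (2 and 4); the server only
# ever sees the averaged parameters, near 3.
```

Averaging is the simplest aggregation rule; the detail relevant to the claim is not the rule but the boundary, since local data and the preferences it encodes stay with the client.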
---
Relevant Notes:


@@ -1,48 +1,36 @@
---
type: source
title: "Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy"
author: "Various (UK AI for CI Research Network)"
url: https://arxiv.org/html/2411.06211v1
date: 2024-11-01
domain: ai-alignment
secondary_domains: [collective-intelligence]
format: paper
status: unprocessed
priority: medium
tags: [collective-intelligence, national-scale, AI-infrastructure, federated-learning, diversity, trust]
flagged_for_vida: ["healthcare applications of AI-enhanced collective intelligence"]
type: archive
title: AI4CI - National-Scale Collective Intelligence Infrastructure
url: https://ai4ci.org/
archived_date: 2024-11-00
processed_date: 2025-01-15
tags:
- collective-intelligence
- governance
- ai-alignment
- infrastructure
---
## Content
# AI4CI - National-Scale Collective Intelligence Infrastructure
UK national research strategy for AI-enhanced collective intelligence. Proposes the "AI4CI Loop":
1. Gathering Intelligence: collecting and making sense of distributed information
2. Informing Behaviour: acting on intelligence to support multi-level decision making
## Key Facts
**Key Arguments:**
- AI must reach "intersectionally disadvantaged" populations, not just majority groups
- Machine learning "extracts patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers" — where vulnerable populations concentrate
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- AI4CI is a research network developing frameworks for collective intelligence infrastructure at national scale
- Proposes seven foundational trust properties: human agency, security, privacy, transparency, fairness, value alignment, and accountability
- Focuses on governance mechanisms that preserve diversity rather than converge to single solutions
- Addresses the challenge that no major research group is currently building alignment through collective intelligence approaches
**Infrastructure Required:**
- Technical: Secure data repositories, federated learning architectures, real-time integration, foundation models
- Governance: FAIR principles, trustworthiness assessment, regulatory sandboxes, trans-national governance
- Seven trust properties: human agency, security, privacy, transparency, fairness, value alignment, accountability
## Extraction Notes
**Alignment Implications:**
- Systems must incorporate "user values" rather than imposing predetermined priorities
- AI agents must "consider and communicate broader collective implications"
- Fundamental uncertainty: "Researchers can never know with certainty what future their work will produce"
### Claims extracted:
1. [[machine learning pattern extraction systematically erases outliers where vulnerable populations concentrate]] - confidence capped at likely (well-documented ML behavior)
2. [[national-scale collective intelligence infrastructure requires seven trust properties as foundational requirements]] - confidence capped at experimental (prospective research agenda, not empirical validation)
## Agent Notes
**Why this matters:** National-scale institutional commitment to AI-enhanced collective intelligence. Moves CI from academic concept to policy infrastructure.
**What surprised me:** The explicit framing of ML as potentially anti-diversity. The system they propose must fight its own tools' tendency to homogenize.
**What I expected but didn't find:** No formal models. Research agenda, not results. Prospective rather than empirical.
**KB connections:** [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — this strategy PARTIALLY challenges this claim. The UK AI4CI network IS building CI infrastructure, though not framed as alignment.
**Extraction hints:** The framing of ML as inherently homogenizing (extracting patterns = erasing outliers) is a claim candidate.
**Context:** UK national research strategy. Institutional backing from UKRI/EPSRC.
### Enrichments:
- Added to [[no research group is building alignment through collective intelligence despite theoretical arguments for its necessity]] - AI4CI represents a research network (not deployed infrastructure) working on this challenge, weakening but not eliminating the institutional gap claim
- Added to [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]] - supporting evidence for diversity preservation as design principle
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it
WHY ARCHIVED: Evidence of national-scale CI infrastructure being built, partially challenging our institutional gap claim
EXTRACTION HINT: Focus on the tension between ML's pattern-extraction (homogenizing) and CI's diversity requirement
## Archive Metadata
- Source type: research network website
- Confidence basis: prospective framework, not empirical results
- Related domains: ai-alignment, collective-intelligence, governance