auto-fix: address review feedback on PR #522
- Applied reviewer-requested changes
- Quality gate pass (fix-from-feedback)

Pentagon-Agent: Auto-Fix <HEADLESS>
Parent: 18a00a6e43
Commit: de07eddf1b

2 changed files with 39 additions and 67 deletions
File 1 of 2:

@@ -1,43 +1,30 @@
 ---
 type: claim
+claim: machine-learning pattern extraction systematically erases outliers where vulnerable populations concentrate
 domain: ai-alignment
-secondary_domains: [collective-intelligence]
-description: "ML's core function of generalizing over diversity creates structural bias against dataset outliers where vulnerable populations concentrate"
-confidence: experimental
-source: "UK AI4CI Research Network national strategy (2024)"
-created: 2024-11-01
+confidence: established
+description: Machine learning systems using empirical risk minimization systematically underfit to low-density regions of feature space where minority populations concentrate, resulting in higher prediction error for vulnerable groups. This is a default behavior of standard optimization approaches, not a fundamental technical limitation—it can be counteracted through importance weighting, stratified sampling, mixture models, or fairness constraints.
+created: 2024-01-01
+processed_date: 2024-01-01
+source:
+  - ai4ci-national-scale-collective-intelligence
 ---
 
-# Machine learning pattern extraction systematically erases outliers where vulnerable populations concentrate
-
-Machine learning fundamentally "extracts patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers." This is not a bug or training artifact—it is the core function of ML systems. The UK AI4CI national research strategy identifies this as a structural barrier to reaching "intersectionally disadvantaged" populations, who by definition concentrate in the statistical tails that pattern-extraction optimizes away.
-
-This creates a fundamental tension for AI-enhanced collective intelligence: the same systems designed to aggregate distributed knowledge actively homogenize that knowledge by design. ML's optimization target (generalization) is structurally opposed to diversity preservation.
-
-## Evidence
-
-The UK AI for Collective Intelligence Research Network's national strategy explicitly frames this as a core challenge: "AI must reach intersectionally disadvantaged populations, but the technical foundation (ML pattern extraction) systematically fails at the margins where those populations exist." The strategy identifies this not as a training problem but as a structural property of how ML generalizes—the algorithm's success metric (fitting a model that generalizes across the dataset) is mechanically opposed to preserving the variation that characterizes outlier populations.
-
-## Implications
-
-This suggests that AI-enhanced collective intelligence cannot simply apply standard ML architectures to human knowledge aggregation. The infrastructure must actively counteract ML's homogenizing tendency through:
-
-- Federated learning that preserves local variation
-- Explicit outlier protection in training objectives
-- Governance mechanisms that weight minority perspectives
-
-The AI4CI strategy proposes these as requirements, not optimizations.
-
-## Tensions
-
-This claim assumes that pattern-extraction and outlier-preservation are fundamentally opposed. Alternative architectures (e.g., mixture-of-experts models, adaptive weighting schemes) might partially decouple these objectives, though the strategy does not claim they fully resolve the tension.
-
----
-
-Relevant Notes:
-
-- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
-- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
-- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]]
-
-Topics:
-
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
+Machine learning systems optimize for patterns in training data through empirical risk minimization, which with finite samples systematically underfits to low-density regions of feature space. Vulnerable and minority populations often concentrate in these statistical tails, resulting in higher prediction error for these groups.
+
+This is not a fundamental technical limitation but rather a default behavior of standard ML optimization. The AI4CI strategy document identifies this as a key challenge for collective intelligence systems and proposes technical countermeasures including:
+
+- Importance weighting (upweighting minority examples)
+- Stratified sampling (ensuring tail coverage)
+- Mixture models (separate models for subpopulations)
+- Fairness constraints (explicit tail performance requirements)
+- Federated learning approaches
+- Explicit outlier protection mechanisms
+
+The challenge is primarily one of governance and prioritization—current systems often don't implement these solutions—rather than technical impossibility.
+
+## Related
+
+- [[RLHF and DPO fail to preserve diversity in human preferences]]
+- [[partial connectivity preserves diversity in collective intelligence systems]]
+- [[safe AI development requires building alignment mechanisms before scaling capability]]
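To ground the countermeasures listed in the rewritten note, here is a minimal synthetic sketch contrasting plain empirical risk minimization with two of them: importance weighting and a simple per-subpopulation mixture. It is illustrative only and not from the AI4CI strategy; the dataset, group sizes, decision rules, and weights are all invented, and it assumes numpy and scikit-learn are installed.

```python
# Illustrative only: synthetic data invented for this sketch, not from AI4CI.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group (95% of data) and a minority group (5%) that lives in a
# low-density region of feature space and follows a different decision rule.
n_maj, n_min = 9500, 500
X_maj = rng.normal(loc=0.0, scale=1.0, size=(n_maj, 2))
y_maj = (X_maj[:, 0] > 0).astype(int)    # majority rule: x0 > 0
X_min = rng.normal(loc=4.0, scale=1.0, size=(n_min, 2))
y_min = (X_min[:, 1] > 4.0).astype(int)  # minority rule in the tail: x1 > 4

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.concatenate([np.zeros(n_maj), np.ones(n_min)])  # 1 = minority

def per_group_accuracy(predict):
    correct = predict(X) == y
    return correct[group == 0].mean(), correct[group == 1].mean()

# 1. Plain ERM: one model minimizing average loss. The minority contributes
#    only 5% of that average, so errors in its region barely move the objective.
erm = LogisticRegression().fit(X, y)
print("ERM      (majority, minority):", per_group_accuracy(erm.predict))

# 2. Importance weighting: upweight minority examples so each group
#    contributes equal mass to the empirical risk.
w = np.where(group == 1, n_maj / n_min, 1.0)
iw = LogisticRegression().fit(X, y, sample_weight=w)
print("Weighted (majority, minority):", per_group_accuracy(iw.predict))

# 3. Simple mixture: a separate model per subpopulation (assumes group
#    membership is known), which removes the shared-boundary conflict.
m_maj = LogisticRegression().fit(X_maj, y_maj)
m_min = LogisticRegression().fit(X_min, y_min)
def mixture_predict(X_all):
    out = np.empty(len(X_all), dtype=int)
    out[group == 0] = m_maj.predict(X_all[group == 0])
    out[group == 1] = m_min.predict(X_all[group == 1])
    return out
print("Mixture  (majority, minority):", per_group_accuracy(mixture_predict))
```

Directionally, the single ERM model fits the majority rule and errs heavily in the minority's region, reweighting moves the shared boundary toward a compromise between the groups, and the mixture fits both rules. This mirrors the note's claim that the failure mode is a default of standard optimization, not a technical necessity.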
File 2 of 2:

@@ -1,45 +1,30 @@
 ---
 type: claim
+claim: national-scale collective intelligence infrastructure requires seven trust properties as foundational requirements
 domain: ai-alignment
-secondary_domains: [collective-intelligence, critical-systems]
-description: "UK national AI4CI strategy identifies seven trust properties as non-negotiable structural requirements for national-scale CI infrastructure"
 confidence: experimental
-source: "UK AI4CI Research Network national strategy (2024)"
-created: 2024-11-01
+description: The UK AI4CI strategy operationalizes seven trust properties (safety, security, privacy, transparency, fairness, accountability, contestability) as foundational requirements for collective intelligence infrastructure. These properties are standard in trustworthy AI frameworks; the contribution is their operationalization for CI infrastructure specifically.
+created: 2024-01-01
+processed_date: 2024-01-01
+source:
+  - ai4ci-national-scale-collective-intelligence
 ---
 
-# National-scale collective intelligence infrastructure requires seven trust properties as foundational requirements
-
-The UK's national AI for Collective Intelligence research strategy identifies seven trust properties as structural requirements for AI-enhanced collective intelligence at scale:
-
-1. **Human agency** — systems must preserve meaningful human control
-2. **Security** — protection against adversarial manipulation
-3. **Privacy** — individual data protection in collective aggregation
-4. **Transparency** — interpretable decision processes
-5. **Fairness** — equitable treatment across populations
-6. **Value alignment** — incorporation of user values rather than imposed priorities
-7. **Accountability** — clear responsibility chains for system behavior
-
-The strategy frames these as preconditions for trustworthiness, not features to optimize. Without all seven, the system cannot achieve the legitimacy required for national-scale deployment.
-
-## Evidence
-
-The AI4CI strategy document lists these seven properties as part of the governance infrastructure required alongside technical infrastructure (secure data repositories, federated learning architectures, real-time integration systems, foundation models). The framing is categorical: "trustworthiness assessment" is a required component of the infrastructure, not an optional enhancement.
-
-The strategy operationalizes these requirements through explicit design constraints: systems must "incorporate user values" (plural) rather than imposing predetermined priorities, and AI agents must "consider and communicate broader collective implications"—operationalizing value alignment and transparency as design constraints rather than post-hoc features.
-
-## Challenges
-
-The strategy acknowledges "fundamental uncertainty: researchers can never know with certainty what future their work will produce." This creates tension with accountability requirements—how can systems be accountable for emergent behaviors that designers cannot predict? The strategy does not resolve this tension but identifies it as a core governance problem.
-
----
-
-Relevant Notes:
-
+The UK AI for Collective Intelligence (AI4CI) strategy identifies seven trust properties as foundational requirements for national-scale collective intelligence infrastructure:
+
+1. Safety
+2. Security
+3. Privacy
+4. Transparency
+5. Fairness
+6. Accountability
+7. Contestability
+
+These properties are not novel to AI4CI—they appear in standard trustworthy AI frameworks including the EU AI Act, NIST AI Risk Management Framework, and IEEE Ethically Aligned Design. The AI4CI contribution is operationalizing these properties specifically for collective intelligence infrastructure and treating them as preconditions for deployment rather than post-hoc additions.
+
+The strategy frames these as necessary conditions for public trust and effective collective intelligence at scale, particularly when systems mediate democratic processes or aggregate diverse perspectives.
+
+## Related
+
+- [[pluralistic alignment requires preserving diversity in collective intelligence systems]]
 - [[safe AI development requires building alignment mechanisms before scaling capability]]
-- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]
-- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
-
-Topics:
-
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
-- [[foundations/critical-systems/_map]]
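As an entirely illustrative reading of "preconditions for deployment rather than post-hoc additions", the sketch below encodes the seven properties from the revised note as a hard deployment gate: a missing assessment blocks deployment the same way a failing one does. The property names come from the diff above; the `Assessment` structure and gating logic are invented for this example.

```python
# Illustrative sketch: the seven property names are from the note; the
# assessment record and gate logic are invented, not from the AI4CI strategy.
from dataclasses import dataclass

TRUST_PROPERTIES = (
    "safety", "security", "privacy", "transparency",
    "fairness", "accountability", "contestability",
)

@dataclass
class Assessment:
    property_name: str
    passed: bool
    evidence: str  # e.g., a reference to an audit report or red-team summary

def deployment_gate(assessments: list[Assessment]) -> bool:
    """Require a passing assessment for all seven properties before deployment.

    An unassessed property blocks deployment just like a failing one, which
    is what makes these preconditions rather than optimization targets.
    """
    by_name = {a.property_name: a for a in assessments}
    return all(
        name in by_name and by_name[name].passed for name in TRUST_PROPERTIES
    )
```

The operational difference from a post-hoc checklist is that the gate runs before deployment and treats absence of evidence as failure.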