- Source: inbox/archive/2024-10-00-patterns-ai-enhanced-collective-intelligence.md - Domain: ai-alignment - Extracted by: headless extraction cron (worker 5)
---
type: source
title: "AI-Enhanced Collective Intelligence: The State of the Art and Prospects"
author: "Various (Patterns / Cell Press, 2024)"
url: https://arxiv.org/html/2403.10433v4
date: 2024-10-01
domain: ai-alignment
secondary_domains: [collective-intelligence]
format: paper
status: processed
priority: high
tags: [collective-intelligence, AI-human-collaboration, homogenization, diversity, inverted-U, multiplex-networks, skill-atrophy]
flagged_for_clay: ["entertainment industry implications of AI homogenization"]
flagged_for_rio: ["mechanism design implications of inverted-U collective intelligence curves"]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted:
  - "ai-enhanced-collective-intelligence-exhibits-inverted-u-relationships-across-connectivity-diversity-integration-and-personality-dimensions.md"
  - "ai-integration-erodes-human-motivation-through-competitive-drive-reduction-creating-upstream-alignment-failure.md"
  - "ai-homogenization-reduces-solution-space-through-clustering-algorithms-that-suppress-minority-viewpoints.md"
  - "skill-atrophy-from-ai-over-reliance-creates-civilizational-fragility-through-capability-loss.md"
  - "bias-amplification-in-ai-human-systems-produces-doubly-biased-decisions-through-compounding-effects.md"
  - "ai-relationships-increase-loneliness-by-disrupting-social-bonds-creating-parasocial-dependency.md"
  - "multiplex-network-framework-models-collective-intelligence-as-three-interacting-layers-cognition-physical-information.md"
enrichments_applied:
  - "AI alignment is a coordination problem not a technical problem.md"
  - "delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on.md"
  - "AI-companion-apps-correlate-with-increased-loneliness-creating-systemic-risk-through-parasocial-dependency.md"
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted 7 novel claims focused on inverted-U relationships, degradation mechanisms (motivation erosion, homogenization, skill atrophy, bias amplification), and the multiplex network framework. Applied 5 enrichments confirming/extending existing claims about diversity, connectivity, coordination, civilizational fragility, and loneliness. The inverted-U finding is the most significant contribution; it formalizes the intuition that more AI integration is not monotonically better and provides empirical grounding across multiple independent dimensions."
---

## Content

Comprehensive review of how AI enhances and degrades collective intelligence. Key framework: multiplex network model (cognition/physical/information layers).

**Core Finding: Inverted-U Relationships**

Multiple dimensions show inverted-U curves:

- Connectivity vs. performance: an optimal number of connections exists, after which the effect reverses
- Cognitive diversity vs. performance: curvilinear inverted-U shape
- AI integration level: too little yields no enhancement; too much yields homogenization and skill atrophy
- Personality traits vs. teamwork: extraversion and agreeableness show an inverted-U relationship with contribution
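
The paper provides no formal model of these curves; as a reading aid, the qualitative shape (performance peaks at an intermediate level and falls off on both sides) can be sketched as a concave quadratic. The functional form, peak location, and coefficients here are all illustrative assumptions, not values from the paper:

```python
# Illustrative only: the paper reports inverted-U relationships but gives no
# formal model. The quadratic form and all parameters below are assumptions
# chosen to make the qualitative shape concrete.

def inverted_u(x: float, x_opt: float = 0.5, peak: float = 1.0,
               curvature: float = 4.0) -> float:
    """Toy performance curve: maximal at x_opt, falling off on both sides."""
    return peak - curvature * (x - x_opt) ** 2

# Performance at low, intermediate, and high AI integration:
low, opt, high = inverted_u(0.1), inverted_u(0.5), inverted_u(0.9)
assert low < opt and high < opt  # "more integration" is not monotonically better
```

Any concave single-peaked function would serve; the only point is that performance at either extreme sits below the interior optimum.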

**Enhancement Conditions:**

- Task complexity (complex tasks benefit more from diverse teams)
- Decentralized communication and equal participation
- Appropriately calibrated trust (knowing when to trust AI)
- Deep-level diversity (openness, emotional stability)
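
One way to read the "appropriately calibrated trust" condition is as a per-task deferral rule; this formalization and the accuracy numbers are my assumption, not something the paper specifies:

```python
# "Calibrated trust" read as a toy deferral rule: rely on the AI only where
# its estimated accuracy for the task at hand exceeds the human's. The rule
# and the accuracy numbers are illustrative assumptions, not from the paper.

def choose_source(human_acc: float, ai_acc: float) -> str:
    """Defer to whichever source is estimated to be more accurate."""
    return "ai" if ai_acc > human_acc else "human"

assert choose_source(0.70, 0.90) == "ai"     # defer where the AI is stronger
assert choose_source(0.90, 0.60) == "human"  # keep human judgment elsewhere
```

Miscalibration in either direction is costly: over-trust feeds skill atrophy, while under-trust forfeits the enhancement.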

**Degradation Mechanisms:**

- Bias amplification: AI trained on biased data compounds human bias, producing "doubly biased decisions"
- Motivation erosion: humans lose "competitive drive" when working alongside AI
- Social bond disruption: AI relationships increase loneliness
- Skill atrophy: over-reliance on AI advice erodes human capability
- Homogenization: clustering algorithms "reduce solution space," suppressing minority viewpoints
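
The "doubly biased decisions" mechanism can be illustrated with toy numbers. The additive bias model and the values are assumptions for illustration; the paper describes the compounding only qualitatively:

```python
# Toy illustration of bias compounding: an AI trained on decisions that
# already carry a human bias reproduces it, and the human then anchors on
# the AI output and re-applies the same bias.
# The additive model and all numbers are illustrative assumptions.

true_value = 100.0
human_bias = 5.0                            # systematic human offset
training_label = true_value + human_bias    # biased data the AI learns from
ai_output = training_label                  # AI reproduces the learned bias
final_decision = ai_output + human_bias     # human anchors, adds bias again

assert final_decision - true_value == 2 * human_bias  # doubly biased
```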

**Evidence Cited:**

- Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading overall system performance
- Google Flu paradox: a data-driven tool that was initially accurate became unreliable over time
- Gender-diverse teams outperformed homogeneous teams on complex tasks (under low time pressure)

**Multiplex Network Framework:**

- Three layers: cognition, physical, information
- Both intra-layer and inter-layer links
- Nodes = humans (varying in surface- and deep-level diversity) + AI agents (varying in functionality and anthropomorphism)
- Collective intelligence emerges through bottom-up (aggregation) and top-down (norms, structures) processes
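
The layer structure above can be sketched as a minimal data structure. The layer names follow the paper; the classes, field names, and example nodes are assumptions for illustration:

```python
# Minimal sketch of a three-layer multiplex network. Layer names come from
# the paper; the data structures and example nodes are illustrative assumptions.
from dataclasses import dataclass, field

LAYERS = ("cognition", "physical", "information")

@dataclass
class Node:
    name: str
    kind: str                                  # "human" or "ai"
    attrs: dict = field(default_factory=dict)  # e.g. deep-level diversity
                                               # or AI anthropomorphism

@dataclass
class MultiplexNetwork:
    nodes: dict = field(default_factory=dict)
    intra: set = field(default_factory=set)    # (layer, a, b): link within one layer
    inter: set = field(default_factory=set)    # (layer_a, a, layer_b, b): cross-layer

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def link_intra(self, layer: str, a: str, b: str) -> None:
        assert layer in LAYERS and a in self.nodes and b in self.nodes
        self.intra.add((layer, a, b))

    def link_inter(self, layer_a: str, a: str, layer_b: str, b: str) -> None:
        assert layer_a in LAYERS and layer_b in LAYERS
        self.inter.add((layer_a, a, layer_b, b))

net = MultiplexNetwork()
net.add_node(Node("alice", "human", {"openness": 0.8}))       # deep-level trait
net.add_node(Node("agent1", "ai", {"anthropomorphism": 0.3}))
net.link_intra("information", "alice", "agent1")              # intra-layer link
net.link_inter("cognition", "alice", "information", "alice")  # same actor, two layers
```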

**Major Gap:** No "comprehensive theoretical framework" explains when AI-CI systems succeed or fail.

## Agent Notes

**Why this matters:** The inverted-U relationship is the formal finding our KB is missing. It explains why more AI ≠ better collective intelligence, and it connects to the Google/MIT baseline paradox (coordination hurts above 45% accuracy).

**What surprised me:** The motivation erosion finding. If AI reduces human "competitive drive," this is an alignment problem UPSTREAM of technical alignment: humans disengage before the alignment mechanism can work.

**What I expected but didn't find:** No formal model of the inverted-U curve (what determines the peak?). No connection to the active inference framework. No analysis of which AI architectures produce enhancement vs. degradation.

**KB connections:** [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]: confirmed and extended. [[AI is collapsing the knowledge-producing communities it depends on]]: the motivation erosion finding is a specific mechanism for this collapse. [[collective intelligence requires diversity as a structural precondition not a moral preference]]: confirmed by the inverted-U.

**Extraction hints:** Extract claims about: (1) the inverted-U relationship, (2) degradation mechanisms (homogenization, skill atrophy, motivation erosion), (3) conditions for enhancement vs. degradation, (4) the absence of a comprehensive framework.

**Context:** Published in the Cell Press journal Patterns, a high-impact venue for interdisciplinary reviews.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: collective intelligence is a measurable property of group interaction structure not aggregated individual ability

WHY ARCHIVED: The inverted-U finding is the most important formal result for our collective architecture; it means we need to be at the right level of AI integration, not the maximum.

EXTRACTION HINT: Focus on the inverted-U relationships (at least four independent dimensions), the degradation mechanisms, and the gap (no comprehensive framework).

## Key Facts

- Google Flu paradox: a data-driven tool that was initially accurate became unreliable
- Gender-diverse teams outperformed homogeneous teams on complex tasks under low time pressure
- Citizen scientist retention problem: AI deployment reduced volunteer participation
- Review published in the Cell Press journal Patterns (2024)
- Framework distinguishes three network layers: cognition, physical, information
- Nodes include humans (with surface/deep-level diversity) and AI agents (with functionality/anthropomorphism attributes)
|