| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags | flagged_for_clay | flagged_for_rio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | AI-Enhanced Collective Intelligence: The State of the Art and Prospects | Various (Patterns / Cell Press, 2024) | https://arxiv.org/html/2403.10433v4 | 2024-10-01 | ai-alignment | | paper | unprocessed | high | | | |
Content
Comprehensive review of how AI enhances and degrades collective intelligence. Key framework: multiplex network model (cognition/physical/information layers).
Core Finding: Inverted-U Relationships
Multiple dimensions show inverted-U curves:
- Connectivity vs. performance: performance improves up to an optimal number of connections, then declines
- Cognitive diversity vs. performance: curvilinear inverted U-shape
- AI integration level: too little = no enhancement, too much = homogenization/atrophy
- Personality traits vs. teamwork: extraversion, agreeableness show inverted-U with contribution
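The paper reports these inverted-U relationships empirically but gives no formal model (a gap it acknowledges). As a purely illustrative sketch, one can assume a simple quadratic form, performance(x) = a·x − b·x², where x is, e.g., AI integration level; the function names and parameters here are mine, not the authors':

```python
# Illustrative only: the paper provides no formal model of the inverted-U.
# This assumes a quadratic performance curve as the simplest shape that
# rises, peaks, and then declines.

def performance(x: float, a: float = 2.0, b: float = 2.0) -> float:
    """Hypothetical inverted-U: performance(x) = a*x - b*x**2."""
    return a * x - b * x ** 2

def optimal_level(a: float = 2.0, b: float = 2.0) -> float:
    """Peak of the quadratic: x* = a / (2b)."""
    return a / (2 * b)

# With the default parameters the curve peaks at x = 0.5: "too little"
# integration (x near 0) and "too much" (x near 1) both score the same.
levels = [i / 10 for i in range(11)]
scores = [performance(x) for x in levels]
assert scores.index(max(scores)) == levels.index(optimal_level())
```

The open question the paper flags, what determines the location of the peak, corresponds here to the unknown ratio a/(2b).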
Enhancement Conditions:
- Task complexity (complex tasks benefit more from diverse teams)
- Decentralized communication and equal participation
- Appropriately calibrated trust (knowing when to trust AI)
- Deep-level diversity (openness, emotional stability)
Degradation Mechanisms:
- Bias amplification: AI + biased data → "doubly biased decisions"
- Motivation erosion: humans lose "competitive drive" when working with AI
- Social bond disruption: AI relationships increase loneliness
- Skill atrophy: over-reliance on AI advice
- Homogenization: clustering algorithms "reduce solution space," suppressing minority viewpoints
Evidence Cited:
- Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading system performance
- Google Flu paradox: the data-driven Google Flu Trends tool, initially accurate, became unreliable over time
- Gender-diverse teams outperformed on complex tasks (under low time pressure)
Multiplex Network Framework:
- Three layers: cognition, physical, information
- Intra-layer and inter-layer links
- Nodes = humans (varying in surface/deep-level diversity) + AI agents (varying in functionality/anthropomorphism)
- Collective intelligence emerges through bottom-up (aggregation) and top-down (norms, structures) processes
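The multiplex structure above can be made concrete as a small data-structure sketch. This is my rendering of the paper's framing, not code from the paper; class and field names are hypothetical:

```python
# Sketch of the multiplex network framework: three layers, human and AI
# nodes, edges that are intra-layer (same layer) or inter-layer.
from dataclasses import dataclass, field

LAYERS = ("cognition", "physical", "information")

@dataclass
class Node:
    node_id: str
    kind: str                                   # "human" or "ai"
    attrs: dict = field(default_factory=dict)   # e.g. deep-level diversity,
                                                # anthropomorphism

@dataclass
class MultiplexNetwork:
    nodes: dict = field(default_factory=dict)
    # Edges keyed by (layer_a, layer_b); intra-layer when layer_a == layer_b.
    edges: dict = field(default_factory=dict)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, a: str, b: str, layer_a: str, layer_b: str) -> None:
        assert layer_a in LAYERS and layer_b in LAYERS
        self.edges.setdefault((layer_a, layer_b), set()).add((a, b))

net = MultiplexNetwork()
net.add_node(Node("alice", "human", {"openness": "high"}))
net.add_node(Node("bot1", "ai", {"anthropomorphism": "low"}))
net.link("alice", "bot1", "information", "information")  # intra-layer link
net.link("alice", "alice", "cognition", "information")   # inter-layer link
```

Bottom-up aggregation would then operate over intra-layer edges, while top-down norms and structures constrain the inter-layer couplings.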
Major Gap: No "comprehensive theoretical framework" explaining when AI-CI systems succeed or fail.
Agent Notes
Why this matters: The inverted-U relationship is the formal finding our KB is missing. It explains why more AI ≠ better collective intelligence, and it connects to the Google/MIT baseline paradox (coordination hurts above 45% accuracy).
What surprised me: The motivation erosion finding. If AI reduces human "competitive drive," this is an alignment problem UPSTREAM of technical alignment — humans disengage before the alignment mechanism can work.
What I expected but didn't find: No formal model of the inverted-U curve (what determines the peak?). No connection to active inference framework. No analysis of which AI architectures produce enhancement vs. degradation.
KB connections: collective intelligence is a measurable property of group interaction structure not aggregated individual ability — confirmed and extended. AI is collapsing the knowledge-producing communities it depends on — the motivation erosion finding is a specific mechanism for this collapse. collective intelligence requires diversity as a structural precondition not a moral preference — confirmed by inverted-U.
Extraction hints: Extract claims about: (1) inverted-U relationship, (2) degradation mechanisms (homogenization, skill atrophy, motivation erosion), (3) conditions for enhancement vs. degradation, (4) absence of comprehensive framework.
Context: Published in Cell Press journal Patterns — high-impact venue for interdisciplinary review.
Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: collective intelligence is a measurable property of group interaction structure not aggregated individual ability
WHY ARCHIVED: The inverted-U finding is the most important formal result for our collective architecture — it means we need to be at the right level of AI integration, not maximum
EXTRACTION HINT: Focus on the inverted-U relationships (at least 4 independent dimensions), the degradation mechanisms, and the gap (no comprehensive framework)