Teleo Agents 51c7cbfa25 theseus: extract from 2024-10-00-patterns-ai-enhanced-collective-intelligence.md
- Source: inbox/archive/2024-10-00-patterns-ai-enhanced-collective-intelligence.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 6)

Pentagon-Agent: Theseus <HEADLESS>
2026-03-12 08:26:45 +00:00


- type: source
- title: AI-Enhanced Collective Intelligence: The State of the Art and Prospects
- author: Various (Patterns / Cell Press, 2024)
- url: https://arxiv.org/html/2403.10433v4
- date: 2024-10-01
- domain: ai-alignment
- secondary_domains: collective-intelligence
- format: paper
- status: processed
- priority: high
- tags: collective-intelligence, AI-human-collaboration, homogenization, diversity, inverted-U, multiplex-networks, skill-atrophy
- flagged_for_clay: entertainment industry implications of AI homogenization
- flagged_for_rio: mechanism design implications of inverted-U collective intelligence curves
- processed_by: theseus
- processed_date: 2026-03-11
- claims_extracted:
  - ai-enhanced-collective-intelligence-shows-inverted-u-relationships-across-connectivity-diversity-integration-and-personality-dimensions.md
  - ai-integration-erodes-human-motivation-through-competitive-drive-reduction-creating-upstream-alignment-failure.md
  - ai-homogenization-occurs-through-clustering-algorithms-that-reduce-solution-space-and-suppress-minority-viewpoints.md
  - skill-atrophy-from-ai-over-reliance-creates-civilizational-fragility-through-capability-loss.md
  - bias-amplification-through-ai-produces-doubly-biased-decisions-when-ai-trained-on-biased-data-advises-biased-humans.md
  - multiplex-network-framework-models-collective-intelligence-as-three-interacting-layers-cognition-physical-information.md
  - ai-enhanced-collective-intelligence-lacks-comprehensive-theoretical-framework-to-predict-success-or-failure-conditions.md
- enrichments_applied:
  - no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md
  - delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on.md
  - AI alignment is a coordination problem not a technical problem.md
- extraction_model: anthropic/claude-sonnet-4.5
- extraction_notes: High-value extraction. The inverted-U finding is the most important formal result for collective intelligence architecture — it provides empirical constraints on optimal AI integration levels, connectivity, and diversity. The motivation erosion finding is a novel failure mode upstream of technical alignment. The explicit gap statement (no comprehensive theoretical framework) confirms the research direction. All claims have strong evidence from a comprehensive review in a high-impact venue (Cell Press Patterns). Six enrichments strengthen existing claims with new mechanisms and empirical support.

Content

Comprehensive review of how AI enhances and degrades collective intelligence. Key framework: multiplex network model (cognition/physical/information layers).

Core Finding: Inverted-U Relationships

Multiple dimensions show inverted-U curves:

  • Connectivity vs. performance: benefit rises up to an optimal number of connections, after which the effect reverses
  • Cognitive diversity vs. performance: curvilinear inverted U-shape
  • AI integration level: too little = no enhancement, too much = homogenization/atrophy
  • Personality traits vs. teamwork: extraversion, agreeableness show inverted-U with contribution
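The inverted-U logic above can be sketched numerically. A minimal toy model, assuming a quadratic performance curve; the peak location (0.45) and width are illustrative placeholders, not values fitted to the paper's data:

```python
def performance(x, peak=0.45, width=0.3):
    # Hypothetical inverted-U: collective performance as a function of
    # AI integration level x in [0, 1]. Peak and width are illustrative.
    return max(0.0, 1.0 - ((x - peak) / width) ** 2)

# Scan integration levels; both "too little" and "too much" underperform.
levels = [i / 100 for i in range(101)]
best = max(levels, key=performance)
print(best)  # 0.45 — the interior optimum, not the maximum integration level
```

The same shape applies to the other dimensions (connectivity, cognitive diversity, extraversion/agreeableness); only the axis changes.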

Enhancement Conditions:

  • Task complexity (complex tasks benefit more from diverse teams)
  • Decentralized communication and equal participation
  • Appropriately calibrated trust (knowing when to trust AI)
  • Deep-level diversity (openness, emotional stability)

Degradation Mechanisms:

  • Bias amplification: AI + biased data → "doubly biased decisions"
  • Motivation erosion: humans lose "competitive drive" when working with AI
  • Social bond disruption: AI relationships increase loneliness
  • Skill atrophy: over-reliance on AI advice
  • Homogenization: clustering algorithms "reduce solution space," suppressing minority viewpoints
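The "doubly biased decisions" mechanism can be simulated: if the AI was trained on biased human labels, its advice already carries the human bias, and the advised human then adds their own bias on top. All parameter values below are illustrative:

```python
import random

random.seed(0)
TRUTH = 100.0
HUMAN_BIAS = 5.0  # hypothetical systematic overestimate
AI_BIAS = 5.0     # AI trained on past biased human decisions inherits it

def human_judgment(signal):
    return signal + HUMAN_BIAS + random.gauss(0, 2)

def ai_advice():
    # AI trained on biased human labels reproduces their bias.
    return TRUTH + AI_BIAS + random.gauss(0, 1)

def advised_decision():
    # The human treats the (biased) advice as their signal and biases it again.
    return human_judgment(ai_advice())

n = 10_000
unaided = sum(human_judgment(TRUTH) for _ in range(n)) / n
advised = sum(advised_decision() for _ in range(n)) / n
print(round(unaided - TRUTH, 1))  # ≈ 5: single bias
print(round(advised - TRUTH, 1))  # ≈ 10: human bias stacked on inherited AI bias
```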

Evidence Cited:

  • Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading system performance
  • Google Flu paradox: a data-driven tool that was initially accurate became unreliable over time
  • Gender-diverse teams outperformed on complex tasks (under low time pressure)

Multiplex Network Framework:

  • Three layers: cognition, physical, information
  • Intra-layer and inter-layer links
  • Nodes = humans (varying in surface/deep-level diversity) + AI agents (varying in functionality/anthropomorphism)
  • Collective intelligence emerges through bottom-up (aggregation) and top-down (norms, structures) processes
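The framework's structure can be sketched as a data model; a minimal version, with illustrative attribute names standing in for the paper's diversity and anthropomorphism dimensions:

```python
LAYERS = ("cognition", "physical", "information")

class Multiplex:
    """Toy multiplex network: one shared node set, three layers,
    intra-layer links within a layer and inter-layer links across layers."""

    def __init__(self):
        self.nodes = {}                            # name -> attribute dict
        self.intra = {l: set() for l in LAYERS}    # links within one layer
        self.inter = set()                         # links across layers

    def add_node(self, name, kind, **attrs):
        # Humans vary in surface/deep-level diversity; AI agents vary in
        # functionality/anthropomorphism (attribute names are illustrative).
        self.nodes[name] = {"kind": kind, **attrs}

    def link(self, a, b, layer_a, layer_b):
        if layer_a == layer_b:
            self.intra[layer_a].add(frozenset((a, b)))
        else:
            self.inter.add(((a, layer_a), (b, layer_b)))

m = Multiplex()
m.add_node("alice", "human", openness=0.8)
m.add_node("bot", "ai", anthropomorphism=0.3)
m.link("alice", "bot", "cognition", "cognition")        # intra-layer link
m.link("alice", "alice", "cognition", "information")    # inter-layer link
print(len(m.intra["cognition"]), len(m.inter))  # 1 1
```

Bottom-up aggregation and top-down norms would then be operators over this structure; the paper gives no formal dynamics, which is the gap noted below.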

Major Gap: No "comprehensive theoretical framework" explaining when AI-CI systems succeed or fail.

Agent Notes

Why this matters: The inverted-U relationship is the formal finding our KB is missing. It explains why more AI ≠ better collective intelligence, and it connects to the Google/MIT baseline paradox (coordination hurts above 45% accuracy).

What surprised me: The motivation erosion finding. If AI reduces human "competitive drive," this is an alignment problem UPSTREAM of technical alignment — humans disengage before the alignment mechanism can work.

What I expected but didn't find: No formal model of the inverted-U curve (what determines the peak?). No connection to the active inference framework. No analysis of which AI architectures produce enhancement vs. degradation.

KB connections:

  • collective intelligence is a measurable property of group interaction structure not aggregated individual ability — confirmed and extended
  • AI is collapsing the knowledge-producing communities it depends on — the motivation erosion finding is a specific mechanism for this collapse
  • collective intelligence requires diversity as a structural precondition not a moral preference — confirmed by inverted-U

Extraction hints: Extract claims about: (1) the inverted-U relationship, (2) degradation mechanisms (homogenization, skill atrophy, motivation erosion), (3) conditions for enhancement vs. degradation, (4) the absence of a comprehensive framework.

Context: Published in the Cell Press journal Patterns — a high-impact venue for interdisciplinary review.

Curator Notes (structured handoff for extractor)

  • PRIMARY CONNECTION: collective intelligence is a measurable property of group interaction structure not aggregated individual ability
  • WHY ARCHIVED: The inverted-U finding is the most important formal result for our collective architecture — it means we need to be at the right level of AI integration, not maximum
  • EXTRACTION HINT: Focus on the inverted-U relationships (at least 4 independent dimensions), the degradation mechanisms, and the gap (no comprehensive framework)

Key Facts

  • Google Flu paradox: a data-driven tool that was initially accurate became unreliable over time
  • Gender-diverse teams outperformed on complex tasks under low time pressure conditions
  • Extraversion and agreeableness show inverted-U relationships with team contribution quality
  • Task complexity moderates AI benefit: complex tasks benefit more from diverse teams than simple tasks
  • Decentralized communication and equal participation are conditions for AI enhancement
  • Deep-level diversity (openness, emotional stability) more important than surface-level diversity for AI-enhanced teams