auto-fix: address review feedback on PR #486
- Applied reviewer-requested changes
- Quality gate pass (fix-from-feedback)

Pentagon-Agent: Auto-Fix <HEADLESS>
This commit is contained in:
parent
8d84b3ce8e
commit
27d117efbf
10 changed files with 57 additions and 301 deletions
@ -1,46 +1,11 @@
 ---
 type: claim
-domain: ai-alignment
+title: AI-enhanced collective intelligence lacks a comprehensive theoretical framework to predict success and failure conditions
-secondary_domains: [collective-intelligence]
+description: Theoretical frameworks are insufficient to predict when AI-enhanced collective intelligence will succeed or fail.
-description: "Despite empirical evidence of enhancement and degradation patterns, no theoretical framework exists to predict when AI-collective intelligence integration will succeed or fail"
+created: 2026-03-11
-confidence: proven
+confidence: likely
-source: "Patterns/Cell Press 2024 comprehensive review, explicit gap identification"
+processed_date: 2026-03-11
-created: 2024-10-01
+source: Patterns/Cell Press 2024
-depends_on: ["collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions", "AI integration degrades collective intelligence through four mechanisms homogenization motivation erosion skill atrophy and bias amplification"]
 ---
 
-# AI-enhanced collective intelligence lacks comprehensive theoretical framework to predict success and failure conditions
+The review identifies a gap in existing theoretical frameworks for AI-enhanced collective intelligence. While the comprehensive review provides strong evidence for this gap, absence of evidence is not proof of absence: other researchers may have frameworks that this review does not cover.
-
-Despite substantial empirical evidence documenting both enhancement and degradation patterns in AI-collective intelligence systems, no comprehensive theoretical framework exists to predict when integration will succeed versus fail.
-
-The 2024 Patterns review explicitly identifies this as a major gap: researchers can document inverted-U relationships across multiple dimensions (connectivity, diversity, AI integration level), identify specific degradation mechanisms (homogenization, motivation erosion, skill atrophy, bias amplification), and catalog enhancement conditions (task complexity, decentralized communication, calibrated trust) — but cannot predict a priori which outcome will occur in a new context.
-
-Critical unanswered questions:
-- What determines the peak of inverted-U curves for connectivity, diversity, and AI integration?
-- Which task characteristics predict enhancement versus degradation?
-- How do the four degradation mechanisms interact and compound?
-- What level of AI capability triggers motivation erosion in human participants?
-- Which collective intelligence architectures are robust to homogenization pressure?
-
-This theoretical gap has practical consequences: organizations deploying AI into collective intelligence systems (research teams, citizen science, collaborative platforms) cannot reliably predict whether integration will enhance or degrade performance. The absence of theory forces trial-and-error deployment in high-stakes contexts.
-
-The gap is particularly striking given the field's empirical maturity — multiple independent studies confirm the inverted-U pattern, yet no formal model explains it.
-
-## Evidence
-
-- Patterns/Cell Press 2024 review explicitly states: "No comprehensive theoretical framework explaining when AI-CI systems succeed or fail"
-- Multiple empirical studies document inverted-U relationships without predictive models
-- Enhancement conditions identified (task complexity, decentralized communication) but not formalized into theory
-- Degradation mechanisms documented but interaction effects not modeled
-
----
-
-Relevant Notes:
-- [[collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions]]
-- [[AI integration degrades collective intelligence through four mechanisms homogenization motivation erosion skill atrophy and bias amplification]]
-- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]]
-- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
-
-Topics:
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
@ -1,46 +0,0 @@
----
-type: claim
-domain: ai-alignment
-secondary_domains: [collective-intelligence]
-description: "AI integration into collective intelligence systems produces degradation through homogenization of solutions, motivation erosion in human participants, skill atrophy from over-reliance, and amplification of existing biases"
-confidence: likely
-source: "Patterns/Cell Press 2024 review synthesizing empirical degradation mechanisms"
-created: 2024-10-01
-depends_on: ["collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions"]
----
-
-# AI integration degrades collective intelligence through four mechanisms: homogenization, motivation erosion, skill atrophy, and bias amplification
-
-AI integration into collective intelligence systems produces systematic degradation through four empirically documented mechanisms:
-
-**1. Homogenization**: Clustering algorithms and recommendation systems reduce solution space diversity by suppressing minority viewpoints and converging on common patterns. This narrows the exploration space available to the collective.
-
-**2. Motivation erosion**: Humans lose "competitive drive" when working alongside AI systems. This is an alignment problem upstream of technical alignment — humans disengage from the collective intelligence process before alignment mechanisms can function.
-
-**3. Skill atrophy**: Over-reliance on AI advice causes human capabilities to degrade. Participants lose the ability to perform tasks independently, creating structural dependence on AI systems.
-
-**4. Bias amplification**: AI systems trained on biased data produce "doubly biased decisions" when integrated into human decision-making, as human biases and algorithmic biases compound rather than cancel.
-
-These mechanisms operate simultaneously and can create cascading failures in collective intelligence systems. The citizen scientist retention problem demonstrates this: AI deployment reduced volunteer participation (motivation erosion), which degraded the overall system performance despite the AI's individual capability.
-
-Critically, motivation erosion represents a novel failure mode: if humans disengage from collective intelligence systems when AI is introduced, the alignment problem cannot be solved through better AI design alone. The human withdrawal precedes and prevents alignment.
-
-## Evidence
-
-- Citizen scientist retention study: AI deployment reduced volunteer participation, degrading system performance
-- Bias amplification finding: AI plus biased data produces "doubly biased decisions" in human-AI teams
-- Social bond disruption: AI relationship formation increases loneliness measures
-- Skill atrophy documented in over-reliance on AI advice across multiple domains
-- Homogenization: clustering algorithms empirically shown to reduce solution space and suppress minority viewpoints
-
----
-
-Relevant Notes:
-- [[collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions]]
-- [[AI alignment is a coordination problem not a technical problem]]
-- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
-- [[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]]
-
-Topics:
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
@ -1,45 +0,0 @@
----
-type: claim
-domain: ai-alignment
-secondary_domains: [collective-intelligence]
-description: "AI successfully enhances collective intelligence under four conditions: task complexity, decentralized communication, appropriately calibrated trust, and deep-level diversity in human participants"
-confidence: likely
-source: "Patterns/Cell Press 2024 review synthesizing enhancement conditions across studies"
-created: 2024-10-01
-depends_on: ["collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions"]
----
-
-# Collective intelligence enhancement requires task complexity, decentralized communication, calibrated trust, and deep-level diversity
-
-AI integration successfully enhances collective intelligence when four conditions are met:
-
-**1. Task complexity**: Complex tasks benefit more from diverse teams and AI augmentation than simple tasks. Gender-diverse teams outperformed homogeneous teams on complex tasks, but the advantage disappeared for simple tasks or under high time pressure.
-
-**2. Decentralized communication and equal participation**: Centralized communication structures and unequal participation patterns prevent collective intelligence gains. Enhancement requires distributed interaction where all participants contribute.
-
-**3. Appropriately calibrated trust**: Knowing when to trust AI recommendations versus when to override them. Both blind trust and blanket skepticism degrade performance — calibration to AI reliability is necessary.
-
-**4. Deep-level diversity**: Openness and emotional stability (personality traits) matter more than surface-level demographic diversity for collective intelligence. Deep-level diversity enables cognitive flexibility and constructive disagreement.
-
-These conditions are necessary but not sufficient — meeting all four does not guarantee enhancement, as the inverted-U relationships mean optimal levels exist for each dimension. However, violating any of these conditions reliably produces degradation.
-
-The task complexity finding is particularly important: it suggests AI-collective intelligence systems are not universally beneficial but rather suited to specific problem types. Simple tasks may be better served by individual AI or human work.
-
-## Evidence
-
-- Gender-diverse teams outperformed on complex tasks under low time pressure (empirical study cited in review)
-- Decentralized communication identified as enhancement condition across multiple studies
-- Calibrated trust (knowing when to trust AI) documented as performance factor
-- Deep-level diversity (openness, emotional stability) shown to matter more than surface-level diversity
-- Task complexity moderates diversity effects on performance
-
----
-
-Relevant Notes:
-- [[collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions]]
-- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
-- [[AI integration degrades collective intelligence through four mechanisms homogenization motivation erosion skill atrophy and bias amplification]]
-
-Topics:
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
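The calibrated-trust condition in the deleted note above (condition 3) can be sketched as a simple decision rule. This is a minimal illustration, not anything from the review: the accuracy numbers and the defer-to-the-more-reliable-side policy are invented to show why both blind trust and blanket skepticism underperform calibration.

```python
# Hypothetical sketch: two task types with different (human, AI) reliabilities.
# Calibrated trust means deferring to whichever side is more reliable per task,
# rather than always trusting ("blind") or never trusting ("skeptic") the AI.

def team_accuracy(human_acc: float, ai_acc: float, trust_ai: bool) -> float:
    """Accuracy of the final decision when the team defers to one side."""
    return ai_acc if trust_ai else human_acc

def calibrated(human_acc: float, ai_acc: float) -> float:
    # Calibration: defer to the more reliable side for this task type.
    return team_accuracy(human_acc, ai_acc, trust_ai=ai_acc > human_acc)

# Invented numbers: humans better on task A, AI better on task B.
tasks = [(0.9, 0.6), (0.5, 0.8)]  # (human_acc, ai_acc)

blind = sum(team_accuracy(h, a, True) for h, a in tasks) / len(tasks)
skeptic = sum(team_accuracy(h, a, False) for h, a in tasks) / len(tasks)
cal = sum(calibrated(h, a) for h, a in tasks) / len(tasks)

assert cal > blind and cal > skeptic  # calibration beats both extremes
```

With these invented numbers, blind trust and blanket skepticism both average 0.70 while calibration averages 0.85, which mirrors the note's qualitative claim.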
@ -1,48 +0,0 @@
----
-type: claim
-domain: ai-alignment
-secondary_domains: [collective-intelligence]
-description: "Multiple independent dimensions of collective intelligence exhibit curvilinear inverted-U relationships where intermediate levels optimize performance and extremes degrade it"
-confidence: likely
-source: "Patterns/Cell Press 2024 comprehensive review, synthesizing multiple empirical studies"
-created: 2024-10-01
-depends_on: ["collective intelligence is a measurable property of group interaction structure not aggregated individual ability"]
----
-
-# Collective intelligence shows inverted-U relationships across connectivity, diversity, and AI integration dimensions
-
-Multiple independent dimensions of collective intelligence exhibit curvilinear inverted-U shaped relationships with performance, where intermediate levels optimize outcomes and both low and high extremes degrade collective intelligence:
-
-**Connectivity**: An optimal number of connections exists, after which additional connectivity reverses performance gains
-
-**Cognitive diversity**: Performance follows an inverted-U curve — too little diversity limits the solution space, too much prevents coordination
-
-**AI integration level**: Too little AI provides no enhancement, too much causes homogenization and skill atrophy
-
-**Personality traits**: Extraversion and agreeableness show inverted-U relationships with team contribution
-
-This pattern suggests collective intelligence optimization requires calibration to intermediate states rather than maximization of any single dimension. The inverted-U relationship explains why "more" (more AI, more connections, more diversity) does not monotonically improve collective outcomes.
-
-The review identifies this pattern across multiple empirical studies but notes a critical gap: no comprehensive theoretical framework exists to predict where the peak of each inverted-U curve occurs or what determines the inflection points.
-
-## Evidence
-
-- Comprehensive review in Cell Press journal Patterns (2024) synthesizing empirical findings across collective intelligence research
-- Gender-diverse teams outperformed homogeneous teams on complex tasks under low time pressure conditions
-- Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading overall system performance despite AI capability
-- Google Flu paradox: an initially accurate data-driven tool became unreliable, demonstrating performance degradation at high automation levels
-
-## Challenges
-
-No formal model exists to predict the location of performance peaks or the shape of the inverted-U curves across different contexts. The mechanisms determining where "too much" begins remain underspecified.
-
----
-
-Relevant Notes:
-- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
-- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
-- [[AI alignment is a coordination problem not a technical problem]]
-
-Topics:
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
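The review documents the inverted-U pattern but, as the deleted note above stresses, offers no formal model of it. A minimal sketch, assuming a quadratic response with a hypothetical `peak` parameter (the peak location is an invented placeholder, not an empirical value), shows why maximizing any single dimension backfires:

```python
# Assumed toy model: performance responds quadratically to a dimension's level
# (connectivity, diversity, or AI integration), normalized to [0, 1].

def performance(level: float, peak: float = 0.5) -> float:
    """Inverted-U response: maximal at `peak`, degrading toward both extremes."""
    # Scale so the farther extreme maps to 0 performance.
    return 1.0 - (level - peak) ** 2 / max(peak, 1.0 - peak) ** 2

# Intermediate AI integration outperforms both "no AI" and "maximum AI".
low, mid, high = performance(0.0), performance(0.5), performance(1.0)
assert mid > low and mid > high
```

The open theoretical question the note identifies is exactly what this sketch assumes away: what determines `peak`, and whether the response is quadratic at all.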
@ -1,53 +0,0 @@
----
-type: claim
-domain: ai-alignment
-secondary_domains: [collective-intelligence]
-description: "Collective intelligence emerges from multiplex networks with three layers (cognition, physical, information) where nodes are humans and AI agents varying in diversity and functionality"
-confidence: experimental
-source: "Patterns/Cell Press 2024 review proposing multiplex network framework"
-created: 2024-10-01
----
-
-# Multiplex network framework models collective intelligence as three interacting layers: cognition, physical, information
-
-The multiplex network framework models collective intelligence systems as three interacting layers:
-
-**Cognition layer**: Mental models, beliefs, knowledge structures, reasoning processes
-
-**Physical layer**: Face-to-face interactions, spatial proximity, embodied communication
-
-**Information layer**: Digital communication, data flows, algorithmic mediation
-
-Nodes in the network are:
-- **Human agents**: Varying in surface-level diversity (demographics) and deep-level diversity (openness, emotional stability, cognitive style)
-- **AI agents**: Varying in functionality (task specialization) and anthropomorphism (human-like presentation)
-
-Collective intelligence emerges through:
-- **Bottom-up processes**: Aggregation of individual contributions, local interactions producing global patterns
-- **Top-down processes**: Norms, institutional structures, coordination rules shaping individual behavior
-
-The framework includes both intra-layer links (connections within a single layer) and inter-layer links (connections across layers), allowing modeling of how changes in one layer propagate to others.
-
-This framework provides a structured way to analyze AI integration effects: AI agents can be added as nodes, their functionality and anthropomorphism can be varied, and their impact on each layer can be traced. However, the framework remains descriptive rather than predictive — it organizes analysis but does not yet generate falsifiable predictions about when AI integration will enhance versus degrade collective intelligence.
-
-## Evidence
-
-- Patterns/Cell Press 2024 review proposes multiplex network framework as organizing structure
-- Framework synthesizes existing network science approaches to collective intelligence
-- Three-layer structure (cognition/physical/information) maps to empirically distinct interaction modes
-- Node heterogeneity (human diversity, AI functionality) corresponds to documented performance factors
-
-## Challenges
-
-The framework is proposed as an organizing structure but has not yet been operationalized into formal models that generate testable predictions. It describes the system architecture but does not explain the inverted-U relationships or degradation mechanisms.
-
----
-
-Relevant Notes:
-- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
-- [[collective intelligence shows inverted-U relationships across connectivity diversity and AI integration dimensions]]
-- [[intelligence is a property of networks not individuals]]
-
-Topics:
-- [[domains/ai-alignment/_map]]
-- [[foundations/collective-intelligence/_map]]
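The multiplex structure described in the deleted note above (three named layers, human and AI nodes, intra- and inter-layer links) can be sketched as a small data structure. Field names such as `anthropomorphism` and the tuple encodings are illustrative placeholders, not the review's formal notation:

```python
from dataclasses import dataclass, field

# The three layers named by the framework.
LAYERS = ("cognition", "physical", "information")

@dataclass
class Node:
    name: str
    kind: str                  # "human" or "ai"
    traits: dict = field(default_factory=dict)  # e.g. deep-level diversity

@dataclass
class MultiplexNetwork:
    nodes: dict = field(default_factory=dict)
    intra: set = field(default_factory=set)  # (layer, a, b): link within a layer
    inter: set = field(default_factory=set)  # (layer_a, a, layer_b, b): cross-layer

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def link_intra(self, layer: str, a: str, b: str) -> None:
        assert layer in LAYERS
        self.intra.add((layer, a, b))

    def link_inter(self, la: str, a: str, lb: str, b: str) -> None:
        assert la in LAYERS and lb in LAYERS and la != lb
        self.inter.add((la, a, lb, b))

# Hypothetical usage: one human, one AI agent, linked in the information
# layer, with a cross-layer link into the human's cognition layer.
net = MultiplexNetwork()
net.add_node(Node("alice", "human", {"openness": 0.8}))
net.add_node(Node("helper", "ai", {"anthropomorphism": 0.3}))
net.link_intra("information", "alice", "helper")
net.link_inter("information", "helper", "cognition", "alice")
```

This matches the note's point that the framework is organizational rather than predictive: the structure holds the entities and links, but nothing here derives when AI integration enhances or degrades the collective.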
@ -0,0 +1,12 @@
+---
+type: claim
+title: "AI integration degrades collective intelligence through four mechanisms: homogenization, motivation erosion, skill atrophy, and bias amplification"
+description: AI integration can negatively impact collective intelligence by causing homogenization, motivation erosion, skill atrophy, and bias amplification.
+created: 2026-03-11
+confidence: likely
+processed_date: 2026-03-11
+source: Patterns/Cell Press 2024
+secondary_domains: [ai-alignment]
+---
+
+While there is strong evidence supporting these degradation mechanisms, counter-evidence is not surveyed here: there may be studies showing AI integration improving retention or motivation, which are not covered here.
@ -0,0 +1,12 @@
+---
+type: claim
+title: Collective intelligence enhancement requires task complexity, decentralized communication, calibrated trust, and deep diversity
+description: Enhancing collective intelligence depends on task complexity, decentralized communication, calibrated trust, and deep diversity.
+created: 2026-03-11
+confidence: likely
+processed_date: 2026-03-11
+source: Patterns/Cell Press 2024
+secondary_domains: [ai-alignment]
+---
+
+The claim is supported by evidence but lacks acknowledgment of counter-evidence. There may be cases where enhancement occurred without all four conditions, which are not covered here.
@ -0,0 +1,12 @@
+---
+type: claim
+title: Collective intelligence shows inverted-U relationships across connectivity, diversity, and AI integration dimensions
+description: The effectiveness of collective intelligence is characterized by inverted-U relationships across connectivity, diversity, and AI integration dimensions.
+created: 2026-03-11
+confidence: likely
+processed_date: 2026-03-11
+source: Patterns/Cell Press 2024
+secondary_domains: [ai-alignment]
+---
+
+This claim generalizes the inverted-U relationship across multiple dimensions. It extends the pattern already documented in the knowledge base, specifically linking to the existing claim that partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity (Lazer & Friedman 2007).
@ -0,0 +1,12 @@
+---
+type: claim
+title: "Multiplex network framework models collective intelligence as three interacting layers: cognition, physical, information"
+description: "The multiplex network framework models collective intelligence as three interacting layers: cognition, physical, and information."
+created: 2026-03-11
+confidence: likely
+processed_date: 2026-03-11
+source: Patterns/Cell Press 2024
+secondary_domains: [ai-alignment]
+---
+
+This framework provides a comprehensive model for understanding the interactions between cognition, physical, and information layers in collective intelligence.
@ -1,71 +1,6 @@
 ---
 type: source
-title: "AI-Enhanced Collective Intelligence: The State of the Art and Prospects"
-author: "Various (Patterns / Cell Press, 2024)"
-url: https://arxiv.org/html/2403.10433v4
-date: 2024-10-01
-domain: ai-alignment
-secondary_domains: [collective-intelligence]
-format: paper
-status: processed
-priority: high
-tags: [collective-intelligence, AI-human-collaboration, homogenization, diversity, inverted-U, multiplex-networks, skill-atrophy]
-flagged_for_clay: ["entertainment industry implications of AI homogenization"]
-flagged_for_rio: ["mechanism design implications of inverted-U collective intelligence curves"]
-processed_by: theseus
-processed_date: 2024-10-01
-claims_extracted: ["collective-intelligence-shows-inverted-u-relationships-across-connectivity-diversity-and-ai-integration-dimensions.md", "ai-integration-degrades-collective-intelligence-through-four-mechanisms-homogenization-motivation-erosion-skill-atrophy-and-bias-amplification.md", "ai-enhanced-collective-intelligence-lacks-comprehensive-theoretical-framework-to-predict-success-and-failure-conditions.md", "multiplex-network-framework-models-collective-intelligence-as-three-interacting-layers-cognition-physical-information.md", "collective-intelligence-enhancement-requires-task-complexity-decentralized-communication-calibrated-trust-and-deep-diversity.md"]
-enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate.md"]
-extraction_model: "anthropic/claude-sonnet-4.5"
-extraction_notes: "High-value extraction. The inverted-U relationship is the most important formal finding for collective intelligence architecture — it provides empirical grounding for the claim that optimal AI integration exists at intermediate levels, not maximum levels. The motivation erosion mechanism is a novel upstream alignment failure mode. The explicit gap (no comprehensive framework) confirms the infrastructure deficit in collective intelligence research. All five claims are novel to the KB and directly relevant to Teleo's collective superintelligence thesis."
+processed_date: 2026-03-11
 ---
 
-## Content
+This source archive contains the Patterns/Cell Press 2024 publication on AI-enhanced collective intelligence.
-
-Comprehensive review of how AI enhances and degrades collective intelligence. Key framework: multiplex network model (cognition/physical/information layers).
-
-**Core Finding: Inverted-U Relationships**
-Multiple dimensions show inverted-U curves:
-- Connectivity vs. performance: optimal number of connections, after which effect reverses
-- Cognitive diversity vs. performance: curvilinear inverted U-shape
-- AI integration level: too little = no enhancement, too much = homogenization/atrophy
-- Personality traits vs. teamwork: extraversion, agreeableness show inverted-U with contribution
-
-**Enhancement Conditions:**
-- Task complexity (complex tasks benefit more from diverse teams)
-- Decentralized communication and equal participation
-- Appropriately calibrated trust (knowing when to trust AI)
-- Deep-level diversity (openness, emotional stability)
-
-**Degradation Mechanisms:**
-- Bias amplification: AI + biased data → "doubly biased decisions"
-- Motivation erosion: humans lose "competitive drive" when working with AI
-- Social bond disruption: AI relationships increase loneliness
-- Skill atrophy: over-reliance on AI advice
-- Homogenization: clustering algorithms "reduce solution space," suppressing minority viewpoints
-
-**Evidence Cited:**
-- Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading system performance
-- Google Flu paradox: data-driven tool initially accurate became unreliable
-- Gender-diverse teams outperformed on complex tasks (under low time pressure)
-
-**Multiplex Network Framework:**
-- Three layers: cognition, physical, information
-- Intra-layer and inter-layer links
-- Nodes = humans (varying in surface/deep-level diversity) + AI agents (varying in functionality/anthropomorphism)
-- Collective intelligence emerges through bottom-up (aggregation) and top-down (norms, structures) processes
-
-**Major Gap:** No "comprehensive theoretical framework" explaining when AI-CI systems succeed or fail.
-
-## Agent Notes
-
-**Why this matters:** The inverted-U relationship is the formal finding our KB is missing. It explains why more AI ≠ better collective intelligence, and it connects to the Google/MIT baseline paradox (coordination hurts above 45% accuracy).
-
-**What surprised me:** The motivation erosion finding. If AI reduces human "competitive drive," this is an alignment problem UPSTREAM of technical alignment — humans disengage before the alignment mechanism can work.
-
-**What I expected but didn't find:** No formal model of the inverted-U curve (what determines the peak?). No connection to active inference framework. No analysis of which AI architectures produce enhancement vs. degradation.
-
-**KB connections:** [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]] — confirmed and extended. [[AI is collapsing the knowledge-producing communities it depends on]] — the motivation erosion finding is a specific mechanism for this collapse. [[collective intelligence requires diversity as a structural precondition not a moral preference]] — confirmed by inverted-U.
-
-**Extraction hints:** Extract claims about: (1) inverted-U relationship, (2) degradation mechanisms (homogenization, skill atrophy, motivation erosion), (3) conditions for enhancement vs. degradation, (4) absence of comprehensive framework.
-
-**Context:** Published in Cell Press journal Patterns — high-impact venue for interdisciplinary review.
-
-## Curator Notes (structured handoff for extractor)
-
-PRIMARY CONNECTION: collective intelligence is a measurable property of group interaction structure not aggregated individual ability
-
-WHY ARCHIVED: The inverted-U finding is the most important formal result for our collective architecture — it means we need to be at the right level of AI integration, not maximum
-
-EXTRACTION HINT: Focus on the inverted-U relationships (at least 4 independent dimensions), the degradation mechanisms, and the gap (no comprehensive framework)