---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "No existing framework predicts when AI-human collaboration will enhance versus degrade collective intelligence across contexts"
confidence: proven
source: "Patterns/Cell Press 2024 comprehensive review, explicit statement of field gap"
created: 2026-03-11
---

# AI-enhanced collective intelligence lacks comprehensive theoretical framework to predict success or failure conditions
Despite extensive empirical research on AI-human collaboration, no comprehensive theoretical framework exists to predict when AI integration will enhance versus degrade collective intelligence. The field has identified multiple mechanisms (inverted-U relationships, homogenization, skill atrophy, motivation erosion) but cannot predict:
- Where the peak of inverted-U curves occurs for a given context
- What determines the shape of performance curves across different dimensions
- Which degradation mechanisms will dominate in specific system designs
- How to optimize across multiple competing dimensions simultaneously
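A toy model makes the first gap concrete. In the sketch below (hypothetical throughout: the quadratic form, the `a_star` peak location, and the `width` parameter are illustrative assumptions, not anything the Patterns review proposes), two contexts share the same inverted-U shape but peak at different assistance levels, so knowing the shape alone tells a designer nothing about where to operate:

```python
# Toy illustration (hypothetical parameters): collective performance as an
# inverted-U function of AI assistance level. The field can observe the
# shape, but no current framework predicts the peak for a given context.

def performance(a: float, a_star: float, width: float) -> float:
    """Inverted-U curve peaking at a_star; a_star and width stand in for
    the context-dependent unknowns that existing theory cannot predict."""
    return max(0.0, 1.0 - ((a - a_star) / width) ** 2)

# Two hypothetical contexts with the same curve shape but different peaks:
levels = [i / 10 for i in range(11)]
best_a = max(levels, key=lambda a: performance(a, a_star=0.3, width=0.5))
best_b = max(levels, key=lambda a: performance(a, a_star=0.7, width=0.5))
print(best_a, best_b)  # optimal assistance level differs across contexts
```

The point of the sketch is negative: both systems are "inverted-U shaped," yet the design-relevant quantity (the peak) differs, and nothing in the shape itself reveals it in advance.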
The 2024 comprehensive review in Patterns explicitly identifies this gap as the major limitation of current research. Existing frameworks (including the multiplex network model) are descriptive rather than predictive — they categorize and analyze systems but do not generate actionable design principles.
## Evidence
- Explicit statement in Cell Press comprehensive review: "no comprehensive theoretical framework" exists
- Review synthesizes findings from multiple research traditions, all lacking predictive models
- Empirical studies identify patterns (inverted-U, degradation mechanisms) but cannot predict parameters
- This is identified as the primary gap preventing the field from moving from observation to design
## Implications for AI Alignment
This gap is critical for alignment research because it means we cannot currently design AI-human systems with confidence that they will enhance rather than degrade collective intelligence. The field is in a pre-paradigmatic state — we have observations but no theory.
This connects directly to [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]]. The absence of a theoretical framework may explain why alignment research has not seriously engaged with collective intelligence approaches — there is no clear design methodology to follow.
---
Relevant Notes:

- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]]
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[AI alignment is a coordination problem not a technical problem]]