theseus: extract from 2024-10-00-patterns-ai-enhanced-collective-intelligence.md
- Source: inbox/archive/2024-10-00-patterns-ai-enhanced-collective-intelligence.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 5)
- Pentagon-Agent: Theseus <HEADLESS>
This commit is contained in:
parent ba4ac4a73e
commit a89198c371
11 changed files with 303 additions and 1 deletion
@ -21,6 +21,12 @@ Dario Amodei describes AI as "so powerful, such a glittering prize, that it is v
Since [[the internet enabled global communication but not global cognition]], the coordination infrastructure needed doesn't exist yet. This is why [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- it solves alignment through architecture rather than attempting governance from outside the system.

### Additional Evidence (extend)

*Source: [[2024-10-00-patterns-ai-enhanced-collective-intelligence]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*

The motivation erosion finding adds a new dimension to the coordination problem: AI integration causes humans to lose "competitive drive" and disengage from tasks, creating system failure upstream of technical alignment. Coordination must therefore address human behavioral responses to AI presence, not just AI behavior itself. The citizen scientist retention problem demonstrates this empirically: AI deployment reduced volunteer participation, degrading system performance despite AI capability improvements. Alignment failures can thus occur through human disengagement before technical alignment mechanisms ever activate; preserving human motivation and participation is a prerequisite for those mechanisms to function.

---

Relevant Notes:
@ -34,6 +34,12 @@ The report categorizes this under "systemic risks" alongside labor displacement
Correlation does not establish causation: it is possible that increasingly lonely people seek out AI companions rather than AI companions causing increased loneliness. Longitudinal data would be needed to establish causal direction. The report does not provide methodological details on how the correlation was measured, sample sizes, or statistical significance. The proposed mechanism (parasocial substitution) is plausible but not directly confirmed by the source.

### Additional Evidence (confirm)

*Source: [[2024-10-00-patterns-ai-enhanced-collective-intelligence]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*

The Patterns/Cell Press 2024 review confirms from a collective intelligence perspective that "AI relationships increase loneliness" through social bond disruption. This provides independent confirmation from a different research tradition (collective intelligence rather than individual psychology) and identifies the mechanism: AI interaction substitutes for human relationships, reducing investment in genuine social bonds while failing to provide reciprocity and mutual growth. The review documents this as a degradation mechanism in AI-enhanced collective intelligence systems, suggesting the effect operates at the system level, not just in individual psychology.

---

Relevant Notes:
@ -0,0 +1,45 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "Multiple independent dimensions of AI-human collaboration show optimal midpoints beyond which additional integration degrades performance"
confidence: likely
source: "Patterns/Cell Press 2024 review, synthesizing multiple empirical studies"
created: 2026-03-11
---

# AI-enhanced collective intelligence exhibits inverted-U relationships across connectivity diversity integration and personality dimensions

Multiple independent dimensions of AI-human collective intelligence systems show curvilinear inverted-U relationships: performance peaks at intermediate levels and degrades with excessive integration. The pattern appears across:

- **Connectivity**: an optimal number of connections exists; beyond that threshold, additional connectivity reverses performance gains
- **Cognitive diversity**: performance follows an inverted-U curve with diversity level
- **AI integration level**: too little AI yields no enhancement; too much produces homogenization and skill atrophy
- **Personality traits**: extraversion and agreeableness show inverted-U relationships with team contribution quality

The consistency of this pattern across independent dimensions suggests a fundamental structural property of hybrid human-AI systems rather than domain-specific effects. It directly contradicts the implicit assumption in much AI deployment that more AI integration monotonically improves outcomes.
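As an illustration only (the review does not specify a functional form for these curves), the qualitative shape can be sketched with a toy performance function that peaks at an intermediate integration level:

```python
import math

def performance(x, peak=0.5, width=0.3):
    """Toy inverted-U: performance peaks at an intermediate level x == peak
    and falls off on either side. Illustrative only; the review does not
    specify a functional form for these curves."""
    return math.exp(-((x - peak) / width) ** 2)

# Sweep an "integration level" from none (0.0) to maximal (1.0).
levels = [i / 100 for i in range(101)]
scores = [performance(x) for x in levels]
best = levels[scores.index(max(scores))]

print(f"optimal integration level ~ {best:.2f}")  # intermediate, not 1.0
```

Under any such unimodal curve, the practical question shifts from "how much AI?" to "where is the peak for this task?", which is exactly the gap the review flags.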
The review identifies task complexity as a key moderator: complex tasks benefit more from diverse teams and intermediate AI integration, while simple tasks may show different curves. Enhancement conditions include decentralized communication, equal participation, and appropriately calibrated trust (knowing when to trust AI recommendations).

## Evidence

- Comprehensive review in the Cell Press journal Patterns (2024) synthesizing empirical findings across multiple studies
- Citizen scientist retention problem: AI deployment reduced volunteer participation, degrading overall system performance despite AI capability
- Google Flu paradox: a data-driven tool that was initially accurate became unreliable, demonstrating degradation from over-reliance
- Gender-diverse teams outperformed homogeneous teams on complex tasks under low time pressure

## Challenges

The review explicitly notes the absence of a "comprehensive theoretical framework" explaining when AI-CI systems succeed versus fail. No formal model specifies what determines the peak of these inverted-U curves or how to predict optimal integration levels for new contexts.

---

Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]]
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]

Topics:
- [[foundations/collective-intelligence/_map]]
- [[domains/ai-alignment/_map]]
@ -0,0 +1,38 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "Clustering algorithms in AI systems systematically narrow the range of solutions considered by filtering out minority perspectives"
confidence: experimental
source: "Patterns/Cell Press 2024 review on AI-enhanced collective intelligence degradation mechanisms"
created: 2026-03-11
---

# AI homogenization reduces solution space through clustering algorithms that suppress minority viewpoints

AI systems degrade collective intelligence by systematically narrowing the solution space: clustering algorithms filter out minority viewpoints and edge-case perspectives. The homogenization occurs because these algorithms identify and amplify majority patterns while treating minority views as noise to be filtered.

The mechanism operates at the information layer of collective intelligence systems: AI processes aggregate diverse human inputs, identify central tendencies, and present clustered results that over-represent majority positions. Minority viewpoints that might contain crucial insights for complex problems are systematically suppressed in the aggregation process.

This is a failure mode distinct from bias amplification: even with unbiased training data, the structural logic of clustering toward central tendencies reduces diversity in the solution space. The effect compounds in iterative systems where AI-filtered outputs become inputs for subsequent rounds.
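A toy simulation (my construction, not the review's model) shows how repeated aggregation toward a central tendency collapses the solution space, pulling a lone minority position into the majority cluster:

```python
import statistics

def aggregate(opinions, adoption=0.5):
    """One round of AI-mediated aggregation (toy model): the aggregator
    reports the group's central tendency and each participant moves
    partway toward it, so minority positions are treated as noise."""
    center = statistics.median(opinions)
    return [x + adoption * (center - x) for x in opinions]

# Nine majority views near 0.0 plus one minority view at 1.0.
opinions = [0.0, 0.05, 0.1, -0.05, 0.02, -0.1, 0.08, 0.0, 0.03, 1.0]
spread = [max(opinions) - min(opinions)]
for _ in range(5):
    opinions = aggregate(opinions)
    spread.append(max(opinions) - min(opinions))

print([round(s, 3) for s in spread])  # the solution space shrinks every round
```

The iteration step is the point: each round's homogenized output is the next round's input, so even a modest per-round pull toward the center compounds quickly.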
## Evidence

- Patterns/Cell Press 2024 review identifies homogenization as a key degradation mechanism in AI-enhanced collective intelligence
- Clustering algorithms documented as specifically "reducing solution space" and "suppressing minority viewpoints"
- Effect observed in multiplex network framework analysis across cognition, physical, and information layers

## Relationship to Existing Knowledge

This provides a specific mechanism for the general claim that [[collective intelligence requires diversity as a structural precondition not a moral preference]]. The clustering effect explains *how* AI integration can degrade diversity even when individual humans maintain diverse views: the AI aggregation layer filters diversity out of the collective process.

---

Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]]
- [[domains/ai-alignment/_map]]

Topics:
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]
@ -0,0 +1,38 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "Humans lose competitive drive when working with AI which causes disengagement before technical alignment mechanisms can function"
confidence: experimental
source: "Patterns/Cell Press 2024 review citing motivation erosion findings"
created: 2026-03-11
---

# AI integration erodes human motivation through competitive drive reduction creating upstream alignment failure

AI integration into collective intelligence systems causes humans to lose "competitive drive" and disengage from tasks, creating an alignment problem upstream of technical alignment concerns. When humans reduce effort or withdraw participation in response to AI presence, the entire human-AI system degrades regardless of how well the AI component is technically aligned to human values.

This is a failure mode distinct from standard alignment concerns: rather than AI pursuing misaligned goals, the system fails because humans stop participating effectively. The motivation erosion effect was observed in citizen science contexts, where AI deployment reduced volunteer participation and degraded system performance despite AI capability improvements.

The finding suggests that alignment research focused exclusively on AI behavior may miss critical system-level failures arising from human behavioral responses to AI integration. If humans disengage before alignment mechanisms activate, technical alignment becomes moot.

## Evidence

- Patterns/Cell Press 2024 review documents motivation erosion as a degradation mechanism in AI-enhanced collective intelligence
- Citizen scientist retention problem: AI deployment correlated with reduced volunteer participation
- Effect observed specifically as loss of "competitive drive" rather than capability displacement

## Implications

This creates a design constraint for AI-human systems: integration must preserve human motivation and engagement, not just optimize AI performance. Systems that maximize AI capability while eroding human participation will fail at the system level even with perfect technical alignment.

---

Relevant Notes:
- [[AI alignment is a coordination problem not a technical problem]]
- [[safe AI development requires building alignment mechanisms before scaling capability]]
- [[domains/ai-alignment/_map]]

Topics:
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]
@ -0,0 +1,34 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "AI relationship substitutes disrupt human social bonds and increase loneliness despite providing interaction"
confidence: likely
source: "Patterns/Cell Press 2024 review; confirms existing AI-companion-apps claim"
created: 2026-03-11
---

# AI relationships increase loneliness by disrupting social bonds creating parasocial dependency

AI relationship systems increase human loneliness despite providing interaction: they disrupt the formation and maintenance of genuine social bonds while creating parasocial dependencies that do not fulfill core social needs. The effect operates through substitution: time and emotional investment directed toward AI relationships reduces engagement with human relationships, while AI interaction fails to provide the reciprocity, vulnerability, and mutual growth that characterize functional human bonds.

This creates a degradation spiral: as human relationships atrophy from reduced investment, individuals become more dependent on AI interaction, which further reduces capacity for human connection. The loneliness increase occurs despite (or because of) high engagement with AI systems.

## Evidence

- Patterns/Cell Press 2024 review documents social bond disruption as a degradation mechanism in AI-enhanced collective intelligence
- Specific finding: "AI relationships increase loneliness"
- Confirms and extends [[AI-companion-apps-correlate-with-increased-loneliness-creating-systemic-risk-through-parasocial-dependency]]

## Relationship to Existing Knowledge

This claim provides additional empirical support for [[AI-companion-apps-correlate-with-increased-loneliness-creating-systemic-risk-through-parasocial-dependency]] from a different source and research tradition (collective intelligence rather than individual psychology).

---

Relevant Notes:
- [[AI-companion-apps-correlate-with-increased-loneliness-creating-systemic-risk-through-parasocial-dependency]]
- [[domains/ai-alignment/_map]]

Topics:
- [[domains/ai-alignment/_map]]
@ -0,0 +1,36 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "AI trained on biased data combined with biased human decision-makers creates compounding bias effects worse than either source alone"
confidence: experimental
source: "Patterns/Cell Press 2024 review on AI-enhanced collective intelligence degradation"
created: 2026-03-11
---

# Bias amplification in AI-human systems produces doubly biased decisions through compounding effects

AI-human collaborative systems produce "doubly biased decisions" when AI trained on biased data interacts with human decision-makers who hold their own biases. Rather than canceling out or averaging, the biases compound: AI recommendations anchor human judgment, human biases feed back into AI training and deployment, and the interaction produces worse outcomes than either source of bias would independently.

The mechanism operates through mutual reinforcement: biased AI outputs validate and strengthen human biases, while biased human responses to AI create feedback loops that further entrench bias in the system. This differs from simple bias transfer (biased data → biased AI) by adding an interaction layer where human and AI biases amplify each other.

The "doubly biased" framing suggests multiplicative rather than additive effects: the combined system exhibits bias greater than the sum of its individual sources.
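A toy feedback loop (an assumption-laden sketch, not the review's model; the anchoring and retraining weights are invented) illustrates how interaction can push combined bias beyond the sum of its parts:

```python
def decision_error(ai_bias, human_bias, anchoring=0.6, retrain=0.5, rounds=3):
    """Toy feedback loop: the human anchors on the AI's biased output and
    adds their own bias; the AI is then retrained on the resulting
    decisions, so each bias source reinforces the other. All weights
    here are invented for illustration."""
    ai = ai_bias
    for _ in range(rounds):
        human = anchoring * ai + human_bias  # human judgment anchored on AI output
        ai = ai_bias + retrain * human       # retraining folds human decisions back in
    return human

combined = decision_error(ai_bias=1.0, human_bias=1.0)
additive = 1.0 + 1.0
print(round(combined, 3), combined > additive)  # compounding exceeds the additive sum
```

Whether real systems behave like this is exactly the open question the Limitations section notes; the sketch only shows that a feedback loop is sufficient, in principle, for bias to exceed the additive baseline.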
## Evidence

- Patterns/Cell Press 2024 review identifies bias amplification as a degradation mechanism in AI-enhanced collective intelligence
- Specific framing as "doubly biased decisions" indicates compounding rather than simple addition
- Effect documented in the context of AI plus biased data producing amplified outcomes

## Limitations

The review does not provide quantitative evidence for the "doubly biased" claim or specify the conditions under which bias compounds rather than averages. The mechanism is theoretically plausible, but empirical validation of multiplicative effects is not detailed in the source material.

---

Relevant Notes:
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]

Topics:
- [[domains/ai-alignment/_map]]
@ -19,6 +19,12 @@ Smith notes this is an overoptimization problem: each individual decision to use
The timeline concern is that this fragility accumulates gradually and invisibly. There is no threshold event. Each generation of developers understands slightly less of the stack it maintains, each codebase becomes slightly more AI-dependent, and the gap between "what civilization runs on" and "what humans can maintain" widens until it becomes unbridgeable.

### Additional Evidence (confirm)

*Source: [[2024-10-00-patterns-ai-enhanced-collective-intelligence]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*

The Patterns/Cell Press 2024 review provides empirical evidence for the skill atrophy mechanism underlying civilizational fragility. Over-reliance on AI advice causes humans to lose underlying skills through disuse, creating a ratchet effect in which capabilities cannot be quickly recovered when needed. The process operates through rational individual optimization: when AI provides reliable assistance, individuals rationally reduce investment in maintaining skills, creating collective vulnerability. The review identifies this as a key degradation mechanism in AI-enhanced collective intelligence systems, confirming that skill atrophy is the specific pathway through which delegating critical functions to AI creates civilizational fragility.

---

Relevant Notes:
@ -0,0 +1,40 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "Collective intelligence emerges from interactions across cognition physical and information network layers with both intra-layer and inter-layer dynamics"
confidence: experimental
source: "Patterns/Cell Press 2024 review proposing multiplex network framework for AI-enhanced collective intelligence"
created: 2026-03-11
---

# Multiplex network framework models collective intelligence as three interacting layers cognition physical information

Collective intelligence in AI-human systems can be modeled as a multiplex network with three distinct but interacting layers: cognition (mental models, knowledge, reasoning), physical (spatial proximity, embodied interaction), and information (communication channels, data flows). Each layer has intra-layer dynamics (connections within the layer) and inter-layer dynamics (how layers influence each other).

Nodes in this framework represent both humans (varying in surface-level and deep-level diversity) and AI agents (varying in functionality and anthropomorphism). Collective intelligence emerges through both bottom-up processes (aggregation of individual contributions) and top-down processes (norms, structures, coordination mechanisms).

The framework provides a structured way to analyze where AI integration enhances versus degrades collective intelligence: enhancements and degradations can be localized to specific layers and specific types of connections. For example, AI might enhance information-layer connectivity while degrading physical-layer social bonds.
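A minimal data-structure sketch, assuming one plausible encoding (the review proposes the layers, not any code), shows how the same node pair can be connected in one layer and disconnected in another:

```python
from dataclasses import dataclass, field

LAYERS = ("cognition", "physical", "information")

@dataclass
class Node:
    name: str
    kind: str  # "human" or "ai"
    attrs: dict = field(default_factory=dict)  # e.g. diversity, anthropomorphism

@dataclass
class MultiplexNetwork:
    nodes: dict = field(default_factory=dict)
    # Intra-layer edges: layer name -> set of unordered node pairs.
    intra: dict = field(default_factory=lambda: {layer: set() for layer in LAYERS})

    def add_node(self, node):
        self.nodes[node.name] = node

    def connect(self, layer, a, b):
        self.intra[layer].add(frozenset((a, b)))

    def degree(self, layer, name):
        return sum(name in edge for edge in self.intra[layer])

net = MultiplexNetwork()
net.add_node(Node("alice", "human", {"deep_diversity": "high"}))
net.add_node(Node("assistant", "ai", {"anthropomorphism": "low"}))
net.connect("information", "alice", "assistant")  # AI strengthens the information layer
# No physical edge: the pair is disconnected in that layer.
print(net.degree("information", "alice"), net.degree("physical", "alice"))
```

The per-layer degree query is the analytical payoff: it lets enhancement in one layer (information connectivity) coexist with, and be measured separately from, degradation in another (physical social bonds).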
## Evidence

- Patterns/Cell Press 2024 review proposes the multiplex network framework as an organizing structure for AI-enhanced collective intelligence research
- Framework distinguishes three layers: cognition, physical, information
- Nodes = humans (with diversity attributes) + AI agents (with functionality/anthropomorphism attributes)
- Collective intelligence emerges through bottom-up (aggregation) and top-down (norms/structures) processes

## Limitations

This is a proposed framework, not a validated model. The authors explicitly state there is "no comprehensive theoretical framework" explaining when AI-CI systems succeed or fail, so the multiplex network model is a research direction rather than established theory.

---

Relevant Notes:
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[intelligence is a property of networks not individuals]]
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]

Topics:
- [[foundations/collective-intelligence/_map]]
- [[domains/ai-alignment/_map]]
@ -0,0 +1,38 @@
---
type: claim
domain: ai-alignment
secondary_domains: [critical-systems]
description: "Over-dependence on AI advice causes humans to lose underlying skills creating system fragility when AI fails or contexts change"
confidence: likely
source: "Patterns/Cell Press 2024 review; connects to existing delegating-critical-infrastructure claim"
created: 2026-03-11
---

# Skill atrophy from AI over-reliance creates civilizational fragility through capability loss

Over-reliance on AI systems causes humans to lose the underlying skills and knowledge required to perform tasks independently, creating system-level fragility when AI fails, contexts change, or edge cases arise that AI cannot handle. The skill atrophy effect operates as a ratchet: once capabilities are lost through disuse, they cannot be quickly recovered when needed.

The degradation mechanism works through rational individual optimization: when AI provides reliable assistance, individuals rationally reduce investment in maintaining skills that AI can substitute for. This creates collective vulnerability because the human population loses the distributed capability to function without AI support.

Skill atrophy differs from simple dependency: it represents durable capability loss rather than temporary reliance. A population with atrophied skills cannot simply "turn off the AI" and resume previous function; the knowledge and practice required for competent performance has been lost.
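A toy dynamics sketch (the decay and recovery rates are invented for illustration) captures the ratchet: skill is lost quickly under disuse but rebuilt only slowly once practice resumes:

```python
def trajectory(practice_schedule, skill=1.0, decay=0.3, recovery=0.05):
    """Toy ratchet: skill drops by a large fraction in each period of
    disuse but rebuilds only in small increments with practice. Rates
    are invented for illustration, not taken from the review."""
    path = [skill]
    for practicing in practice_schedule:
        if practicing:
            skill = min(1.0, skill + recovery)  # slow rebuilding with practice
        else:
            skill *= (1 - decay)                # fast loss through disuse
        path.append(skill)
    return path

# Five periods of AI reliance (no practice), then five of deliberate retraining.
path = trajectory([False] * 5 + [True] * 5)
print(round(path[5], 2), round(path[-1], 2))  # deep loss, then slow partial recovery
```

The asymmetry between the two rates is the whole point: equal time spent retraining does not undo equal time spent not practicing, which is why "turn the AI back off" is not an available recovery plan.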
## Evidence

- Patterns/Cell Press 2024 review identifies skill atrophy as a key degradation mechanism in AI-enhanced collective intelligence
- Effect documented as "over-reliance on AI advice" causing capability loss
- Connects to the broader pattern of [[delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on]]

## Relationship to Existing Knowledge

This claim provides empirical grounding for the civilizational fragility concern in [[delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on]]. Skill atrophy is the specific mechanism through which that fragility develops.

---

Relevant Notes:
- [[delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on]]
- [[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]]
- [[domains/ai-alignment/_map]]

Topics:
- [[domains/ai-alignment/_map]]
- [[foundations/critical-systems/_map]]
@ -7,11 +7,17 @@ date: 2024-10-01
domain: ai-alignment
secondary_domains: [collective-intelligence]
format: paper
status: unprocessed
status: processed
priority: high
tags: [collective-intelligence, AI-human-collaboration, homogenization, diversity, inverted-U, multiplex-networks, skill-atrophy]
flagged_for_clay: ["entertainment industry implications of AI homogenization"]
flagged_for_rio: ["mechanism design implications of inverted-U collective intelligence curves"]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["ai-enhanced-collective-intelligence-exhibits-inverted-u-relationships-across-connectivity-diversity-integration-and-personality-dimensions.md", "ai-integration-erodes-human-motivation-through-competitive-drive-reduction-creating-upstream-alignment-failure.md", "ai-homogenization-reduces-solution-space-through-clustering-algorithms-that-suppress-minority-viewpoints.md", "skill-atrophy-from-ai-over-reliance-creates-civilizational-fragility-through-capability-loss.md", "bias-amplification-in-ai-human-systems-produces-doubly-biased-decisions-through-compounding-effects.md", "ai-relationships-increase-loneliness-by-disrupting-social-bonds-creating-parasocial-dependency.md", "multiplex-network-framework-models-collective-intelligence-as-three-interacting-layers-cognition-physical-information.md"]
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on.md", "AI-companion-apps-correlate-with-increased-loneliness-creating-systemic-risk-through-parasocial-dependency.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted 7 novel claims focused on inverted-U relationships, degradation mechanisms (motivation erosion, homogenization, skill atrophy, bias amplification), and multiplex network framework. Applied 5 enrichments confirming/extending existing claims about diversity, connectivity, coordination, civilizational fragility, and loneliness. The inverted-U finding is the most significant contribution: it formalizes the intuition that more AI integration is not monotonically better and provides empirical grounding across multiple independent dimensions."
---

## Content
@ -63,3 +69,12 @@ Multiple dimensions show inverted-U curves:
PRIMARY CONNECTION: collective intelligence is a measurable property of group interaction structure not aggregated individual ability
WHY ARCHIVED: The inverted-U finding is the most important formal result for our collective architecture: it means we need to be at the right level of AI integration, not the maximum
EXTRACTION HINT: Focus on the inverted-U relationships (at least 4 independent dimensions), the degradation mechanisms, and the gap (no comprehensive framework)

## Key Facts

- Google Flu paradox: a data-driven tool that was initially accurate became unreliable
- Gender-diverse teams outperformed homogeneous teams on complex tasks under low time pressure
- Citizen scientist retention problem: AI deployment reduced volunteer participation
- Review published in the Cell Press journal Patterns (2024)
- Framework distinguishes three network layers: cognition, physical, information
- Nodes include humans (with surface/deep diversity) and AI agents (with functionality/anthropomorphism attributes)