theseus: extract claims from Doshi-Hauser AI creativity experiment (#484)

Co-authored-by: m3taversal <m3taversal@gmail.com>
Co-committed-by: m3taversal <m3taversal@gmail.com>
This commit is contained in:
m3taversal 2026-03-11 09:23:12 +00:00 committed by Leo
parent bb5d965e3e
commit db497155d8
7 changed files with 331 additions and 1 deletions


@@ -0,0 +1,43 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, cultural-dynamics]
description: "Pre-registered experiment (800+ participants, 40+ countries) found collective diversity rose (Cliff's Delta=0.31, p=0.001) while individual creativity was unchanged (F(4,19.86)=0.12, p=0.97) — AI made ideas different, not better"
confidence: experimental
source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
created: 2026-03-11
depends_on:
- "collective intelligence requires diversity as a structural precondition not a moral preference"
- "partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity"
challenged_by:
- "Homogenizing Effect of Large Language Models on Creative Diversity (ScienceDirect, 2025) — naturalistic study of 2,200 admissions essays found AI-inspired stories more similar to each other than human-only stories, with the homogenization gap widening at scale"
---
# high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects
The dominant narrative — that AI homogenizes human thought — is empirically wrong under at least one important condition. Doshi and Hauser (2025) ran a large-scale pre-registered experiment using the Alternate Uses Task (generating creative uses for everyday objects) with 800+ participants across 40+ countries. Their "multiple-worlds" design let ideas from prior participants feed forward to subsequent trials, simulating the cascading spread of AI influence over time.
The central finding is a paradox: **high AI exposure increased collective diversity** (Cliff's Delta = 0.31, p = 0.001) while having **no effect on individual creativity** (F(4,19.86) = 0.12, p = 0.97). The summary is exact: "AI made ideas different, not better."
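Cliff's Delta, the effect size behind the collective-diversity result, is a rank-based statistic with a direct pairwise definition: the probability that a value from one group exceeds a value from the other, minus the reverse. A minimal sketch (the scores below are illustrative, not the study's data):

```python
def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 to 1; 0 means the two groups overlap completely."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Illustrative diversity scores (made up, not the paper's data):
ai_exposed = [0.62, 0.71, 0.55, 0.68, 0.74]
control    = [0.51, 0.58, 0.49, 0.60, 0.66]
print(cliffs_delta(ai_exposed, control))
```

A delta of 0.31, as reported, means AI-exposed collections beat control collections on the diversity metric in roughly 31% more pairwise comparisons than the reverse.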
The distinction between individual and collective effects matters enormously for how we design AI systems. Individual quality (fluency, flexibility, originality scores) didn't improve — participants weren't getting better at creative thinking by seeing AI ideas. But the population-level distribution of ideas became more diverse. These are different measurements and the divergence between them is the novel finding.
This directly complicates the homogenization argument. If AI systematically made ideas more similar, collective diversity would have declined — but it rose. The mechanism appears to be that AI ideas introduce variation that human-to-human copying would not have produced, disrupting the natural tendency toward convergence (see companion claim on baseline human convergence).
**Scope qualifier:** This finding holds at the experimental exposure levels tested (low/high AI exposure in a controlled task). It may not generalize to naturalistic settings at scale, where homogenization has been observed (ScienceDirect 2025 admissions essay study). The relationship is architecture-dependent, not inherently directional.
## Evidence
- Doshi & Hauser (2025), arXiv:2401.13481v3 — primary experimental results
- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — confirms why the collective-level diversity finding matters
## Challenges
The ScienceDirect (2025) study of 2,200 admissions essays found the opposite effect: LLM-inspired stories were more similar to each other than human-only stories, and the gap widened at scale. Both findings can be correct if the direction of AI's effect on diversity depends on exposure architecture (high vs. naturalistic saturation) and task type (constrained creative task vs. open writing).
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — this claim provides experimental evidence that AI can, under the right conditions, satisfy this precondition rather than undermine it
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — AI may function as an external diversity source that substitutes for topological partial connectivity
- [[AI is collapsing the knowledge-producing communities it depends on creating a self-undermining loop that collective intelligence can break]] — complicated by this finding: AI may not uniformly collapse diversity; it may generate it under high-exposure conditions while collapsing it in naturalistic saturated settings
Topics:
- [[domains/ai-alignment/_map]]


@@ -0,0 +1,40 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, cultural-dynamics]
description: "Without AI, participants' ideas converged over time (β=-0.39, p=0.03); with AI exposure, diversity increased (β=0.53-0.57, p<0.03). This reframes the question from 'does AI reduce diversity?' to 'does AI disrupt natural human convergence?'"
confidence: experimental
source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
created: 2026-03-11
depends_on:
- "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
- "partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity"
---
# human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions
The baseline assumption in AI-diversity debates is that human creativity is naturally diverse and AI threatens to collapse it. The Doshi-Hauser experiment inverts this. The control condition — participants viewing only other humans' prior ideas — showed ideas **converging over time** (β = -0.39, p = 0.03). Human social learning, when operating without external disruption, tends toward premature convergence on popular solutions.
AI exposure broke this convergence. Under high AI exposure, diversity increased over time (β = 0.53-0.57, p < 0.03). The AI ideas introduced variation that the human chain alone would not have generated.
This reframes the normative question entirely. The relevant comparison is not "AI vs. pristine human diversity" — it's "AI vs. the convergence that human copying produces." If human social learning already suppresses diversity through imitation dynamics, then AI exposure may represent a net improvement over the realistic counterfactual.
**Why this happens mechanically:** In the multiple-worlds design, ideas that spread early in the chain bias subsequent generations toward similar solutions. This is the well-documented rich-get-richer dynamic in cultural evolution — popular ideas attract more copies, which makes them more popular. AI examples, introduced from outside this social chain, are not subject to the same selection pressure and therefore inject independent variation.
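The copying dynamic described above can be sketched as a toy simulation (a neutral imitation model of my own construction, not the paper's; the population size, generation count, and injection rate are arbitrary). Sampling uniformly from the previous generation's ideas is copy-proportional-to-popularity, which is exactly the rich-get-richer mechanism:

```python
import random

def simulate(generations=30, pop=50, inject=False, seed=0):
    """Toy neutral-copying chain: each generation, every individual copies a
    random idea from the previous generation, so popular ideas attract more
    copies (rich-get-richer). With inject=True, five ideas per generation
    arrive from outside the chain, standing in for AI examples."""
    rng = random.Random(seed)
    ideas = list(range(pop))          # start fully diverse
    next_id = pop
    for _ in range(generations):
        ideas = [rng.choice(ideas) for _ in range(pop)]
        if inject:
            for i in rng.sample(range(pop), 5):
                ideas[i] = next_id    # an idea no one in the chain produced
                next_id += 1
    return len(set(ideas)) / pop      # surviving fraction of distinct ideas

print(simulate())             # human-only chain: diversity erodes
print(simulate(inject=True))  # external injection sustains diversity
```

Without injection the chain drifts toward a handful of dominant ideas; injected ideas are new identifiers the chain could never regenerate, so they keep the distinct-idea count up. This is a mechanism sketch, not a model fitted to the experiment.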
This connects to [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]]: AI may function as an external diversity source analogous to weak ties in a partially connected network. The AI examples come from outside the local social chain, disrupting the convergence that full human-to-human connectivity would produce.
**Scope qualifier:** This convergence effect is measured within an experimental session using a constrained creativity task. The timescale of convergence in naturalistic, long-term creative communities may differ significantly. Cultural fields may have additional mechanisms (novelty norms, competitive differentiation) that resist convergence even without AI.
## Evidence
- Doshi & Hauser (2025), arXiv:2401.13481v3 — β = -0.39 for human-only convergence; β = 0.53-0.57 for AI-exposed diversity increase
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — the network science basis for why external variation disrupts convergence
---
Relevant Notes:
- [[high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects]] — the companion finding: not only does AI disrupt convergence, it does so without improving individual quality
- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — if human social learning naturally converges, maintaining collective diversity requires active intervention — AI under some conditions provides this
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — AI as external diversity source parallels the function of partial network connectivity
Topics:
- [[domains/ai-alignment/_map]]


@@ -0,0 +1,37 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "When AI source was explicitly disclosed, adoption was stronger for difficult tasks (ρ=0.8) than easy ones (ρ=0.3) — disclosure did not suppress AI adoption where participants most needed help"
confidence: experimental
source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
created: 2026-03-11
depends_on:
- "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
---
# task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled
The standard policy intuition for managing AI influence is disclosure: label AI-generated content and users will moderate their adoption. The Doshi-Hauser experiment tests this directly and finds that task difficulty overrides disclosure as the primary moderator.
When participants were explicitly told an idea came from AI, the exposure-adoption correlation remained high for difficult prompts (ρ = 0.8) but was substantially weaker for easy prompts (ρ = 0.3): disclosure shifted adoption on easy tasks but not difficult ones.
The implication is that **disclosure primarily protects cognitive domains where participants already have independent capability**. Where participants find a problem hard — where they most depend on external scaffolding — AI labeling has limited effect on adoption behavior. The disclosed AI source is still adopted at high rates because the alternative is struggling with a difficult problem unaided.
A related moderator: self-perceived creativity. Highly self-rated creative participants adopted AI ideas at high rates regardless of whether the source was disclosed. Lower-creativity participants showed reduced adoption when AI was disclosed (Δ = 7.77, p = 0.03). The disclosure mechanism primarily works on participants who already feel competent to generate alternatives — exactly those who might be less influenced by AI in any case.
**The combined picture:** Disclosure policies reduce AI adoption for easy tasks among people who feel capable. Disclosure policies have limited effect on the populations and task types where AI adoption poses the greatest risk of skill atrophy and diversity collapse — hard problems solved by people who feel less capable.
**Scope qualifier:** This is a single experimental study using a constrained creativity task (Alternate Uses Task). Effect sizes and the easy/difficult distinction are task-specific. The ρ values measure within-condition correlations, not effect magnitudes across conditions.
## Evidence
- Doshi & Hauser (2025), arXiv:2401.13481v3 — disclosure × difficulty interaction; ρ = 0.8 for difficult, ρ = 0.3 for easy prompts; self-perceived creativity moderator Δ = 7.77, p = 0.03
---
Relevant Notes:
- [[high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects]] — difficulty-driven AI reliance is part of the mechanism behind collective diversity changes
- [[deep technical expertise is a greater force multiplier when combined with AI agents because skilled practitioners delegate more effectively than novices]] — this finding cuts against simple skill-amplification stories: on difficult tasks, everyone increases AI adoption, not just experts
Topics:
- [[domains/ai-alignment/_map]]


@@ -7,7 +7,16 @@ date: 2025-01-01
domain: ai-alignment
secondary_domains: [collective-intelligence, cultural-dynamics]
format: paper
-status: unprocessed
+status: processed
+processed_by: theseus
+processed_date: 2026-03-11
+claims_extracted:
+- "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
+- "human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions"
+- "task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled"
+enrichments:
+- "challenged_by field added to claim 1 referencing homogenization paper (ScienceDirect 2025)"
+- "partial connectivity claim enriched with AI-as-external-diversity-source framing"
priority: high
tags: [homogenization, diversity-paradox, AI-creativity, collective-diversity, individual-creativity]
flagged_for_clay: ["implications for creative industries — AI makes ideas different but not better"]


@@ -0,0 +1,65 @@
---
type: claim
title: AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In experimental creativity tasks, participants adopted AI-generated ideas more frequently on difficult tasks (ρ=0.8) than easy tasks (ρ=0.3) even when the AI source was explicitly labeled, suggesting disclosure does not suppress AI adoption where participants most need help.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[ai-ideas-increase-collective-diversity-experimental]]"
challenged_by:
- "[[deep technical expertise is a greater force multiplier than AI assistance]]"
---
# AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
Doshi & Hauser (2025) found that when AI-generated ideas were explicitly labeled as AI-generated, participants still adopted them at rates strongly correlated with task difficulty: ρ=0.8 for difficult tasks vs. ρ=0.3 for easy tasks.
## Key Finding
**Adoption rates by difficulty (disclosed condition):**
- Difficult tasks: ρ=0.8 correlation between AI exposure and adoption
- Easy tasks: ρ=0.3 correlation between AI exposure and adoption
- AI source was explicitly labeled in both conditions
**Interpretation:**
- Disclosure did not suppress AI adoption where participants most needed help (difficult tasks)
- Participants appeared to use task difficulty as a heuristic for when to rely on AI
- This suggests rational/strategic AI use rather than blind adoption or blanket rejection
## Implications for Disclosure Policies
This finding complicates simple "just disclose AI" policies:
- Disclosure alone does not prevent AI reliance
- Users may rationally choose to rely on AI when tasks are difficult
- The question shifts from "does disclosure reduce AI use" to "when should AI use be encouraged/discouraged"
## Scope Qualifiers
- Single task type (Alternate Uses Task)
- Experimental setting with explicit labeling
- Self-reported adoption measures
- Does not address long-term effects or skill atrophy
- Does not compare disclosed vs. non-disclosed conditions across difficulty levels
## Tension with Skill Development
This finding creates tension with [[deep technical expertise is a greater force multiplier than AI assistance]] — if users adopt AI most on difficult tasks (where they most need to develop expertise), this could create a deskilling dynamic where AI prevents learning at precisely the difficulty level where learning is most valuable.
The "rational" adoption pattern (use AI when tasks are hard) may be individually rational but collectively problematic if it prevents skill development.
## Relevant Notes
- Potential connection to AI deskilling literature (if claims exist in KB)
- Flagged for implications on AI disclosure policy design


@@ -0,0 +1,70 @@
---
type: claim
title: High AI exposure can make AI a diversity injector under experimental conditions
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In controlled experimental settings, high exposure to varied AI-generated ideas (10 ideas per participant) increased collective diversity more than low exposure (2 ideas), suggesting AI can function as a diversity source when exposure is high and varied.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[ai-ideas-increase-collective-diversity-experimental]]"
---
# High AI exposure can make AI a diversity injector under experimental conditions
Doshi & Hauser (2025) found a dose-response relationship: participants exposed to 10 AI-generated ideas showed significantly higher collective diversity than those exposed to 2 AI ideas, who in turn showed higher diversity than control participants with no AI exposure.
## Dose-Response Pattern
**Collective diversity by condition:**
- High AI exposure (10 ideas): highest collective diversity
- Low AI exposure (2 ideas): intermediate diversity
- Control (0 AI ideas): lowest collective diversity
- Effect size: d=0.42 (high vs. control)
**Individual creativity did not follow this pattern:**
- Individual fluency, flexibility, and originality showed no dose-response
- Some individual metrics decreased with AI exposure
- The diversity effect was purely collective-level
## Mechanism: Volume and Variety
The dose-response suggests two factors:
1. **Volume:** More AI ideas provide more potential diversity sources
2. **Variety:** The "multiple worlds" design ensured each participant saw different AI ideas, preventing convergence
This implies AI's diversity-injection potential depends on:
- High exposure volume
- Varied content across users
- Controlled distribution (not everyone seeing the same outputs)
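The distribution condition can be made concrete with a toy sketch (a hypothetical adoption rule of my own, not the study's procedure): when every participant draws from one shared AI pool, adopted ideas collide; when each participant has their own pool, every adoption adds a distinct idea.

```python
import random

def unique_fraction(shared_pool, pop=40, k=10, adopt_p=0.5, seed=1):
    """Fraction of distinct final ideas when each of `pop` participants
    either keeps their own (unique) idea or adopts one of k AI ideas.

    shared_pool=True: one AI pool for everyone (naturalistic saturation);
    shared_pool=False: per-user pools (the multiple-worlds design)."""
    rng = random.Random(seed)
    ideas = []
    for i in range(pop):
        if rng.random() < adopt_p:
            pool = ([f"ai-{j}" for j in range(k)] if shared_pool
                    else [f"ai-{i}-{j}" for j in range(k)])
            ideas.append(rng.choice(pool))
        else:
            ideas.append(f"own-{i}")
    return len(set(ideas)) / pop

print(unique_fraction(shared_pool=True))   # shared pool: adoptions can collide
print(unique_fraction(shared_pool=False))  # per-user pools: every idea distinct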
## Scope Qualifiers
- Experimental setting only
- Single task type (Alternate Uses Task)
- Controlled exposure (researchers selected which AI ideas participants saw)
- Does not reflect naturalistic usage where users may converge on popular AI outputs
## Implications
This finding suggests AI could be deliberately deployed as a diversity mechanism in collective intelligence systems, but only if:
- Exposure is high enough
- Content is varied across participants
- Distribution prevents convergence on identical outputs
The contrast with naturalistic homogenization findings suggests deployment design matters more than AI capabilities per se.
## Relevant Notes
- Connection to [[partial connectivity produces better collective intelligence than full connectivity]] — AI as controlled diversity source
- Potential application to [[collective intelligence requires diversity as a structural precondition]]


@@ -0,0 +1,66 @@
---
type: claim
title: AI-generated ideas increase collective diversity in experimental creativity tasks
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In a pre-registered experiment with 800+ participants across 40+ countries, exposure to AI-generated ideas increased collective diversity on the Alternate Uses Task, even as individual creativity metrics remained unchanged or decreased.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[partial connectivity produces better collective intelligence than full connectivity]]"
- "[[collective intelligence requires diversity as a structural precondition]]"
challenged_by:
- "[[homogenization effect of large language models on creative diversity]]"
---
# AI-generated ideas increase collective diversity in experimental creativity tasks
In a pre-registered experiment (N=810, 40+ countries), Doshi & Hauser (2025) found that exposure to AI-generated ideas increased collective diversity on the Alternate Uses Task, even though individual creativity metrics (fluency, flexibility, originality) remained unchanged or decreased.
## Key Findings
**Collective diversity increased with AI exposure:**
- High AI exposure (10 AI ideas) produced significantly higher collective diversity than low exposure (2 AI ideas) or control conditions
- Effect held across multiple diversity metrics (semantic distance, category coverage)
- Individual-level creativity did not increase; the effect was purely collective
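Semantic distance, one of the diversity metrics named above, is typically operationalized as the mean pairwise distance between idea embeddings. A minimal sketch with toy 2-D vectors (the embeddings are made up for illustration; the study's actual embedding model is not specified here):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return 1 - dot / norm

def collective_diversity(embeddings):
    """Mean pairwise cosine distance across all ideas in a condition."""
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine_distance(embeddings[i], embeddings[j])
               for i, j in pairs) / len(pairs)

# Toy embeddings: a spread-out idea pool vs. a near-duplicate pool.
diverse   = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
converged = [[1.0, 0.1], [1.0, 0.0], [0.9, 0.1]]
print(collective_diversity(diverse) > collective_diversity(converged))  # True
```

The metric is a property of the collection, not of any single idea, which is why it can rise while individual-level creativity scores stay flat.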
**Mechanism: AI as external diversity source:**
- AI ideas introduced variation orthogonal to human ideation patterns
- Participants incorporated AI suggestions in idiosyncratic ways
- The "multiple worlds" experimental design (each participant saw different AI ideas) prevented convergence
**Scope qualifiers:**
- Single task type (Alternate Uses Task)
- Experimental setting with controlled AI exposure
- Short-term effects only
- Does not address naturalistic usage patterns
## Challenges to Homogenization Narrative
This finding appears to contradict studies showing AI homogenizes creative output (e.g., ScienceDirect 2025 study on LLM creative diversity). The key difference:
- **Homogenization studies:** Naturalistic settings where users converge on similar AI outputs
- **This study:** Controlled exposure where each participant receives different AI ideas
Both findings can be true: AI can homogenize when users access the same outputs, but diversify when used as a source of varied external input.
## Implications for Collective Intelligence
This connects to [[partial connectivity produces better collective intelligence than full connectivity]] — AI may function as a controlled diversity injection mechanism, similar to how partial connectivity prevents premature convergence while maintaining enough information flow.
The finding supports [[collective intelligence requires diversity as a structural precondition]] by demonstrating that external diversity sources (AI) can substitute for or complement human diversity in collective tasks.
## Relevant Notes
- [[deep technical expertise is a greater force multiplier than AI assistance]] — this finding cuts against simple skill-amplification stories; AI's value may be in diversity injection rather than individual capability enhancement
- Flagged for Clay: implications for creative industries and entertainment production