diff --git a/domains/ai-alignment/high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects.md b/domains/ai-alignment/high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects.md
new file mode 100644
index 00000000..ad1c9fbb
--- /dev/null
+++ b/domains/ai-alignment/high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects.md
@@ -0,0 +1,43 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [collective-intelligence, cultural-dynamics]
+description: "Pre-registered experiment (800+ participants, 40+ countries) found collective diversity rose (Cliff's Delta=0.31, p=0.001) while individual creativity was unchanged (F(4,19.86)=0.12, p=0.97) — AI made ideas different, not better"
+confidence: experimental
+source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
+created: 2026-03-11
+depends_on:
+  - "collective intelligence requires diversity as a structural precondition not a moral preference"
+  - "partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity"
+challenged_by:
+  - "Homogenizing Effect of Large Language Models on Creative Diversity (ScienceDirect, 2025) — naturalistic study of 2,200 admissions essays found AI-inspired stories more similar to each other than human-only stories, with the homogenization gap widening at scale"
+---
+
+# high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects
+
+The dominant narrative — that AI homogenizes human thought — is empirically wrong under at least one important condition. Doshi and Hauser (2025) ran a large-scale pre-registered experiment using the Alternate Uses Task (generating creative uses for everyday objects) with 800+ participants across 40+ countries. Their "multiple-worlds" design let ideas from prior participants feed forward to subsequent trials, simulating the cascading spread of AI influence over time.
+
+The central finding is a paradox: **high AI exposure increased collective diversity** (Cliff's Delta = 0.31, p = 0.001) while having **no effect on individual creativity** (F(4,19.86) = 0.12, p = 0.97). The paper's one-line summary is accurate: "AI made ideas different, not better."
+
+The distinction between individual and collective effects matters enormously for how we design AI systems. Individual quality (fluency, flexibility, originality scores) didn't improve — participants weren't getting better at creative thinking by seeing AI ideas. But the population-level distribution of ideas became more diverse. These are different measurements, and the divergence between them is the novel finding.
+
+This directly complicates the homogenization argument. If AI systematically made ideas more similar, collective diversity would have declined — but it rose. The mechanism appears to be that AI ideas introduce variation that human-to-human copying would not have produced, disrupting the natural tendency toward convergence (see companion claim on baseline human convergence).
+
+**Scope qualifier:** This finding holds at the experimental exposure levels tested (low/high AI exposure in a controlled task). It may not generalize to naturalistic settings at scale, where homogenization has been observed (ScienceDirect 2025 admissions essay study). The relationship is architecture-dependent, not inherently directional.
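To make the headline effect size concrete: Cliff's Delta is a nonparametric effect size that compares two samples by counting, over all cross-sample pairs, how often one sample's values exceed the other's. A minimal sketch of the computation, using made-up diversity scores rather than the study's data:

```python
from itertools import product

def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all cross-sample pairs.

    Ranges from -1 to 1; 0 means the two samples overlap completely.
    """
    greater = sum(1 for x, y in product(xs, ys) if x > y)
    lesser = sum(1 for x, y in product(xs, ys) if x < y)
    return (greater - lesser) / (len(xs) * len(ys))

# Hypothetical pairwise idea-distance scores (higher = more diverse);
# illustrative numbers only, not values from Doshi & Hauser.
high_exposure = [0.62, 0.71, 0.58, 0.80, 0.66]
control = [0.51, 0.55, 0.48, 0.60, 0.53]
print(cliffs_delta(high_exposure, control))  # positive: high-exposure sample tends higher
```

A delta of 0.31, as reported for collective diversity, means a value drawn from the high-exposure condition exceeds one drawn from the control condition noticeably more often than the reverse.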
+
+## Evidence
+- Doshi & Hauser (2025), arXiv:2401.13481v3 — primary experimental results
+- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — confirms why the collective-level diversity finding matters
+
+## Challenges
+The ScienceDirect (2025) study of 2,200 admissions essays found the opposite effect: LLM-inspired stories were more similar to each other than human-only stories, and the gap widened at scale. Both findings can be correct if the direction of AI's effect on diversity depends on exposure architecture (high vs. naturalistic saturation) and task type (constrained creative task vs. open writing).
+
+---
+
+Relevant Notes:
+- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — this claim provides experimental evidence that AI can, under the right conditions, satisfy this precondition rather than undermine it
+- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — AI may function as an external diversity source that substitutes for topological partial connectivity
+- [[AI is collapsing the knowledge-producing communities it depends on creating a self-undermining loop that collective intelligence can break]] — complicated by this finding: AI may not uniformly collapse diversity; it may generate it under high-exposure conditions while collapsing it in naturalistic, saturated settings
+
+Topics:
+- [[domains/ai-alignment/_map]]
diff --git a/domains/ai-alignment/human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions.md b/domains/ai-alignment/human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions.md
new file mode 100644
index 00000000..e8503852
--- /dev/null
+++ b/domains/ai-alignment/human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions.md
@@ -0,0 +1,40 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [collective-intelligence, cultural-dynamics]
+description: "Without AI, participants' ideas converged over time (β=-0.39, p=0.03); with AI exposure, diversity increased (β=0.53-0.57, p<0.03) — reframes the question from 'does AI reduce diversity?' to 'does AI disrupt natural human convergence?'"
+confidence: experimental
+source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
+created: 2026-03-11
+depends_on:
+  - "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
+  - "partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity"
+---
+
+# human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions
+
+The baseline assumption in AI-diversity debates is that human creativity is naturally diverse and AI threatens to collapse it. The Doshi-Hauser experiment inverts this. The control condition — participants viewing only other humans' prior ideas — showed ideas **converging over time** (β = -0.39, p = 0.03). Human social learning, when operating without external disruption, tends toward premature convergence on popular solutions.
+
+AI exposure broke this convergence. Under high AI exposure, diversity increased over time (β = 0.53-0.57, p < 0.03). The AI ideas introduced variation that the human chain alone would not have generated.
+
+This reframes the normative question entirely. The relevant comparison is not "AI vs. pristine human diversity" — it's "AI vs. the convergence that human copying produces." If human social learning already suppresses diversity through imitation dynamics, then AI exposure may represent a net improvement over the realistic counterfactual.
+
+**Why this happens mechanically:** In the multiple-worlds design, ideas that spread early in the chain bias subsequent generations toward similar solutions. This is the well-documented rich-get-richer dynamic in cultural evolution — popular ideas attract more copies, which makes them more popular. AI examples, introduced from outside this social chain, are not subject to the same selection pressure and therefore inject independent variation.
+
+This connects to [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]]: AI may function as an external diversity source analogous to weak ties in a partially connected network. The AI examples come from outside the local social chain, disrupting the convergence that full human-to-human connectivity would produce.
+
+**Scope qualifier:** This convergence effect is measured within an experimental session using a constrained creativity task. The timescale of convergence in naturalistic, long-term creative communities may differ significantly. Cultural fields may have additional mechanisms (novelty norms, competitive differentiation) that resist convergence even without AI.
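The rich-get-richer dynamic can be sketched as a toy transmission-chain simulation (an illustration of the mechanism described above, not the study's actual model): agents copy ideas in proportion to popularity, which collapses diversity, unless an outside variant is injected each generation.

```python
import random
from collections import Counter

def run_chain(generations=30, pool=20, inject_external=False, seed=0):
    """Toy transmission chain: each generation copies ideas from the previous
    pool with probability proportional to popularity (rich-get-richer).
    Optionally inject one outside idea per generation, standing in for an
    external source such as AI examples. Returns distinct-idea counts."""
    rng = random.Random(seed)
    ideas = list(range(pool))          # generation 0: every idea distinct
    diversity = [len(set(ideas))]
    next_label = pool                  # fresh labels for injected ideas
    for _ in range(generations):
        counts = Counter(ideas)
        population, weights = zip(*counts.items())
        ideas = rng.choices(population, weights=weights, k=pool)
        if inject_external:
            ideas[rng.randrange(pool)] = next_label  # outside variant
            next_label += 1
        diversity.append(len(set(ideas)))
    return diversity

human_only = run_chain(inject_external=False)
with_injection = run_chain(inject_external=True)
# Copying alone can only shrink the idea set; injection sustains variation.
print(human_only[-1], with_injection[-1])
```

Without injection the distinct-idea count is monotonically non-increasing (copying can never reintroduce a lost idea), which is the convergence the control condition exhibited; the injected variants play the role the AI examples played in the experiment.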
+
+## Evidence
+- Doshi & Hauser (2025), arXiv:2401.13481v3 — β = -0.39 for human-only convergence; β = 0.53-0.57 for AI-exposed diversity increase
+- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — the network-science basis for why external variation disrupts convergence
+
+---
+
+Relevant Notes:
+- [[high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects]] — the companion finding: not only does AI disrupt convergence, it does so without improving individual quality
+- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — if human social learning naturally converges, maintaining collective diversity requires active intervention — AI under some conditions provides this
+- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — AI as an external diversity source parallels the function of partial network connectivity
+
+Topics:
+- [[domains/ai-alignment/_map]]
diff --git a/domains/ai-alignment/task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled.md b/domains/ai-alignment/task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled.md
new file mode 100644
index 00000000..ac3cb5f7
--- /dev/null
+++ b/domains/ai-alignment/task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled.md
@@ -0,0 +1,37 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [collective-intelligence]
+description: "When the AI source was explicitly disclosed, adoption was stronger for difficult tasks (ρ=0.8) than easy ones (ρ=0.3) — disclosure did not suppress AI adoption where participants most needed help"
+confidence: experimental
+source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
+created: 2026-03-11
+depends_on:
+  - "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
+---
+
+# task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled
+
+The standard policy intuition for managing AI influence is disclosure: label AI-generated content and users will moderate their adoption. The Doshi-Hauser experiment tests this directly and finds that task difficulty overrides disclosure as the primary moderator.
+
+When participants were explicitly told an idea came from AI, adoption for difficult prompts remained high (ρ = 0.8) while adoption for easy prompts was substantially lower (ρ = 0.3). Disclosure shifted adoption on easy tasks but not difficult ones.
+
+The implication is that **disclosure primarily protects cognitive domains where participants already have independent capability**. Where participants find a problem hard — where they most depend on external scaffolding — AI labeling has limited effect on adoption behavior. The disclosed AI source is still adopted at high rates because the alternative is struggling with a difficult problem unaided.
+
+A related moderator: self-perceived creativity. Participants who rated themselves as highly creative adopted AI ideas at high rates regardless of whether the source was disclosed. Lower-creativity participants showed reduced adoption when AI was disclosed (Δ = 7.77, p = 0.03). The disclosure mechanism primarily works on participants who already feel competent to generate alternatives — exactly those who might be less influenced by AI in any case.
+
+**The combined picture:** Disclosure policies reduce AI adoption for easy tasks among people who feel capable. They have limited effect on the populations and task types where AI adoption poses the greatest risk of skill atrophy and diversity collapse — hard problems solved by people who feel less capable.
+
+**Scope qualifier:** This is a single experimental study using a constrained creativity task (Alternate Uses Task). Effect sizes and the easy/difficult distinction are task-specific. The ρ values measure within-condition correlations, not effect magnitudes across conditions.
+
+## Evidence
+- Doshi & Hauser (2025), arXiv:2401.13481v3 — disclosure × difficulty interaction; ρ = 0.8 for difficult, ρ = 0.3 for easy prompts; self-perceived creativity moderator Δ = 7.77, p = 0.03
+
+---
+
+Relevant Notes:
+- [[high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects]] — difficulty-driven AI reliance is part of the mechanism behind collective diversity changes
+- [[deep technical expertise is a greater force multiplier when combined with AI agents because skilled practitioners delegate more effectively than novices]] — this finding cuts against simple skill-amplification stories: on difficult tasks, everyone increases AI adoption, not just experts
+
+Topics:
+- [[domains/ai-alignment/_map]]
diff --git a/inbox/archive/2025-01-00-doshi-hauser-ai-ideas-creativity-diversity.md b/inbox/archive/2025-01-00-doshi-hauser-ai-ideas-creativity-diversity.md
index 8672fe3d..52d178a9 100644
--- a/inbox/archive/2025-01-00-doshi-hauser-ai-ideas-creativity-diversity.md
+++ b/inbox/archive/2025-01-00-doshi-hauser-ai-ideas-creativity-diversity.md
@@ -7,7 +7,16 @@ date: 2025-01-01
 domain: ai-alignment
 secondary_domains: [collective-intelligence, cultural-dynamics]
 format: paper
-status: unprocessed
+status: processed
+processed_by: theseus
+processed_date: 2026-03-11
+claims_extracted:
+  - "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
+  - "human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high-exposure conditions"
+  - "task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled"
+enrichments:
+  - "challenged_by field added to claim 1 referencing homogenization paper (ScienceDirect 2025)"
+  - "partial connectivity claim enriched with AI-as-external-diversity-source framing"
 priority: high
 tags: [homogenization, diversity-paradox, AI-creativity, collective-diversity, individual-creativity]
 flagged_for_clay: ["implications for creative industries — AI makes ideas different but not better"]