auto-fix: address review feedback on PR #484

- Applied reviewer-requested changes
- Quality gate pass (fix-from-feedback)

Pentagon-Agent: Auto-Fix <HEADLESS>
Teleo Agents 2026-03-11 09:22:15 +00:00
parent b0bd118a7b
commit 1924104bb2
3 changed files with 201 additions and 0 deletions


@@ -0,0 +1,65 @@
---
type: claim
title: AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In experimental creativity tasks, the correlation between AI exposure and idea adoption was stronger on difficult tasks (ρ=0.8) than on easy tasks (ρ=0.3) even when the AI source was explicitly labeled, suggesting disclosure does not suppress AI adoption where participants most need help.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "AI Ideas Decrease Individual Creativity but Increase Collective Diversity"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[ai-ideas-increase-collective-diversity-experimental]]"
challenged_by:
- "[[deep technical expertise is a greater force multiplier than AI assistance]]"
---
# AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
Doshi & Hauser (2025) found that even when AI-generated ideas were explicitly labeled as AI-generated, participants still adopted them, and the correlation between AI exposure and adoption was far stronger on difficult tasks (ρ=0.8) than on easy tasks (ρ=0.3).
## Key Finding
**Adoption rates by difficulty (disclosed condition):**
- Difficult tasks: ρ=0.8 correlation between AI exposure and adoption
- Easy tasks: ρ=0.3 correlation between AI exposure and adoption
- AI source was explicitly labeled in both conditions

**Interpretation:**
- Disclosure did not suppress AI adoption where participants most needed help (difficult tasks)
- Participants appeared to use task difficulty as a heuristic for when to rely on AI
- This suggests rational/strategic AI use rather than blind adoption or blanket rejection
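The stratified correlations above are rank correlations (Spearman's ρ). As a hedged sketch, ρ can be computed as the Pearson correlation of the two rank vectors; the exposure and adoption values below are invented placeholders, not data from the study.

```python
# Spearman's rho: Pearson correlation applied to average ranks.
# The exposure/adoption values below are invented placeholders,
# not data from Doshi & Hauser.

def average_ranks(values):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Rank both vectors, then take their Pearson correlation."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant AI exposure counts and adoption counts:
exposure = [0, 0, 2, 2, 10, 10]
adoption = [0, 1, 1, 2, 4, 5]
rho = spearman_rho(exposure, adoption)
```

In the study's framing, this statistic would be computed separately within the difficult-task and easy-task strata, yielding the two reported values.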
## Implications for Disclosure Policies
This finding complicates simple "just disclose AI" policies:
- Disclosure alone does not prevent AI reliance
- Users may rationally choose to rely on AI when tasks are difficult
- The question shifts from "does disclosure reduce AI use" to "when should AI use be encouraged/discouraged"
## Scope Qualifiers
- Single task type (Alternate Uses Task)
- Experimental setting with explicit labeling
- Self-reported adoption measures
- Does not address long-term effects or skill atrophy
- Does not compare disclosed vs. non-disclosed conditions across difficulty levels
## Tension with Skill Development
This finding creates tension with [[deep technical expertise is a greater force multiplier than AI assistance]] — if users adopt AI most on difficult tasks (where they most need to develop expertise), this could create a deskilling dynamic where AI prevents learning at precisely the difficulty level where learning is most valuable.
The "rational" adoption pattern (use AI when tasks are hard) may be individually rational but collectively problematic if it prevents skill development.
## Relevant Notes
- Potential connection to AI deskilling literature (if claims exist in KB)
- Flagged for implications on AI disclosure policy design


@@ -0,0 +1,70 @@
---
type: claim
title: High AI exposure can make AI a diversity injector under experimental conditions
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In controlled experimental settings, high exposure to varied AI-generated ideas (10 ideas per participant) increased collective diversity more than low exposure (2 ideas), suggesting AI can function as a diversity source when exposure is high and varied.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "AI Ideas Decrease Individual Creativity but Increase Collective Diversity"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[ai-ideas-increase-collective-diversity-experimental]]"
---
# High AI exposure can make AI a diversity injector under experimental conditions
Doshi & Hauser (2025) found a dose-response relationship: participants exposed to 10 AI-generated ideas showed significantly higher collective diversity than those exposed to 2 AI ideas, who in turn showed higher diversity than control participants with no AI exposure.
## Dose-Response Pattern
**Collective diversity by condition:**
- High AI exposure (10 ideas): highest collective diversity
- Low AI exposure (2 ideas): intermediate diversity
- Control (0 AI ideas): lowest collective diversity
- Effect size: d=0.42 (high vs. control)
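The d=0.42 effect size is a standardized mean difference (Cohen's d). A minimal sketch with a pooled standard deviation, using made-up diversity scores rather than the study's data:

```python
# Cohen's d: difference in group means divided by the pooled SD.
# The group scores below are made up for illustration only.

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical collective-diversity scores per condition:
high_exposure = [0.62, 0.70, 0.66, 0.74]
control = [0.58, 0.64, 0.60, 0.66]
d = cohens_d(high_exposure, control)
```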

**Individual creativity did not follow this pattern:**
- Individual fluency, flexibility, and originality showed no dose-response
- Some individual metrics decreased with AI exposure
- The diversity effect was purely collective-level
## Mechanism: Volume and Variety
The dose-response suggests two factors:
1. **Volume:** More AI ideas provide more potential diversity sources
2. **Variety:** The "multiple worlds" design ensured each participant saw different AI ideas, preventing convergence

This implies AI's diversity-injection potential depends on:
- High exposure volume
- Varied content across users
- Controlled distribution (not everyone seeing the same outputs)
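The conditions above can be sketched as a "multiple worlds" assignment scheme: each participant draws their own random subset from a shared AI-idea pool, so exposure is high-volume and varied rather than identical across users. The pool contents and sizes here are illustrative assumptions, not the study's materials.

```python
import random

# "Multiple worlds" assignment: every participant sees a different random
# subset of the AI-idea pool, preventing convergence on identical outputs.
# Pool contents and sizes are illustrative, not taken from the study.

def assign_worlds(idea_pool, n_participants, ideas_per_participant, seed=0):
    rng = random.Random(seed)
    return [rng.sample(idea_pool, ideas_per_participant)
            for _ in range(n_participants)]

pool = [f"idea_{i:03d}" for i in range(200)]  # hypothetical AI-idea pool
worlds = assign_worlds(pool, n_participants=30, ideas_per_participant=10)
```

Seeding the generator makes the assignment reproducible, which matters for a pre-registered design.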
## Scope Qualifiers
- Experimental setting only
- Single task type (Alternate Uses Task)
- Controlled exposure (researchers selected which AI ideas participants saw)
- Does not reflect naturalistic usage where users may converge on popular AI outputs
## Implications
This finding suggests AI could be deliberately deployed as a diversity mechanism in collective intelligence systems, but only if:
- Exposure is high enough
- Content is varied across participants
- Distribution prevents convergence on identical outputs

The contrast with naturalistic homogenization findings suggests deployment design matters more than AI capabilities per se.
## Relevant Notes
- Connection to [[partial connectivity produces better collective intelligence than full connectivity]] — AI as controlled diversity source
- Potential application to [[collective intelligence requires diversity as a structural precondition]]


@@ -0,0 +1,66 @@
---
type: claim
title: AI-generated ideas increase collective diversity in experimental creativity tasks
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In a pre-registered experiment with 800+ participants across 40+ countries, exposure to AI-generated ideas increased collective diversity on the Alternate Uses Task, even as individual creativity metrics remained unchanged or decreased.
created: 2025-01-15
processed_date: 2025-01-15
source:
type: paper
title: "AI Ideas Decrease Individual Creativity but Increase Collective Diversity"
authors: [Doshi, Hauser]
year: 2025
venue: arXiv
arxiv_id: 2401.13481v3
url: https://arxiv.org/abs/2401.13481v3
preregistered: true
depends_on:
- "[[partial connectivity produces better collective intelligence than full connectivity]]"
- "[[collective intelligence requires diversity as a structural precondition]]"
challenged_by:
- "[[homogenization effect of large language models on creative diversity]]"
---
# AI-generated ideas increase collective diversity in experimental creativity tasks
In a pre-registered experiment (N=810, 40+ countries), Doshi & Hauser (2025) found that exposure to AI-generated ideas increased collective diversity on the Alternate Uses Task, even though individual creativity metrics (fluency, flexibility, originality) remained unchanged or decreased.
## Key Findings
**Collective diversity increased with AI exposure:**
- High AI exposure (10 AI ideas) produced significantly higher collective diversity than low exposure (2 AI ideas) or control conditions
- Effect held across multiple diversity metrics (semantic distance, category coverage)
- Individual-level creativity did not increase; the effect was purely collective
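The two diversity metrics named above can be sketched as follows; the embedding vectors and category labels are hypothetical stand-ins, since the study's actual featurization is not reproduced here.

```python
# Two collective-diversity proxies: mean pairwise semantic distance over
# idea embeddings, and the fraction of idea categories the group covered.
# All inputs are hypothetical stand-ins for the study's featurization.

def mean_pairwise_distance(embeddings):
    """Average Euclidean distance over all unordered idea pairs."""
    n = len(embeddings)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = sum((a - b) ** 2
                    for a, b in zip(embeddings[i], embeddings[j])) ** 0.5
            total += d
            pairs += 1
    return total / pairs

def category_coverage(labels, all_categories):
    """Share of known categories touched by at least one idea."""
    return len(set(labels) & set(all_categories)) / len(all_categories)
```

Both are group-level statistics: they are computed over the pooled output of a condition, which is why they can rise even when no individual's creativity scores improve.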

**Mechanism: AI as external diversity source:**
- AI ideas introduced variation orthogonal to human ideation patterns
- Participants incorporated AI suggestions in idiosyncratic ways
- The "multiple worlds" experimental design (each participant saw different AI ideas) prevented convergence

**Scope qualifiers:**
- Single task type (Alternate Uses Task)
- Experimental setting with controlled AI exposure
- Short-term effects only
- Does not address naturalistic usage patterns
## Challenges to Homogenization Narrative
This finding appears to contradict studies showing AI homogenizes creative output (e.g., ScienceDirect 2025 study on LLM creative diversity). The key difference:
- **Homogenization studies:** Naturalistic settings where users converge on similar AI outputs
- **This study:** Controlled exposure where each participant receives different AI ideas

Both findings can be true: AI can homogenize when users access the same outputs, but diversify when used as a source of varied external input.
## Implications for Collective Intelligence
This connects to [[partial connectivity produces better collective intelligence than full connectivity]] — AI may function as a controlled diversity injection mechanism, similar to how partial connectivity prevents premature convergence while maintaining enough information flow.
The finding supports [[collective intelligence requires diversity as a structural precondition]] by demonstrating that external diversity sources (AI) can substitute for or complement human diversity in collective tasks.
## Relevant Notes
- [[deep technical expertise is a greater force multiplier than AI assistance]] — this finding cuts against simple skill-amplification stories; AI's value may be in diversity injection rather than individual capability enhancement
- Flagged for Clay: implications for creative industries and entertainment production