---
type: claim
title: AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
confidence: experimental
domains: [ai-alignment]
secondary_domains: [collective-intelligence, cultural-dynamics]
description: In experimental creativity tasks, participants adopted AI-generated ideas more frequently on difficult tasks (ρ=0.8) than on easy tasks (ρ=0.3), even when the AI source was explicitly labeled, suggesting that disclosure does not suppress AI adoption where participants most need help.
created: 2025-01-15
processed_date: 2025-01-15
source:
  type: paper
  title: "AI Ideas Decrease Individual Creativity but Increase Collective Diversity"
  authors: [Doshi, Hauser]
  year: 2025
  venue: arXiv
  arxiv_id: 2401.13481v3
  url: https://arxiv.org/abs/2401.13481v3
  preregistered: true
depends_on:
  - "[[ai-ideas-increase-collective-diversity-experimental]]"
challenged_by:
  - "[[deep technical expertise is a greater force multiplier than AI assistance]]"
---

# AI idea adoption correlates with task difficulty even when the source is explicitly disclosed

Doshi & Hauser (2025) found that even when AI-generated ideas were explicitly labeled as such, participants still adopted them, and the correlation between AI exposure and adoption was far stronger on difficult tasks (ρ=0.8) than on easy tasks (ρ=0.3).

## Key Finding

**Adoption rates by difficulty (disclosed condition):**
- Difficult tasks: ρ=0.8 correlation between AI exposure and adoption
- Easy tasks: ρ=0.3 correlation between AI exposure and adoption
- AI source was explicitly labeled at both difficulty levels (see the sketch below for how such a correlation can be computed)
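
As a minimal illustration (not the authors' analysis code, and with hypothetical counts rather than the study's data), here is how a Spearman rank correlation between AI exposure and idea adoption could be computed:

```python
# Hedged sketch: Spearman's rho between AI exposure and idea adoption.
# All counts below are hypothetical placeholders, not the study's data.
from scipy.stats import spearmanr

# Per-participant counts for one difficulty level:
ai_ideas_shown = [0, 1, 2, 3, 4, 5, 5, 3]    # AI ideas each participant saw
ai_ideas_adopted = [0, 1, 1, 2, 3, 4, 5, 2]  # AI ideas they incorporated

# spearmanr ranks both series and correlates the ranks;
# a value near 0.8 would indicate a strong exposure-adoption link.
rho, p_value = spearmanr(ai_ideas_shown, ai_ideas_adopted)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```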

**Interpretation:**
- Disclosure did not suppress AI adoption where participants most needed help (difficult tasks)
- Participants appeared to use task difficulty as a heuristic for when to rely on AI
- This suggests rational/strategic AI use rather than blind adoption or blanket rejection

## Implications for Disclosure Policies

This finding complicates simple "just disclose AI" policies:
- Disclosure alone does not prevent AI reliance
- Users may rationally choose to rely on AI when tasks are difficult
- The question shifts from "does disclosure reduce AI use?" to "when should AI use be encouraged or discouraged?"

## Scope Qualifiers

- Single task type (Alternate Uses Task)
- Experimental setting with explicit labeling
- Self-reported adoption measures
- Does not address long-term effects or skill atrophy
- Does not compare disclosed vs. non-disclosed conditions across difficulty levels

## Tension with Skill Development

This finding creates tension with [[deep technical expertise is a greater force multiplier than AI assistance]]: if users adopt AI most on difficult tasks (where they most need to develop expertise), this could create a deskilling dynamic where AI prevents learning at precisely the difficulty level where learning is most valuable.

Adopting AI precisely when tasks are hard may be individually rational but collectively problematic if it prevents skill development.

## Relevant Notes

- Potential connection to AI deskilling literature (if claims exist in KB)
- Flagged for implications on AI disclosure policy design