| type | title | confidence | domains | secondary_domains | description | created | processed_date | source | depends_on | challenged_by |
|---|---|---|---|---|---|---|---|---|---|---|
| claim | AI idea adoption correlates with task difficulty even when the source is explicitly disclosed | experimental | | | In experimental creativity tasks, participants adopted AI-generated ideas more frequently on difficult tasks (ρ=0.8) than easy tasks (ρ=0.3) even when the AI source was explicitly labeled, suggesting disclosure does not suppress AI adoption where participants most need help. | 2025-01-15 | 2025-01-15 | | | |
# AI idea adoption correlates with task difficulty even when the source is explicitly disclosed

Doshi & Hauser (2025) found that even when AI-generated ideas were explicitly labeled as such, participants still adopted them, and adoption tracked task difficulty: the correlation between AI exposure and adoption was ρ=0.8 on difficult tasks vs. ρ=0.3 on easy tasks.
## Key Finding
Adoption rates by difficulty (disclosed condition):
- Difficult tasks: ρ=0.8 correlation between AI exposure and adoption
- Easy tasks: ρ=0.3 correlation between AI exposure and adoption
- AI source was explicitly labeled in both conditions
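The ρ values above are Spearman rank correlations. As a minimal illustration of how such a statistic is computed (the numbers below are made-up toy data, not the study's), a pure-Python sketch:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rho = Pearson correlation applied to the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Toy data: per-participant (AI exposure, ideas adopted).
exposure = [0, 1, 2, 3, 4]
adopted_difficult = [1, 2, 4, 5, 7]  # adoption rises monotonically with exposure
adopted_easy = [2, 1, 3, 2, 4]       # adoption only loosely tracks exposure

print("difficult:", round(spearman(exposure, adopted_difficult), 2))
print("easy:     ", round(spearman(exposure, adopted_easy), 2))
```

A perfectly monotone relationship yields ρ=1.0; the noisier "easy" data yields a weaker ρ, mirroring the difficult-vs-easy gap in the finding.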
Interpretation:
- Disclosure did not suppress AI adoption where participants most needed help (difficult tasks)
- Participants appeared to use task difficulty as a heuristic for when to rely on AI
- This suggests rational/strategic AI use rather than blind adoption or blanket rejection
## Implications for Disclosure Policies
This finding complicates simple "just disclose AI" policies:
- Disclosure alone does not prevent AI reliance
- Users may rationally choose to rely on AI when tasks are difficult
- The question shifts from "does disclosure reduce AI use" to "when should AI use be encouraged/discouraged"
## Scope Qualifiers
- Single task type (Alternate Uses Task)
- Experimental setting with explicit labeling
- Self-reported adoption measures
- Does not address long-term effects or skill atrophy
- Does not compare disclosed vs. non-disclosed conditions across difficulty levels
## Tension with Skill Development

This finding creates tension with the claim that *deep technical expertise is a greater force multiplier than AI assistance*: if users adopt AI most on difficult tasks (precisely where they most need to develop expertise), this could create a deskilling dynamic in which AI prevents learning at the difficulty level where learning is most valuable.
The "rational" adoption pattern (use AI when tasks are hard) may be individually rational but collectively problematic if it prevents skill development.
## Relevant Notes
- Potential connection to AI deskilling literature (if claims exist in KB)
- Flagged for implications on AI disclosure policy design