---
type: claim
title: AI idea adoption correlates with task difficulty even when the source is explicitly disclosed
confidence: experimental
domains:
  - ai-alignment
secondary_domains:
  - collective-intelligence
  - cultural-dynamics
description: >-
  In experimental creativity tasks, participants adopted AI-generated ideas
  more frequently on difficult tasks (ρ=0.8) than easy tasks (ρ=0.3) even when
  the AI source was explicitly labeled, suggesting disclosure does not suppress
  AI adoption where participants most need help.
created: 2025-01-15
processed_date: 2025-01-15
source:
  type: paper
  title: AI Ideas Decrease Individual Creativity but Increase Collective Diversity
  authors:
    - Doshi
    - Hauser
  year: 2025
  venue: arXiv
  arxiv_id: 2401.13481v3
  url: https://arxiv.org/abs/2401.13481v3
  preregistered: true
depends_on:
  - ai-ideas-increase-collective-diversity-experimental
challenged_by:
  - deep technical expertise is a greater force multiplier than AI assistance
---

# AI idea adoption correlates with task difficulty even when the source is explicitly disclosed

Doshi & Hauser (2025) found that even when AI-generated ideas were explicitly labeled as AI-generated, participants still adopted them, and the correlation between AI exposure and adoption was much stronger on difficult tasks (ρ=0.8) than on easy tasks (ρ=0.3).

## Key Finding

Correlation between AI exposure and adoption, by difficulty (disclosed condition; a small sketch follows the list):

- Difficult tasks: ρ=0.8 correlation between AI exposure and adoption
- Easy tasks: ρ=0.3 correlation between AI exposure and adoption
- The AI source was explicitly labeled in both conditions
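
To make the statistic concrete, here is a minimal sketch of how a within-condition Spearman ρ between exposure and adoption could be computed. The data and variable names are hypothetical; this is not the authors' analysis code.

```python
# Hypothetical illustration of the reported statistic: Spearman's rho between
# AI-idea exposure and adoption, computed separately per difficulty condition.
# The data below is invented and NOT from Doshi & Hauser (2025).
from scipy.stats import spearmanr

# Per participant: (labeled AI ideas shown, AI ideas adopted) -- hypothetical.
trials = {
    "difficult": [(0, 0), (1, 1), (2, 1), (3, 3), (4, 3), (5, 5)],
    "easy":      [(0, 1), (1, 0), (2, 2), (3, 1), (4, 1), (5, 2)],
}

for condition, pairs in trials.items():
    exposure = [shown for shown, _ in pairs]
    adoption = [adopted for _, adopted in pairs]
    rho, p_value = spearmanr(exposure, adoption)
    print(f"{condition}: rho = {rho:.2f}, p = {p_value:.3f}")
```

Spearman's ρ is rank-based, so it captures any monotone relationship between exposure and adoption without assuming linearity.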

**Interpretation:**

- Disclosure did not suppress AI adoption where participants most needed help (difficult tasks)
- Participants appeared to use task difficulty as a heuristic for when to rely on AI
- This suggests rational, strategic AI use rather than blind adoption or blanket rejection

## Implications for Disclosure Policies

This finding complicates simple "just disclose AI" policies:

- Disclosure alone does not prevent AI reliance
- Users may rationally choose to rely on AI when tasks are difficult
- The question shifts from "does disclosure reduce AI use?" to "when should AI use be encouraged or discouraged?"

## Scope Qualifiers

- Single task type (Alternate Uses Task)
- Experimental setting with explicit labeling
- Self-reported adoption measures
- Does not address long-term effects or skill atrophy
- Does not compare disclosed vs. non-disclosed conditions across difficulty levels

## Tension with Skill Development

This finding is in tension with the claim *deep technical expertise is a greater force multiplier than AI assistance*: if users adopt AI most on difficult tasks (exactly where they most need to develop expertise), AI could block learning at precisely the difficulty level where learning is most valuable, creating a deskilling dynamic.

The "rational" adoption pattern (use AI when tasks are hard) may be individually rational but collectively problematic if it prevents skill development.

## Relevant Notes

- Potential connection to AI deskilling literature (if such claims exist in the KB; see the traversal sketch below)
- Flagged for implications on AI disclosure policy design
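
A minimal sketch of how such connections could be surfaced mechanically, assuming claims live as frontmattered markdown files under teleo-codex/inbox/claims/ as this one does. The helper below and the python-frontmatter dependency are illustrative choices, not existing KB tooling.

```python
# Hypothetical helper: surface claim-graph edges (depends_on / challenged_by)
# from frontmattered claim files. Requires `pip install python-frontmatter`.
from pathlib import Path

import frontmatter  # python-frontmatter package


def claim_edges(claims_dir: str) -> list[tuple[str, str, str]]:
    """Collect (claim, relation, target) edges from claim frontmatter."""
    edges = []
    for path in sorted(Path(claims_dir).glob("*.md")):
        post = frontmatter.load(str(path))
        for relation in ("depends_on", "challenged_by"):
            targets = post.get(relation) or []
            if isinstance(targets, str):  # tolerate scalar or list values
                targets = [targets]
            for target in targets:
                edges.append((path.stem, relation, target))
    return edges


for claim, relation, target in claim_edges("teleo-codex/inbox/claims"):
    print(f"{claim} --{relation}--> {target}")
```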