---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence]
description: "When AI source was explicitly disclosed, adoption was stronger for difficult tasks (ρ=0.8) than easy ones (ρ=0.3) — disclosure did not suppress AI adoption where participants most needed help"
confidence: experimental
source: "Theseus, from Doshi & Hauser (2025), 'How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas'"
created: 2026-03-11
depends_on:
- "high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects"
---
# task difficulty moderates AI idea adoption more than source disclosure with difficult problems generating AI reliance regardless of whether the source is labeled
The standard policy intuition for managing AI influence is disclosure: label AI-generated content and users will moderate their adoption. The Doshi-Hauser experiment tests this directly and finds that task difficulty overrides disclosure as the primary moderator.
When participants were explicitly told an idea came from AI, adoption for difficult prompts remained high (ρ = 0.8) while adoption for easy prompts was substantially lower (ρ = 0.3). Disclosure shifted adoption on easy tasks but not difficult ones.
The implication is that **disclosure primarily protects cognitive domains where participants already have independent capability**. Where participants find a problem hard — where they most depend on external scaffolding — AI labeling has limited effect on adoption behavior. The disclosed AI source is still adopted at high rates because the alternative is struggling with a difficult problem unaided.
A related moderator: self-perceived creativity. Participants who rated their own creativity highly adopted AI ideas at high rates regardless of whether the source was disclosed. Participants with lower self-rated creativity showed reduced adoption when AI was disclosed (Δ = 7.77, p = 0.03). The disclosure mechanism primarily works on participants who already feel competent to generate alternatives — exactly those who might be less influenced by AI in any case.
**The combined picture:** Disclosure policies reduce AI adoption for easy tasks among people who feel capable. Disclosure policies have limited effect on the populations and task types where AI adoption poses the greatest risk of skill atrophy and diversity collapse — hard problems solved by people who feel less capable.
**Scope qualifier:** This is a single experimental study using a constrained creativity task (Alternate Uses Task). Effect sizes and the easy/difficult distinction are task-specific. The ρ values measure within-condition correlations, not effect magnitudes across conditions.
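To make the scope qualifier concrete: a Spearman ρ is a rank correlation, so ρ = 0.8 within the difficult condition says adoption tracks the rank ordering of the measured variable tightly, while ρ = 0.3 says only loosely. A minimal pure-Python sketch of the statistic (illustrative toy data only, not the study's dataset):

```python
# Illustrative sketch of what a within-condition Spearman rho measures.
# The data below are invented toy values, not from Doshi & Hauser (2025).

def ranks(values):
    """Return 1-based average ranks (ties share the mean of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy within-condition data: a nearly monotone relationship (high rho)
# versus a noisy one (low rho). Perfect monotone data gives rho = 1.0.
assert abs(spearman([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]) - 1.0) < 1e-9
```

The point of the sketch is only to show why a ρ value is a within-condition association, not a between-condition effect size: it is computed from rank orderings inside one group and says nothing by itself about the magnitude of the easy-versus-difficult difference.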
## Evidence
- Doshi & Hauser (2025), arXiv:2401.13481v3 — disclosure × difficulty interaction; ρ = 0.8 for difficult, ρ = 0.3 for easy prompts; self-perceived creativity moderator Δ = 7.77, p = 0.03
---
Relevant Notes:
- [[high AI exposure increases collective idea diversity without improving individual creative quality creating an asymmetry between group and individual effects]] — difficulty-driven AI reliance is part of the mechanism behind collective diversity changes
- [[deep technical expertise is a greater force multiplier when combined with AI agents because skilled practitioners delegate more effectively than novices]] — this finding cuts against simple skill-amplification stories: on difficult tasks, everyone increases AI adoption, not just experts
Topics:
- [[domains/ai-alignment/_map]]