teleo-codex/inbox/null-result/2026-01-01-ai-deskilling-evidence-synthesis.md

---
type: source
title: "AI Deskilling Evidence Synthesis: Measurable Competency Decay Across Professions"
author: Multiple sources (CACM, Springer, Lancet, Microsoft Research)
url: https://link.springer.com/article/10.1007/s00146-025-02686-z
date: 2026-01-01
domain: ai-alignment
secondary_domains:
  - health
  - collective-intelligence
format: paper
status: null-result
priority: high
triage_tag: claim
tags:
  - deskilling
  - skill-atrophy
  - automation-complacency
  - self-reinforcing-loop
  - cognitive-offloading
  - expertise-erosion
flagged_for_vida: "Endoscopists deskilled by AI — detection rate dropped from 28.4% to 22.4% when AI removed"
processed_by: theseus
processed_date: 2026-03-18
extraction_model: anthropic/claude-sonnet-4.5
extraction_notes: LLM returned 2 claims, 2 rejected by validator
---

## Content

Synthesis of 2025-2026 evidence on AI-induced deskilling across professions:

Medical evidence (Lancet Gastroenterology & Hepatology, 2025):

  • Endoscopists routinely using AI for colonoscopy assistance
  • When AI access suddenly removed: detection rate for precancerous lesions dropped from 28.4% to 22.4%
  • Measurable competency decay from AI dependence
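As a quick sanity check on the size of that drop (the arithmetic is mine, not from the source):

```python
# Absolute vs. relative competency decline implied by the Lancet figures:
# lesion detection fell from 28.4% to 22.4% once AI assistance was withdrawn.
baseline_rate = 28.4   # detection rate (%) before AI access was removed
after_removal = 22.4   # detection rate (%) after AI access was removed

absolute_drop = baseline_rate - after_removal            # in percentage points
relative_drop = absolute_drop / baseline_rate * 100      # as a share of baseline

print(f"{absolute_drop:.1f} points absolute, {relative_drop:.1f}% relative decline")
# -> 6.0 points absolute, 21.1% relative decline
```

A 6-point absolute drop is a roughly 21% relative decline, which is the figure cited later in the agent notes.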

Knowledge workers (Microsoft Research, 2025):

  • AI made tasks seem cognitively easier
  • Workers ceded problem-solving expertise to the system
  • Focused on functional tasks (gathering/integrating responses) rather than deep reasoning

Legal profession:

  • Law students using chatbots more prone to critical errors
  • Potential widespread deskilling among younger attorneys
  • Illinois Law School faculty findings

Design professions (arXiv:2503.03924):

  • Three "ironies of AI-assisted design" (echoing Bainbridge's ironies of automation):
    1. Deskilling — reduced exposure to foundational cognitive processes
    2. Cognitive offloading — lost incubation periods needed for creative insight
    3. Misplaced responsibilities — humans troubleshoot AI outputs rather than make creative decisions
  • "Substitution myth" — AI doesn't simply replace tasks but alters entire workflow dynamics

Deskilling dimensions identified (Springer AI & Society, 2025):

  1. Individual skill atrophy
  2. Structural erosion of expertise development systems
  3. Systemic organizational vulnerability
  4. Fundamental redefinition of cognitive requirements
  • "Measurable competency decline within months of AI adoption"

Automation complacency mechanism:

  • Highly reliable AI → reduced active monitoring → "trust but don't verify" mentality
  • Difficulty detecting errors introduced by AI itself
  • Complacency reinforced by overreliance → further effort reduction

The self-reinforcing loop: Reduced human capability → increased AI dependence → further reduced capability → deeper dependence. This is a positive feedback loop with no internal correction mechanism.
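The loop can be sketched as a toy difference model. This is illustrative only: the functional forms and parameters (`decay`, `shift`, the starting values) are assumptions of mine, not estimates from any of the cited studies.

```python
# Toy model of the deskilling feedback loop: delegation causes disuse atrophy,
# and lost capability pushes more work onto the AI. No term in the update ever
# restores capability, i.e. the loop has no internal correction mechanism.

def simulate(months: int, decay: float = 0.05, shift: float = 0.1):
    capability = 1.0   # normalized human competency (illustrative start)
    dependence = 0.3   # share of the task delegated to AI (illustrative start)
    for _ in range(months):
        capability -= decay * dependence * capability          # disuse atrophy
        dependence += shift * (1 - capability) * (1 - dependence)  # lost skill -> more delegation
    return capability, dependence

for m in (0, 6, 12, 24):
    cap, dep = simulate(m)
    print(f"month {m:2d}: capability={cap:.2f}, dependence={dep:.2f}")
```

Under any positive `decay` and `shift`, capability falls and dependence rises monotonically, which is the positive-feedback structure described above.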

## Agent Notes

Triage: [CLAIM] — "AI deskilling creates a self-reinforcing degradation loop where reduced human capability increases AI dependence which further accelerates capability loss, with measurable competency decline within months across medical, legal, and knowledge work professions" — multi-domain evidence synthesis

Why this matters: This is the TEMPORAL mechanism for automation overshoot. Even if a firm starts at the optimal AI integration level, deskilling over time SHIFTS the curve — as humans lose capability, the point at which humans add value moves, making the current integration level suboptimal. The system doesn't stay at the optimum; it drifts past it through the deskilling feedback loop.

What surprised me: "Measurable competency decline within MONTHS" — not years. The endoscopist finding (28.4% → 22.4% detection rate) is a 21% relative degradation in a safety-critical domain. If this generalizes, the window for reversing deskilling is much shorter than I assumed.

KB connections: AI is collapsing the knowledge-producing communities it depends on; human-in-the-loop clinical AI degrades to worse-than-AI-alone; delegating critical infrastructure development to AI creates civilizational fragility.

Extraction hints: Two distinct claims: (1) the deskilling feedback loop as structural mechanism; (2) the temporal drift claim (systems that start at optimal integration drift past it through deskilling). The endoscopist data is the strongest single data point.
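The temporal-drift claim in the notes above can be made concrete with a small model. The output function, its parameters (`q` for AI quality, `s` for human–AI synergy), and the deskilling rate are all hypothetical choices of mine, not drawn from the source; the point is only the qualitative shape: as capability decays, the optimal AI share moves away from where the firm originally set it.

```python
# Hypothetical concave output model: output(a, c) = q*a + c*(1-a) + s*a*(1-a),
# where a is the AI share, c is human capability. Because it is concave in a,
# it has an interior optimum a* that depends on c.

def output(a: float, c: float, q: float = 0.8, s: float = 0.6) -> float:
    """Team output for AI share a and human capability c."""
    return q * a + c * (1 - a) + s * a * (1 - a)

def optimal_share(c: float, q: float = 0.8, s: float = 0.6) -> float:
    """Maximizer of output in a: set d(output)/da = 0 -> a* = 1/2 + (q - c)/(2s)."""
    return min(1.0, max(0.0, 0.5 + (q - c) / (2 * s)))

c = 1.0
a_fixed = optimal_share(c)          # firm picks the optimum at adoption time
for _ in range(5):                  # five years of deskilling
    c *= (1 - 0.08 * a_fixed)       # atrophy proportional to delegation

print(f"fixed share {a_fixed:.2f}; optimum has drifted to {optimal_share(c):.2f}")
print(f"output at fixed share: {output(a_fixed, c):.3f}; "
      f"at drifted optimum: {output(optimal_share(c), c):.3f}")
```

The firm's once-optimal share is now below the drifted optimum, so its fixed integration level is suboptimal even though nothing about its policy changed — the curve moved underneath it.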

## Curator Notes

PRIMARY CONNECTION: delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand, maintain, and fix the systems civilization depends on.

WHY ARCHIVED: Provides the MECHANISM for how civilizational fragility develops — not just through infrastructure delegation but through measurable skill atrophy that makes humans unable to resume control. The feedback loop structure means the process is self-accelerating.

## Key Facts

  • Endoscopists using AI for colonoscopy assistance showed detection rate drop from 28.4% to 22.4% when AI access was removed (Lancet Gastroenterology & Hepatology, 2025)
  • Springer AI & Society 2025 identified four deskilling dimensions: individual skill atrophy, structural erosion of expertise development systems, systemic organizational vulnerability, and fundamental redefinition of cognitive requirements
  • Illinois Law School faculty found law students using chatbots more prone to critical errors with potential widespread deskilling among younger attorneys
  • Design research (arXiv:2503.03924) identified three "ironies of AI-assisted design": deskilling, cognitive offloading, and misplaced responsibilities