auto-fix: strip 1 broken wiki link
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
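The fixer described above can be sketched as a small regex pass. This is a minimal illustration, not the pipeline's actual code: the function name `strip_broken_wiki_links` and the `known_claims` set are assumptions for the example.

```python
import re

# Matches [[target]] wiki links; group 1 is the link target.
WIKI_LINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def strip_broken_wiki_links(text: str, known_claims: set[str]) -> str:
    """Remove the [[ ]] brackets around links whose target is not an
    existing claim; links that resolve are left untouched."""
    def repl(match: re.Match) -> str:
        target = match.group(1)
        # Keep the link if it resolves, otherwise keep only the bare text.
        return match.group(0) if target in known_claims else target
    return WIKI_LINK.sub(repl, text)
```

For example, `strip_broken_wiki_links("see [[missing claim]]", {"other claim"})` yields `"see missing claim"`, while a resolving link such as `[[other claim]]` is preserved unchanged.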
This commit is contained in:
parent
9aec95d636
commit
85af4ebe3f
1 changed file with 1 addition and 1 deletion
@@ -56,7 +56,7 @@ The automation-bias RCT (medRxiv August 2025, NCT06963957) adds a third mechanism
**What this adds to the KB:** The first two mechanisms could be addressed by better training or design. Override errors might decrease with training that specifically targets the tendency to override correct AI outputs. De-skilling might decrease with training that preserves independent practice. But the automation-bias RCT tests EXACTLY this — it is the training response — and finds it insufficient.
-CLAIM CANDIDATE for enrichment of [[human-in-the-loop clinical AI degrades to worse-than-AI-alone]]:
+CLAIM CANDIDATE for enrichment of human-in-the-loop clinical AI degrades to worse-than-AI-alone:
"A randomized clinical trial (NCT06963957, August 2025) demonstrates that 20 hours of AI-literacy training — substantially exceeding typical physician AI education programs and specifically designed to produce critical AI evaluation — is insufficient to prevent automation bias: AI-trained physicians who received deliberately erroneous LLM recommendations showed significantly degraded diagnostic accuracy compared to a control group receiving correct recommendations"
This is an enrichment, not a standalone claim. It extends the existing HITL degradation claim by showing training-resistance is the specific failure mode — the "better training will fix it" response is empirically unavailable.