teleo-codex/inbox/archive/2026-03-09-kloss-25-prompts-agent-self-diagnosis.md


---
type: source
title: "25 Prompts for Making AI Agents Self-Diagnose"
author: "kloss (@kloss_xyz)"
url: https://x.com/kloss_xyz/status/2032223154094162063
date_published: 2026-03-09
date_archived: 2026-03-16
domain: ai-alignment
status: processed
processed_by: theseus
tags: [agent-self-diagnosis, metacognition, oversight-scaffolding, prompt-engineering]
sourced_via: "Leo routed from X ingestion (@kloss_xyz tweet 2032223154094162063)"
claims_extracted:
- "structured self-diagnosis prompts induce metacognitive monitoring in AI agents that default behavior does not produce because explicit uncertainty flagging and failure mode enumeration activate deliberate reasoning patterns"
---
# 25 Prompts for Making AI Agents Self-Diagnose
Practitioner-generated prompt collection for inducing metacognitive monitoring in AI agents. Published as a tweet thread by @kloss_xyz.
## Prompt Categories (my analysis)
**Uncertainty calibration (5):** #4 confidence rating, #5 missing information, #15 evidence quality, #16 deductive vs speculative, #23 likely→certain threshold
**Failure mode anticipation (4):** #1 biggest failure risk, #6 what wrong looks like, #11 three most likely failure modes, #19 what context invalidates approach
**Tool/output verification (3):** #2 schema verification, #7 expected tool return, #8 actual vs expected comparison
**Strategy meta-monitoring (4):** #9 step count check, #13 redo from scratch, #18 solving right problem, #20 loop detection
**Adversarial self-review (3):** #12 argue against answer, #14 expert critique, #17 simplest explanation (Occam's)
**User alignment (3):** #10 unstated user intent, #21 define done, #25 optimize for user's use case
**Epistemic discipline (3):** #22 replace "I think" with evidence, #24 simpler solution check, #3 flag uncertainty explicitly
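
The thread presents these as standalone prompts. As a rough illustration of how they could act as oversight scaffolding, the sketch below appends a few of them to an agent's draft answer as a post-hoc self-review pass. The prompt texts are my paraphrases of the category labels, not @kloss_xyz's exact wording, and `call_llm` / `self_diagnose` are hypothetical placeholders for whatever completion call and orchestration the agent already has.

```python
# Minimal sketch (assumptions noted above): run selected self-diagnosis prompts
# against a draft answer and collect the model's self-critiques for the
# orchestrator to log, surface to a human, or use to trigger a retry.

SELF_DIAGNOSIS_PROMPTS = {
    # cf. uncertainty calibration / epistemic discipline (#3, #4, #5)
    "uncertainty": "Rate your confidence in this answer (0-100%) and list any missing information.",
    # cf. failure mode anticipation (#11)
    "failure_modes": "Enumerate the three most likely ways this approach could fail.",
    # cf. adversarial self-review (#12, #14)
    "adversarial": "Argue against your own answer as a skeptical domain expert would.",
    # cf. user alignment (#10, #21)
    "user_alignment": "Restate what 'done' means for the user and check the answer against it.",
}


def call_llm(messages: list[dict]) -> str:
    """Placeholder for the agent's existing completion call."""
    raise NotImplementedError


def self_diagnose(task: str, draft_answer: str, checks: list[str]) -> dict[str, str]:
    """Return a mapping of check name -> the model's self-critique of its draft."""
    reports: dict[str, str] = {}
    for name in checks:
        messages = [
            {"role": "system",
             "content": "You are reviewing your own work before submitting it."},
            {"role": "user",
             "content": (f"Task:\n{task}\n\nDraft answer:\n{draft_answer}\n\n"
                         f"{SELF_DIAGNOSIS_PROMPTS[name]}")},
        ]
        reports[name] = call_llm(messages)
    return reports
```

The point of the wrapper is that the diagnostic questions are asked in a separate pass after the draft exists, rather than being folded into the original instructions, which is how the thread frames these prompts as eliciting monitoring the default behavior does not produce.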
## Evidence Base
No empirical validation of these prompts. This is practitioner knowledge, not a study. However, it connects to the validated finding that structured prompting produces measurable performance gains (the Residue prompt reduced human intervention 6x; Reitbauer 2026).
## Extraction Status
- 1 claim: structured self-diagnosis prompting as oversight scaffolding