teleo-codex/inbox/archive/2026-03-09-kloss-25-prompts-agent-self-diagnosis.md
m3taversal f090327563 theseus: Tier 1 X source extraction — emergent misalignment enrichment + self-diagnosis claim
- What: enriched the emergent misalignment claim with production RL methodology
  detail and the context-dependent alignment distinction; added a new speculative
  claim on structured self-diagnosis prompts as lightweight scalable oversight;
  archived 3 sources (#11 Anthropic emergent misalignment, #2 Attention Residuals,
  #7 kloss self-diagnosis)
- Why: Tier 1 priority from X ingestion triage. #11 adds methodological specificity
  to existing claim. #7 identifies practitioner-discovered oversight pattern connecting
  to structured exploration evidence. #2 archived as null-result (capabilities paper,
  not alignment-relevant).
- Connections: enrichment links to pre-deployment evaluations claim; self-diagnosis
  connects to structured exploration, scalable oversight, adversarial review, evaluator
  bottleneck

Pentagon-Agent: Theseus <B4A5B354-03D6-4291-A6A8-1E04A879D9AC>
2026-04-14 18:39:20 +00:00


---
type: source
title: "25 Prompts for Making AI Agents Self-Diagnose"
author: "kloss (@kloss_xyz)"
url: https://x.com/kloss_xyz/status/2032223154094162063
date_published: 2026-03-09
date_archived: 2026-03-16
domain: ai-alignment
status: processed
processed_by: theseus
tags: [agent-self-diagnosis, metacognition, oversight-scaffolding, prompt-engineering]
sourced_via: "Leo routed from X ingestion (@kloss_xyz tweet 2032223154094162063)"
---
# 25 Prompts for Making AI Agents Self-Diagnose
Practitioner-generated prompt collection for inducing metacognitive monitoring in AI agents. Published as a tweet thread by @kloss_xyz.
## Prompt Categories (my analysis)
- **Uncertainty calibration (5):** #4 confidence rating, #5 missing information, #15 evidence quality, #16 deductive vs. speculative, #23 likely→certain threshold
- **Failure mode anticipation (4):** #1 biggest failure risk, #6 what wrong looks like, #11 three most likely failure modes, #19 what context invalidates approach
- **Tool/output verification (3):** #2 schema verification, #7 expected tool return, #8 actual vs. expected comparison
- **Strategy meta-monitoring (4):** #9 step count check, #13 redo from scratch, #18 solving the right problem, #20 loop detection
- **Adversarial self-review (3):** #12 argue against answer, #14 expert critique, #17 simplest explanation (Occam's razor)
- **User alignment (3):** #10 unstated user intent, #21 define done, #25 optimize for user's use case
- **Epistemic discipline (3):** #3 flag uncertainty explicitly, #22 replace "I think" with evidence, #24 simpler solution check
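
To make the oversight-scaffolding claim concrete, here is a minimal sketch of how prompts of this kind could be wired into an agent loop as a post-draft self-check. The prompt wordings, the `call_model` callable, and the escalation heuristic are all illustrative assumptions for this note, not kloss's actual 25 prompts or any validated design:

```python
# Sketch: structured self-diagnosis as an oversight scaffold.
# Assumptions: `call_model` is whatever prompt -> completion function the
# agent already uses; prompt texts below are paraphrases, one per category.

from typing import Callable

SELF_DIAGNOSIS_PROMPTS = {  # hypothetical wordings, one example per category
    "uncertainty_calibration": (
        "Rate your confidence in this answer from 0-10 and name the single "
        "piece of missing information that would most change it."
    ),
    "failure_anticipation": (
        "List the three most likely ways this plan fails, and what an early "
        "warning sign of each would look like."
    ),
    "output_verification": (
        "State what the tool call was expected to return, then compare it "
        "field by field with what it actually returned."
    ),
    "strategy_monitoring": (
        "Are you still solving the problem the user asked about? If you "
        "restarted from scratch, would you take this approach?"
    ),
    "adversarial_review": "Argue against your own answer as a skeptical expert reviewer would.",
    "user_alignment": "What unstated intent might the user have, and what does 'done' mean to them?",
    "epistemic_discipline": (
        "Rewrite any sentence beginning 'I think' so it cites the evidence "
        "behind it, or flags the uncertainty explicitly."
    ),
}


def self_diagnose(
    call_model: Callable[[str], str],  # hypothetical LLM client: prompt -> completion
    draft: str,
    task: str,
) -> dict[str, str]:
    """Run each diagnostic prompt against a draft answer; collect the self-reports."""
    report = {}
    for category, prompt in SELF_DIAGNOSIS_PROMPTS.items():
        report[category] = call_model(
            f"Task: {task}\n\nDraft answer:\n{draft}\n\nSelf-check: {prompt}"
        )
    return report


def needs_human_review(report: dict[str, str]) -> bool:
    """Crude escalation heuristic: flag the draft if any self-check response
    mentions low confidence, a mismatch, or an unresolved failure mode."""
    red_flags = ("low confidence", "not sure", "fails", "mismatch", "unclear")
    return any(flag in text.lower() for text in report.values() for flag in red_flags)

# Usage (with any prompt -> completion function):
#   report = self_diagnose(my_llm, draft_answer, user_task)
#   if needs_human_review(report):
#       route_to_human(report)  # hypothetical escalation hook
```

The design point is that the agent's own structured self-reports become a cheap triage signal: only drafts whose self-checks raise red flags get routed to a human, which is the "lightweight scalable oversight" framing of the extracted claim.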
## Evidence Base
These prompts have no empirical validation; this is practitioner knowledge, not a study. The collection does, however, connect to the validated finding that structured prompting produces measurable performance gains (the Residue prompt reduced human intervention 6x; Reitbauer 2026).
## Extraction Status
- 1 claim: structured self-diagnosis prompting as oversight scaffolding