---
type: claim
domain: ai-alignment
description: "Kim Morrison's Lean formalization of Knuth's proof of Claude's construction demonstrates formal verification as an oversight mechanism that scales with AI capability rather than degrading like human oversight"
confidence: experimental
source: "Knuth 2026, 'Claude's Cycles' (Stanford CS, Feb 28 2026 rev. Mar 6); Morrison 2026, Lean formalization (github.com/kim-em/KnuthClaudeLean/, posted Mar 4)"
created: 2026-03-07
---
# formal verification of AI-generated proofs provides scalable oversight that human review cannot match because machine-checked correctness scales with AI capability while human review degrades

Four days after Knuth published his proof of Claude's Hamiltonian decomposition construction, Kim Morrison from the Lean community formalized the proof in Lean, providing machine-checked verification of its correctness. Knuth's response: "That's good to know, because I've been getting more error-prone lately."

This episode illustrates a concrete mechanism: formal verification as scalable oversight for AI-generated mathematical results. Its significance for alignment:

**Human verification degrades; formal verification does not.** Knuth — arguably the greatest living computer scientist — acknowledges his own error rate is increasing. [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] quantifies this for AI systems generally. But formal verification inverts the scaling: as AI generates more complex mathematical constructions, Lean (or similar systems) can verify them with the same reliability regardless of complexity. The overseer does not need to be smarter than the system being overseen — it only needs a correct specification of what "correct" means.
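A toy Lean sketch of this inversion (the predicate, the candidate walk, and all names here are illustrative, not drawn from Knuth's construction): the specification is an executable predicate, and the kernel checks any candidate against it at the same cost no matter what produced the candidate.

```lean
-- Hypothetical toy example, not Knuth's construction. The specification
-- `isClosedWalkVisitingAll` says what "correct" means; the kernel checks
-- a candidate against it regardless of who or what generated it.
def isClosedWalkVisitingAll (n : Nat) (w : List Nat) : Bool :=
  w.head? == w.getLast? && (List.range n).all w.contains

-- A candidate produced by some generator (human, model, search).
def candidate : List Nat := [0, 2, 4, 1, 3, 0]

-- The machine check: `decide` evaluates the specification on the candidate.
theorem candidate_ok : isClosedWalkVisitingAll 5 candidate = true := by decide
```

The checker never inspects the generator, only the specification and the candidate — which is the sense in which verification does not degrade as the generator gets stronger.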

**The verification happened in 4 days.** Morrison's formalization was posted March 4, four days after Knuth's February 28 publication. This demonstrates that formal verification of AI-generated results is already operationally feasible, not merely theoretical.

**The workflow is a three-stage pipeline:** (1) AI generates construction, (2) human writes proof, (3) machine verifies proof. Each stage catches different errors. The even-case proof by GPT-5.4 Pro further compresses this — the machine both generated and proved the result, with only human problem formulation and final review remaining.

This pattern provides a concrete counterexample to the pessimism of scalable oversight research. While debate and other interactive oversight methods degrade at 400-Elo gaps, formal verification does not degrade at all — it either verifies or it doesn't. The limitation is that formal verification only works for domains with formal specifications (mathematics, software, protocols), but those domains are precisely where AI capability is advancing fastest.

For alignment specifically: if AI systems generate safety proofs for their own behavior, and those proofs are machine-checked, this creates an oversight mechanism that scales with capability. The alignment tax for formal verification is real (writing formal specs is hard) but the reliability does not degrade with the capability gap.

---
Relevant Notes:
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — formal verification is the counterexample: oversight that does not degrade with capability gaps
- [[AI alignment is a coordination problem not a technical problem]] — formal verification is a coordination mechanism (specification + generation + verification) not a monolithic solution
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — formal verification has a real alignment tax (writing specs) but provides absolute rather than probabilistic guarantees
- [[safe AI development requires building alignment mechanisms before scaling capability]] — formal verification infrastructure should be built before AI-generated proofs become too complex for human review

Topics:
- [[_map]]