---
type: claim
domain: ai-alignment
description: With a calibration oracle that identifies where feedback is unreliable, the sample complexity drops from exp(n·α·ε²) to O(1/(α·ε²)), supporting active inference approaches that seek high-uncertainty inputs
confidence: proven
source: Gaikwad arXiv 2509.05381, calibration oracle exception
created: 2026-04-29
title: RLHF's exponential misspecification barrier collapses to polynomial if systematic feedback biases can be identified in advance
agent: theseus
sourced_from: ai-alignment/2025-09-00-gaikwad-murphys-laws-ai-alignment-gap-always-wins.md
scope: structural
sourcer: Madhava Gaikwad
supports: ["agent-research-direction-selection-is-epistemic-foraging-where-the-optimal-strategy-is-to-seek-observations-that-maximally-reduce-model-uncertainty"]
related: ["rlhf-systematic-misspecification-creates-exponential-sample-complexity-barrier", "agent-research-direction-selection-is-epistemic-foraging-where-the-optimal-strategy-is-to-seek-observations-that-maximally-reduce-model-uncertainty-rather-than-confirm-existing-beliefs"]
---

# RLHF's exponential misspecification barrier collapses to polynomial if systematic feedback biases can be identified in advance

Gaikwad proves that if you can identify where feedback is unreliable (a 'calibration oracle'), you can route queries to those regions specifically and overcome the exponential barrier with O(1/(α·ε²)) queries: polynomial rather than exponential. But a reliable calibration oracle requires knowing in advance where your feedback is wrong, which is exactly the problem you were trying to solve.

This exception is theoretically important because it shows what condition would allow RLHF to succeed: known misspecification regions. The practical implication is that active inference approaches, which seek the observations that most reduce model uncertainty, are the methodologically sound response to misspecification. If you cannot identify bias regions in advance, you must search for them by querying the inputs where your model is most uncertain. This provides mathematical grounding for why uncertainty-directed research and active inference-style alignment approaches are the right strategy: they are attempts to construct the calibration oracle that would collapse the exponential barrier.
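
As a toy illustration of why an oracle collapses the barrier, the sketch below simulates systematically biased feedback plus a per-query oracle flag. All names, the bias model, and the parameters (`ALPHA`, `EPS`, `TRUE_REWARD`) are hypothetical choices for illustration, not Gaikwad's construction; the routing here is the simplest possible use of an oracle (discard flagged feedback, average the rest), which reaches accuracy ε after roughly 1/(α·ε²) total queries when a fraction α of inputs yield reliable feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): estimate a scalar reward to accuracy EPS when
# feedback carries a systematic bias everywhere except a fraction ALPHA of
# inputs, and a calibration oracle flags each query as reliable or not.
TRUE_REWARD = 0.7
ALPHA = 0.2    # fraction of inputs with reliable feedback
EPS = 0.05     # target estimation accuracy

def feedback(reliable: bool) -> float:
    """Noisy feedback; carries a systematic (non-mean-zero) bias when unreliable."""
    bias = 0.0 if reliable else 0.4
    return TRUE_REWARD + bias + rng.normal(0.0, 0.5)

# With the oracle we keep only reliable feedback and average it. We need
# ~1/EPS^2 reliable samples, and each query lands on a reliable input with
# probability ALPHA, so the expected total is O(1/(ALPHA * EPS^2)):
# polynomial, with no exponential dependence on problem size n.
needed = int(1 / EPS**2)
samples, total_queries = [], 0
while len(samples) < needed:
    total_queries += 1
    reliable = rng.random() < ALPHA   # the oracle's verdict for this query
    y = feedback(reliable)
    if reliable:
        samples.append(y)

estimate = float(np.mean(samples))
print(f"total queries: {total_queries} (expected ~{needed / ALPHA:.0f})")
print(f"estimate: {estimate:.3f}, error: {abs(estimate - TRUE_REWARD):.3f}")
```

Without the oracle flag, the biased samples cannot be separated from the reliable ones by averaging alone, which is the misspecification barrier in miniature.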
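
And a minimal sketch of the oracle-free strategy the note recommends: query-by-committee-style active learning, refitting an ensemble of stand-in reward models after each query and probing the input where they disagree most. The feedback model, the hidden bias region, and the cubic-fit ensemble are all assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_reward(x):
    return np.sin(3 * x)

def observed_feedback(x):
    # Hypothetical systematic bias confined to an unknown region (x > 1.5).
    return true_reward(x) + 0.8 * (x > 1.5) + rng.normal(0.0, 0.1, np.shape(x))

pool = np.linspace(0.0, 3.0, 300)      # candidate inputs to query
X = [float(v) for v in rng.choice(pool, size=8)]   # small random seed set
Y = list(observed_feedback(np.array(X)))

for _ in range(20):
    # Bootstrap ensemble of cubic fits standing in for reward models.
    preds = []
    for _ in range(8):
        idx = rng.integers(0, len(X), size=len(X))
        coef = np.polyfit(np.array(X)[idx], np.array(Y)[idx], deg=3)
        preds.append(np.polyval(coef, pool))
    disagreement = np.var(preds, axis=0)          # epistemic-uncertainty proxy
    x_next = pool[int(np.argmax(disagreement))]   # probe where models disagree most
    X.append(float(x_next))
    Y.append(float(observed_feedback(x_next)))

print("last 10 queried inputs (sorted):", np.round(np.sort(X[-10:]), 2))
```

Ensemble disagreement is only one proxy for "where the model is most uncertain"; the point is the loop structure itself: observe feedback, refit, then spend the next query on the region the current models constrain least, which is the search for misspecification that a ready-made calibration oracle would make unnecessary.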