---
description: Amodei's "marginal returns to intelligence" framework identifies five factors that bound what intelligence alone can achieve, challenging assumptions that superintelligence implies unlimited capability
type: claim
domain: ai-alignment
created: 2026-03-07
source: Dario Amodei, 'Machines of Loving Grace' (darioamodei.com, 2026)
confidence: likely
---

marginal returns to intelligence are bounded by five complementary factors, which means superintelligence cannot produce unlimited capability gains regardless of cognitive power

Dario Amodei introduces a framework for evaluating AI impact that borrows from production economics: rather than asking "will AI change everything?", ask "what are the marginal returns to intelligence in this domain, and what complementary factors limit those returns?" Just as an air force needs both planes and pilots (more pilots alone don't help if you're out of planes), intelligence requires complementary factors to be productive.
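The air force analogy can be made concrete with a fixed-proportions (Leontief) production function, a standard way economists model strict complements. This is an illustrative sketch, not a formula from Amodei's essay; the one-pilot-per-plane ratio is an assumption chosen for clarity:

```python
def output(planes: int, pilots: int) -> int:
    """Leontief (fixed-proportions) production: output is capped by
    the scarcest complementary factor, here one pilot per plane."""
    return min(planes, pilots)

# Marginal return of extra pilots when planes are the binding factor:
base = output(planes=10, pilots=10)
more_pilots = output(planes=10, pilots=100)  # 90 extra pilots...
print(more_pilots - base)                    # ...add zero output: 0
```

Substitute "intelligence" for pilots and any of the five factors below for planes: once the complement binds, the marginal return to more intelligence is zero.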

Five factors bound what even superintelligent AI can achieve:

  1. Speed of the physical world. Cells divide at fixed rates, chemical reactions take time, hardware operates at physical speeds. Experiments are often sequential, each building on the last. This creates an "irreducible minimum" completion time that no amount of intelligence can bypass. A 1000x smarter biologist still waits for the cell culture to grow.

  2. Need for data. Intelligence without data is impotent. Particle physicists are already extremely ingenious — a superintelligent physicist would mainly speed up building a bigger particle accelerator, then wait for data. Some domains simply lack the raw observations needed for progress.

  3. Intrinsic complexity and chaos. Some systems are inherently unpredictable. The three-body problem cannot be predicted substantially further ahead by a superintelligence than by a human. Chaotic systems impose fundamental limits on prediction regardless of cognitive power.

  4. Constraints from humans. Clinical trials, legal requirements, behavioral change, institutional adoption — all impose irreducible delays. An aligned AI respects these constraints (and should). Technologies like nuclear power and supersonic flight were "hampered not by any difficulty of physics but by societal choices."

  5. Physical laws. Speed of light, thermodynamic limits, transistor density floors, minimum energy per computation. These are unbreakable regardless of intelligence.
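Factor 3 has a quantitative core worth spelling out: in a chaotic system, measurement error grows exponentially, so the prediction horizon grows only logarithmically with measurement precision. The sketch below (the Lyapunov exponent value and error figures are hypothetical, not from the source) shows why even a million-fold improvement in data barely extends the horizon:

```python
import math

def prediction_horizon(lyapunov: float, initial_err: float, tolerance: float) -> float:
    """Time until an initial measurement error grows past `tolerance`
    in a chaotic system where errors grow like err * exp(lyapunov * t).
    Horizon = (1/lambda) * ln(tolerance / initial_err)."""
    return math.log(tolerance / initial_err) / lyapunov

lam = 1.0  # hypothetical Lyapunov exponent, in inverse time units
human = prediction_horizon(lam, initial_err=1e-6, tolerance=1.0)
super_ai = prediction_horizon(lam, initial_err=1e-12, tolerance=1.0)
print(human, super_ai)  # ~13.8 vs ~27.6: a 10^6-fold better measurement only doubles the horizon
```

This is the sense in which a superintelligence cannot predict the three-body problem "substantially further ahead" than a human: the exponential divergence of trajectories swamps any cognitive advantage.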

The critical dynamic: these constraints operate differently across timescales. In the short run, intelligence is "heavily bottlenecked by other factors of production." Over time, intelligence "increasingly routes around the other factors" — designing better experiments, building new instruments, creating alternative paradigms. But some factors (physical laws, chaos) never fully dissolve.

Amodei applies this to predict that AI will compress 50-100 years of biological progress into 5-10 years — a 10-20x acceleration, not the 100-1000x that unconstrained intelligence might suggest. The bottleneck isn't cognitive power but the physical world's response time. Massive parallelization helps (millions of AI instances running simultaneous experiments) but cannot eliminate serial dependencies.
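The limit on parallelization can be framed with Amdahl's law, which caps overall speedup by the fraction of work that must run serially. This framing is mine, not Amodei's, and the serial fractions are hypothetical, but note how serial fractions of 5-10% reproduce his 10-20x range even with effectively unlimited workers:

```python
def speedup(serial_fraction: float, workers: float) -> float:
    """Amdahl's law: overall speedup when only the parallelizable
    portion of the work benefits from additional workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# Even a million AI instances cannot beat the serial experiments:
for s in (0.10, 0.05):
    print(s, round(speedup(s, 1e6), 1))  # speedup caps near 1/s: ~10x and ~20x
```

If 5-10% of biological research consists of irreducibly sequential steps (cell cultures, clinical trials), a million parallel AI instances deliver at most a 10-20x acceleration, which matches the compression Amodei predicts.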

For alignment, this framework bounds both the opportunity and the risk. It challenges both the "AI will solve everything instantly" optimism and the "superintelligence means omnipotence" fear. A superintelligent AI cannot build a Dyson sphere next Tuesday, but it can compress decades of research into years — which is transformative enough to require governance without requiring the apocalyptic urgency of an omnipotent optimizer.


Relevant Notes:

Topics: