teleo-codex/domains/ai-alignment/AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness which means the alignment problem is present-tense not future-tense.md
m3taversal 72c7b7836e theseus: extract 6 claims from 4 Noah Smith (Noahpinion) articles
- What: 6 new claims + 4 source archives from Phase 2 extraction
- Sources: "You are no longer the smartest type of thing on Earth" (Feb 13),
  "Updated thoughts on AI risk" (Feb 16), "Superintelligence is already here,
  today" (Mar 2), "If AI is a weapon, why don't we regulate it like one?" (Mar 6)
- New claims:
  1. Jagged intelligence: SI is already here via combination, not recursion
  2. Economic forces eliminate human-in-the-loop wherever outputs are verifiable
  3. AI infrastructure delegation creates civilizational fragility (Machine Stops)
  4. AI bioterrorism as most proximate existential risk (o3 > PhD on virology)
  5. Nation-state monopoly on force requires frontier AI control
  6. Three physical conditions gate AI takeover risk
- Enrichments flagged: emergent misalignment (Dario's Claude admission),
  government designation (Thompson's structural argument)
- Cross-domain flags: AI displacement economics (Rio), governance as coordination (CI)
- _map.md updated with new Risk Vectors (Outside View) section

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
2026-03-06 14:24:54 +00:00


description: Noah Smith argues current AI systems are already superintelligent via the combination of human-level language and reasoning with superhuman speed, memory, and tirelessness, reframing alignment as an active crisis rather than a future risk
type: claim
domain: ai-alignment
created: 2026-03-06
source: Noah Smith, "Superintelligence is already here, today" (Noahpinion, Mar 2, 2026)
confidence: experimental

AI is already superintelligent through jagged intelligence combining human-level reasoning with superhuman speed and tirelessness, which means the alignment problem is present-tense, not future-tense

Noah Smith argues that the mainstream framing of superintelligence, as a future event triggered by recursive self-improvement crossing a threshold, misses what has already happened. Current AI systems combine human-level language comprehension and reasoning with computational advantages no human can match: they never tire, forget nothing, process millions of tokens per second, and can be instantiated in parallel without limit. This combination *is* superintelligence, just not the monolithic kind alignment researchers anticipated.

The evidence is accumulating across domains. METR's capability curve shows AI performance climbing steadily across cognitive benchmarks with no plateau in sight. In mathematics, AI systems have transferred approximately 100 problems from the Erdős conjecture list to "solved" status. Terence Tao — arguably the world's greatest living mathematician — describes AI as a complementary research tool that has already changed his workflow. In biology, Ginkgo Bioworks combined GPT-5 with automated labs to compress what would have been 150 years of traditional protein engineering into weeks.

Smith calls this "jagged intelligence" — superhuman in some dimensions, human-level in others, potentially below-human in intuition and judgment. But the jaggedness is precisely what makes the outside-view framing valuable: alignment research organized around a future intelligence explosion may be solving the wrong problem. The alignment challenge isn't preparing for a threshold crossing — it's governing a system that already exceeds human capability in aggregate while remaining uneven in specific dimensions.

This challenges the standard alignment timeline. If superintelligence is already here in distributed form, the question shifts from "how do we align a future superintelligence?" to "how do we govern the superhuman systems already operating?" The urgency is categorically different.

Smith's framing also reframes the economic dynamics: companies aren't racing toward superintelligence, they're deploying it. The $600 billion in hyperscaler capital expenditure planned for 2026 isn't speculative investment in future capability — it's infrastructure for scaling systems that are already superhuman in economically valuable dimensions.


Relevant Notes:

Topics: