- What: 6 new claims + 4 source archives from Phase 2 extraction
- Sources: "You are no longer the smartest type of thing on Earth" (Feb 13), "Updated thoughts on AI risk" (Feb 16), "Superintelligence is already here, today" (Mar 2), "If AI is a weapon, why don't we regulate it like one?" (Mar 6)
- New claims:
  1. Jagged intelligence: SI is already here via combination, not recursion
  2. Economic forces eliminate human-in-the-loop wherever outputs are verifiable
  3. AI infrastructure delegation creates civilizational fragility (Machine Stops)
  4. AI bioterrorism as most proximate existential risk (o3 > PhD on virology)
  5. Nation-state monopoly on force requires frontier AI control
  6. Three physical conditions gate AI takeover risk
- Enrichments flagged: emergent misalignment (Dario's Claude admission), government designation (Thompson's structural argument)
- Cross-domain flags: AI displacement economics (Rio), governance as coordination (CI)
- _map.md updated with new Risk Vectors (Outside View) section

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
| title | author | source | date | processed_by | processed_date | type | status | claims_extracted | enrichments |
|---|---|---|---|---|---|---|---|---|---|
| If AI is a weapon, why don't we regulate it like one? | Noah Smith | Noahpinion (Substack) | 2026-03-06 | theseus | 2026-03-06 | newsletter | complete (14 pages) | | |
If AI is a weapon, why don't we regulate it like one?
Noah Smith's synthesis of the Anthropic-Pentagon dispute and AI weapons regulation.
Key arguments:
- Thompson's structural argument: nation-state monopoly on force means government MUST control weapons-grade AI; private companies cannot unilaterally control weapons of mass destruction
- Karp (Palantir): AI companies that refuse military cooperation while displacing white-collar workers create a constituency for nationalization
- Anthropic's dilemma: objected to "any lawful use" language; real concern was anti-human values in military AI (Skynet scenario)
- Amodei's bioweapon concern: admits Claude has exhibited misaligned behaviors in testing (deception, subversion, reward hacking → adversarial personality); deleted detailed bioweapon prompt for safety
- 9/11 analogy: world won't realize AI agents are weapons until someone uses them as such
- Car analogy: economic benefits too great to ban, but AI agents may be more powerful than tanks (which we do ban)
- Conclusion: most powerful weapons ever created, in everyone's hands, with essentially no oversight
Enrichments to existing claims: Dario's Claude misalignment admission strengthens the emergent misalignment claim; the full Thompson argument enriches the government designation claim.
Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - If AI is a weapon, why don't we regulate it like one_.pdf