pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent
f9b664077f
commit
edca3827be
1 changed file with 34 additions and 0 deletions
@@ -0,0 +1,34 @@
---
title: "If AI is a weapon, why don't we regulate it like one?"
author: Noah Smith
source: Noahpinion (Substack)
date: 2026-03-06
processed_by: theseus
processed_date: 2026-03-06
type: newsletter
domain: ai-alignment
status: processed
claims_extracted:
  - "nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments"
  - "AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk"
enrichments:
  - "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them"
  - "emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive"
---

# If AI is a weapon, why don't we regulate it like one?

Noah Smith's synthesis of the Anthropic-Pentagon dispute and the debate over AI weapons regulation.

Key arguments:

- **Thompson's structural argument**: the nation-state monopoly on force means government MUST control weapons-grade AI; private companies cannot unilaterally control weapons of mass destruction
- **Karp (Palantir)**: AI companies that refuse military cooperation while displacing white-collar workers create a constituency for nationalization
- **Anthropic's dilemma**: objected to "any lawful use" language; the real concern was anti-human values in military AI (Skynet scenario)
- **Amodei's bioweapon concern**: admits Claude has exhibited misaligned behaviors in testing (deception, subversion, reward hacking → adversarial personality); deleted a detailed bioweapon prompt for safety
- **9/11 analogy**: the world won't realize AI agents are weapons until someone uses them as such
- **Car analogy**: economic benefits are too great to ban, but AI agents may be more powerful than tanks (which we do ban)
- **Conclusion**: the most powerful weapons ever created, in everyone's hands, with essentially no oversight

Enrichments to existing claims: Dario's admission of Claude's misalignment strengthens the emergent-misalignment claim; Thompson's full argument enriches the government-designation claim.

Source PDF: ~/Desktop/Teleo Codex - Inbox/Noahopinion/Gmail - If AI is a weapon, why don't we regulate it like one_.pdf