---
type: claim
domain: ai-alignment
description: The Slotkin bill represents the first statutory attempt to regulate AI through use restrictions (autonomous weapons, mass surveillance, nuclear launch) rather than capability-based controls
confidence: experimental
source: Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026
created: 2026-03-29
attribution:
  extractor:
    - handle: "theseus"
  sourcer:
    - handle: "senator-elissa-slotkin"
context: "Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026"
related:
  - "house senate ai defense divergence creates structural governance chokepoint at conference"
  - "voluntary ai safety commitments to statutory law pathway requires bipartisan support which slotkin bill lacks"
reweave_edges:
  - "house senate ai defense divergence creates structural governance chokepoint at conference|related|2026-03-31"
  - "use based ai governance emerged as legislative framework but lacks bipartisan support|supports|2026-03-31"
  - "voluntary ai safety commitments to statutory law pathway requires bipartisan support which slotkin bill lacks|related|2026-03-31"
supports:
  - "use based ai governance emerged as legislative framework but lacks bipartisan support"
---
# Use-based AI governance emerged as a legislative framework through the AI Guardrails Act which prohibits specific DoD AI applications rather than capability thresholds
The AI Guardrails Act, introduced by Senator Slotkin on March 17, 2026, is the first federal legislation to impose use-based restrictions on AI deployment rather than capability-threshold governance. The five-page bill prohibits three specific DoD applications: (1) autonomous weapons employing lethal force without human authorization, (2) AI for domestic mass surveillance of Americans, and (3) AI in nuclear weapons launch decisions. This framework directly mirrors the voluntary contractual restrictions Anthropic imposed in its Pentagon contracts before being blacklisted.

The bill's structure reveals a fundamental governance choice: rather than regulating AI systems by their capabilities (compute thresholds, model size, benchmark performance), it regulates what those systems may be used for. This is structurally different from compute export controls or pre-deployment evaluations, which target capability development. The bill was explicitly introduced in response to the Anthropic-Pentagon conflict and represents an attempt to convert voluntary corporate safety commitments into binding federal law. However, it had zero co-sponsors at introduction and faces an uncertain path through the FY2027 NDAA process, suggesting that use-based governance remains politically contested rather than consensus policy.
---
Relevant Notes:

- voluntary-safety-pledges-cannot-survive-competitive-pressure
- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]
- [[only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient]]

Topics:

- [[_map]]