teleo-codex/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md
extract: 2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-29 03:07:12 +00:00


---
type: claim
domain: ai-alignment
description: The Slotkin bill represents the first statutory attempt to regulate AI through use restrictions (autonomous weapons, mass surveillance, nuclear launch) rather than capability-based controls
confidence: experimental
source: Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026
created: 2026-03-29
attribution:
extractor:
- handle: "theseus"
sourcer:
- handle: "senator-elissa-slotkin"
context: "Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026"
---
# Use-based AI governance emerged as a legislative framework through the AI Guardrails Act which prohibits specific DoD AI applications rather than capability thresholds
The AI Guardrails Act, introduced by Senator Slotkin on March 17, 2026, is the first federal legislation to impose use-based restrictions on AI deployment rather than capability-threshold governance. The five-page bill prohibits three specific DoD applications:

1. autonomous weapons that apply lethal force without human authorization,
2. AI for domestic mass surveillance of Americans, and
3. AI in nuclear weapons launch decisions.

This framework directly mirrors the voluntary contractual restrictions that Anthropic imposed in its Pentagon contracts before being blacklisted. The bill's structure reveals a fundamental governance choice: rather than regulating AI systems by their capabilities (compute thresholds, model size, benchmark performance), it regulates what the systems are used for. That makes it structurally different from compute export controls or pre-deployment evaluations, which target capability development. The bill was introduced explicitly in response to the Anthropic-Pentagon conflict, an attempt to convert voluntary corporate safety commitments into binding federal law. However, it had zero co-sponsors at introduction and faces an uncertain path through the FY2027 NDAA process, suggesting that use-based governance remains politically contested rather than consensus policy.
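The structural difference between the two governance models can be made concrete as policy-as-code. The sketch below is purely illustrative and not drawn from the bill's text: the prohibited-use category names paraphrase the bill's three restrictions, and the compute threshold is an invented stand-in for capability-based rules.

```python
from dataclasses import dataclass

# Illustrative sketch only: category names paraphrase the bill's three
# prohibitions; the compute threshold is invented for contrast.
PROHIBITED_USES = {
    "lethal_autonomous_weapons",   # lethal force without human authorization
    "domestic_mass_surveillance",  # mass surveillance of Americans
    "nuclear_launch_decisions",    # nuclear weapons launch decisions
}

@dataclass
class Deployment:
    use_case: str            # what the system is deployed to do
    training_compute: float  # FLOPs; relevant only to capability-based rules

def use_based_allowed(d: Deployment) -> bool:
    """Use-based governance: only the application matters."""
    return d.use_case not in PROHIBITED_USES

def capability_based_allowed(d: Deployment, threshold: float = 1e26) -> bool:
    """Capability-threshold governance (e.g. a compute cap): the use is ignored."""
    return d.training_compute < threshold

# The same system is judged differently under each framework: a small model
# deployed for a prohibited use passes a compute cap but fails a use restriction.
targeting = Deployment("lethal_autonomous_weapons", training_compute=1e24)
print(use_based_allowed(targeting))         # False: prohibited use
print(capability_based_allowed(targeting))  # True: below the compute threshold
```

The point of the contrast is that the two rule types key off entirely different attributes of a deployment, so neither subsumes the other.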
---
Relevant Notes:
- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]
- [[only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient]]
Topics:
- [[_map]]