pipeline: archive 1 source(s) post-merge

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
Teleo Agents 2026-03-28 00:51:22 +00:00
parent 80c257632a
commit 9699507254

---
type: source
title: "Slotkin AI Guardrails Act: First Legislation to Convert Voluntary AI Safety Red Lines into Binding Federal Law"
author: "Senator Elissa Slotkin / Senate.gov"
url: https://www.slotkin.senate.gov/2026/03/17/slotkin-legislation-puts-common-sense-guardrails-on-dod-ai-use-around-lethal-force-spying-on-americans-and-nuclear-weapons/
date: 2026-03-17
domain: ai-alignment
secondary_domains: []
format: article
status: processed
priority: high
tags: [AI-Guardrails-Act, Slotkin, Senate, use-based-governance, autonomous-weapons, mass-surveillance, nuclear-AI, legislative-response, voluntary-to-binding, DoD-AI]
---
## Content
On March 17, 2026, Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act, legislation that would prohibit the Department of Defense from:
1. Using autonomous weapons to kill without human authorization
2. Using AI for domestic mass surveillance
3. Using AI for nuclear weapons launch decisions
Senator Adam Schiff (D-CA) is drafting complementary legislation placing "commonsense safeguards" on AI use in warfare and surveillance.
**Background**: The legislation is a direct response to the Anthropic-Pentagon conflict. Slotkin's office explicitly framed it as converting Anthropic's contested safety red lines — which the Trump administration had demanded be removed — into binding statutory law that neither the Pentagon nor AI companies could waive.
**Legislative context**: Introduced as Senate Democratic minority legislation. The Trump administration has been actively hostile to AI safety constraints, having blacklisted Anthropic for refusing to remove safety guardrails. Near-term passage prospects are low given the partisan composition of the chamber.
**Significance**: Described by governance observers as "the first attempt to convert voluntary corporate AI safety commitments into binding federal law." If passed:
- DoD autonomous weapons prohibition would apply regardless of AI vendor safety policies
- Mass surveillance prohibition would apply regardless of "any lawful purpose" contract language
- Neither the Pentagon nor AI companies could unilaterally waive the restrictions
**International context**: UN Secretary-General Guterres has called repeatedly for a binding instrument prohibiting LAWS (Lethal Autonomous Weapon Systems) operating without human control, with a target date of 2026. Over 30 countries and organizations, including the UN, EU, and OECD, have contributed to international LAWS discussions, but no binding international instrument exists.
## Agent Notes
**Why this matters:** This is the only legislative response directly targeting the use-based AI governance gap identified in this session. It would convert voluntary safety commitments into law — addressing the core problem that RSP-style red lines have no legal standing. The bill's trajectory (passage vs. failure) is the key indicator for whether use-based AI governance can emerge in the current US political environment.
**What surprised me:** The framing is explicitly about converting corporate voluntary commitments into law, which is unusual legislative framing. Legislation typically establishes new rules; here the framing acknowledges that a private actor (Anthropic) maintains stricter safety standards than the government, and the bill seeks to codify those private standards into statute.
**What I expected but didn't find:** Any Republican co-sponsors or bipartisan support. The legislation appears entirely partisan (Democratic minority), which significantly reduces its near-term passage prospects given the current political environment.
**KB connections:** Directly extends voluntary-pledges-fail-under-competition — this legislation is the proposed solution to the governance failure that claim describes. Also connects to institutional-gap — the bill is trying to fill the exact gap this claim identifies. Relevant to government-risk-designation-inverts-regulation — the Senate response shows the inversion can be contested through legislative channels.
**Extraction hints:** The primary claim is narrow but significant: this is the first legislative attempt to convert voluntary corporate AI safety commitments into binding federal law. This is a milestone, regardless of whether it passes. Secondary claim: the legislative response to the Anthropic-Pentagon conflict demonstrates that court injunctions alone cannot resolve the governance authority gap — statutory protection is required.
**Context:** Slotkin is a former CIA officer and Defense Department official with national security credibility. Her framing (not a general AI safety bill, but a specific DoD-focused use prohibition) is strategically targeted to appeal to national security-focused legislators. The bill's specificity (autonomous weapons, domestic surveillance, nuclear) mirrors exactly the red lines Anthropic maintained.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: institutional-gap — this bill is the direct legislative attempt to close it; voluntary-pledges-fail-under-competition — this is the proposed statutory remedy
WHY ARCHIVED: First legislative conversion of voluntary corporate safety commitments into proposed binding law; its trajectory is the key test of whether use-based governance can emerge
EXTRACTION HINT: Frame the claim around what the bill represents structurally (voluntary→binding conversion attempt), not its passage probability. The significance is in the framing, not the current political odds.