pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent
f4b41e4f32
commit
c5530b1f03
1 changed file with 57 additions and 0 deletions
@ -0,0 +1,57 @@
---
type: source
title: "Senator Slotkin Introduces AI Guardrails Act: First Bill to Limit Pentagon AI Use in Lethal Force, Surveillance, Nuclear"
author: "Senator Elissa Slotkin / The Hill"
url: https://thehill.com/homenews/senate/5789815-ai-guardrails-act-pentagon/
date: 2026-03-17
domain: ai-alignment
secondary_domains: []
format: article
status: processed
priority: high
tags: [AI-Guardrails-Act, Slotkin, NDAA, autonomous-weapons, domestic-surveillance, nuclear, use-based-governance, DoD, Pentagon, legislative-pathway]
---

## Content

Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026 — a five-page bill imposing statutory limits on Department of Defense AI use. The bill would bar DoD from:
1. Using autonomous weapons for lethal force without human authorization
2. Using AI for domestic mass surveillance of Americans
3. Using AI for nuclear weapons launch decisions

**Current status:**
- No co-sponsors as of introduction
- Slotkin aims to fold provisions into the FY2027 NDAA (FY2026 NDAA already signed December 2025)
- Introduced as standalone bill but designed for NDAA vehicle
- Senator Adam Schiff (D-CA) drafting complementary legislation for autonomous weapons and surveillance
- Slotkin serves on Senate Armed Services Committee — relevant committee for NDAA pathway

**Context:** Introduced directly in response to the Anthropic-Pentagon conflict, in which Anthropic refused to allow deployment for autonomous weapons and mass surveillance, was blacklisted by the Trump administration, and received a preliminary injunction on March 26. The bill would convert Anthropic's voluntary contractual restrictions into binding federal law.

**Legislative context:** Congress charted diverging paths on AI in the FY2026 NDAA: the Senate emphasized whole-of-government AI oversight and cross-functional AI oversight teams, while the House directed DoD to survey AI targeting capabilities. The conference process on the FY2026 NDAA is already complete; the FY2027 process begins mid-2026.

## Agent Notes

**Why this matters:** This is the first legislative attempt to convert voluntary corporate AI safety red lines into binding federal law, specifically via use-based governance rather than capability-threshold governance. It answers the Session 16 question about whether use-based governance is emerging: it is being attempted, but via a minority-party bill with no co-sponsors and no Republican support, aimed at a future NDAA.

**What surprised me:** The bill has no co-sponsors at introduction, not even from other Democrats. This is weaker than expected for legislation that Slotkin describes as "common-sense guardrails." The bipartisan framing (nuclear weapons, lethal autonomous weapons) would seem likely to attract cross-party support, but none has materialized.

**What I expected but didn't find:** Any Republican co-sponsors. Any indication that the Anthropic-Pentagon conflict created bipartisan urgency for statutory governance. The conflict appears to be politically polarized: Democrats see it as a safety issue; Republicans see it as a deregulation issue.

**KB connections:**
- voluntary-safety-pledges-cannot-survive-competitive-pressure — this bill is the legislative response to that claim's empirical validation
- ai-critical-juncture-capabilities-governance-mismatch-transformation-window — the Slotkin bill is the key test of whether governance can close the mismatch
- Session 16 CLAIM CANDIDATE C (RSP red lines → statutory law as key test)

**Extraction hints:**
- Claim: AI Guardrails Act as first legislative attempt to convert voluntary corporate safety commitments into statutory use-based governance
- Claim: The bill's no-co-sponsor status and minority-party origin reveal that use-based governance is not yet bipartisan
- The NDAA conference process (FY2027) as the viable pathway for statutory DoD AI safety constraints

**Context:** Slotkin introduced the bill explicitly in the context of the Anthropic-Pentagon dispute. Bill text is available at slotkin.senate.gov. Described by multiple outlets as "the first attempt to convert voluntary corporate AI safety commitments into binding federal law."

## Curator Notes

PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure

WHY ARCHIVED: First legislative attempt to convert voluntary AI safety constraints into statutory law; its trajectory is the key test of whether use-based governance can emerge in the current US political environment

EXTRACTION HINT: Focus on (1) use-based vs capability-threshold framing distinction, (2) the no-co-sponsors status as evidence of governance gap, (3) NDAA conference pathway as the actual legislative route for statutory DoD AI safety constraints