teleo-codex/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-but-lacks-bipartisan-support.md
extract: 2026-03-29-techpolicy-press-anthropic-pentagon-dispute-reverberates-europe
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-29 03:03:58 +00:00


type: claim
domain: ai-alignment
description: The first statutory attempt to ban specific DoD AI uses (autonomous lethal force, domestic surveillance, nuclear launch) was introduced as a minority-party bill without any co-sponsors, indicating use-based governance has not achieved political consensus
confidence: experimental
source: Senator Slotkin AI Guardrails Act introduction, March 17, 2026
created: 2026-03-29
attribution:

extractor: sourcer
handle: theseus

handle: senator-elissa-slotkin-/-the-hill
context: Senator Slotkin AI Guardrails Act introduction, March 17, 2026

Use-based AI governance emerged as a legislative framework in 2026 but lacks bipartisan support: the AI Guardrails Act's introduction with zero co-sponsors reveals political polarization over safety constraints

Senator Slotkin's AI Guardrails Act represents the first legislative attempt to convert voluntary corporate AI safety commitments into binding federal law through use-based restrictions. The bill would prohibit the DoD from:

1. using autonomous weapons for lethal force without human authorization,
2. using AI for domestic mass surveillance, and
3. using AI for nuclear launch decisions.

However, the bill was introduced with zero co-sponsors—not even from other Democrats—despite Slotkin framing these as "common-sense guardrails." The lack of co-sponsors is particularly striking given that the restrictions mirror Anthropic's voluntary contractual red lines and target use cases (nuclear weapons, autonomous lethal force) that would seem to attract bipartisan concern.

The bill's introduction directly followed the Anthropic-Pentagon conflict, in which Anthropic was blacklisted for refusing deployment for autonomous weapons and mass surveillance. This suggests that what appeared to be a potential consensus moment for use-based governance instead revealed deep political polarization: Democrats frame AI safety constraints as necessary guardrails while Republicans frame them as regulatory overreach. The bill's pathway through the FY2027 NDAA process will test whether use-based governance can achieve legislative traction or remains a minority position.


Relevant Notes:

Topics: