teleo-codex/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-but-lacks-bipartisan-support.md
Teleo Pipeline c74e7e2c5f reweave: connect 29 orphan claims via vector similarity
Threshold: 0.7, Haiku classification, 40 files modified.

Pentagon-Agent: Epimetheus <0144398e-4ed3-4fe2-95a3-3d72e1abf887>
2026-03-31 10:50:34 +00:00

3.5 KiB

type: claim
domain: ai-alignment
description: The first statutory attempt to ban specific DoD AI uses (autonomous lethal force, domestic surveillance, nuclear launch) was introduced as a minority-party bill without any co-sponsors, indicating use-based governance has not achieved political consensus
confidence: experimental
source: Senator Slotkin AI Guardrails Act introduction, March 17, 2026
created: 2026-03-29
attribution:
  extractor:
    handle: theseus
  sourcer:
    handle: senator-elissa-slotkin-/-the-hill
    context: Senator Slotkin AI Guardrails Act introduction, March 17, 2026
related:
  - house senate ai defense divergence creates structural governance chokepoint at conference
  - ndaa conference process is viable pathway for statutory ai safety constraints
  - use based ai governance emerged as legislative framework through slotkin ai guardrails act
reweave_edges:
  - house senate ai defense divergence creates structural governance chokepoint at conference|related|2026-03-31
  - ndaa conference process is viable pathway for statutory ai safety constraints|related|2026-03-31
  - use based ai governance emerged as legislative framework through slotkin ai guardrails act|related|2026-03-31
  - voluntary ai safety commitments to statutory law pathway requires bipartisan support which slotkin bill lacks|supports|2026-03-31
supports:
  - voluntary ai safety commitments to statutory law pathway requires bipartisan support which slotkin bill lacks

Use-based AI governance emerged as a legislative framework in 2026 but lacks bipartisan support: the AI Guardrails Act's introduction with zero co-sponsors reveals political polarization over safety constraints

Senator Slotkin's AI Guardrails Act represents the first legislative attempt to convert voluntary corporate AI safety commitments into binding federal law through use-based restrictions. The bill would prohibit the DoD from: (1) using autonomous weapons for lethal force without human authorization, (2) using AI for domestic mass surveillance, and (3) using AI for nuclear launch decisions. However, the bill was introduced with zero co-sponsors (not even from other Democrats), despite Slotkin framing these as 'common-sense guardrails.' The absence of co-sponsors is particularly striking given that the restrictions mirror Anthropic's voluntary contractual red lines and target use cases (nuclear weapons, autonomous lethal force) that would seem to attract bipartisan concern. The bill's introduction directly followed the Anthropic-Pentagon conflict, in which Anthropic was blacklisted for refusing deployment for autonomous weapons and mass surveillance. This suggests that what appeared to be a potential consensus moment for use-based governance instead revealed deep political polarization: Democrats frame AI safety constraints as necessary guardrails, while Republicans frame them as regulatory overreach. The bill's pathway through the FY2027 NDAA process will test whether use-based governance can achieve legislative traction or remains a minority position.


Relevant Notes:

Topics: