
---
type: claim
domain: ai-alignment
description: The first statutory attempt to ban specific DoD AI uses (autonomous lethal force, domestic surveillance, nuclear launch) was introduced as a minority-party bill without any co-sponsors, indicating use-based governance has not achieved political consensus
confidence: experimental
source: Senator Slotkin AI Guardrails Act introduction, March 17, 2026
created: 2026-03-29
attribution:
  extractor:
    handle: theseus
  sourcer:
    handle: senator-elissa-slotkin-/-the-hill
    context: Senator Slotkin AI Guardrails Act introduction, March 17, 2026
related:
  - house-senate-ai-defense-divergence-creates-structural-governance-chokepoint-at-conference
  - ndaa-conference-process-is-viable-pathway-for-statutory-ai-safety-constraints
  - use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act
  - electoral-investment-becomes-residual-ai-governance-strategy-when-voluntary-and-litigation-routes-insufficient
reweave_edges:
  - house-senate-ai-defense-divergence-creates-structural-governance-chokepoint-at-conference|related|2026-03-31
  - ndaa-conference-process-is-viable-pathway-for-statutory-ai-safety-constraints|related|2026-03-31
  - use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act|related|2026-03-31
  - voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks|supports|2026-03-31
  - electoral-investment-becomes-residual-ai-governance-strategy-when-voluntary-and-litigation-routes-insufficient|related|2026-04-03
supports:
  - voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks
---

Use-based AI governance emerged as a legislative framework in 2026 but lacks bipartisan support, because the AI Guardrails Act, introduced with zero co-sponsors, reveals political polarization over safety constraints

Senator Slotkin's AI Guardrails Act represents the first legislative attempt to convert voluntary corporate AI safety commitments into binding federal law through use-based restrictions. The bill would prohibit the DoD from: (1) using autonomous weapons for lethal force without human authorization, (2) using AI for domestic mass surveillance, and (3) using AI for nuclear launch decisions.

However, the bill was introduced with zero co-sponsors, not even from other Democrats, despite Slotkin framing these as "common-sense guardrails." The absence of co-sponsors is particularly striking given that the restrictions mirror Anthropic's voluntary contractual red lines and target use cases (nuclear weapons, autonomous lethal force) that would seem to attract bipartisan concern. The bill's introduction directly followed the Anthropic-Pentagon conflict, in which Anthropic was blacklisted for refusing to support deployment for autonomous weapons and mass surveillance.

This suggests that what appeared to be a potential consensus moment for use-based governance instead revealed deep political polarization: Democrats frame AI safety constraints as necessary guardrails, while Republicans frame them as regulatory overreach. The bill's pathway through the FY2027 NDAA process will test whether use-based governance can achieve legislative traction or remains a minority position.


Relevant Notes:

Topics: