From f4b41e4f325caeef46d13ed5afb1b08d8d2979c3 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sun, 29 Mar 2026 02:49:32 +0000 Subject: [PATCH] extract: 2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70> --- ...nce creates a window for transformation.md | 6 ++++ ... constraints rather than enforcing them.md | 6 ++++ ...ework-through-slotkin-ai-guardrails-act.md | 28 +++++++++++++++++++ ...rtisan-support-which-slotkin-bill-lacks.md | 28 +++++++++++++++++++ ...i-guardrails-act-dod-autonomous-weapons.md | 17 ++++++++++- 5 files changed, 84 insertions(+), 1 deletion(-) create mode 100644 domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md create mode 100644 domains/ai-alignment/voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md diff --git a/domains/ai-alignment/AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md b/domains/ai-alignment/AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md index 3ecbc572..7a421c4d 100644 --- a/domains/ai-alignment/AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md +++ b/domains/ai-alignment/AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md @@ -32,6 +32,12 @@ The HKS analysis shows the governance window is being used in a concerning direc IAISR 2026 documents a 'growing mismatch between AI capability advance speed and governance pace' as international scientific consensus, with frontier models now passing professional licensing 
exams and achieving PhD-level performance while governance frameworks show 'limited real-world evidence of effectiveness.' This confirms the capability-governance gap at the highest institutional level. +### Additional Evidence (challenge) +*Source: [[2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons]] | Added: 2026-03-29* + +The AI Guardrails Act's failure to attract any co-sponsors despite addressing nuclear weapons, autonomous lethal force, and mass surveillance suggests that the 'window for transformation' may be closing or already closed. Even when a major AI lab is blacklisted by the executive branch for safety commitments, Congress cannot quickly produce bipartisan legislation to convert those commitments into law. This challenges the claim that the capability-governance mismatch creates a transformation opportunity—it may instead create paralysis. + + Relevant Notes: - [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] -- the specific dynamic creating this critical juncture diff --git a/domains/ai-alignment/government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md b/domains/ai-alignment/government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md index d58182f4..2a2197ca 100644 --- a/domains/ai-alignment/government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md +++ b/domains/ai-alignment/government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md @@ -36,6 +36,12 @@ The 2026 DoD/Anthropic confrontation provides a concrete example: the Department UK AISI's renaming from AI 
Safety Institute to AI Security Institute represents a softer version of the same dynamic: a government body shifts its institutional focus away from alignment-relevant control evaluations (which it had been systematically building) toward cybersecurity concerns, suggesting mandate drift under political or commercial pressure. +### Additional Evidence (extend) +*Source: [[2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons]] | Added: 2026-03-29* + +The Slotkin bill was introduced directly in response to the Anthropic-Pentagon blacklisting, attempting to make Anthropic's voluntary restrictions (no autonomous weapons, no mass surveillance, no nuclear launch) into binding federal law that would apply to all DoD contractors. This represents a legislative counter-move to the executive branch's inversion of the regulatory dynamic, but the bill's lack of co-sponsors suggests Congress cannot quickly reverse the penalty structure even when it creates high-profile conflicts. + + Relevant Notes: - [[AI alignment is a coordination problem not a technical problem]] -- government as coordination-breaker rather than coordinator is a new dimension of the coordination failure diff --git a/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md b/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md new file mode 100644 index 00000000..7089b0b4 --- /dev/null +++ b/domains/ai-alignment/use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md @@ -0,0 +1,28 @@ +--- +type: claim +domain: ai-alignment +description: The Slotkin bill represents the first statutory attempt to regulate AI through use restrictions (autonomous weapons, mass surveillance, nuclear launch) rather than capability-based controls +confidence: experimental +source: Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026 +created:
2026-03-29 +attribution: + extractor: + - handle: "theseus" + sourcer: + - handle: "senator-elissa-slotkin" + context: "Senator Elissa Slotkin / The Hill, AI Guardrails Act introduced March 17, 2026" +--- + +# Use-based AI governance emerged as a legislative framework through the AI Guardrails Act which prohibits specific DoD AI applications rather than capability thresholds + +The AI Guardrails Act, introduced by Senator Slotkin on March 17, 2026, is the first federal bill to impose use-based restrictions on AI deployment rather than capability-threshold governance. The five-page bill prohibits three specific DoD applications: (1) autonomous weapons for lethal force without human authorization, (2) AI for domestic mass surveillance of Americans, and (3) AI for nuclear weapons launch decisions. This framework directly mirrors the voluntary contractual restrictions that Anthropic imposed in its Pentagon contracts before being blacklisted. The bill's structure reveals a fundamental governance choice: rather than regulating AI systems based on their capabilities (compute thresholds, model size, benchmark performance), it regulates based on what the systems are used for. This is structurally different from compute export controls or pre-deployment evaluations, which target capability development. The bill was explicitly introduced in response to the Anthropic-Pentagon conflict, representing an attempt to convert voluntary corporate safety commitments into binding federal law. However, the bill had zero co-sponsors at introduction and faces an uncertain path through the FY2027 NDAA process, suggesting that use-based governance remains politically contested rather than consensus policy.
+ + --- + +Relevant Notes: +- voluntary-safety-pledges-cannot-survive-competitive-pressure +- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] +- [[only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient]] + +Topics: +- [[_map]] diff --git a/domains/ai-alignment/voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md b/domains/ai-alignment/voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md new file mode 100644 index 00000000..4794e31e --- /dev/null +++ b/domains/ai-alignment/voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md @@ -0,0 +1,28 @@ +--- +type: claim +domain: ai-alignment +description: Despite framing around nuclear weapons and autonomous lethal force that should attract cross-party support, the bill has no Republican or Democratic co-sponsors, revealing a governance gap +confidence: experimental +source: Senator Elissa Slotkin / The Hill, AI Guardrails Act status March 17, 2026 +created: 2026-03-29 +attribution: + extractor: + - handle: "theseus" + sourcer: + - handle: "senator-elissa-slotkin" + context: "Senator Elissa Slotkin / The Hill, AI Guardrails Act status March 17, 2026" +--- + +# The pathway from voluntary AI safety commitments to statutory law requires bipartisan support which the AI Guardrails Act lacks as evidenced by zero co-sponsors at introduction + +The AI Guardrails Act was introduced with zero co-sponsors despite addressing issues that Slotkin describes as 'common-sense guardrails' and that would seem to have bipartisan appeal (nuclear weapons safety, preventing autonomous killing, protecting
Americans from mass surveillance). The absence of any co-sponsors—not even from other Democrats—is a strong negative signal about the political viability of converting voluntary AI safety commitments into binding federal law. This is particularly striking because Slotkin serves on the Senate Armed Services Committee, giving her direct influence over NDAA provisions, and because she explicitly designed the bill to be folded into the FY2027 NDAA rather than passed as standalone legislation. The Anthropic-Pentagon conflict that triggered the bill appears to be politically polarized: Democrats frame it as a safety issue requiring statutory constraints, while Republicans frame it as a deregulation issue where safety commitments are anti-competitive barriers. Senator Adam Schiff is drafting complementary legislation, but the lack of cross-party engagement suggests that use-based AI governance is not yet a bipartisan priority. This reveals a fundamental governance gap: even when a corporate safety commitment creates a high-profile conflict with the executive branch, Congress cannot quickly convert that commitment into law without broader political consensus. 
+ +--- + +Relevant Notes: +- voluntary-safety-pledges-cannot-survive-competitive-pressure +- [[only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient]] +- [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]] + +Topics: +- [[_map]] diff --git a/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md b/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md index 06bd1bef..2b70b037 100644 --- a/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md +++ b/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md @@ -7,9 +7,14 @@ date: 2026-03-17 domain: ai-alignment secondary_domains: [] format: article -status: unprocessed +status: processed priority: high tags: [AI-Guardrails-Act, Slotkin, NDAA, autonomous-weapons, domestic-surveillance, nuclear, use-based-governance, DoD, Pentagon, legislative-pathway] +processed_by: theseus +processed_date: 2026-03-29 +claims_extracted: ["use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md", "voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md"] +enrichments_applied: ["government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md"] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content @@ -55,3 +60,13 @@ Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026 PRIMARY CONNECTION: 
voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: First legislative attempt to convert voluntary AI safety constraints into statutory law; its trajectory is the key test of whether use-based governance can emerge in the current US political environment EXTRACTION HINT: Focus on (1) the use-based vs capability-threshold framing distinction, (2) the no-co-sponsors status as evidence of a governance gap, (3) the NDAA conference pathway as the actual legislative route for statutory DoD AI safety constraints + + +## Key Facts +- AI Guardrails Act is five pages long +- Bill introduced March 17, 2026 +- Senator Slotkin serves on the Senate Armed Services Committee +- FY2026 NDAA already signed December 2025 +- FY2027 NDAA process begins mid-2026 +- Senator Adam Schiff is drafting complementary autonomous weapons and surveillance legislation +- FY2026 NDAA conference process showed divergence: the Senate emphasized whole-of-government AI oversight, while the House directed DoD to survey AI targeting capabilities