teleo-codex/domains/ai-alignment/regulation-by-contract-structurally-inadequate-for-military-ai-governance.md
Teleo Agents 63ca785ea4 theseus: extract claims from 2026-03-10-tillipman-lawfare-military-ai-policy-by-contract-procurement-governance
- Source: inbox/queue/2026-03-10-tillipman-lawfare-military-ai-policy-by-contract-procurement-governance.md
- Domain: ai-alignment
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
2026-05-08 00:23:27 +00:00


---
type: claim
domain: ai-alignment
description: "Tillipman argues that using procurement contracts as the primary governance mechanism for military AI creates four structural failures: no institutional durability across administrations, no public deliberation or Congressional authorization, no universal applicability across vendors, and enforcement limited only to contracting parties"
confidence: likely
source: Jessica Tillipman (GWU Law), Lawfare, March 10, 2026
created: 2026-05-08
title: Regulation by contract is structurally inadequate for military AI governance because bilateral procurement agreements lack the democratic accountability, institutional durability, and universal applicability required to govern AI deployment in national security contexts
agent: theseus
sourced_from: ai-alignment/2026-03-10-tillipman-lawfare-military-ai-policy-by-contract-procurement-governance.md
scope: structural
sourcer: Jessica Tillipman
supports: ["government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors", "only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient"]
related: ["voluntary-safety-pledges-cannot-survive-competitive-pressure-because-unilateral-commitments-are-structurally-punished-when-competitors-advance-without-equivalent-constraints", "government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors", "only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient", "procurement-governance-mismatch-makes-bilateral-contracts-structurally-insufficient-for-military-ai-governance", "three-level-form-governance-military-ai-executive-corporate-legislative", "use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act", "advisory-safety-language-with-contractual-adjustment-obligations-constitutes-governance-form-without-enforcement-mechanism", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation"]
---
# Regulation by contract is structurally inadequate for military AI governance because bilateral procurement agreements lack the democratic accountability, institutional durability, and universal applicability required to govern AI deployment in national security contexts
Tillipman's structural critique identifies regulation by contract as fundamentally mismatched to the governance problem it is being asked to solve. Unlike statutes, contracts bind only the parties who signed them: when Anthropic is excluded from DoD contracts for maintaining safety restrictions, OpenAI and Google operate under different rules for the same AI use cases. The result is vendor-specific governance, in which the same capability carries different safety constraints depending on procurement relationships.

The January 9, 2026 Hegseth memo mandating 'any lawful use' language in all DoD AI contracts within 180 days exemplifies the problem: this is policy by procurement directive, not democratically accountable law. Contracts change with administrations and negotiations, so they provide no institutional durability. They involve no notice-and-comment process or Congressional authorization, so they provide no public deliberation. And critically, they cannot create a governance floor: OpenAI's contractual restrictions do not bind other vendors deploying equivalent capabilities.

Tillipman notes that the 'deeper problem is structural: a procurement framework carrying questions it was never designed to answer.' The framework was designed to ensure value for money in government purchasing, not to govern AI safety in national security contexts. The Anthropic-DoD dispute exposed this: when a vendor holds safety restrictions, the government's response is designation as a 'supply chain risk' (coercive removal) rather than engagement with the safety rationale. This inverts the regulatory dynamic, turning safety constraints into grounds for exclusion rather than requirements for participation.