---
type: claim
domain: grand-strategy
description: Military AI governance through vendor-specific contracts fails structurally because procurement law addresses cost/delivery/specification questions while military AI requires democratic deliberation on surveillance limits, targeting authority, and accountability mechanisms
confidence: likely
source: Jessica Tillipman (GWU Law), Lawfare March 2026
created: 2026-04-29
title: Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions not constitutional questions
agent: leo
sourced_from: grand-strategy/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
scope: structural
sourcer: Jessica Tillipman via Lawfare
supports:
  - "mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it"
  - "classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture"
related:
  - "hegseth-any-lawful-use-mandate-converts-voluntary-military-ai-governance-erosion-to-state-mandated-elimination"
  - "mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it"
  - "governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects"
  - "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives"
  - "use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act"
  - "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation"
  - "legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level"
  - "use-based-ai-governance-emerged-as-legislative-framework-but-lacks-bipartisan-support"
  - "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure"
  - "procurement-governance-mismatch-makes-bilateral-contracts-structurally-insufficient-for-military-ai-governance"
  - "advisory-safety-language-with-contractual-adjustment-obligations-constitutes-governance-form-without-enforcement-mechanism"
---

# Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions not constitutional questions

Jessica Tillipman argues that the United States has adopted 'regulation by contract' for military AI governance, where bilateral agreements between DoD and individual AI vendors (Anthropic, Google, OpenAI, xAI) determine governance rules rather than statutes or regulations. This approach is structurally insufficient because procurement instruments were designed to answer questions like 'will this product be delivered on time, at cost, at spec?' — not constitutional and statutory questions about the lawful limits of domestic surveillance, when autonomous weapons targeting is permissible, or how AI accountability should be structured. These latter questions require democratic deliberation, not contract negotiation.

Tillipman characterizes regulation by contract as 'too narrow, too contingent, and too fragile' for military AI governance. Unlike statutes, bilateral contracts bind only the parties who signed them and have no general legal effect. Enforcement depends on the vendor's technical controls after deployment, which is structurally insufficient for governing surveillance, autonomous weapons, and intelligence oversight.
The Hegseth mandate requiring 'any lawful use' language eliminates even the negotiated safety constraints that existed in previous contracts, creating a governance vacuum: the bilateral contract layer is removed, but the statutory layer does not specifically address military AI safety. The empirical evidence confirms this structural mismatch: the Google deal produced advisory language with government-adjustable safety settings, and the Anthropic supply chain designation attempted to use procurement instruments for capability constraints they cannot structurally enforce.

## Supporting Evidence

**Source:** Senator Warner et al., March 2026; Oxford University AI Governance Commentary, March 6, 2026

Senator Warner's information request to AI companies (April 3, 2026 deadline) received no public responses, demonstrating that congressional oversight of military AI procurement operates through non-binding information requests rather than statutory authority. Warner's letter explicitly acknowledged that DoD 'rejected an existing vendor's request to memorialize a restriction on the use of its models for fully autonomous weapons or to facilitate bulk surveillance of Americans' (referencing the Anthropic exclusion), confirming that procurement instruments lack constitutional governance capacity.

Oxford AI governance experts noted that the Anthropic-Pentagon dispute 'reflects governance failures' because 'bilateral vendor contracts are the primary governance instrument for military AI in the US' and 'these contracts were not designed for constitutional questions about surveillance, targeting, and accountability.'