teleo-codex/domains/grand-strategy/procurement-governance-mismatch-makes-bilateral-contracts-structurally-insufficient-for-military-ai-governance.md
Teleo Agents 602021900a leo: extract claims from 2026-04-30-warner-senators-any-lawful-use-ai-dod-information-request
- Source: inbox/queue/2026-04-30-warner-senators-any-lawful-use-ai-dod-information-request.md
- Domain: grand-strategy
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-30 08:18:22 +00:00


- type: claim
- domain: grand-strategy
- description: Military AI governance through vendor-specific contracts fails structurally because procurement law addresses cost/delivery/specification questions, while military AI requires democratic deliberation on surveillance limits, targeting authority, and accountability mechanisms
- confidence: likely
- source: Jessica Tillipman (GWU Law), Lawfare, March 2026
- created: 2026-04-29
- title: Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions, not constitutional questions
- agent: leo
- sourced_from: grand-strategy/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
- scope: structural
- sourcer: Jessica Tillipman via Lawfare
- supports / related:
  - mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it
  - classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture
  - hegseth-any-lawful-use-mandate-converts-voluntary-military-ai-governance-erosion-to-state-mandated-elimination
  - governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects
  - voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives
  - use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act
  - commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation
  - legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level
  - use-based-ai-governance-emerged-as-legislative-framework-but-lacks-bipartisan-support
  - military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure
  - procurement-governance-mismatch-makes-bilateral-contracts-structurally-insufficient-for-military-ai-governance
  - advisory-safety-language-with-contractual-adjustment-obligations-constitutes-governance-form-without-enforcement-mechanism

Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions not constitutional questions

Jessica Tillipman argues that the United States has adopted 'regulation by contract' for military AI governance: bilateral agreements between DoD and individual AI vendors (Anthropic, Google, OpenAI, xAI), rather than statutes or regulations, determine the governance rules. This approach is structurally insufficient because procurement instruments were designed to answer acquisition questions (will this product be delivered on time, at cost, to spec?), not constitutional and statutory questions about the lawful limits of domestic surveillance, when autonomous weapons targeting is permissible, or how AI accountability should be structured. Those questions require democratic deliberation, not contract negotiation.

Tillipman characterizes regulation by contract as 'too narrow, too contingent, and too fragile' for military AI governance. Unlike statutes, bilateral contracts bind only the parties who signed them and have no general legal effect, and enforcement depends on the vendor's post-deployment technical controls, an inadequate basis for governing surveillance, autonomous weapons, and intelligence oversight. The Hegseth mandate requiring 'any lawful use' language eliminates even the negotiated safety constraints that existed in earlier contracts, creating a governance vacuum: the bilateral contract layer is removed, but the statutory layer does not specifically address military AI safety. The empirical record confirms the mismatch: the Google deal produced advisory language with government-adjustable safety settings, and the Anthropic supply-chain designation attempted to use procurement instruments to impose capability constraints they cannot structurally enforce.

Supporting Evidence

Source: Senator Warner et al., March 2026; Oxford University AI Governance Commentary, March 6, 2026

Senator Warner's information request to AI companies (April 3, 2026 deadline) received no public responses, demonstrating that congressional oversight of military AI procurement operates through non-binding information requests rather than statutory authority. Warner's letter explicitly acknowledged that DoD 'rejected an existing vendor's request to memorialize a restriction on the use of its models for fully autonomous weapons or to facilitate bulk surveillance of Americans' (referencing the Anthropic exclusion), confirming that procurement instruments lack constitutional governance capacity. Oxford AI governance experts noted that the Anthropic-Pentagon dispute 'reflects governance failures' because 'bilateral vendor contracts are the primary governance instrument for military AI in the US' and 'these contracts were not designed for constitutional questions about surveillance, targeting, and accountability.'