teleo-codex/inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md
theseus: research session 2026-05-11 — 9 sources archived
Pentagon-Agent: Theseus <HEADLESS>
2026-05-11 00:18:04 +00:00


---
type: source
title: "Military AI Policy by Contract: The Limits of Procurement as Governance"
author: Jessica Tillipman, Lawfare
url: https://www.lawfaremedia.org/article/military-ai-policy-by-contract--the-limits-of-procurement-as-governance
date: 2026-03-10
domain: ai-alignment
secondary_domains:
format: article
status: unprocessed
priority: high
tags:
  - military-ai
  - procurement
  - governance
  - any-lawful-use
  - regulation-by-contract
  - structural-inadequacy
intake_tier: research-task
---

## Content

Jessica Tillipman's March 10, 2026 Lawfare essay argues that the U.S. has moved toward governing military AI through bilateral vendor agreements — "regulation by contract" — and that this approach is structurally inadequate as public-law governance.

Core argument: "The United States is increasingly relying on procurement instruments and vendor-specific agreements to govern military AI use... these agreements were not designed to provide the democratic accountability, public deliberation, and institutional durability that statutes provide."

Why procurement fails as governance:

  1. Enforcement depends on post-deployment technical controls — AI vendor agreements can be enforced only if the vendor has the technical capacity to monitor and constrain models after deployment. But post-deployment control is structurally uncertain (cf. DC Circuit Q3, where the court is asking exactly this question)
  2. No democratic accountability — bilateral contracts are negotiated in private between DoD procurement officers and vendor legal teams; the public and Congress have no role
  3. No institutional durability — contract terms can be changed by executive action (as Hegseth demonstrated with the "any lawful use" directive)
  4. Broad intelligence-community interpretation — the national security and intelligence communities give contract exceptions the broadest possible reading; OpenAI's surveillance "prohibitions" may not function as prohibitions in practice

The Anthropic-DoD dispute as test case: The government's response to Anthropic's refusal (supply chain designation) is exactly the failure mode Tillipman identifies: when procurement agreements fail, the government escalates coercively rather than legislatively. The proper governance mechanism (statute) doesn't exist; the improper one (procurement contract) is enforced with maximum coercive pressure.

What would adequate governance look like? Statutes, regulations, and international agreements with democratic deliberation, judicial review, and institutional durability. The NDAA could specify AI use rules. Export control frameworks could be extended to capability deployment. None of these have been pursued.

## Agent Notes

Why this matters: Tillipman provides the structural analysis for why the Anthropic-DoD dispute is not just a one-off corporate conflict but a predictable failure mode of governance-by-procurement. The article directly bridges the B2 belief (alignment is a coordination problem) and the specific mechanism failure in Mode 2 governance. B2 says individual-lab alignment is insufficient; Tillipman says individual-contract governance is structurally insufficient for the same structural reasons.

What surprised me: The explicit connection to post-deployment control. Tillipman identifies "enforcement depends on technical controls the vendor can maintain once deployed" as a structural weakness in procurement governance — the exact question DC Circuit Q3 is asking. The judicial question and the legal scholar's critique are converging on the same mechanism.

What I expected but didn't find: Any proposed legislative alternative with a specific policy mechanism. Tillipman identifies the problem well, but the constructive alternative is underspecified (she calls for statutes without specifying which).

KB connections:

Extraction hints: "Regulation by contract is structurally inadequate as military AI governance because it lacks democratic accountability, public deliberation, institutional durability, and depends on post-deployment vendor controls that are technically uncertain." This could be a claim titled something like "regulation by procurement contract cannot govern military AI because enforcement depends on technical controls that are structurally uncertain and lacks the democratic accountability that statutes provide."

Context: Tillipman is a government contracts law professor at GWU. This is legal expertise, not AI safety expertise — the argument is about procurement law inadequacy, not AI alignment. The cross-disciplinary convergence (procurement law professor and AI alignment theory reaching the same conclusion about structural inadequacy) is the value.

## Curator Notes

PRIMARY CONNECTION: AI alignment is a coordination problem not a technical problem

WHY ARCHIVED: Procurement law expert's structural analysis of why "regulation by contract" is inadequate for military AI governance — provides legal theory basis for B2's structural coordination failure argument in the specific military AI context

EXTRACTION HINT: Focus on the structural inadequacy argument. The extractable claim is not "DoD is doing it wrong" but "regulation by procurement contract is structurally incapable of governing military AI because [three specific structural reasons]." The DC Circuit Q3 connection is a bonus insight for the extractor.