leo: extract claims from 2026-03-10-lawfare-tillipman-military-ai-policy-by-contract

- Source: inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
- Domain: grand-strategy
- Claims: 1, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
Teleo Agents 2026-04-29 08:16:35 +00:00
parent 14b50f4e30
commit 547735a281
5 changed files with 46 additions and 3 deletions


@@ -11,7 +11,7 @@ sourced_from: grand-strategy/2026-03-xx-eff-openai-pentagon-weasel-words-surveil
scope: structural
sourcer: Electronic Frontier Foundation
supports: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives"]
- related: ["three-track-corporate-safety-governance-stack-reveals-sequential-ceiling-architecture", "international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening", "eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay", "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance"]
+ related: ["three-track-corporate-safety-governance-stack-reveals-sequential-ceiling-architecture", "international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening", "eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay", "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure"]
---
# Commercial contract governance of military AI produces form-substance divergence through statutory authority preservation that voluntary amendments cannot override
@@ -24,3 +24,10 @@ EFF's analysis of OpenAI's amended Pentagon contract demonstrates that commercia
**Source:** CNBC/Axios/NBC News, March 3, 2026; EFF 'Weasel Words' analysis March 2026
OpenAI amended its Pentagon contract within 3 days under commercial pressure (1.5M user quits per Let's Data Science analysis) to add explicit surveillance prohibitions. However, EFF's analysis confirms the amendments are insufficient: the contract specifically refers to 'commercially acquired or public information,' meaning non-public intelligence collection remains uncovered. Intelligence agencies (CIA, NSA, DIA) operate under legal authorities distinct from 'lawful surveillance' as ordinarily understood. The 'any lawful use' structural loophole therefore remains open for intelligence agencies operating under existing statutory authority.
## Extending Evidence
**Source:** Tillipman, Lawfare March 2026
Tillipman's 'regulation by contract' critique explains why form-substance divergence is inevitable in commercial contract governance: contracts cannot provide the democratic accountability, public deliberation, or institutional durability that constitutional questions require, so even binding contract terms diverge from governance substance.


@@ -31,3 +31,10 @@ REAIM demonstrates epistemic coordination (three summits, documented frameworks,
**Source:** Synthesis Law Review Blog, 2026-04-13
Despite 'multiple international summits and frameworks,' there is 'still no Geneva Convention for AI' after 8+ years. The Council of Europe treaty achieves epistemic coordination (documented consensus on principles) while operational coordination fails through national security carve-outs. This is the international expression of epistemic-operational divergence—agreement on what should happen without binding implementation in high-stakes domains.
## Extending Evidence
**Source:** Tillipman, Lawfare March 2026
Tillipman provides a structural diagnosis for why the operational gap persists: the governance instrument (contracts) is architecturally mismatched to the governance task (constitutional questions), making operational coordination structurally impossible even when epistemic coordination exists.


@@ -11,7 +11,7 @@ sourced_from: grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-class
scope: structural
sourcer: "@TheDefensePost"
supports: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure"]
- related: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "pentagon-military-ai-contracts-systematically-demand-any-lawful-use-terms-as-confirmed-by-three-independent-lab-negotiations"]
+ related: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "pentagon-military-ai-contracts-systematically-demand-any-lawful-use-terms-as-confirmed-by-three-independent-lab-negotiations", "pentagon-ai-contract-negotiations-stratify-into-three-tiers-creating-inverse-market-signal-rewarding-minimum-constraint"]
---
# Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations
@@ -45,3 +45,10 @@ The Google employee letter confirms that the Pentagon is pushing 'all lawful use
**Source:** Google-Pentagon Gemini classified negotiations, April 2026
The Google-Pentagon classified contract negotiation adds a third confirmed case of the Pentagon pushing 'all lawful uses' contract language, alongside the OpenAI and Anthropic negotiations. The pattern is now confirmed across all three major AI labs in contract discussions.
## Extending Evidence
**Source:** Tillipman, Lawfare March 2026
Tillipman's analysis shows that the Hegseth mandate requiring 'any lawful use' language actively weakens the procurement-as-governance mechanism by mandating the removal of safety constraints from contracts, creating a governance vacuum: the bilateral contract layer is removed, but the statutory layer does not address military AI safety.


@@ -0,0 +1,19 @@
---
type: claim
domain: grand-strategy
description: Military AI governance through vendor-specific procurement contracts fails structurally because procurement law was designed for acquisition questions (cost, delivery, specification), not constitutional questions about surveillance, targeting, and accountability
confidence: likely
source: Jessica Tillipman (GWU Law), Lawfare March 2026
created: 2026-04-29
title: Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI because acquisition instruments cannot answer constitutional questions
agent: leo
sourced_from: grand-strategy/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
scope: structural
sourcer: Jessica Tillipman via Lawfare
supports: ["mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it", "classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture"]
related: ["governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects", "mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it", "epistemic-coordination-outpaces-operational-coordination-in-ai-governance-creating-documented-consensus-on-fragmented-implementation", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level"]
---
# Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI because acquisition instruments cannot answer constitutional questions
Jessica Tillipman argues that the United States has adopted 'regulation by contract' for military AI governance, where bilateral vendor-government agreements determine governance rules rather than statutes or regulations. This approach is structurally insufficient because procurement instruments were designed to answer acquisition questions—will this product be delivered on time, at cost, at specification—not constitutional and statutory questions about the lawful limits of domestic surveillance, when autonomous weapons targeting is permissible, or how AI accountability should be structured. These latter questions require democratic deliberation and institutional durability that bilateral contracts cannot provide. Unlike statutes, contracts bind only the parties who signed them and have no general legal effect. Enforcement depends on the vendor's technical controls after deployment, which Tillipman characterizes as 'too narrow, too contingent, and too fragile' for governing surveillance, autonomous weapons, and intelligence oversight. The Hegseth mandate requiring 'any lawful use' language eliminates even the negotiated safety constraints that existed in previous contracts, creating a governance vacuum where the bilateral contract layer is removed but the statutory layer doesn't specifically address military AI safety. This structural mismatch is not correctable through better contract drafting—it is an architectural problem where the governance instrument (procurement) is fundamentally mismatched to the governance task (constitutional limits on state power).


@@ -7,10 +7,13 @@ date: 2026-03-10
domain: grand-strategy
secondary_domains: [ai-alignment]
format: article
- status: unprocessed
+ status: processed
+ processed_by: leo
+ processed_date: 2026-04-29
priority: high
tags: [regulation-by-contract, procurement-governance, military-AI, Tillipman, Lawfare, democratic-accountability, structural-critique, bilateral-agreements, monitoring-gap, surveillance, autonomous-weapons]
intake_tier: research-task
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content