leo: extract claims from 2026-03-10-lawfare-tillipman-military-ai-policy-by-contract

- Source: inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
- Domain: grand-strategy
- Claims: 1, Entities: 0
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
This commit is contained in:
parent 7309c06349
commit cf36c34f51

6 changed files with 52 additions and 6 deletions
@@ -31,3 +31,10 @@ REAIM demonstrates epistemic coordination (three summits, documented frameworks,

**Source:** Synthesis Law Review Blog, 2026-04-13

Despite 'multiple international summits and frameworks,' there is 'still no Geneva Convention for AI' after 8+ years. The Council of Europe treaty achieves epistemic coordination (documented consensus on principles) while operational coordination fails through national security carve-outs. This is the international expression of epistemic-operational divergence—agreement on what should happen without binding implementation in high-stakes domains.

## Extending Evidence

**Source:** Tillipman, Lawfare March 2026

Tillipman adds a structural diagnosis of why the operational gap persists: the governance instrument (bilateral contracts) is architecturally mismatched to the governance task (constitutional questions about surveillance, targeting, accountability). The gap is not just political but structural: procurement law cannot answer the questions military AI governance requires.
@@ -10,13 +10,16 @@ agent: leo
sourced_from: grand-strategy/2026-04-14-axios-cisa-cuts-mythos-governance-conflict.md
scope: structural
sourcer: Axios
related:
- international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening
- frontier-ai-capability-national-security-criticality-prevents-government-from-enforcing-own-governance-instruments
- private-ai-lab-access-restrictions-create-government-offensive-defensive-capability-asymmetries-without-accountability-structure
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks
related: ["international-ai-governance-form-substance-divergence-enables-simultaneous-treaty-ratification-and-domestic-implementation-weakening", "frontier-ai-capability-national-security-criticality-prevents-government-from-enforcing-own-governance-instruments", "private-ai-lab-access-restrictions-create-government-offensive-defensive-capability-asymmetries-without-accountability-structure", "supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks", "governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects", "coercive-governance-instruments-create-offense-defense-asymmetries-when-applied-to-dual-use-capabilities", "coercive-governance-instruments-produce-offense-defense-asymmetries-through-selective-enforcement-within-deploying-agency", "coercive-ai-governance-instruments-self-negate-at-operational-timescale-when-governing-strategically-indispensable-capabilities"]
---
# Governance instrument inversion occurs when policy tools produce the opposite of their stated objective through structural interaction effects between multiple simultaneous policies

The Trump administration's Mythos response reveals a distinct failure mode: governance instrument inversion, where policy tools produce outcomes opposite to their stated objectives through structural interaction effects. Three simultaneous policies—(1) CISA budget cuts under DOGE, (2) Pentagon supply chain designation of Anthropic, and (3) Mythos deployment increasing cyber threat surface—interact to degrade US cybersecurity despite each being individually justified on security or efficiency grounds. The supply chain designation was intended to coerce Anthropic into compliance and protect national security, but it blocks CISA's access to the most powerful defensive cybersecurity tool. CISA cuts were intended to improve government efficiency, but they reduce defensive capacity when threats are escalating. The result is a self-inflicted governance crisis where the administration cannot course-correct without either dropping the lawsuit (losing coercive leverage) or accepting indefinite defensive degradation. This differs from governance laundering (form-substance divergence) or simple policy failure—it's a case where the instruments themselves, through their interaction, invert the policy objective. The Axios framing emphasizes this is not adversarial failure but internal coherence failure in governance architecture.

## Extending Evidence

**Source:** Tillipman, Lawfare March 2026

Regulation by contract is a specific instance of instrument inversion: applying procurement instruments (designed for acquisition) to governance tasks (requiring constitutional deliberation) produces the opposite of the stated objective. Instead of governance clarity, it produces governance ambiguity because the instrument cannot structurally answer the questions being asked of it.
@@ -18,3 +18,10 @@ related: ["mutually-assured-deregulation-makes-voluntary-ai-governance-structura

# Hegseth's January 2026 'any lawful use' mandate converts voluntary military AI governance erosion from market equilibrium to state-mandated elimination through procurement exclusion

Secretary of Defense Pete Hegseth's January 2026 AI strategy memorandum mandates that the undersecretary for acquisition and sustainment incorporate standard 'any lawful use' language into any DoD AI procurement contract within 180 days (deadline approximately July 2026). This converts what has been analyzed as voluntary governance erosion driven by competitive pressure (MAD mechanism) into mandatory governance elimination driven by state policy. The mandate means companies cannot sign DoD AI contracts at Tier 1 or Tier 2 terms without violating DoD procurement policy. The enforcement sequence confirms this: Hegseth mandates Tier 3 (January 2026) → Anthropic refuses to update existing contract → designated supply chain risk (February 2026). Google's deal signed April 28, 2026 (107 days into the 180-day window) accepted 'any lawful use' terms, demonstrating the mandate made continued negotiation for Tier 2 terms structurally untenable. The mandate operates by fiat at the policy layer, not through incentive alignment at the market layer. This is a stronger forcing function than MAD because it creates procurement exclusion rather than competitive disadvantage.

## Extending Evidence

**Source:** Tillipman, Lawfare March 2026

The Hegseth mandate makes the procurement-governance mismatch worse: it does not merely leave procurement as an insufficient governance mechanism, it actively weakens that mechanism by requiring the removal of safety constraints from contracts. The result is that the bilateral contract layer is removed, governance falls back to a statutory layer that does not address military AI safety, and a governance vacuum is created.
@@ -39,3 +39,10 @@ Barrett's game-theoretic analysis provides formal proof: voluntary agreements ca

**Source:** TechPolicy.Press EU AI Act military exemption analysis, April 2026

The EU AI Act's August 2026 enforcement demonstrates that mandatory legislative governance can close coordination gaps for civilian AI applications while simultaneously widening gaps for military AI through explicit exemptions. The dual-use directional asymmetry (military-to-civilian migration triggers compliance; civilian-to-military may not) creates a regulatory arbitrage opportunity that incentivizes developing AI under military exemption first, then migrating to civilian markets. This reveals that mandatory governance can create perverse incentives when exemptions are asymmetric, potentially widening rather than closing coordination gaps in dual-use technology domains.

## Extending Evidence

**Source:** Tillipman, Lawfare March 2026

Tillipman provides the legal mechanism for why voluntary governance widens the gap: procurement law was designed for acquisition questions (cost, delivery, specification) not constitutional questions (surveillance limits, targeting authority, accountability). This architectural mismatch means bilateral contracts are 'too narrow, too contingent, and too fragile' to provide democratic accountability, making statutory governance not just preferable but structurally necessary for military AI.
@@ -0,0 +1,19 @@
---
type: claim
domain: grand-strategy
description: Military AI governance through vendor-specific contracts fails structurally because procurement law addresses cost/delivery/specification questions while military AI requires democratic deliberation on surveillance limits, targeting authority, and accountability mechanisms
confidence: likely
source: Jessica Tillipman (GWU Law), Lawfare March 2026
created: 2026-04-29
title: Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions not constitutional questions
agent: leo
sourced_from: grand-strategy/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract.md
scope: structural
sourcer: Jessica Tillipman via Lawfare
supports: ["mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it", "classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture"]
related: ["hegseth-any-lawful-use-mandate-converts-voluntary-military-ai-governance-erosion-to-state-mandated-elimination", "mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it", "governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects", "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level", "use-based-ai-governance-emerged-as-legislative-framework-but-lacks-bipartisan-support", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure"]
---

# Procurement governance mismatch makes bilateral contracts structurally insufficient for military AI governance because procurement instruments were designed for acquisition questions not constitutional questions

Jessica Tillipman argues that the United States has adopted 'regulation by contract' for military AI governance, where bilateral agreements between DoD and individual AI vendors (Anthropic, Google, OpenAI, xAI) determine governance rules rather than statutes or regulations. This approach is structurally insufficient because procurement instruments were designed to answer questions like 'will this product be delivered on time, at cost, at spec?' — not constitutional and statutory questions about the lawful limits of domestic surveillance, when autonomous weapons targeting is permissible, or how AI accountability should be structured. These latter questions require democratic deliberation, not contract negotiation. Tillipman characterizes regulation by contract as 'too narrow, too contingent, and too fragile' for military AI governance. Unlike statutes, bilateral contracts bind only the parties who signed them and have no general legal effect. Enforcement depends on the vendor's technical controls after deployment, which is structurally insufficient for governing surveillance, autonomous weapons, and intelligence oversight. The Hegseth mandate requiring 'any lawful use' language eliminates even the negotiated safety constraints that existed in previous contracts, creating a governance vacuum where the bilateral contract layer is removed but the statutory layer doesn't specifically address military AI safety. This structural mismatch is confirmed by the empirical evidence: the Google deal produced advisory language with government-adjustable safety settings, and the Anthropic supply chain designation attempted to use procurement instruments for capability constraints they cannot structurally enforce.
@@ -7,10 +7,13 @@ date: 2026-03-10
domain: grand-strategy
secondary_domains: [ai-alignment]
format: article
status: unprocessed
status: processed
processed_by: leo
processed_date: 2026-04-29
priority: high
tags: [regulation-by-contract, procurement-governance, military-AI, Tillipman, Lawfare, democratic-accountability, structural-critique, bilateral-agreements, monitoring-gap, surveillance, autonomous-weapons]
intake_tier: research-task
extraction_model: "anthropic/claude-sonnet-4.5"
---

## Content