theseus: extract claims from 2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits

- Source: inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md
- Domain: ai-alignment
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
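The extraction metadata above names OpenRouter and the `anthropic/claude-sonnet-4.5` model slug. A minimal sketch of how the ingest step might assemble its request, assuming OpenRouter's OpenAI-compatible chat-completions schema; the system prompt, function name, and response shape are hypothetical and not from the commit:

```python
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "anthropic/claude-sonnet-4.5"  # extraction model named in the metadata above

def build_extraction_request(source_text: str) -> dict:
    """Assemble a chat-completions payload for one claim-extraction pass.

    Only the model slug comes from the commit metadata; the prompt wording
    and expected JSON shape are illustrative assumptions.
    """
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Extract discrete claims and named entities from the "
                    "article below. Return JSON with 'claims' and 'entities' lists."
                ),
            },
            {"role": "user", "content": source_text},
        ],
        "temperature": 0,  # deterministic output for reproducible ingest runs
    }
```

The payload would then be POSTed to `OPENROUTER_URL` with an API key; a run that yields zero claims (as this one did) would still produce enrichment notes downstream.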

Pentagon-Agent: Theseus <PIPELINE>
Teleo Agents 2026-05-11 00:24:32 +00:00
parent 0da235d765
commit 683d0e0e18
4 changed files with 26 additions and 2 deletions


@@ -115,3 +115,10 @@ Dan Hendrycks (CAIS founder, leading technical AI safety institution) co-authore
**Source:** Acemoglu, Project Syndicate March 2026
Acemoglu extends the coordination problem diagnosis to the governance philosophy level: alignment requires not just coordination mechanisms (multilateral commitments, authority separation) but also rejecting emergency exceptionalism as a general governance mode. This is 'orders of magnitude harder than any technical or institutional fix' because it requires changing foundational beliefs about when rules apply, not just implementing better coordination infrastructure.
## Extending Evidence
**Source:** Tillipman, Lawfare March 2026
Tillipman provides a legal-theory basis for why coordination failure occurs in military AI governance: procurement contracts lack democratic accountability and institutional durability, and they depend on post-deployment vendor controls that are technically uncertain. The absence of statutory AI governance is the institutional gap that prevents coordination.


@@ -11,7 +11,7 @@ attribution:
sourcer:
- handle: "openai"
context: "OpenAI blog post (Feb 27, 2026), CEO Altman public statements"
- related: ["voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors", "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them", "alignment-tax-operates-as-market-clearing-mechanism-across-three-frontier-labs", "judicial-oversight-of-ai-governance-through-constitutional-grounds-not-statutory-safety-law"]
+ related: ["voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors", "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them", "alignment-tax-operates-as-market-clearing-mechanism-across-three-frontier-labs", "judicial-oversight-of-ai-governance-through-constitutional-grounds-not-statutory-safety-law", "supply-chain-risk-designation-weaponizes-national-security-law-to-punish-ai-safety-speech", "regulation-by-contract-structurally-inadequate-for-military-ai-governance"]
reweave_edges: ["voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|related|2026-03-31", "multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-commitments-when-binding-enforcement-replaces-unilateral-sacrifice|supports|2026-04-03"]
supports: ["multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-commitments-when-binding-enforcement-replaces-unilateral-sacrifice"]
---
@@ -57,3 +57,10 @@ The timing of The Intercept's publication (March 8, one day after Kalinowski's r
**Source:** Tillipman, Lawfare, March 10, 2026
Tillipman documents the specific mechanism: when vendors maintain safety restrictions, the government designates them as 'supply chain risks' rather than engaging with the safety rationale. This is 'punishing speech' (per Judge Lin's ruling in the Anthropic case) and represents coercive removal rather than negotiation. The governance response to vendor safety positions is exclusion, not incorporation.
## Supporting Evidence
**Source:** Tillipman, Lawfare March 2026
Tillipman identifies the Anthropic-DoD dispute as a predictable failure mode of governance-by-procurement: when procurement agreements fail, the government escalates coercively (supply chain designation) rather than legislatively. This failure is structural, not accidental, because the proper governance mechanism (statute) doesn't exist.

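The `reweave_edges` entries in the frontmatter above pack three fields into one pipe-delimited string (`slug|relation|date`). A minimal parsing sketch; the field names and `ReweaveEdge` type are assumptions inferred from the entries shown, not pipeline code:

```python
from typing import NamedTuple

class ReweaveEdge(NamedTuple):
    target_slug: str  # slug of the claim the edge points to
    relation: str     # edge type, e.g. "related" or "supports"
    date: str         # ISO date the edge was woven

def parse_reweave_edge(entry: str) -> ReweaveEdge:
    """Split one 'slug|relation|date' frontmatter entry into its fields."""
    slug, relation, date = entry.split("|")
    return ReweaveEdge(slug, relation, date)

edge = parse_reweave_edge(
    "multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-"
    "commitments-when-binding-enforcement-replaces-unilateral-sacrifice"
    "|supports|2026-04-03"
)
```

Using exactly two `|` separators keeps the parse unambiguous as long as slugs never contain pipes, which holds for the kebab-case slugs in this repository.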

@@ -73,3 +73,10 @@ The EU AI Act Omnibus deferral extends this pattern from voluntary commitments t
**Source:** Theseus synthetic analysis, May 4, 2026
The EU AI Act's August 2, 2026 enforcement deadline represents the first time in AI governance history that mandatory enforcement is legally in force without a confirmed delay mechanism, following the April 28, 2026 Omnibus trilogue failure. This creates a natural experiment testing whether mandatory mechanisms can work for civilian high-risk AI systems (medical devices, credit scoring, recruitment, critical infrastructure), though the Act's explicit military exclusion means the most consequential AI deployments (classified military systems) remain outside mandatory governance scope by design.
## Extending Evidence
**Source:** Tillipman, Lawfare March 2026
Procurement contracts as governance instruments have four structural weaknesses that prevent them from functioning as binding governance: they lack democratic accountability; they lack institutional durability (executive action can change them); their enforcement depends on uncertain post-deployment technical controls; and the intelligence community applies the broadest possible reading to their exceptions.


@@ -7,10 +7,13 @@ date: 2026-03-10
domain: ai-alignment
secondary_domains: []
format: article
- status: unprocessed
+ status: processed
processed_by: theseus
processed_date: 2026-05-11
priority: high
tags: [military-ai, procurement, governance, any-lawful-use, regulation-by-contract, structural-inadequacy]
intake_tier: research-task
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content