From 683d0e0e186968453da5498c9ec2401df3216506 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Mon, 11 May 2026 00:24:32 +0000 Subject: [PATCH] theseus: extract claims from 2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits - Source: inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md - Domain: ai-alignment - Claims: 0, Entities: 0 - Enrichments: 3 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Theseus --- ... is a coordination problem not a technical problem.md | 7 +++++++ ...ulatory-incentives-by-blacklisting-cautious-actors.md | 9 ++++++++- ...nt-are-statements-of-intent-not-binding-governance.md | 7 +++++++ ...re-tillipman-military-ai-policy-by-contract-limits.md | 5 ++++- 4 files changed, 26 insertions(+), 2 deletions(-) rename inbox/{queue => archive/ai-alignment}/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md (98%) diff --git a/domains/ai-alignment/AI alignment is a coordination problem not a technical problem.md b/domains/ai-alignment/AI alignment is a coordination problem not a technical problem.md index 1d0b2551d..f3616d79e 100644 --- a/domains/ai-alignment/AI alignment is a coordination problem not a technical problem.md +++ b/domains/ai-alignment/AI alignment is a coordination problem not a technical problem.md @@ -115,3 +115,10 @@ Dan Hendrycks (CAIS founder, leading technical AI safety institution) co-authore **Source:** Acemoglu, Project Syndicate March 2026 Acemoglu extends the coordination problem diagnosis to the governance philosophy level: alignment requires not just coordination mechanisms (multilateral commitments, authority separation) but also rejecting emergency exceptionalism as a general governance mode. This is 'orders of magnitude harder than any technical or institutional fix' because it requires changing foundational beliefs about when rules apply, not just implementing better coordination infrastructure. 
+ + +## Extending Evidence + +**Source:** Tillipman, Lawfare March 2026 + +Tillipman provides a legal-theory basis for why coordination failure occurs in military AI governance: procurement contracts lack democratic accountability and institutional durability, and their enforcement depends on post-deployment vendor controls that are technically uncertain. The absence of statutory AI governance is the institutional gap that prevents coordination. diff --git a/domains/ai-alignment/government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors.md b/domains/ai-alignment/government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors.md index 24bb0594f..2a16467bb 100644 --- a/domains/ai-alignment/government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors.md +++ b/domains/ai-alignment/government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors.md @@ -11,7 +11,7 @@ attribution: sourcer: - handle: "openai" context: "OpenAI blog post (Feb 27, 2026), CEO Altman public statements" -related: ["voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors", "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing 
them", "alignment-tax-operates-as-market-clearing-mechanism-across-three-frontier-labs", "judicial-oversight-of-ai-governance-through-constitutional-grounds-not-statutory-safety-law", "supply-chain-risk-designation-weaponizes-national-security-law-to-punish-ai-safety-speech", "regulation-by-contract-structurally-inadequate-for-military-ai-governance"] reweave_edges: ["voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|related|2026-03-31", "multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-commitments-when-binding-enforcement-replaces-unilateral-sacrifice|supports|2026-04-03"] supports: ["multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-commitments-when-binding-enforcement-replaces-unilateral-sacrifice"] --- @@ -57,3 +57,10 @@ The timing of The Intercept's publication (March 8, one day after Kalinowski's r **Source:** Tillipman, Lawfare, March 10, 2026 Tillipman documents the specific mechanism: when vendors maintain safety restrictions, the government designates them as 'supply chain risks' rather than engaging with the safety rationale. This is 'punishing speech' (per Judge Lin's ruling in the Anthropic case) and represents coercive removal rather than negotiation. The governance response to vendor safety positions is exclusion, not incorporation. + + +## Supporting Evidence + +**Source:** Tillipman, Lawfare March 2026 + +Tillipman identifies the Anthropic-DoD dispute as a predictable failure mode of governance-by-procurement: when procurement agreements fail, the government escalates coercively (supply chain designation) rather than legislatively. This is structural, not accidental: the proper governance mechanism (a statute) does not exist. 
diff --git a/domains/ai-alignment/voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance.md b/domains/ai-alignment/voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance.md index 58142232e..7e9d864f5 100644 --- a/domains/ai-alignment/voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance.md +++ b/domains/ai-alignment/voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance.md @@ -73,3 +73,10 @@ The EU AI Act Omnibus deferral extends this pattern from voluntary commitments t **Source:** Theseus synthetic analysis, May 4, 2026 The EU AI Act's August 2, 2026 enforcement deadline represents the first time in AI governance history that mandatory enforcement is legally in force without a confirmed delay mechanism, following the April 28, 2026 Omnibus trilogue failure. This creates a natural experiment testing whether mandatory mechanisms can work for civilian high-risk AI systems (medical devices, credit scoring, recruitment, critical infrastructure), though the Act's explicit military exclusion means the most consequential AI deployments (classified military systems) remain outside mandatory governance scope by design. + + +## Extending Evidence + +**Source:** Tillipman, Lawfare March 2026 + +Procurement contracts as governance instruments have four structural weaknesses that prevent them from functioning as binding governance: no democratic accountability, no institutional durability (contracts can be changed by executive action), enforcement that depends on uncertain post-deployment technical controls, and an intelligence-community practice of applying the broadest possible reading to exceptions. 
diff --git a/inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md b/inbox/archive/ai-alignment/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md similarity index 98% rename from inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md rename to inbox/archive/ai-alignment/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md index 51ddc3e0c..8c91cb09c 100644 --- a/inbox/queue/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md +++ b/inbox/archive/ai-alignment/2026-03-10-lawfare-tillipman-military-ai-policy-by-contract-limits.md @@ -7,10 +7,13 @@ date: 2026-03-10 domain: ai-alignment secondary_domains: [] format: article -status: unprocessed +status: processed +processed_by: theseus +processed_date: 2026-05-11 priority: high tags: [military-ai, procurement, governance, any-lawful-use, regulation-by-contract, structural-inadequacy] intake_tier: research-task +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content