Compare commits

...

2 commits

Author SHA1 Message Date
Teleo Agents
855020d516 leo: extract claims from 2026-04-22-axios-anthropic-no-kill-switch-dc-circuit
- Source: inbox/queue/2026-04-22-axios-anthropic-no-kill-switch-dc-circuit.md
- Domain: grand-strategy
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-24 08:29:59 +00:00
Teleo Agents
ca1dffe57c leo: extract claims from 2026-04-20-defensepost-google-gemini-pentagon-classified
- Source: inbox/queue/2026-04-20-defensepost-google-gemini-pentagon-classified.md
- Domain: grand-strategy
- Claims: 2, Entities: 2
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-24 08:29:05 +00:00
11 changed files with 175 additions and 3 deletions

@@ -11,9 +11,16 @@ sourced_from: grand-strategy/2026-02-27-npr-openai-pentagon-deal-after-anthropic
scope: structural
sourcer: NPR/EFF
supports: ["legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level"]
related: ["eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional", "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level"]
related: ["eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional", "voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "legislative-ceiling-replicates-strategic-interest-inversion-at-statutory-scope-definition-level", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection"]
---
# Military AI contract language using 'any lawful use' creates surveillance loopholes through existing statutory permissions that make explicit prohibitions ineffective
Anthropic refused Pentagon contract language requiring 'any lawful use' because this umbrella formulation would permit deployment for mass domestic surveillance and fully autonomous weapons without meaningful human authorization. OpenAI accepted this language while adding voluntary red lines against these activities. However, the EFF noted that 'any lawful use' language allows broad data collection under current statutes, which already permit various surveillance activities. The mechanism: explicit prohibitions (no mass domestic surveillance) are undermined by the umbrella permission (any lawful use) because 'lawful' is defined by existing statutes that authorize surveillance. The March 2-3 amendments added explicit prohibitions on surveillance of 'U.S. persons' and 'commercially acquired' personal information, but critics noted these still contain intelligence agency carve-outs. The structural problem is that 'any lawful use' establishes the baseline permission, and specific prohibitions must be interpreted within that framework — creating a legal hierarchy where the umbrella permission can override the specific constraint through statutory interpretation.
## Supporting Evidence
**Source:** The Defense Post, April 20, 2026
The Pentagon's demand for 'any lawful use' language in the Google negotiations (April 2026) matches the OpenAI template (February 2026), confirming this is standard contract architecture across military AI deployments rather than a one-off negotiating position.

@@ -0,0 +1,19 @@
---
type: claim
domain: grand-strategy
description: The 'any lawful use' contract language is a structural Pentagon demand across AI providers, not a bilateral negotiation artifact
confidence: likely
source: The Defense Post, The Information (April 2026), confirmed across OpenAI, Anthropic, Google negotiations
created: 2026-04-24
title: Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations
agent: leo
sourced_from: grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-classified.md
scope: structural
sourcer: "@TheDefensePost"
supports: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure"]
related: ["voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation"]
---
# Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations
Three independent AI lab negotiations with the Pentagon have now encountered identical 'any lawful use' contract language: OpenAI accepted it (February 27, 2026), Anthropic refused and was designated a supply chain risk with its $200M contract canceled, and Google is currently negotiating with proposed carve-outs rather than categorical refusal. This pattern across three separate negotiations with different labs, different timelines, and different outcomes confirms that 'any lawful use' is the Pentagon's standard contract term for military AI deployments, not situational leverage applied to a single vendor. The consistency of this demand across negotiations spanning February through April 2026, despite the public controversy triggered by the Anthropic case, demonstrates institutional commitment to this language as a template requirement. The Pentagon's GenAI.mil platform launched in March 2026 with this contractual architecture already embedded, further confirming systematic rather than ad-hoc application.

@@ -0,0 +1,19 @@
---
type: claim
domain: grand-strategy
description: Google's 'appropriate human control' framing establishes a procedural compliance path that avoids capability restrictions while appearing to address safety concerns
confidence: experimental
source: The Defense Post (April 2026), Google-Pentagon negotiations
created: 2026-04-24
title: Process-standard autonomous weapons governance creates a middle ground between categorical prohibition and unrestricted deployment
agent: leo
sourced_from: grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-classified.md
scope: functional
sourcer: "@TheDefensePost"
supports: ["definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds"]
related: ["definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds"]
---
# Process-standard autonomous weapons governance creates a middle ground between categorical prohibition and unrestricted deployment
Google's proposed contract restrictions prohibit the use of autonomous weapons 'without appropriate human control', in contrast to Anthropic's categorical prohibition on fully autonomous weapons. This shift from capability prohibition to process requirement creates a governance middle ground that may become the industry standard. 'Appropriate human control' is a compliance standard that can be satisfied through procedural documentation rather than architectural constraints: it asks 'was there a human in the loop' rather than 'can the system operate autonomously.' This framing allows Google to negotiate with the Pentagon while maintaining the appearance of safety constraints, but the process standard is fundamentally weaker because it does not prevent deployment of autonomous capabilities; it only requires documentation of human oversight procedures. If Google's negotiation succeeds where Anthropic's categorical prohibition failed, this establishes process standards as the viable path for AI labs seeking both Pentagon contracts and safety credibility, potentially making Anthropic's position look like outlier maximalism rather than minimum viable safety.

@@ -30,3 +30,10 @@ DC Circuit assigned the same three-judge panel (Henderson, Katsas, Rao) that den
**Source:** TechPolicy.Press timeline, April 8 2026 DC Circuit action
DC Circuit suspended the preliminary injunction on April 8, 2026, citing 'ongoing military conflict' as grounds, while the underlying First Amendment retaliation claim remained viable in a civil context. This confirms the military/civil split in judicial protection boundaries.
## Extending Evidence
**Source:** Anthropic DC Circuit Case 26-1049, April 22 2026
DC Circuit briefing schedule shows Petitioner Brief filed 04/22/2026, Respondent Brief due 05/06/2026, oral arguments 05/19/2026. The 'no kill switch' technical argument provides a non-First Amendment basis for challenging the designation — factual impossibility of the security risk the instrument is designed to address. This creates a second legal pathway beyond retaliation claims.

@@ -0,0 +1,19 @@
---
type: claim
domain: grand-strategy
description: The supply chain risk designation instrument was designed for companies with alleged government backdoors (Huawei, ZTE), but Anthropic's static model deployment in air-gapped Pentagon systems makes remote manipulation technically impossible
confidence: experimental
source: Anthropic Petitioner Brief, DC Circuit Case 26-1049, April 22 2026
created: 2026-04-24
title: Supply chain risk designation of domestic AI lab with no classified network access is governance instrument misdirection because the instrument requires backdoor capability that static model deployment structurally precludes
agent: leo
sourced_from: grand-strategy/2026-04-22-axios-anthropic-no-kill-switch-dc-circuit.md
scope: structural
sourcer: Axios / AP Wire
supports: ["voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection"]
related: ["governance-instrument-inversion-occurs-when-policy-tools-produce-opposite-of-stated-objective-through-structural-interaction-effects", "coercive-governance-instruments-produce-offense-defense-asymmetries-through-selective-enforcement-within-deploying-agency", "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them"]
---
# Supply chain risk designation of domestic AI lab with no classified network access is governance instrument misdirection because the instrument requires backdoor capability that static model deployment structurally precludes
Anthropic's DC Circuit brief argues it has 'no back door or remote kill switch' and cannot 'log into a department system to modify or disable a running model' because Claude is deployed as a 'static model in classified environments.' This creates a structural impossibility: the supply chain risk designation instrument (previously applied only to Huawei and ZTE for alleged government backdoors) presupposes the capability to remotely manipulate deployed systems. Air-gapped classified military networks with static model deployments preclude this capability by design. This differs from governance instrument inversion (where instruments produce opposite effects); here the instrument is applied to a premise that is factually impossible. The designation assumes a capability (remote access and manipulation) that the deployment architecture structurally prevents. If Anthropic's technical argument is correct, the designation was deployed on false factual grounds regardless of the First Amendment retaliation question.

@@ -115,3 +115,10 @@ The Anthropic-Pentagon timeline provides precise dating: July 2025 contract sign
**Source:** Axios April 19, 2026
The NSA/CISA access asymmetry reveals that even mandatory governance instruments (DOD supply chain designations) lack enforcement when the enforcing agency itself demands capability access. If coercive tools cannot be enforced within the deploying organization, voluntary constraints face even steeper enforcement barriers.
## Supporting Evidence
**Source:** The Defense Post, April 20, 2026
The Google negotiations confirm the mechanism operates across multiple vendors: OpenAI accepted 'any lawful use' terms, Anthropic refused and was blacklisted, and Google is negotiating with weaker carve-outs. Three independent data points establish this as a systematic Pentagon demand, not a bilateral artifact.

@@ -38,3 +38,10 @@ OpenAI's contract amendment added explicit prohibition language but no enforceme
**Source:** Abiri, Mutually Assured Deregulation, arXiv:2508.12300
Abiri's MAD framework provides the theoretical mechanism for why voluntary red lines collapse: the Regulation Sacrifice view creates competitive disadvantage for any actor that maintains constraints, making voluntary commitments politically untenable even for willing parties. The mechanism operates fractally—what was observed at corporate level (RSP v3) and negotiation level (Google) is driven by the same structural dynamic at national level.
## Supporting Evidence
**Source:** AP Wire via Axios, April 22 2026
AP reporting on April 22 states that even if political relations improve, a formal deal is 'not imminent' and would require a 'technical evaluation period.' This confirms that voluntary safety constraints remain vulnerable to administrative pressure even after a preliminary injunction, as the company must still negotiate compliance terms rather than enforce constitutional boundaries.

@@ -0,0 +1,35 @@
# GenAI.mil
**Type:** Military AI deployment platform
**Operator:** U.S. Department of Defense
**Status:** Operational (launched March 2026)
**Domain:** Military AI infrastructure
## Overview
GenAI.mil is the Pentagon's AI deployment platform for making commercial AI models available to Department of Defense personnel. Launched in March 2026, it represents the Pentagon's systematic approach to military AI adoption with tiered access based on classification levels.
## Timeline
- **March 2026** — Platform launches with Google's Gemini as first model on UNCLASSIFIED tier
- **April 2026** — Negotiations underway for CLASSIFIED tier deployment
## Architecture
**Current deployment:**
- UNCLASSIFIED networks: Google Gemini (operational)
- CLASSIFIED networks: Under negotiation (Google Gemini, others TBD)
**Contract structure:**
- Standard 'any lawful use' terms required by Pentagon
- Tiered access based on security classification
- Hardware deployment within classified environments (GPUs, TPUs)
## Significance
GenAI.mil embeds the Pentagon's 'any lawful use' contract template as platform architecture, making it the standard requirement for any AI lab seeking military deployment. The platform's launch in March 2026, between the OpenAI deal (February) and ongoing Google negotiations (April), confirms systematic rather than ad-hoc application of these contract terms.
## Sources
- The Defense Post, April 20, 2026
- The Information, April 16, 2026

@@ -0,0 +1,46 @@
# Google-Pentagon Gemini Classified Negotiations
**Type:** Military AI contract negotiation
**Status:** Active (as of April 20, 2026)
**Parties:** Google, U.S. Department of Defense
**Domain:** Military AI deployment, classified systems
## Overview
Google is negotiating with the Pentagon to deploy Gemini AI models inside classified systems, following the March 2026 launch of GenAI.mil with Gemini on unclassified networks. The negotiation centers on contract language governing prohibited uses, with Google proposing specific carve-outs rather than accepting the Pentagon's standard 'any lawful use' terms.
## Timeline
- **March 2026** — Pentagon launches GenAI.mil with Google's Gemini as first model on UNCLASSIFIED networks
- **April 16, 2026** — The Information reports Google-Pentagon negotiations for CLASSIFIED deployment
- **April 20, 2026** — Multiple reports confirm negotiations are ongoing; no deal closed
## Proposed Terms
Google's proposed contract restrictions:
- Prohibit use for domestic mass surveillance
- Prohibit controlling autonomous weapons without 'appropriate human control'
Pentagon's demand:
- 'All lawful uses' wording (the same language that triggered the Anthropic dispute)
## Technical Scope
Negotiations include plans to install:
- Racks of GPUs within classified environments
- Google's custom Tensor Processing Units (TPUs) in classified systems (first time for TPUs)
## Competitive Context
- **OpenAI:** Accepted 'any lawful use' language (February 27, 2026)
- **Anthropic:** Refused; designated supply chain risk; $200M contract canceled
- **Google:** Negotiating with carve-outs (current)
## Significance
This negotiation represents the third independent data point confirming 'any lawful use' as the Pentagon's standard military AI contract term. Google's 'appropriate human control' language for autonomous weapons is weaker than Anthropic's categorical prohibition, potentially establishing a process-based middle ground for industry safety standards.
## Sources
- The Information, April 16, 2026
- The Defense Post, April 20, 2026

@@ -7,9 +7,12 @@ date: 2026-04-20
domain: grand-strategy
secondary_domains: [ai-alignment]
format: article
status: unprocessed
status: processed
processed_by: leo
processed_date: 2026-04-24
priority: high
tags: [google, gemini, pentagon, classified-systems, any-lawful-use, autonomous-weapons, domestic-surveillance, genai-mil, military-ai-contract, governance-template]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content

@@ -7,9 +7,12 @@ date: 2026-04-22
domain: grand-strategy
secondary_domains: [ai-alignment]
format: article
status: unprocessed
status: processed
processed_by: leo
processed_date: 2026-04-24
priority: high
tags: [anthropic, pentagon, dc-circuit, supply-chain-risk, kill-switch, static-model, classified-systems, governance-instrument-misdirection, first-amendment, brief]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content