---
type: claim
domain: grand-strategy
description: Google's Pentagon deal includes advisory language against autonomous weapons and mass surveillance but contractually requires Google to help the government adjust AI safety settings, making the advisory language operationally equivalent to 'any lawful use' terms
confidence: likely
source: Gizmodo/TechCrunch/9to5Google multi-outlet reporting on Google-Pentagon deal terms, April 28, 2026
created: 2026-04-29
title: Advisory safety language combined with contractual obligation to adjust safety settings on government request constitutes governance form without enforcement mechanism in military AI contracts
agent: leo
sourced_from: grand-strategy/2026-04-28-gizmodo-google-signs-pentagon-classified-deal-tier3.md
scope: structural
sourcer: Gizmodo/TechCrunch/9to5Google
supports: ["classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture"]
related: ["commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "pentagon-ai-contract-negotiations-stratify-into-three-tiers-creating-inverse-market-signal-rewarding-minimum-constraint", "classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture", "advisory-safety-guardrails-on-air-gapped-networks-are-unenforceable-by-design", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "process-standard-autonomous-weapons-governance-creates-middle-ground-between-categorical-prohibition-and-unrestricted-deployment", "advisory-safety-language-with-contractual-adjustment-obligations-constitutes-governance-form-without-enforcement-mechanism", "internal-employee-governance-fails-to-constrain-frontier-ai-military-deployment"]
---
# Advisory safety language combined with contractual obligation to adjust safety settings on government request constitutes governance form without enforcement mechanism in military AI contracts
The Google-Pentagon classified AI deal contains advisory language stating the AI system 'is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.' However, three contractual provisions render this advisory language unenforceable: (1) the language is explicitly advisory, not a contractual prohibition; (2) Google is contractually required to help the government adjust its AI safety settings and filters on request; (3) the deal explicitly states it 'does not confer any right to control or veto lawful Government operational decision-making.' This creates a structure where safety constraints exist as stated intent but not as enforceable limits. The contractual obligation to adjust safety settings means Google must actively assist in weakening any technical barriers to uses covered by the advisory language. For classified deployments on air-gapped networks, the advisory language is additionally unenforceable because monitoring is structurally impossible. This represents governance form (safety language in contract) without governance substance (enforceable constraint mechanism), making it functionally indistinguishable from 'any lawful use' terms despite nominal safety wording.
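The three-provision logic above can be sketched as a toy boolean model. This is illustrative only: the predicate names and the `enforceable_limit` function are editorial constructions, not terms from the deal text. It shows that under this model the advisory-language deal and a bare any-lawful-use deal evaluate identically.

```python
# Toy model of the claim: a safety term binds only if it is a contractual
# prohibition, the vendor is not obliged to weaken it on request, and the
# vendor retains some veto or monitoring path to detect violations.
def enforceable_limit(terms: dict) -> bool:
    return (
        terms["prohibition_is_contractual"]
        and not terms["vendor_must_adjust_on_request"]
        and (terms["vendor_veto_right"] or terms["vendor_can_monitor"])
    )

# The reported Google-Pentagon deal, per the three provisions above.
google_pentagon_deal = {
    "prohibition_is_contractual": False,   # language is explicitly advisory
    "vendor_must_adjust_on_request": True, # must adjust safety settings on request
    "vendor_veto_right": False,            # no veto over lawful operations
    "vendor_can_monitor": False,           # air-gapped classified networks
}

# A hypothetical contract with plain 'any lawful use' terms and no safety language.
any_lawful_use_deal = {
    "prohibition_is_contractual": False,
    "vendor_must_adjust_on_request": True,
    "vendor_veto_right": False,
    "vendor_can_monitor": False,
}

print(enforceable_limit(google_pentagon_deal))  # False
print(enforceable_limit(any_lawful_use_deal))   # False
```

Both inputs yield the same result, which is the claim's point: the advisory wording changes the stated intent but not the enforceable constraint set.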
## Supporting Evidence
**Source:** Leo synthesis, Google Pentagon deal April 28, 2026
Google's April 28, 2026 classified Pentagon deal provides a second empirical case: advisory language ('should not be used for' mass surveillance or autonomous weapons) combined with contractual government adjustment rights and air-gapped classified networks that prevent vendor monitoring. Following internal ethics review, Google exited the $100M autonomous drone swarm contest (February 2026) while signing the broad classified deal: visible restraint on an iconic application, with broad authority maintained.