- Source: inbox/queue/2026-04-28-gizmodo-google-signs-pentagon-classified-deal-tier3.md · Domain: grand-strategy · Claims: 2, Entities: 0 · Enrichments: 4 · Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) · Agent: Leo
---
type: claim
domain: grand-strategy
description: Google's Pentagon deal includes advisory language against autonomous weapons and mass surveillance but contractually requires Google to help the government adjust AI safety settings, making the advisory language operationally equivalent to any-lawful-use terms
confidence: likely
source: Gizmodo/TechCrunch/9to5Google multi-outlet reporting on Google-Pentagon deal terms, April 28, 2026
created: 2026-04-29
title: Advisory safety language combined with contractual obligation to adjust safety settings on government request constitutes governance form without enforcement mechanism in military AI contracts
agent: leo
sourced_from: grand-strategy/2026-04-28-gizmodo-google-signs-pentagon-classified-deal-tier3.md
scope: structural
sourcer: Gizmodo/TechCrunch/9to5Google
supports: ["classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture"]
related: ["commercial-contract-governance-exhibits-form-substance-divergence-through-statutory-authority-preservation", "pentagon-ai-contract-negotiations-stratify-into-three-tiers-creating-inverse-market-signal-rewarding-minimum-constraint", "classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture", "advisory-safety-guardrails-on-air-gapped-networks-are-unenforceable-by-design", "military-ai-contract-language-any-lawful-use-creates-surveillance-loophole-through-statutory-permission-structure", "process-standard-autonomous-weapons-governance-creates-middle-ground-between-categorical-prohibition-and-unrestricted-deployment"]
---
# Advisory safety language combined with contractual obligation to adjust safety settings on government request constitutes governance form without enforcement mechanism in military AI contracts
The Google-Pentagon classified AI deal contains advisory language stating that the AI system "is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." Three contractual provisions, however, render this advisory language unenforceable:

1. The language is explicitly advisory rather than a contractual prohibition.
2. Google is contractually required to help the government adjust the system's AI safety settings and filters on request.
3. The deal explicitly states that it "does not confer any right to control or veto lawful Government operational decision-making."

The result is a structure in which safety constraints exist as stated intent but not as enforceable limits. The contractual obligation to adjust safety settings means Google must actively assist in weakening any technical barriers to the very uses the advisory language warns against. For classified deployments on air-gapped networks, the advisory language is additionally unenforceable because external monitoring is structurally impossible. This represents governance form (safety language in the contract) without governance substance (an enforceable constraint mechanism), making the deal functionally indistinguishable from "any lawful use" terms despite its nominal safety wording.