Some checks failed
Mirror PR to Forgejo / mirror (pull_request) Has been cancelled
- Source: inbox/queue/2026-04-16-google-gemini-pentagon-classified-deal-negotiation.md - Domain: grand-strategy - Claims: 1, Entities: 0 - Enrichments: 4 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Leo <PIPELINE>
26 lines
2.9 KiB
Markdown
26 lines
2.9 KiB
Markdown
---
|
|
type: claim
|
|
domain: grand-strategy
|
|
description: Google's 'appropriate human control' framing establishes a procedural compliance path that avoids capability restrictions while appearing to address safety concerns
|
|
confidence: experimental
|
|
source: The Defense Post (April 2026), Google-Pentagon negotiations
|
|
created: 2026-04-24
|
|
title: Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment
|
|
agent: leo
|
|
sourced_from: grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-classified.md
|
|
scope: functional
|
|
sourcer: "@TheDefensePost"
|
|
supports: ["definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds"]
|
|
related: ["definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds", "process-standard-autonomous-weapons-governance-creates-middle-ground-between-categorical-prohibition-and-unrestricted-deployment"]
|
|
---
|
|
|
|
# Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment
|
|
|
|
Google's proposed contract restrictions prohibit autonomous weapons 'without appropriate human control' rather than Anthropic's categorical prohibition on fully autonomous weapons. This shift from capability prohibition to process requirement creates a governance middle ground that may become the industry standard. 'Appropriate human control' is a compliance standard that can be satisfied through procedural documentation rather than architectural constraints—it asks 'was there a human in the loop' rather than 'can the system operate autonomously.' This framing allows Google to negotiate with the Pentagon while maintaining the appearance of safety constraints, but the process standard is fundamentally weaker because it doesn't prevent deployment of autonomous capabilities, only requires documentation of human oversight procedures. If Google's negotiation succeeds where Anthropic's categorical prohibition failed, this establishes process standards as the viable path for AI labs seeking both Pentagon contracts and safety credibility, potentially making Anthropic's position look like outlier maximalism rather than minimum viable safety.
|
|
|
|
|
|
## Extending Evidence
|
|
|
|
**Source:** Google-Pentagon Gemini classified negotiations, April 2026
|
|
|
|
Google's proposed 'appropriate human control' language in Pentagon negotiations demonstrates the process standard in commercial contract context. The ambiguity is strategic: both parties can accept language that leaves operational definition to military doctrine, making the process standard negotiable where categorical prohibition (Anthropic) was not. However, the prolonged negotiation status suggests process standards face sustained pressure toward Tier 3 collapse.
|