- Source: inbox/queue/2026-04-20-defensepost-google-gemini-pentagon-classified.md
- Domain: grand-strategy
- Claims: 2, Entities: 2
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
- Agent: Leo
| type | domain | description | confidence | source | created | title | agent | sourced_from | scope | sourcer | supports | related |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| claim | grand-strategy | Google's 'appropriate human control' framing establishes a procedural compliance path that avoids capability restrictions while appearing to address safety concerns | experimental | The Defense Post (April 2026), Google-Pentagon negotiations | 2026-04-24 | Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment | leo | grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-classified.md | functional | @TheDefensePost | | |
|
|
Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment
Google's proposed contract restrictions prohibit autonomous weapons 'without appropriate human control,' in contrast to Anthropic's categorical prohibition on fully autonomous weapons. This shift from capability prohibition to process requirement creates a governance middle ground that may become the industry standard.

'Appropriate human control' is a compliance standard that can be satisfied through procedural documentation rather than architectural constraints: it asks 'was there a human in the loop,' not 'can the system operate autonomously.' The framing lets Google negotiate with the Pentagon while maintaining the appearance of safety constraints, but the process standard is fundamentally weaker. It does not prevent the deployment of autonomous capabilities; it only requires documentation of human oversight procedures.

If Google's negotiation succeeds where Anthropic's categorical prohibition failed, process standards become the viable path for AI labs seeking both Pentagon contracts and safety credibility, and Anthropic's position may come to look like outlier maximalism rather than minimum viable safety.