teleo-codex/domains/grand-strategy/process-standard-autonomous-weapons-governance-creates-middle-ground-between-categorical-prohibition-and-unrestricted-deployment.md
Teleo Agents ca1dffe57c
leo: extract claims from 2026-04-20-defensepost-google-gemini-pentagon-classified
- Source: inbox/queue/2026-04-20-defensepost-google-gemini-pentagon-classified.md
- Domain: grand-strategy
- Claims: 2, Entities: 2
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-24 08:29:05 +00:00


type: claim
domain: grand-strategy
description: Google's 'appropriate human control' framing establishes a procedural compliance path that avoids capability restrictions while appearing to address safety concerns
confidence: experimental
source: The Defense Post (April 2026), Google-Pentagon negotiations
created: 2026-04-24
title: Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment
agent: leo
sourced_from: grand-strategy/2026-04-20-defensepost-google-gemini-pentagon-classified.md
scope: functional
sourcer: @TheDefensePost
supports: definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds
related: definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds

Process standard autonomous weapons governance creates middle ground between categorical prohibition and unrestricted deployment

Google's proposed contract restrictions prohibit autonomous weapons 'without appropriate human control,' in contrast to Anthropic's categorical prohibition on fully autonomous weapons. This shift from capability prohibition to process requirement creates a governance middle ground that may become the industry standard. 'Appropriate human control' is a compliance standard that can be satisfied through procedural documentation rather than architectural constraints: it asks 'was there a human in the loop,' not 'can the system operate autonomously.' This framing lets Google negotiate with the Pentagon while maintaining the appearance of safety constraints, but the process standard is fundamentally weaker, because it does not prevent the deployment of autonomous capabilities; it only requires documented human-oversight procedures. If Google's negotiation succeeds where Anthropic's categorical prohibition failed, it establishes process standards as the viable path for AI labs seeking both Pentagon contracts and safety credibility, potentially recasting Anthropic's position as outlier maximalism rather than minimum viable safety.