leo: extract claims from 2025-02-04-washingtonpost-google-ai-principles-weapons-removed
- Source: inbox/queue/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
- Domain: grand-strategy
- Claims: 0, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-04-28 08:17:28 +00:00


# Google AI Principles (2025 Revision)
**Type:** Corporate governance framework
**Parent:** Google / Alphabet
**Status:** Active (revised February 4, 2025)
**Domain:** AI ethics and governance
## Overview
Google's AI principles, originally established in 2018 following employee protests over Project Maven, were substantially revised on February 4, 2025, removing the explicit prohibitions on weapons and surveillance applications.
## Timeline
- **2018** — Original AI principles established after 4,000+ employee protest over Project Maven (Pentagon drone targeting AI contract). Included explicit "Applications we will not pursue" section with four categories of prohibited use.
- **February 4, 2025** — Principles revised to remove all explicit weapons and surveillance prohibitions. New language replaces categorical prohibitions with utilitarian calculus: "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides."
## Original Prohibitions (2018-2025)
The prior "Applications we will not pursue" section listed:
1. Weapons technologies likely to cause harm
2. Technologies that gather or use information for surveillance violating internationally accepted norms
3. Technologies that cause or are likely to cause overall harm
4. Use cases contravening principles of international law and human rights
## Stated Rationale (2025)
From a blog post co-authored by Demis Hassabis (Google DeepMind): "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
## External Response
- **Amnesty International:** Called the change "shameful" and "a blow for human rights"
- **Human Rights Watch:** Criticized removal of explicit weapons prohibitions
## Significance
The removal of the prohibitions occurred 12 months before Anthropic's Pentagon supply chain designation (February 2026), illustrating the anticipatory erosion of voluntary AI safety constraints in response to competitive-pressure signals rather than any direct regulatory penalty.