From 97bec71a50af7260fc8eb161fe17c090ac20588b Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Tue, 28 Apr 2026 08:14:47 +0000
Subject: [PATCH] leo: extract claims from
 2025-02-04-washingtonpost-google-ai-principles-weapons-removed

- Source: inbox/queue/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
- Domain: grand-strategy
- Claims: 0, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo
---
 ...inverts-china-us-participation-patterns.md |  8 ++++-
 ...ugh-competitive-disadvantage-conversion.md |  7 ++++
 ...tors-of-cumulative-competitive-pressure.md |  9 ++++-
 ...-when-lacking-constitutional-protection.md |  7 ++++
 .../google-ai-principles-2025.md              | 36 +++++++++++++++++++
 ...st-google-ai-principles-weapons-removed.md |  5 ++-
 6 files changed, 69 insertions(+), 3 deletions(-)
 create mode 100644 entities/grand-strategy/google-ai-principles-2025.md
 rename inbox/{queue => archive/grand-strategy}/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md (98%)

diff --git a/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md b/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md
index b7beec95b..4c41ffda7 100644
--- a/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md
+++ b/domains/grand-strategy/ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns.md
@@ -28,4 +28,10 @@ The Paris Summit's official framing as the 'AI Action Summit' rather than contin
 
 **Source:** Abiri, Mutually Assured Deregulation, arXiv:2508.12300
 
-The MAD mechanism explains the discourse capture: the 'Regulation Sacrifice' framing since ~2022 converted AI governance from a cooperation problem to a prisoner's dilemma where restraint equals competitive disadvantage. This structural conversion makes the competitiveness framing self-reinforcing—any attempt to reframe as cooperation is countered by pointing to adversary non-participation.
\ No newline at end of file
+The MAD mechanism explains the discourse capture: the 'Regulation Sacrifice' framing since ~2022 converted AI governance from a cooperation problem to a prisoner's dilemma where restraint equals competitive disadvantage. This structural conversion makes the competitiveness framing self-reinforcing—any attempt to reframe as cooperation is countered by pointing to adversary non-participation.
+
+## Supporting Evidence
+
+**Source:** Google DeepMind blog post, Demis Hassabis, February 4, 2025
+
+Google's official rationale for removing weapons prohibitions deployed the exact competitiveness-framing inversion: 'There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights' (Demis Hassabis, Google DeepMind blog post, February 4, 2025). This frames weapons AI development as democracy promotion, inverting the governance discourse to license the behavior it previously prohibited. The 'democracies should lead' framing converts a safety constraint removal into a values-aligned competitive necessity.
diff --git a/domains/grand-strategy/mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion.md b/domains/grand-strategy/mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion.md
index b1c6d024f..ad56a2a1a 100644
--- a/domains/grand-strategy/mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion.md
+++ b/domains/grand-strategy/mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion.md
@@ -24,3 +24,10 @@ Abiri's Mutually Assured Deregulation framework formalizes what has been empiric
 **Source:** Sharma resignation, Semafor/BISI reporting, Feb 9 2026
 
 Sharma's February 9 resignation preceded both RSP v3.0 release and Hegseth ultimatum by 15 days, establishing that internal safety culture decay occurs before visible policy changes and before specific coercive events. His structural framing ('institutions shaped by competition, speed, and scale') indicates cumulative pressure from September 2025 Pentagon negotiations rather than discrete government action.
+
+
+## Extending Evidence
+
+**Source:** Washington Post, February 4, 2025; Google DeepMind blog post (Demis Hassabis)
+
+Google removed its AI weapons and surveillance principles on February 4, 2025—12 months before Anthropic was designated a supply chain risk in February 2026. This demonstrates that MAD operates through anticipatory erosion, not just penalty response. Google preemptively eliminated constraints before a competitor was punished for maintaining them, showing that the mechanism propagates through the credible threat of competitive disadvantage rather than through demonstrated consequence. The 12-month gap indicates that companies respond to the structural incentive before the test case crystallizes.
diff --git a/domains/grand-strategy/safety-leadership-exits-precede-voluntary-governance-policy-changes-as-leading-indicators-of-cumulative-competitive-pressure.md b/domains/grand-strategy/safety-leadership-exits-precede-voluntary-governance-policy-changes-as-leading-indicators-of-cumulative-competitive-pressure.md
index da6144680..31eed797d 100644
--- a/domains/grand-strategy/safety-leadership-exits-precede-voluntary-governance-policy-changes-as-leading-indicators-of-cumulative-competitive-pressure.md
+++ b/domains/grand-strategy/safety-leadership-exits-precede-voluntary-governance-policy-changes-as-leading-indicators-of-cumulative-competitive-pressure.md
@@ -11,9 +11,16 @@ sourced_from: grand-strategy/2026-02-09-semafor-sharma-anthropic-safety-head-res
 scope: causal
 sourcer: Semafor, Yahoo Finance, eWeek, BISI
 supports: ["mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion"]
-related: ["mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"]
+related: ["mutually-assured-deregulation-makes-voluntary-ai-governance-structurally-untenable-through-competitive-disadvantage-conversion", "voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints", "safety-leadership-exits-precede-voluntary-governance-policy-changes-as-leading-indicators-of-cumulative-competitive-pressure"]
 ---
 
 # Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
 
 Mrinank Sharma, head of Anthropic's Safeguards Research Team, resigned on February 9, 2026 with a public statement that 'the world is in peril' and citing difficulty in 'truly let[ting] our values govern our actions' within 'institutions shaped by competition, speed, and scale.' This resignation occurred 15 days before both the RSP v3.0 release (February 24) that dropped pause commitments and the Hegseth ultimatum (February 24, 5pm deadline). The timing establishes that internal safety culture erosion preceded any specific external coercive event. Sharma's framing was structural ('competition, speed, and scale') rather than event-specific, suggesting cumulative pressure from the September 2025 Pentagon contract negotiations collapse rather than reaction to a discrete policy decision. This pattern indicates that voluntary governance failure operates through continuous market pressure that degrades internal safety capacity before manifesting in visible policy changes. Leadership exits serve as leading indicators of governance decay, with the safety head departing before the formal policy shift became public.
+
+
+## Extending Evidence
+
+**Source:** Washington Post, February 4, 2025
+
+Google's weapons principles removal demonstrates that the mechanism operates at the institutional level (policy documents), not just the individual level (personnel exits). The formal AI principles themselves can exit before leadership exits, showing the competitive pressure indicator manifests in multiple forms. The principles removal is the institutional equivalent of a safety leadership departure—both signal cumulative competitive pressure reaching a threshold where voluntary constraints become untenable.
diff --git a/domains/grand-strategy/voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection.md b/domains/grand-strategy/voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection.md
index ad1376e57..2f7919e66 100644
--- a/domains/grand-strategy/voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection.md
+++ b/domains/grand-strategy/voluntary-ai-safety-red-lines-are-structurally-equivalent-to-no-red-lines-when-lacking-constitutional-protection.md
@@ -52,3 +52,10 @@ AP reporting on April 22 states that even if political relations improve, a form
 **Source:** Sharma resignation timeline, Feb 9 vs Feb 24 2026
 
 The head of Anthropic's Safeguards Research Team exited 15 days before the lab dropped pause commitments in RSP v3.0, demonstrating that voluntary safety commitments erode through internal culture decay before external enforcement is tested. Leadership exits serve as leading indicators of governance failure.
+
+
+## Supporting Evidence
+
+**Source:** Washington Post, February 4, 2025; comparison of old vs. new Google AI principles
+
+Google's February 2025 removal of explicit weapons and surveillance prohibitions from its AI principles demonstrates the structural equivalence in action. The prior 'Applications we will not pursue' section (weapons technologies, surveillance violating international norms, technologies causing overall harm, violations of international law) was replaced with utilitarian calculus language: 'proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks.' The formal red lines were eliminated through competitive pressure without any judicial or legislative intervention, completing the process from explicit prohibition to discretionary assessment.
diff --git a/entities/grand-strategy/google-ai-principles-2025.md b/entities/grand-strategy/google-ai-principles-2025.md
new file mode 100644
index 000000000..be51f3c1e
--- /dev/null
+++ b/entities/grand-strategy/google-ai-principles-2025.md
@@ -0,0 +1,36 @@
+# Google AI Principles (2025 Revision)
+
+**Type:** Corporate governance framework
+**Parent:** Google / Alphabet
+**Status:** Active (revised February 4, 2025)
+**Domain:** AI ethics and governance
+
+## Overview
+
+Google's AI principles, originally established in 2018 following employee protests over Project Maven, were substantially revised on February 4, 2025 to remove explicit prohibitions on weapons and surveillance applications.
+
+## Timeline
+
+- **2018** — Original AI principles established after a 4,000+ employee protest over Project Maven (Pentagon drone targeting AI contract). Included an explicit "Applications we will not pursue" section with four categories of prohibited use.
+- **February 4, 2025** — Principles revised to remove all explicit weapons and surveillance prohibitions. New language replaces categorical prohibitions with utilitarian calculus: "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides."
+
+## Original Prohibitions (2018-2025)
+
+The prior "Applications we will not pursue" section listed:
+1. Weapons technologies likely to cause harm
+2. Technologies that gather or use information for surveillance violating internationally accepted norms
+3. Technologies that cause or are likely to cause overall harm
+4. Use cases contravening principles of international law and human rights
+
+## Stated Rationale (2025)
+
+From a blog post co-authored by Demis Hassabis (Google DeepMind): "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
+
+## External Response
+
+- **Amnesty International:** Called the change "shameful" and "a blow for human rights"
+- **Human Rights Watch:** Criticized removal of explicit weapons prohibitions
+
+## Significance
+
+The principles removal occurred 12 months before Anthropic's Pentagon supply chain designation (February 2026), demonstrating anticipatory erosion of voluntary AI safety constraints in response to competitive pressure signals rather than direct regulatory penalty.
\ No newline at end of file
diff --git a/inbox/queue/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md b/inbox/archive/grand-strategy/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
similarity index 98%
rename from inbox/queue/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
rename to inbox/archive/grand-strategy/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
index bd5099d5a..e8316eead 100644
--- a/inbox/queue/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
+++ b/inbox/archive/grand-strategy/2025-02-04-washingtonpost-google-ai-principles-weapons-removed.md
@@ -7,10 +7,13 @@ date: 2025-02-04
 domain: grand-strategy
 secondary_domains: [ai-alignment]
 format: news-coverage
-status: unprocessed
+status: processed
+processed_by: leo
+processed_date: 2026-04-28
 priority: high
 tags: [google, AI-principles, weapons, surveillance, MAD, voluntary-constraints, competitive-pressure, governance-laundering, DeepMind]
 intake_tier: research-task
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content