theseus: extract claims from 2026-05-03-hendrycks-schmidt-wang-superintelligence-strategy-maim
- Source: inbox/queue/2026-05-03-hendrycks-schmidt-wang-superintelligence-strategy-maim.md
- Domain: ai-alignment
- Claims: 2, Entities: 1
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
parent 063f0b44ee
commit a05b05bb4a
7 changed files with 104 additions and 56 deletions
@@ -78,3 +78,10 @@ Topics:
**Source:** Theseus synthetic analysis of Beaglehole/SCAV/Nordby/Apollo publication patterns
The interpretability-for-safety and adversarial-robustness research communities publish in different venues (ICLR interpretability workshops vs. CCS/USENIX security), attend different conferences, and have minimal citation crossover. Because of this structural silo, organizations implementing Beaglehole-style monitoring gain detection improvements against naive attackers while simultaneously creating precision attack infrastructure for adversarially informed attackers, and nothing in the monitoring literature they read alerts them to this. This is empirical evidence that coordination failures between research communities produce safety degradation independent of any individual lab's technical capabilities.
## Supporting Evidence
**Source:** Hendrycks, Schmidt, Wang (2025), Superintelligence Strategy
Dan Hendrycks (CAIS founder, leading technical AI safety institution) co-authored with Eric Schmidt and Alexandr Wang a paper proposing MAIM deterrence infrastructure as the primary alignment-adjacent policy lever rather than technical solutions like improved RLHF or interpretability. This represents the strongest institutional confirmation that coordination mechanisms are the actionable lever — the field's most credible safety organization is proposing deterrence (coordination) not technical alignment.
@@ -0,0 +1,20 @@
---
type: claim
domain: ai-alignment
description: Deterrence-based coordination maintains multiple competing AI development programs through threat of sabotage, offering an alternative to unified collective intelligence systems
confidence: experimental
source: Hendrycks, Schmidt, Wang (2025), MAIM framework
created: 2026-05-03
title: MAIM deterrence creates a multipolar AI equilibrium without requiring collective superintelligence architecture
agent: theseus
sourced_from: ai-alignment/2026-05-03-hendrycks-schmidt-wang-superintelligence-strategy-maim.md
scope: structural
sourcer: Hendrycks, Schmidt, Wang
supports: ["AI alignment is a coordination problem not a technical problem"]
challenges: ["multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence"]
related: ["multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence", "distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system"]
---
# MAIM deterrence creates a multipolar AI equilibrium without requiring collective superintelligence architecture
MAIM proposes a fourth path to superintelligence coordination distinct from the three paths previously identified (unipolar, multipolar competing, collective). The deterrence regime maintains a multipolar world where multiple states develop AI capabilities simultaneously, but prevents any single actor from achieving decisive strategic advantage through the threat of preventive sabotage. The escalation ladder (intelligence gathering → covert cyber interference → overt cyberattacks → kinetic strikes) creates mutual vulnerability that stabilizes the multipolar equilibrium without requiring architectural integration of AI systems. This differs from collective superintelligence proposals in two ways: (1) it preserves national sovereignty and competitive development rather than requiring federated architectures, and (2) it operates through negative incentives (threat of sabotage) rather than positive coordination mechanisms (shared infrastructure, aligned objectives). The paper argues this equilibrium 'already describes' the current strategic situation, suggesting deterrence is the de facto coordination mechanism rather than a future proposal. However, this creates tension with claims about multipolar failure modes — if multiple aligned AI systems pose greater existential risk than single misaligned superintelligence, then MAIM's multipolar equilibrium may be stabilizing a more dangerous configuration than it prevents.
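One way to see why a credible sabotage threat can stabilize the multipolar equilibrium is to write the payoff logic down explicitly. The sketch below is illustrative only: the two-state, one-shot framing, the function names, and every payoff number are assumptions chosen to encode the qualitative argument, not values taken from the paper.

```python
# Illustrative two-state game for the MAIM argument. All payoff numbers are
# assumptions encoding the qualitative logic, not figures from the source.
from itertools import product

ACTIONS = ("restrain", "race")

def payoff(a, b, sabotage_credible):
    """Payoffs (for state A, for state B) when A plays a and B plays b."""
    if a == "restrain" and b == "restrain":
        return (3, 3)                      # stable multipolar status quo
    # Expected value of a unilateral bid: large if unopposed, negative if the
    # rival can credibly maim the project (cyber interference, kinetic strikes).
    bid = -2 if sabotage_credible else 10
    # The restraining side either loses decisively or, with credible sabotage,
    # preserves the status quo at some cost.
    hold = 2 if sabotage_credible else -5
    if a == "race" and b == "restrain":
        return (bid, hold)
    if a == "restrain" and b == "race":
        return (hold, bid)
    both = -1 if sabotage_credible else 0  # both race: mutual risk / mutual sabotage
    return (both, both)

def nash_equilibria(sabotage_credible):
    """Enumerate pure-strategy equilibria under the assumed payoffs."""
    eqs = []
    for a, b in product(ACTIONS, repeat=2):
        pa, pb = payoff(a, b, sabotage_credible)
        a_stable = all(pa >= payoff(alt, b, sabotage_credible)[0] for alt in ACTIONS)
        b_stable = all(pb >= payoff(a, alt, sabotage_credible)[1] for alt in ACTIONS)
        if a_stable and b_stable:
            eqs.append((a, b))
    return eqs

print("no credible sabotage:", nash_equilibria(False))  # [('race', 'race')]
print("credible sabotage:   ", nash_equilibria(True))   # [('restrain', 'restrain')]
```

Under these assumed numbers, racing is the dominant strategy when sabotage is not credible, and mutual restraint becomes the only equilibrium when it is, which is the negative-incentive stabilization the claim describes.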
@@ -0,0 +1,19 @@
---
type: claim
domain: ai-alignment
description: The leading AI safety institution (CAIS) proposing deterrence infrastructure rather than technical solutions signals that coordination mechanisms have become the dominant framework in AI national security discourse
confidence: experimental
source: Hendrycks, Schmidt, Wang (2025), nationalsecurity.ai paper
created: 2026-05-03
title: MAIM deterrence represents a paradigm shift from technical alignment to coordination infrastructure as the primary alignment-adjacent policy lever
agent: theseus
sourced_from: ai-alignment/2026-05-03-hendrycks-schmidt-wang-superintelligence-strategy-maim.md
scope: structural
sourcer: Hendrycks, Schmidt, Wang
supports: ["AI alignment is a coordination problem not a technical problem"]
related: ["AI alignment is a coordination problem not a technical problem", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints", "uk-aisi", "ai-governance-discourse-capture-by-competitiveness-framing-inverts-china-us-participation-patterns"]
---
# MAIM deterrence represents a paradigm shift from technical alignment to coordination infrastructure as the primary alignment-adjacent policy lever
The MAIM paper represents a paradigm shift in AI alignment strategy, evidenced by three factors: (1) Institutional signal — Dan Hendrycks, founder of CAIS (the most credible institutional voice in technical AI safety), is proposing deterrence infrastructure rather than improved RLHF or interpretability methods. (2) Coalition composition — co-authors are Eric Schmidt (former Google CEO, former National Security Commission on AI chair) and Alexandr Wang (Scale AI CEO, leading AI deployment contractor with DoD relationships), indicating government-connected tech executives and military contractors have aligned on deterrence as the actionable lever. (3) Framework adoption — the paper claims MAIM 'already describes the strategic picture AI superpowers find themselves in,' positioning deterrence not as a proposal but as the existing reality. The paper outlines a three-part strategy where deterrence (MAIM) is Part 1, with nonproliferation and competitiveness as supporting elements. The escalation ladder includes intelligence gathering, covert cyber interference, overt cyberattacks on infrastructure, and kinetic strikes on datacenters. The argument is that AI projects are 'relatively easy to sabotage' compared to nuclear arsenals, creating a deterrent effect where no state will race to superintelligence unilaterally because rivals have both capability and incentive to sabotage. This represents a fundamental reorientation from technical alignment research (making AI systems safe) to coordination infrastructure (making unilateral AI development strategically untenable).
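Both claim notes above, and the edited notes later in this commit, carry the same frontmatter schema. As a rough illustration of how a downstream pipeline stage might check such notes, here is a minimal sketch: the field names and the `target|relation|date` encoding of reweave_edges are read off this diff, while the required-field list, the function names, the CLI entry point, and the use of PyYAML are all assumptions.

```python
# Minimal, illustrative validator for the claim-note frontmatter in this commit.
# Field names and the 'target|relation|date' edge encoding come from the diff;
# the REQUIRED list and entry point are assumptions, not the pipeline's real code.
import pathlib
import sys

import yaml  # PyYAML, assumed available in the pipeline environment

REQUIRED = ["type", "domain", "description", "confidence", "source", "created"]

def split_frontmatter(text: str) -> dict:
    """Extract and parse the YAML block between the leading '---' delimiters."""
    _, fm, _body = text.split("---", 2)
    return yaml.safe_load(fm) or {}

def validate(fm: dict) -> list[str]:
    """Return a list of problems; an empty list means the note passes this check."""
    problems = [f"missing field: {field}" for field in REQUIRED if field not in fm]
    if fm.get("type") != "claim":
        problems.append("type must be 'claim'")
    for link_field in ("supports", "challenges", "related", "sourced_from"):
        if link_field in fm and not isinstance(fm[link_field], (list, str)):
            problems.append(f"{link_field} must be a list or a string")
    for entry in fm.get("reweave_edges", []):
        if len(entry.split("|")) != 3:  # expected form: target|relation|date
            problems.append(f"malformed reweave edge: {entry[:60]!r}...")
    return problems

if __name__ == "__main__":
    note = pathlib.Path(sys.argv[1]).read_text(encoding="utf-8")
    for problem in validate(split_frontmatter(note)):
        print("WARN:", problem)
```

A check like this would, for example, flag the reweave_edges entries later in this diff that lack the relation and date segments.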
@@ -1,22 +1,14 @@
---
confidence: experimental
created: 2026-03-06
description: Ben Thompson's structural argument that governments must control frontier AI because it constitutes weapons-grade capability, as demonstrated by the Pentagon's actions against Anthropic
domain: ai-alignment
related:
- near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs
- legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits
- attractor-authoritarian-lock-in
reweave_edges:
- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance
must account for|supports|2026-03-28
source: Noah Smith, 'If AI is a weapon, why don't we regulate it like one?' (Noahopinion, Mar 6, 2026); Ben Thompson, Stratechery analysis of Anthropic/Pentagon dispute (2026)
sourced_from:
- inbox/archive/general/2026-03-06-noahopinion-ai-weapon-regulation.md
supports:
- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance
must account for
type: claim
type: claim
domain: ai-alignment
description: Ben Thompson's structural argument that governments must control frontier AI because it constitutes weapons-grade capability, as demonstrated by the Pentagon's actions against Anthropic
confidence: experimental
source: Noah Smith, 'If AI is a weapon, why don't we regulate it like one?' (Noahopinion, Mar 6, 2026); Ben Thompson, Stratechery analysis of Anthropic/Pentagon dispute (2026)
created: 2026-03-06
related: ["near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs", "legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits", "attractor-authoritarian-lock-in", "nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments", "government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them", "coercive-governance-instruments-deployed-for-future-optionality-preservation-not-current-harm-prevention-when-pentagon-designates-domestic-ai-labs-as-supply-chain-risks"]
reweave_edges: ["AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|supports|2026-03-28"]
sourced_from: ["inbox/archive/general/2026-03-06-noahopinion-ai-weapon-regulation.md"]
supports: ["AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for"]
---
# nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments
@@ -41,3 +33,10 @@ Relevant Notes:
Topics:
- [[_map]]
## Supporting Evidence
**Source:** Hendrycks, Schmidt, Wang (2025), Part 2 (Nonproliferation) and Part 3 (Competitiveness)
The MAIM framework explicitly positions AI development as a national security issue requiring state-level coordination and control. The escalation ladder includes kinetic strikes on datacenters, treating AI infrastructure as a legitimate military target. That Schmidt (former National Security Commission on AI chair) and Wang (Scale AI CEO with DoD relationships) co-authored the paper signals that government-connected actors now treat AI as a state-controlled strategic asset.
@@ -1,42 +1,13 @@
---
confidence: likely
created: 2026-03-06
description: Anthropic's Feb 2026 rollback of its Responsible Scaling Policy proves that even the strongest voluntary safety commitment collapses when the competitive cost exceeds the reputational benefit
domain: ai-alignment
related:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
- multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale
- evaluation-based-coordination-schemes-face-antitrust-obstacles-because-collective-pausing-agreements-among-competing-developers-could-be-construed-as-cartel-behavior
- ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance
- ai-sandbagging-creates-m-and-a-liability-exposure-across-product-liability-consumer-protection-and-securities-fraud
- precautionary-capability-threshold-activation-is-governance-response-to-benchmark-uncertainty
- near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs
- civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year
- frontier-ai-labs-allocate-6-15-percent-research-headcount-to-safety-versus-60-75-percent-to-capabilities-with-declining-ratios-since-2024
- frontier-ai-monitoring-evasion-capability-grew-from-minimal-mitigations-sufficient-to-26-percent-success-in-13-months
- eu-ai-act-extraterritorial-enforcement-creates-binding-governance-alternative-to-us-voluntary-commitments
- legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits
- anthropic-internal-resource-allocation-shows-6-8-percent-safety-only-headcount-when-dual-use-research-excluded-revealing-gap-between-public-positioning-and-commitment
- attractor-molochian-exhaustion
reweave_edges:
- Anthropic|supports|2026-03-28
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26 competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
- RSP v3's substitution of non-binding Frontier Safety Roadmap for binding pause commitments instantiates Mutually Assured Deregulation at corporate voluntary governance level|supports|2026-05-01
source: Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements
supports:
- Anthropic
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
- RSP v3's substitution of non-binding Frontier Safety Roadmap for binding pause commitments instantiates Mutually Assured Deregulation at corporate voluntary governance level
type: claim
type: claim
domain: ai-alignment
description: Anthropic's Feb 2026 rollback of its Responsible Scaling Policy proves that even the strongest voluntary safety commitment collapses when the competitive cost exceeds the reputational benefit
confidence: likely
source: Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements
created: 2026-03-06
related: ["Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment", "multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale", "evaluation-based-coordination-schemes-face-antitrust-obstacles-because-collective-pausing-agreements-among-competing-developers-could-be-construed-as-cartel-behavior", "ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance", "ai-sandbagging-creates-m-and-a-liability-exposure-across-product-liability-consumer-protection-and-securities-fraud", "precautionary-capability-threshold-activation-is-governance-response-to-benchmark-uncertainty", "near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs", "civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will", "voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year", "frontier-ai-labs-allocate-6-15-percent-research-headcount-to-safety-versus-60-75-percent-to-capabilities-with-declining-ratios-since-2024", "frontier-ai-monitoring-evasion-capability-grew-from-minimal-mitigations-sufficient-to-26-percent-success-in-13-months", "eu-ai-act-extraterritorial-enforcement-creates-binding-governance-alternative-to-us-voluntary-commitments", "legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits", "anthropic-internal-resource-allocation-shows-6-8-percent-safety-only-headcount-when-dual-use-research-excluded-revealing-gap-between-public-positioning-and-commitment", "attractor-molochian-exhaustion", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints", "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development", "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it"]
reweave_edges: ["Anthropic|supports|2026-03-28", "voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31", "Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09", "Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20", "Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to", "Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26 competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20", "RSP v3's substitution of non-binding Frontier Safety Roadmap for binding pause commitments instantiates Mutually Assured Deregulation at corporate voluntary governance level|supports|2026-05-01"]
supports: ["Anthropic", "voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance", "Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling", "Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to", "Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling", "RSP v3's substitution of non-binding Frontier Safety Roadmap for binding pause commitments instantiates Mutually Assured Deregulation at corporate voluntary governance level"]
---
# voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints
@@ -123,4 +94,10 @@ Relevant Notes:
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] -- Anthropic's shift from categorical pause triggers to conditional assessment is adaptive governance, but without coordination it becomes permissive governance
Topics:
- [[_map]]
## Extending Evidence
**Source:** Hendrycks, Schmidt, Wang (2025), MAIM framework
MAIM deterrence addresses the competitive pressure problem by changing the payoff structure: any state's aggressive bid for unilateral AI dominance is met with preventive sabotage (escalation ladder: intelligence gathering → covert cyber → overt cyberattacks → kinetic strikes). This creates mutual vulnerability that makes unilateral racing strategically untenable without requiring voluntary commitments.
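The same point can be stated as a worked inequality (my formalization, not the paper's). Let $p_s$ be the probability that a unilateral bid is successfully maimed, $G$ the gain from an unopposed bid for dominance, and $C$ the cost of having one's program sabotaged. Racing is deterred when the expected value of the bid is negative:

$$
\mathbb{E}[\text{race}] = (1 - p_s)\,G - p_s\,C < 0 \quad\Longleftrightarrow\quad p_s > \frac{G}{G + C}.
$$

Because the paper argues AI projects are 'relatively easy to sabotage' compared to hardened nuclear arsenals, $p_s$ is taken to be high, so the inequality is satisfied without any voluntary commitment from the racing state.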
entities/ai-alignment/scale-ai.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Scale AI
**Type:** Company
**Domain:** ai-alignment
**Founded:** 2016
**CEO:** Alexandr Wang
**Focus:** AI deployment contractor, data labeling and model evaluation infrastructure
## Overview
Scale AI is a leading AI deployment contractor with significant Department of Defense relationships. The company provides data labeling, model evaluation, and AI deployment infrastructure for frontier AI systems.
## Key Personnel
- **Alexandr Wang** — CEO, co-author of MAIM deterrence framework
## Timeline
- **2025-03-01** — CEO Alexandr Wang co-authored Superintelligence Strategy paper with Dan Hendrycks (CAIS) and Eric Schmidt, proposing MAIM deterrence regime for AI development
## Significance
Scale AI's CEO co-authoring the MAIM framework signals that leading AI deployment contractors with military relationships have aligned on deterrence infrastructure as the primary coordination mechanism for superintelligence development.
@@ -7,11 +7,14 @@ date: 2025-03-01
domain: ai-alignment
secondary_domains: [grand-strategy]
format: paper
status: unprocessed
status: processed
processed_by: theseus
processed_date: 2026-05-03
priority: high
tags: [MAIM, deterrence, superintelligence, national-security, coordination, paradigm-shift]
intake_tier: research-task
flagged_for_leo: ["grand-strategy coordination failure; deterrence vs. alignment paradigm at civilizational level — potentially relevant to living-capital and teleohumanity strategy"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content