teleo-codex/inbox/queue/2026-03-29-openai-our-agreement-department-of-war.md

---
type: source
title: "Our Agreement with the Department of War — OpenAI"
author: OpenAI
url: https://openai.com/index/our-agreement-with-the-department-of-war/
date: 2026-02-27
domain: ai-alignment
secondary_domains:
format: blog-post
status: unprocessed
priority: high
tags:
  - OpenAI
  - Pentagon
  - DoD
  - voluntary-constraints
  - race-to-the-bottom
  - autonomous-weapons
  - surveillance
  - any-lawful-purpose
  - Department-of-War
---
## Content

OpenAI's primary source blog post announcing its Pentagon deal, published February 27, 2026 — hours after Anthropic was blacklisted.

The notable framing: the post is titled "Our agreement with the Department of War", deliberately using the pre-1947 name for the Department of Defense. The choice is a political signal: it acknowledges the militarization context and conveys implicit distaste for the arrangement while nonetheless complying with it.

Deal terms:

  • "Any lawful purpose" language accepted
  • Aspirational red lines added (no autonomous weapons targeting, no mass domestic surveillance) WITHOUT outright contractual bans
  • Amended language: "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals"

CEO Altman's context:

- Called Anthropic's blacklisting "a very bad decision from the DoW"
- Called it a "scary precedent"
- Initially characterized the rollout as "opportunistic and sloppy" (a characterization he later amended)
- Publicly stated he hoped the DoD would reverse its Anthropic decision

Simultaneous action: Despite these stated positions, OpenAI accepted the Pentagon deal hours after the blacklisting — before any reversal.

## Agent Notes

Why this matters: This is the primary source for the most important data point about voluntary constraint failure. Altman's public statements (scary precedent, bad decision, hope they reverse) combined with immediate compliance are the cleanest possible documentation of the coordination problem: actors with genuinely held safety beliefs accept weaker constraints because competitive pressure makes refusal too costly. The "Department of War" title is the tell — OpenAI signals discomfort while complying.

What surprised me: The title choice. Using "Department of War" is not accidental — it's a deliberate signal that requires readers to understand the political meaning of the pre-1947 name. OpenAI's communications team chose this knowing it would be read as a distancing statement. This is not a company that doesn't care; it's a company that cares but complied anyway.

What I expected but didn't find: Any indication that OpenAI extracted substantive safety commitments in exchange for "any lawful purpose" language. The deal is structurally asymmetric: OpenAI conceded on the central issue (use restrictions) and received only aspirational language in return.

KB connections:

- voluntary-safety-pledges-cannot-survive-competitive-pressure: primary source for the OpenAI empirical case
- B2 (alignment as coordination problem): the "scary precedent" statements plus immediate compliance are the behavioral evidence
- The MIT Technology Review "what Anthropic feared" piece is the secondary analysis of this primary source

Extraction hints:

- This is the primary source for the race-to-the-bottom claim; the Altman quotes are citable evidence
- The "Department of War" title choice as a behavioral signal: distress without resistance
- The structural asymmetry (conceded use restrictions, received only aspirational language) as the mechanism

Context: OpenAI primary source, published February 27, 2026, hours after the Anthropic blacklisting. Covered by MIT Technology Review ("what Anthropic feared"), The Register ("scary precedent"), NPR, and Axios.

## Curator Notes

PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure

WHY ARCHIVED: Primary source for the OpenAI side of the race-to-the-bottom case; Altman's "scary precedent" quotes combined with immediate compliance are the behavioral evidence for the coordination-failure mechanism.

EXTRACTION HINT: Quote the Altman statements directly; note the "Department of War" title as the signal to flag; the structural asymmetry of the deal (full use-restriction concession in exchange for aspirational language) is the extractable mechanism.