diff --git a/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md b/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md
index ece6b536..880e53b0 100644
--- a/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md
+++ b/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md
@@ -7,9 +7,12 @@ date: 2026-03-02
 domain: ai-alignment
 secondary_domains: []
 format: article
-status: unprocessed
+status: enrichment
 priority: high
 tags: [OpenAI, Anthropic, Pentagon, race-to-the-bottom, voluntary-safety-constraints, autonomous-weapons, domestic-surveillance, trust-us, coordination-failure, B2]
+processed_by: theseus
+processed_date: 2026-03-29
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content
@@ -63,3 +66,12 @@ MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2
 PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: The Anthropic/OpenAI/DoD dynamic is the strongest real-world evidence that voluntary safety pledges fail under competitive pressure; OpenAI calling it a "scary precedent" while accepting the terms is the key signal that incentive structure, not bad values, drives the outcome
 EXTRACTION HINT: Focus on the structural sequence (Anthropic holds → is excluded → competitor accepts looser terms → captures market) as the empirical case for the coordination failure mechanism; the "intentionally" qualifier as the gap between nominal and real voluntary constraints
+
+
+## Key Facts
+- OpenAI CEO Altman called Anthropic's blacklisting 'a very bad decision from the DoW' and a 'scary precedent' on February 27, 2026
+- OpenAI's blog post announcing the Pentagon deal used the title 'Our agreement with the Department of War' — the pre-1947 name for DoD
+- OpenAI's amended contract language: 'the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals'
+- The Intercept headline: 'OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us'
+- Fortune headline: 'The Anthropic–OpenAI feud and their Pentagon dispute expose a deeper problem with AI safety'
+- The Register headline: 'OpenAI says Pentagon set 'scary precedent' binning Anthropic'