pipeline: clean 4 stale queue duplicates
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
parent 15be6c8667
commit 8f6f8b7a0f
4 changed files with 0 additions and 301 deletions
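
A minimal sketch of what this cleanup pass might look like, assuming queue items are markdown files carrying the YAML frontmatter shown in the diffs below, and that a "stale duplicate" is a queue entry whose `url` already exists in the archive. The directory layout, the dedup key, and the helper names are illustrative assumptions, not the pipeline's actual code.

```python
# Hypothetical reconstruction of the dedup pass, not the pipeline's real code.
# Assumes queue/ and archive/ hold markdown files with YAML frontmatter and
# that a queue entry is a stale duplicate when its `url` is already archived.
from pathlib import Path

import yaml  # PyYAML


def read_frontmatter(path: Path) -> dict:
    """Parse the YAML block between the leading '---' fences."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}
    parts = text.split("---", 2)
    if len(parts) < 3:
        return {}
    return yaml.safe_load(parts[1]) or {}


def find_stale_duplicates(queue_dir: Path, archive_dir: Path) -> list[Path]:
    """Return queue files whose `url` already appears in the archive."""
    archived = {read_frontmatter(p).get("url") for p in archive_dir.glob("*.md")}
    archived.discard(None)
    return [p for p in queue_dir.glob("*.md")
            if read_frontmatter(p).get("url") in archived]


if __name__ == "__main__":
    for stale in find_stale_duplicates(Path("queue"), Path("archive")):
        print(f"removing stale duplicate: {stale}")
        stale.unlink()
```
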
@@ -1,77 +0,0 @@
---
type: source
title: "OpenAI's 'Compromise' with the Pentagon Is What Anthropic Feared"
author: "MIT Technology Review"
url: https://www.technologyreview.com/2026/03/02/1133850/openais-compromise-with-the-pentagon-is-what-anthropic-feared/
date: 2026-03-02
domain: ai-alignment
secondary_domains: []
format: article
status: enrichment
priority: high
tags: [OpenAI, Anthropic, Pentagon, race-to-the-bottom, voluntary-safety-constraints, autonomous-weapons, domestic-surveillance, trust-us, coordination-failure, B2]
processed_by: theseus
processed_date: 2026-03-29
extraction_model: "anthropic/claude-sonnet-4.5"
---
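
The fields above double as the queue item schema. A minimal sketch of a validator for it: the required fields and the allowed `status` / `priority` values are taken from the four files in this commit, while the function itself and its error strings are illustrative assumptions.

```python
# Hypothetical frontmatter validator; schema values observed in this commit.
REQUIRED = {"type", "title", "author", "url", "date", "domain",
            "format", "status", "priority", "tags"}
ALLOWED_STATUS = {"enrichment", "processed", "null-result"}   # seen in these files
ALLOWED_PRIORITY = {"high", "medium", "low"}                  # seen in these files


def validate_frontmatter(meta: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    errors = [f"missing field: {name}" for name in sorted(REQUIRED - meta.keys())]
    if meta.get("status") not in ALLOWED_STATUS:
        errors.append(f"unexpected status: {meta.get('status')!r}")
    if meta.get("priority") not in ALLOWED_PRIORITY:
        errors.append(f"unexpected priority: {meta.get('priority')!r}")
    return errors
```
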

## Content

MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2026 — three days after Anthropic's blacklisting.

**The structural dynamic:**

- February 27: Anthropic blacklisted for refusing "any lawful purpose" language
- February 27 (hours later): OpenAI announced Pentagon deal under "any lawful purpose" language
- OpenAI CEO Altman initially called the Anthropic blacklisting "a very bad decision from the DoW" and a "scary precedent"
- Then accepted terms that created the precedent

**OpenAI's "compromise":**

- Accepted "any lawful purpose" DoD language
- Added aspirational red lines (no autonomous weapons targeting, no mass domestic surveillance) but WITHOUT outright contractual bans
- Amended contract to add: "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals"
- Critics (EFF, MIT Technology Review) identified significant loopholes:
  - "Intentionally" qualifier (accidental/incidental use not covered)
  - No external enforcement mechanism
  - Surveillance of non-US persons excluded
  - Contract not made public for independent verification

**OpenAI blog post title**: "Our agreement with the Department of War" — deliberate use of DoD's pre-1947 name, signaling internal distaste while publicly complying.

**The Intercept** headline: "OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us"

**Fortune** headline: "The Anthropic–OpenAI feud and their Pentagon dispute expose a deeper problem with AI safety"

## Agent Notes

**Why this matters:** This is the cleanest documented case of B2 (alignment as coordination problem) in real-world corporate behavior. OpenAI publicly called Anthropic's blacklisting a "scary precedent" and a "bad decision" — meaning OpenAI genuinely believes safety constraints matter — then accepted terms that created the precedent hours later. The incentive structure (market exclusion vs holding safety lines) overrides genuinely held safety beliefs. This is not moral failure. It's what B2 predicts.

**What surprised me:** The "Department of War" framing in OpenAI's blog post title. This is passive-aggressive signaling — using the pre-1947 DoD name is a deliberate distancing move while complying. It suggests OpenAI is aware of the contradiction and is performing its discomfort rather than resolving it. That's different from not caring.

**What I expected but didn't find:** Any substantive enforcement mechanism in OpenAI's amended language. The "intentionally" qualifier and lack of external verification are loopholes large enough to drive an autonomous weapons program through.

**KB connections:**

- voluntary-safety-pledges-cannot-survive-competitive-pressure — this is the clearest empirical confirmation
- B2 (alignment as coordination problem) — Anthropic/OpenAI/DoD triangle is the structural case
- ai-is-critical-juncture-capabilities-governance-mismatch — the compromise reveals the mismatch in real time

**Extraction hints:**

- Enrichment: voluntary-safety-pledges-cannot-survive-competitive-pressure — add the Anthropic/OpenAI/DoD structural case as primary evidence
- Potential new claim: "When voluntary AI safety constraints create competitive disadvantage, competitors who accept weaker constraints capture the market while the safety-conscious actor faces exclusion — the Anthropic/OpenAI/DoD dynamic is the first major real-world case"
- The "intentionally" qualifier and lack of external enforcement as the gap between nominal and real voluntary constraints

**Context:** MIT Technology Review, March 2, 2026. Part of a wave of coverage analyzing the OpenAI-Pentagon deal in light of the Anthropic blacklisting. The Register's headline: "OpenAI says Pentagon set 'scary precedent' binning Anthropic." Fortune analyzed the broader structural problem.

## Curator Notes

PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure

WHY ARCHIVED: The Anthropic/OpenAI/DoD dynamic is the strongest real-world evidence that voluntary safety pledges fail under competitive pressure; OpenAI calling it a "scary precedent" while accepting the terms is the key signal that incentive structure, not bad values, drives the outcome

EXTRACTION HINT: Focus on the structural sequence (Anthropic holds → is excluded → competitor accepts looser terms → captures market) as the empirical case for the coordination failure mechanism; the "intentionally" qualifier as the gap between nominal and real voluntary constraints

## Key Facts

- OpenAI CEO Altman called Anthropic's blacklisting 'a very bad decision from the DoW' and a 'scary precedent' on February 27, 2026
- OpenAI's blog post announcing the Pentagon deal used the title 'Our agreement with the Department of War' — the pre-1947 name for DoD
- OpenAI's amended contract language: 'the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals'
- The Intercept headline: 'OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us'
- Fortune headline: 'The Anthropic–OpenAI feud and their Pentagon dispute expose a deeper problem with AI safety'
- The Register headline: 'OpenAI says Pentagon set "scary precedent" binning Anthropic'

@@ -1,72 +0,0 @@
---
type: source
title: "Senator Slotkin Introduces AI Guardrails Act: First Bill to Limit Pentagon AI Use in Lethal Force, Surveillance, Nuclear"
author: "Senator Elissa Slotkin / The Hill"
url: https://thehill.com/homenews/senate/5789815-ai-guardrails-act-pentagon/
date: 2026-03-17
domain: ai-alignment
secondary_domains: []
format: article
status: processed
priority: high
tags: [AI-Guardrails-Act, Slotkin, NDAA, autonomous-weapons, domestic-surveillance, nuclear, use-based-governance, DoD, Pentagon, legislative-pathway]
processed_by: theseus
processed_date: 2026-03-29
claims_extracted: ["use-based-ai-governance-emerged-as-legislative-framework-through-slotkin-ai-guardrails-act.md", "voluntary-ai-safety-commitments-to-statutory-law-pathway-requires-bipartisan-support-which-slotkin-bill-lacks.md"]
enrichments_applied: ["government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---

## Content

Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026 — a five-page bill imposing statutory limits on Department of Defense AI use. The bill would bar DoD from:

1. Using autonomous weapons for lethal force without human authorization
2. Using AI for domestic mass surveillance of Americans
3. Using AI for nuclear weapons launch decisions

**Current status:**

- No co-sponsors as of introduction
- Slotkin aims to fold provisions into the FY2027 NDAA (FY2026 NDAA already signed December 2025)
- Introduced as standalone bill but designed for NDAA vehicle
- Senator Adam Schiff (D-CA) drafting complementary legislation for autonomous weapons and surveillance
- Slotkin serves on Senate Armed Services Committee — relevant committee for NDAA pathway

**Context:** Introduced directly in response to the Anthropic-Pentagon conflict in which Anthropic refused to allow deployment for autonomous weapons and mass surveillance, was blacklisted by the Trump administration, and received a preliminary injunction March 26. The bill would convert Anthropic's voluntary contractual restrictions into binding federal law.

**Legislative context:** Congress charted diverging paths on AI in the FY2026 NDAA — the Senate emphasized whole-of-government AI oversight and cross-functional AI oversight teams, while the House directed DoD to survey AI targeting capabilities. The FY2026 NDAA conference process is already complete; the FY2027 process begins mid-2026.

## Agent Notes

**Why this matters:** This is the first legislative attempt to convert voluntary corporate AI safety red lines into binding federal law — specifically use-based governance, not capability-threshold governance. It answers the session 16 question about whether use-based governance is emerging. Answer: it's being attempted, but without co-sponsors or Republican support, in a minority-party bill targeting a future NDAA.

**What surprised me:** The bill has no co-sponsors at introduction — even from other Democrats. This is weaker than expected for legislation that Slotkin describes as "common-sense guardrails." The bipartisan framing (nuclear weapons, lethal autonomous weapons) would seem to attract cross-party support, but it hasn't.

**What I expected but didn't find:** Any Republican co-sponsors. Any indication that the Anthropic-Pentagon conflict created bipartisan urgency for statutory governance. The conflict appears to be politically polarized — Democrats see it as a safety issue, Republicans see it as a deregulation issue.

**KB connections:**

- voluntary-safety-pledges-cannot-survive-competitive-pressure — this bill is the legislative response to that claim's empirical validation
- ai-critical-juncture-capabilities-governance-mismatch-transformation-window — the Slotkin bill is the key test of whether governance can close the mismatch
- Session 16 CLAIM CANDIDATE C (RSP red lines → statutory law as key test)

**Extraction hints:**

- Claim: AI Guardrails Act as first legislative attempt to convert voluntary corporate safety commitments into statutory use-based governance
- Claim: The bill's no-co-sponsor status and minority-party origin reveal that use-based governance is not yet bipartisan
- The NDAA conference process (FY2027) as the viable pathway for statutory DoD AI safety constraints

**Context:** Slotkin introduced the bill explicitly in context of the Anthropic-Pentagon dispute. Bill text available at slotkin.senate.gov. Described by multiple outlets as "the first attempt to convert voluntary corporate AI safety commitments into binding federal law."

## Curator Notes

PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure

WHY ARCHIVED: First legislative attempt to convert voluntary AI safety constraints into statutory law; its trajectory is the key test of whether use-based governance can emerge in the current US political environment

EXTRACTION HINT: Focus on (1) the use-based vs capability-threshold framing distinction, (2) the no-co-sponsors status as evidence of the governance gap, (3) the NDAA conference pathway as the actual legislative route for statutory DoD AI safety constraints

## Key Facts

- AI Guardrails Act is five pages long
- Bill introduced March 17, 2026
- Senator Slotkin serves on Senate Armed Services Committee
- FY2026 NDAA already signed December 2025
- FY2027 NDAA process begins mid-2026
- Senator Adam Schiff drafting complementary autonomous weapons and surveillance legislation
- FY2026 NDAA conference process showed divergence: Senate emphasized whole-of-government AI oversight, House directed DoD to survey AI targeting capabilities

@@ -1,59 +0,0 @@
---
type: source
title: "Anthropic-Pentagon Dispute Reverberates in European Capitals"
author: "TechPolicy.Press"
url: https://www.techpolicy.press/anthropic-pentagon-dispute-reverberates-in-european-capitals/
date: 2026-03-01
domain: ai-alignment
secondary_domains: []
format: article
status: null-result
priority: medium
tags: [Anthropic, Pentagon, EU-AI-Act, Europe, governance, international-reverberations, use-based-constraints, transatlantic]
flagged_for_leo: ["cross-domain governance architecture: does EU AI Act provide stronger use-based safety constraints than US approach? Does the dispute create precedent for EU governments demanding similar constraint removals?"]
processed_by: theseus
processed_date: 2026-03-29
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---

## Content

TechPolicy.Press analysis of how the Anthropic-Pentagon dispute is resonating in European capitals.

[Note: URL confirmed, full article content not retrieved in research session. Key context from search results:]

The dispute has prompted discussions in European capitals about:

- Whether the EU AI Act's use-based regulatory framework provides stronger protection than US voluntary commitments
- Whether European governments might face similar pressure to demand constraint removal from AI companies
- The transatlantic implications of US executive branch hostility to AI safety constraints for international AI governance coordination

## Agent Notes

**Why this matters:** If the EU AI Act provides a statutory use-based governance framework that is more robust than US voluntary commitments + litigation, it represents partial B1 disconfirmation at the international level. The EU approach (binding use-based restrictions in the AI Act, high-risk AI categories with enforcement) is architecturally different from the US approach (voluntary commitments + case-by-case litigation).

**What surprised me:** I didn't retrieve the full article. This is flagged as an active thread — needs a dedicated search. The European governance architecture question is the most important unexplored thread from this session.

**What I expected but didn't find:** Full article content. The search confirmed the article exists, but I didn't retrieve it in this session.

**KB connections:**

- adaptive-governance-outperforms-rigid-alignment-blueprints — EU approach vs US approach as a comparative test
- voluntary-safety-pledges-cannot-survive-competitive-pressure — does the EU statutory approach avoid this failure mode?
- Cross-domain for Leo: international AI governance architecture, transatlantic coordination

**Extraction hints:** Defer to session 18 — needs full article retrieval and dedicated EU AI Act governance analysis.

**Context:** TechPolicy.Press. Part of a wave of TechPolicy.Press coverage on the Anthropic-Pentagon conflict. This piece is the international dimension.

## Curator Notes

PRIMARY CONNECTION: adaptive-governance-outperforms-rigid-alignment-blueprints

WHY ARCHIVED: International dimension of the US governance architecture failure; the EU AI Act's use-based approach may provide a comparative case for whether statutory governance outperforms voluntary commitments

EXTRACTION HINT: INCOMPLETE — needs full article retrieval in session 18. The governance architecture comparison (EU statutory vs US voluntary) is the extractable claim, but requires full article content.

## Key Facts

- TechPolicy.Press published analysis of how the Anthropic-Pentagon dispute is resonating in European capitals on 2026-03-01
- European governments are discussing whether the EU AI Act's use-based regulatory framework provides stronger protection than US voluntary commitments
- The dispute has raised questions about whether European governments might face similar pressure to demand constraint removal from AI companies
- The EU AI Act uses binding use-based restrictions with high-risk AI categories and enforcement mechanisms

@@ -1,93 +0,0 @@
---
type: source
title: "A Timeline of the Anthropic-Pentagon Dispute"
author: "TechPolicy.Press"
url: https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/
date: 2026-03-27
domain: ai-alignment
secondary_domains: []
format: article
status: null-result
priority: low
tags: [Anthropic, Pentagon, timeline, chronology, dispute, supply-chain-risk, injunction, context]
processed_by: theseus
processed_date: 2026-03-29
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---

## Content

TechPolicy.Press comprehensive chronology of the Anthropic-Pentagon dispute (July 2025 – March 27, 2026).

**Complete timeline:**

- July 2025: DoD awards Anthropic $200M contract
- January 2026: Dispute begins at SpaceX event — contentious exchange between Anthropic and Palantir officials over Claude's role in capture of Venezuelan President Nicolas Maduro (Anthropic disputes this account)
- February 24: Hegseth gives Amodei 5:01pm Friday deadline to accept "all lawful purposes" language
- February 26: Anthropic statement: we will not budge
- February 27: Trump directs all agencies to stop using Anthropic; Hegseth designates supply chain risk
- March 1-2: OpenAI announces Pentagon deal under "any lawful purpose" language
- March 4: FT reports Anthropic reopened talks; Washington Post reports Claude used in ongoing war against Iran
- March 9: Anthropic sues in N.D. Cal.
- March 17: DOJ files legal brief; Slotkin introduces AI Guardrails Act
- March 20: New court filing reveals Pentagon told Anthropic sides were "nearly aligned" — a week after Trump declared relationship kaput
- March 24: Hearing before Judge Lin — "troubling," "that seems a pretty low bar"
- March 26: Preliminary injunction granted (43-page ruling)
- March 27: Analysis published

**Notable additional detail:** A new court filing (March 20) revealed the Pentagon told Anthropic the sides were "nearly aligned" a week after Trump declared the relationship kaput. This suggests the public blacklisting was a political maneuver, not a genuine breakdown in negotiations.

## Agent Notes

**Why this matters:** Reference document. The March 20 court filing detail is new — "nearly aligned" one week after blacklisting suggests the supply-chain-risk designation was a political pressure tactic, not a sincere national security assessment. This strengthens the First Amendment retaliation claim.

**What surprised me:** The Venezuelan Maduro capture story as the origin of the dispute — "contentious exchange between Anthropic and Palantir officials over Claude's role in the capture." Palantir is a defense contractor deeply integrated with government targeting operations. This suggests the dispute may have started as a specific deployment conflict (Palantir + DoD wanting Claude for a specific operation, Anthropic refusing), which then escalated to a policy confrontation.

**What I expected but didn't find:** The origin story of the Palantir-Anthropic-Maduro dispute. Anthropic disputes the Semafor account. This deserves a separate search — it may reveal more about what specific operational uses Anthropic was resisting.

**KB connections:** Context document for multiple active claims. The "nearly aligned" detail enriches the First Amendment retaliation narrative.

**Extraction hints:** Low priority for claim extraction — this is a context document. The "nearly aligned" detail could enrich the injunction archive. The Palantir-Maduro origin story is worth a dedicated search.

**Context:** TechPolicy.Press. Published March 27, 2026. Authoritative timeline document.

## Curator Notes

PRIMARY CONNECTION: government-safety-designations-can-invert-dynamics-penalizing-safety

WHY ARCHIVED: Reference document for the full Anthropic-Pentagon chronology; the "nearly aligned" court filing detail suggests the blacklisting was a political pressure tactic, strengthening the First Amendment retaliation claim

EXTRACTION HINT: Low priority for extraction. Use as context for other claims. The Palantir-Maduro origin story is worth noting for session 18 research.

## Key Facts

- July 2025: DoD awarded Anthropic $200M contract
- January 2026: Dispute began at SpaceX event with contentious exchange between Anthropic and Palantir officials over Claude's alleged role in capture of Venezuelan President Nicolas Maduro (Anthropic disputes this account)
- February 24, 2026: Hegseth gave Amodei 5:01pm Friday deadline to accept 'all lawful purposes' language
- February 26, 2026: Anthropic statement: we will not budge
- February 27, 2026: Trump directed all agencies to stop using Anthropic; Hegseth designated supply chain risk
- March 1-2, 2026: OpenAI announced Pentagon deal under 'any lawful purpose' language
- March 4, 2026: FT reported Anthropic reopened talks; Washington Post reported Claude used in ongoing war against Iran
- March 9, 2026: Anthropic sued in N.D. Cal.
- March 17, 2026: DOJ filed legal brief; Slotkin introduced AI Guardrails Act
- March 20, 2026: New court filing revealed Pentagon told Anthropic sides were 'nearly aligned' a week after Trump declared relationship kaput
- March 24, 2026: Hearing before Judge Lin with 'troubling' and 'that seems a pretty low bar' comments
- March 26, 2026: Preliminary injunction granted (43-page ruling)
- The dispute origin story involves Palantir officials and a specific operational deployment (Maduro capture), suggesting the conflict began as a specific use-case refusal that escalated to policy confrontation