extract: 2026-03-28-cnbc-anthropic-dod-preliminary-injunction
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
parent bf1f2b02f6
commit 1acac58ce4
2 changed files with 41 additions and 1 deletion
@@ -0,0 +1,27 @@
+{
+  "rejected_claims": [
+    {
+      "filename": "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md",
+      "issues": [
+        "missing_attribution_extractor"
+      ]
+    }
+  ],
+  "validation_stats": {
+    "total": 1,
+    "kept": 0,
+    "fixed": 4,
+    "rejected": 1,
+    "fixes_applied": [
+      "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md:set_created:2026-03-28",
+      "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md:stripped_wiki_link:voluntary-safety-pledges-cannot-survive-competitive-pressure",
+      "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md:stripped_wiki_link:only-binding-regulation-with-enforcement-teeth-changes-front",
+      "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md:stripped_wiki_link:government-designation-of-safety-conscious-AI-labs-as-supply"
+    ],
+    "rejections": [
+      "voluntary-ai-safety-constraints-have-no-legal-standing-in-us-law.md:missing_attribution_extractor"
+    ]
+  },
+  "model": "anthropic/claude-sonnet-4.5",
+  "date": "2026-03-28"
+}
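The counters in `validation_stats` are self-consistent under one reading: `fixed` counts individual fixes applied (so it can exceed `total`, which counts files), and every file ends up either kept or rejected. A minimal sketch of that reading, assuming these semantics and using shortened stand-in strings in place of the long `filename:fix_type:value` entries:

```python
import json

# Stand-in record mirroring the shape of the log above; the four
# fixes_applied entries are abbreviated placeholders, not real values.
record = json.loads("""
{
  "validation_stats": {
    "total": 1,
    "kept": 0,
    "fixed": 4,
    "rejected": 1,
    "fixes_applied": ["fix-1", "fix-2", "fix-3", "fix-4"],
    "rejections": ["claim.md:missing_attribution_extractor"]
  }
}
""")

stats = record["validation_stats"]
# Assumed invariants: fixed counts fixes, rejected counts rejections,
# and kept + rejected partitions the total file count.
assert stats["fixed"] == len(stats["fixes_applied"])
assert stats["rejected"] == len(stats["rejections"])
assert stats["total"] == stats["kept"] + stats["rejected"]
print("validation_stats consistent")
```

These checks all pass on the numbers in the committed log (1 = 0 + 1, four fixes, one rejection), which supports the fixes-not-files reading of `fixed`.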
@@ -7,9 +7,12 @@ date: 2026-03-26
 domain: ai-alignment
 secondary_domains: []
 format: article
-status: unprocessed
+status: enrichment
 priority: high
 tags: [pentagon-anthropic, DoD-blacklist, preliminary-injunction, supply-chain-risk, First-Amendment, judicial-review, voluntary-safety-constraints, use-based-governance]
+processed_by: theseus
+processed_date: 2026-03-28
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content
@@ -44,3 +47,13 @@ The preliminary injunction temporarily stays the supply chain risk designation
 PRIMARY CONNECTION: voluntary-pledges-fail-under-competition — this is the strongest real-world evidence for the claim that voluntary safety governance collapses under competitive/institutional pressure
 WHY ARCHIVED: The clearest empirical case for the legal fragility of voluntary corporate AI safety constraints; the judicial reasoning creates no precedent for safety-based governance
 EXTRACTION HINT: Focus on the legal standing gap — the claim is not that courts were wrong, but that the legal framework available to protect safety constraints is First Amendment-based, not safety-based. That gap is the governance failure.
+
+
+## Key Facts
+- Anthropic signed a $200M transaction agreement with the DoD in July 2025
+- Contract negotiations stalled in September 2025 over use restrictions
+- Defense Secretary Hegseth issued an AI strategy memo in January 2026 requiring 'any lawful use' language in all DoD AI contracts within 180 days
+- On February 27, 2026, the Trump administration terminated the Anthropic contract, designated Anthropic a supply chain risk, and ordered all federal agencies to stop using Claude
+- Anthropic is the first American company ever designated a DoD supply chain risk (a designation historically reserved for foreign adversaries such as Huawei and SMIC)
+- Judge Rita Lin's ruling was 43 pages
+- Pentagon CTO stated the ban 'still stands' from DoD's perspective despite the injunction