leo: fix duplicate enrichments from HKS governance source
- What: removed duplicate evidence blocks from 2 claims (critical juncture, supply chain risks) that were already enriched from the same source on 2026-03-18. Kept the genuinely new enrichment on the voluntary safety pledges claim.
- Why: the second extraction pass re-added near-identical evidence. Also fixed duplicated frontmatter and Key Facts in the source archive.

Pentagon-Agent: Leo <A3DC172B-F0A4-4408-9E3B-CF842616AAE1>
parent 8a0fbf07f8
commit 34fe6fc984
3 changed files with 0 additions and 22 deletions
@@ -25,12 +25,6 @@ CFR fellow Michael Horowitz explicitly states that 'large-scale binding internat
The HKS analysis shows the governance window is being used in a concerning direction: bilateral negotiations between governments and tech companies are becoming the de facto governance mechanism, operating without transparency or accountability. The mismatch is not creating space for better governance—it's creating space for opaque, power-asymmetric private contracts that bypass democratic processes entirely.
### Additional Evidence (extend)
*Source: [[2026-03-18-hks-governance-by-procurement-bilateral]] | Added: 2026-03-19*
The governance window is being filled by bilateral government-tech negotiations rather than multilateral frameworks. HKS documents that 'the most consequential human rights questions in AI are being decided in bilateral negotiations between governments and technology companies' without transparency or accountability. This shows how the mismatch is being resolved—through ad hoc private contracts, not institutional transformation.
---
Relevant Notes:
@@ -29,12 +29,6 @@ This strengthens [[AI alignment is a coordination problem not a technical proble
The 2026 DoD/Anthropic confrontation provides a concrete example: the Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons. Anthropic refused publicly, and the Pentagon retaliated. This is a direct instance of government functioning as an alignment-degrader rather than a correction mechanism, adding to competitive pressure rather than enforcing safety constraints.
### Additional Evidence (confirm)
*Source: [[2026-03-18-hks-governance-by-procurement-bilateral]] | Added: 2026-03-19*
The Department of War (formerly Defense) threatened to blacklist Anthropic in 2026 unless it removed safeguards against mass surveillance and autonomous weapons. When Anthropic refused publicly, the Pentagon retaliated. This is a concrete instance of government functioning as alignment-degrader rather than correction mechanism—the government actively penalized safety constraints through supply chain designation threats.
---
Relevant Notes:
@@ -11,10 +11,6 @@ status: enrichment
priority: high
tags: [governance, procurement, bilateral-negotiation, international-coordination, anthropic, DoD, correction-failure, transparency]
processed_by: theseus
processed_date: 2026-03-18
enrichments_applied: ["government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
processed_by: theseus
processed_date: 2026-03-19
enrichments_applied: ["government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
@@ -67,9 +63,3 @@ EXTRACTION HINT: Extract the bilateral negotiation claim with specific evidence.
- Harvard Kennedy School Carr-Ryan Center for Human Rights published analysis on March 18, 2026 titled 'Governance by Procurement: How AI Rights Became a Bilateral Negotiation'
- The article proposes multilateral corrections including: ITU technical standards, Global Digital Compact grounding AI governance in human rights law, ISO/IEC standards for AI management systems, and an international AI oversight body modeled after nuclear energy regulation
- The Department of Defense was renamed the Department of War as of 2026
## Key Facts
- Harvard Kennedy School Carr-Ryan Center for Human Rights published 'Governance by Procurement: How AI Rights Became a Bilateral Negotiation' on March 18, 2026
- The Department of Defense was renamed to Department of War as of 2026
- HKS proposes multilateral corrections including ITU technical standards, Global Digital Compact, ISO/IEC standards for AI management systems, and international AI oversight body modeled after nuclear energy regulation