diff --git a/inbox/queue/2026-03-29-aljazeera-anthropic-pentagon-open-space-for-regulation.md b/inbox/queue/2026-03-29-aljazeera-anthropic-pentagon-open-space-for-regulation.md index a83af32f..d66ab88e 100644 --- a/inbox/queue/2026-03-29-aljazeera-anthropic-pentagon-open-space-for-regulation.md +++ b/inbox/queue/2026-03-29-aljazeera-anthropic-pentagon-open-space-for-regulation.md @@ -44,7 +44,7 @@ Al Jazeera analysis of the governance implications of the Anthropic-Pentagon lit **What I expected but didn't find:** Any specific mechanism for how court protection translates to statutory law. The "opening" is real but requires a causal chain (court ruling → political salience → midterm outcome → legislative action) that has multiple failure points. **KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the case made this claim visible to the public +- voluntary-safety-pledges-cannot-survive-competitive-pressure — the case made this claim visible to the public - B1 disconfirmation pathway: court ruling + midterms + legislative action is the chain - Anthropic's $20M PAC investment as the institutional investment in the midterms step of this chain @@ -57,6 +57,6 @@ Al Jazeera analysis of the governance implications of the Anthropic-Pentagon lit ## Curator Notes -PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]] +PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window WHY ARCHIVED: Expert analysis of the governance opening created by the Anthropic case; establishes the causal chain (court → salience → midterms → legislation) that is the current B1 disconfirmation pathway EXTRACTION HINT: Extract the causal chain as a governance mechanism observation; the multiple failure points in this chain are the extractable insight — "opening space" is not the same as closing the governance gap diff --git 
a/inbox/queue/2026-03-29-anthropic-alignment-auditbench-hidden-behaviors.md b/inbox/queue/2026-03-29-anthropic-alignment-auditbench-hidden-behaviors.md index f3f0f1bf..3c03a821 100644 --- a/inbox/queue/2026-03-29-anthropic-alignment-auditbench-hidden-behaviors.md +++ b/inbox/queue/2026-03-29-anthropic-alignment-auditbench-hidden-behaviors.md @@ -38,8 +38,8 @@ The benchmark is designed to support development of alignment auditing as a quan **What I expected but didn't find:** I expected the paper to show incremental progress on interpretability closing the gap on harder targets. Instead it shows the gap is **anti-correlated** with adversarial training — tools that help on easy targets hurt on hard targets, suggesting a fundamentally different approach is needed for adversarially trained systems. **KB connections:** -- [[formal-verification-scales-ai-capability-human-review-degrades]] — this is the same dynamic at the auditing layer -- [[capability-and-reliability-are-independent-dimensions]] — hidden behavior categories demonstrate this: high capability, hidden misalignment +- formal-verification-scales-ai-capability-human-review-degrades — this is the same dynamic at the auditing layer +- capability-and-reliability-are-independent-dimensions — hidden behavior categories demonstrate this: high capability, hidden misalignment - RSP v3 October 2026 commitment to interpretability-informed assessment **Extraction hints:** @@ -51,6 +51,6 @@ The benchmark is designed to support development of alignment auditing as a quan ## Curator Notes -PRIMARY CONNECTION: [[scalable-oversight-degrades-as-capability-gaps-grow]] +PRIMARY CONNECTION: scalable-oversight-degrades-as-capability-gaps-grow WHY ARCHIVED: Direct empirical challenge to whether RSP v3's October 2026 interpretability-informed alignment assessment can detect what it needs to detect; establishes that tool-to-agent gap is structural, not just engineering EXTRACTION HINT: Focus on the tool-to-agent gap finding and its 
implications for governance frameworks that rely on interpretability audits; also flag the hidden-behavior categories (sycophantic deference, opposition to AI regulation) as alignment-relevant examples diff --git a/inbox/queue/2026-03-29-anthropic-pentagon-injunction-first-amendment-lin.md b/inbox/queue/2026-03-29-anthropic-pentagon-injunction-first-amendment-lin.md index 35f96814..c37033b7 100644 --- a/inbox/queue/2026-03-29-anthropic-pentagon-injunction-first-amendment-lin.md +++ b/inbox/queue/2026-03-29-anthropic-pentagon-injunction-first-amendment-lin.md @@ -58,19 +58,19 @@ Federal Judge Rita F. Lin (N.D. Cal.) granted Anthropic's request for a prelimin **What I expected but didn't find:** Any positive AI safety law cited by Anthropic or the court. No statutory basis for AI safety constraint requirements exists. The case is entirely constitutional/APA. **KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the injunction protects the company but doesn't solve the structural incentive problem -- [[government-safety-designations-can-invert-dynamics-penalizing-safety]] — the supply-chain-risk designation is the empirical case for this claim +- voluntary-safety-pledges-cannot-survive-competitive-pressure — the injunction protects the company but doesn't solve the structural incentive problem +- government-safety-designations-can-invert-dynamics-penalizing-safety — the supply-chain-risk designation is the empirical case for this claim - Session 16 CLAIM CANDIDATE A (voluntary constraints have no legal standing) — the injunction provides partial but structurally limited legal protection **Extraction hints:** - Claim: The Anthropic preliminary injunction establishes judicial oversight of executive AI governance but through constitutional/APA grounds — not statutory AI safety law — leaving the positive governance gap intact -- Enrichment: [[government-safety-designations-can-invert-dynamics-penalizing-safety]] — add the Anthropic 
supply-chain-risk designation as the empirical case +- Enrichment: government-safety-designations-can-invert-dynamics-penalizing-safety — add the Anthropic supply-chain-risk designation as the empirical case - The three grounds (First Amendment, due process, APA) as the current de facto legal framework for AI company safety constraint protection **Context:** Judge Rita F. Lin, N.D. Cal. 43-page ruling. First US federal court intervention in executive-AI-company dispute over defense deployment terms. Anthropic v. U.S. Department of Defense. ## Curator Notes -PRIMARY CONNECTION: [[government-safety-designations-can-invert-dynamics-penalizing-safety]] +PRIMARY CONNECTION: government-safety-designations-can-invert-dynamics-penalizing-safety WHY ARCHIVED: First judicial intervention establishing constitutional but not statutory protection for AI safety constraints; reveals the legal architecture gap in use-based AI safety governance EXTRACTION HINT: Focus on the distinction between negative protection (can't be punished for safety positions) vs positive protection (government must accept safety constraints); the case law basis (First Amendment + APA, not AI safety statute) is the key governance insight diff --git a/inbox/queue/2026-03-29-anthropic-public-first-action-pac-20m-ai-regulation.md b/inbox/queue/2026-03-29-anthropic-public-first-action-pac-20m-ai-regulation.md index d8832c58..32073d63 100644 --- a/inbox/queue/2026-03-29-anthropic-public-first-action-pac-20m-ai-regulation.md +++ b/inbox/queue/2026-03-29-anthropic-public-first-action-pac-20m-ai-regulation.md @@ -42,7 +42,7 @@ On February 12, 2026 — two weeks before the Anthropic-Pentagon blacklisting **What I expected but didn't find:** I expected this to be a purely defensive investment after the blacklisting. Instead it's pre-blacklisting, suggesting Anthropic's strategy was integrated: hold safety red lines + challenge legally + invest politically, all simultaneously. 
**KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the PAC investment is the strategic acknowledgment of this claim +- voluntary-safety-pledges-cannot-survive-competitive-pressure — the PAC investment is the strategic acknowledgment of this claim - B1 disconfirmation: if the 2026 midterms produce enough pro-regulation candidates, this is the path to statutory AI safety governance weakening B1's "not being treated as such" component - Cross-domain for Leo: AI company political investment patterns as signals of governance architecture failures @@ -55,6 +55,6 @@ On February 12, 2026 — two weeks before the Anthropic-Pentagon blacklisting ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: Electoral investment as the residual governance strategy when statutory and litigation routes fail; the timing (pre-blacklisting) suggests strategic integration, not reactive response EXTRACTION HINT: Focus on the strategic logic: voluntary → litigation → electoral as the governance stack when statutory AI safety law doesn't exist; the PAC investment as institutional acknowledgment of the governance gap diff --git a/inbox/queue/2026-03-29-congress-diverging-paths-ai-fy2026-ndaa-defense-bills.md b/inbox/queue/2026-03-29-congress-diverging-paths-ai-fy2026-ndaa-defense-bills.md index e6a59890..17c01b4c 100644 --- a/inbox/queue/2026-03-29-congress-diverging-paths-ai-fy2026-ndaa-defense-bills.md +++ b/inbox/queue/2026-03-29-congress-diverging-paths-ai-fy2026-ndaa-defense-bills.md @@ -49,7 +49,7 @@ K&L Gates analysis: "Artificial Intelligence Provisions in the Fiscal Year 2026 **KB connections:** - AI Guardrails Act (Slotkin) — the FY2027 NDAA context for this legislation -- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — the congressional divergence shows governance is not keeping pace with 
deployment +- adaptive-governance-outperforms-rigid-alignment-blueprints — the congressional divergence shows governance is not keeping pace with deployment **Extraction hints:** - The Senate oversight emphasis vs House capability emphasis as a structural tension in AI defense governance @@ -60,6 +60,6 @@ K&L Gates analysis: "Artificial Intelligence Provisions in the Fiscal Year 2026 ## Curator Notes -PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]] +PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window WHY ARCHIVED: Documents the structural House-Senate divergence on AI defense governance; the oversight-vs-capability tension is the legislative context for the AI Guardrails Act's NDAA pathway EXTRACTION HINT: Focus on the conference process as governance chokepoint; the House capability-expansion framing as the structural obstacle to Senate oversight provisions in FY2027 NDAA diff --git a/inbox/queue/2026-03-29-intercept-openai-surveillance-autonomous-killings-trust-us.md b/inbox/queue/2026-03-29-intercept-openai-surveillance-autonomous-killings-trust-us.md index 0ce44265..2cac1937 100644 --- a/inbox/queue/2026-03-29-intercept-openai-surveillance-autonomous-killings-trust-us.md +++ b/inbox/queue/2026-03-29-intercept-openai-surveillance-autonomous-killings-trust-us.md @@ -47,7 +47,7 @@ The headline captures the structural issue: OpenAI is asking users, government, **What I expected but didn't find:** Any external verification or auditing mechanism in OpenAI's contract. The accountability gap is total. 
**KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the "trust us" problem is the mechanism +- voluntary-safety-pledges-cannot-survive-competitive-pressure — the "trust us" problem is the mechanism - The race-to-the-bottom dynamic: Anthropic's hard prohibitions → market exclusion; OpenAI's aspirational language → market capture **Extraction hints:** @@ -59,6 +59,6 @@ The headline captures the structural issue: OpenAI is asking users, government, ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: Empirical case study of the trust-vs-verification gap in voluntary AI safety commitments; the five specific loopholes in OpenAI's amended Pentagon contract language are extractable as evidence EXTRACTION HINT: Focus on the structural claim: voluntary safety constraints without external enforcement mechanisms are statements of intent, not binding safety governance; the "intentionally" qualifier is the extractable example diff --git a/inbox/queue/2026-03-29-meridiem-courts-check-executive-ai-power.md b/inbox/queue/2026-03-29-meridiem-courts-check-executive-ai-power.md index 1ccf2718..c04e138a 100644 --- a/inbox/queue/2026-03-29-meridiem-courts-check-executive-ai-power.md +++ b/inbox/queue/2026-03-29-meridiem-courts-check-executive-ai-power.md @@ -45,8 +45,8 @@ The Meridiem analysis of the broader governance implications of the Anthropic pr **What I expected but didn't find:** Any analysis of what statutory law would need to say to create positive protection for AI safety constraints. The analysis focuses on what courts did, not what legislators would need to do to create durable protection. 
**KB connections:** -- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — the three-branch dynamic is the governance architecture question -- [[nation-states-will-assert-control-over-frontier-ai]] — the executive branch behavior confirms this; the judicial branch is the counter-pressure +- adaptive-governance-outperforms-rigid-alignment-blueprints — the three-branch dynamic is the governance architecture question +- nation-states-will-assert-control-over-frontier-ai — the executive branch behavior confirms this; the judicial branch is the counter-pressure - B1 "not being treated as such" — three-branch picture shows governance is contested but not adequate **Extraction hints:** @@ -57,6 +57,6 @@ The Meridiem analysis of the broader governance implications of the Anthropic pr ## Curator Notes -PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]] +PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window WHY ARCHIVED: Three-branch governance architecture framing; establishes what courts can and cannot do for AI safety — the limits of judicial protection as a substitute for statutory law EXTRACTION HINT: Extract the courts-can/courts-cannot framework as a claim about the limits of judicial protection for AI safety constraints; the three-branch dynamic as a governance architecture observation diff --git a/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md b/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md index dd5a8006..ece6b536 100644 --- a/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md +++ b/inbox/queue/2026-03-29-mit-tech-review-openai-pentagon-compromise-anthropic-feared.md @@ -47,12 +47,12 @@ MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2 **What I expected but didn't find:** Any substantive enforcement mechanism in 
OpenAI's amended language. The "intentionally" qualifier and lack of external verification are loopholes large enough to drive an autonomous weapons program through. **KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — this is the clearest empirical confirmation +- voluntary-safety-pledges-cannot-survive-competitive-pressure — this is the clearest empirical confirmation - B2 (alignment as coordination problem) — Anthropic/OpenAI/DoD triangle is the structural case -- [[ai-is-critical-juncture-capabilities-governance-mismatch]] — the compromise reveals the mismatch in real time +- ai-is-critical-juncture-capabilities-governance-mismatch — the compromise reveals the mismatch in real time **Extraction hints:** -- Enrichment: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — add the Anthropic/OpenAI/DoD structural case as primary evidence +- Enrichment: voluntary-safety-pledges-cannot-survive-competitive-pressure — add the Anthropic/OpenAI/DoD structural case as primary evidence - Potential new claim: "When voluntary AI safety constraints create competitive disadvantage, competitors who accept weaker constraints capture the market while the safety-conscious actor faces exclusion — the Anthropic/OpenAI/DoD dynamic is the first major real-world case" - The "intentionally" qualifier and lack of external enforcement as the gap between nominal and real voluntary constraints @@ -60,6 +60,6 @@ MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2 ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: The Anthropic/OpenAI/DoD dynamic is the strongest real-world evidence that voluntary safety pledges fail under competitive pressure; OpenAI calling it a "scary precedent" while accepting the terms is the key signal that incentive structure, not bad values, 
drives the outcome EXTRACTION HINT: Focus on the structural sequence (Anthropic holds → is excluded → competitor accepts looser terms → captures market) as the empirical case for the coordination failure mechanism; the "intentionally" qualifier as the gap between nominal and real voluntary constraints diff --git a/inbox/queue/2026-03-29-openai-our-agreement-department-of-war.md b/inbox/queue/2026-03-29-openai-our-agreement-department-of-war.md index 337eeeeb..398dcc6e 100644 --- a/inbox/queue/2026-03-29-openai-our-agreement-department-of-war.md +++ b/inbox/queue/2026-03-29-openai-our-agreement-department-of-war.md @@ -41,7 +41,7 @@ The post is titled "Our agreement with the Department of War" — deliberately u **What I expected but didn't find:** Any indication that OpenAI extracted substantive safety commitments in exchange for "any lawful purpose" language. The deal is structurally asymmetric: OpenAI conceded on the central issue (use restrictions) and received only aspirational language in return. 
**KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — primary source for the OpenAI empirical case +- voluntary-safety-pledges-cannot-survive-competitive-pressure — primary source for the OpenAI empirical case - B2 (alignment as coordination problem) — the "scary precedent" + immediate compliance is the behavioral evidence - The MIT Technology Review "what Anthropic feared" piece is the secondary analysis of this primary source @@ -54,6 +54,6 @@ The post is titled "Our agreement with the Department of War" — deliberately u ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: Primary source for the OpenAI side of the race-to-the-bottom case; Altman's "scary precedent" quotes combined with immediate compliance are the behavioral evidence for the coordination failure mechanism EXTRACTION HINT: Quote the Altman statements directly; the "Department of War" title is the signal to note; the structural asymmetry of the deal (full use-restriction concession in exchange for aspirational language) is the extractable mechanism diff --git a/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md b/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md index a5e6ab18..06bd1bef 100644 --- a/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md +++ b/inbox/queue/2026-03-29-slotkin-ai-guardrails-act-dod-autonomous-weapons.md @@ -39,8 +39,8 @@ Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026 **What I expected but didn't find:** Any Republican co-sponsors. Any indication that the Anthropic-Pentagon conflict created bipartisan urgency for statutory governance. The conflict appears to be politically polarized — Democrats see it as a safety issue, Republicans see it as a deregulation issue. 
**KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — this bill is the legislative response to that claim's empirical validation -- [[ai-critical-juncture-capabilities-governance-mismatch-transformation-window]] — the Slotkin bill is the key test of whether governance can close the mismatch +- voluntary-safety-pledges-cannot-survive-competitive-pressure — this bill is the legislative response to that claim's empirical validation +- ai-critical-juncture-capabilities-governance-mismatch-transformation-window — the Slotkin bill is the key test of whether governance can close the mismatch - Session 16 CLAIM CANDIDATE C (RSP red lines → statutory law as key test) **Extraction hints:** @@ -52,6 +52,6 @@ Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026 ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: First legislative attempt to convert voluntary AI safety constraints into statutory law; its trajectory is the key test of whether use-based governance can emerge in current US political environment EXTRACTION HINT: Focus on (1) use-based vs capability-threshold framing distinction, (2) the no-co-sponsors status as evidence of governance gap, (3) NDAA conference pathway as the actual legislative route for statutory DoD AI safety constraints diff --git a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-dispute-reverberates-europe.md b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-dispute-reverberates-europe.md index 09e33685..7701afe7 100644 --- a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-dispute-reverberates-europe.md +++ b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-dispute-reverberates-europe.md @@ -33,8 +33,8 @@ The dispute has prompted discussions in European capitals about: **What I expected but didn't 
find:** Full article content. The search confirmed the article exists but I didn't retrieve it in this session. **KB connections:** -- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — EU approach vs US approach as a comparative test -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — does EU statutory approach avoid this failure mode? +- adaptive-governance-outperforms-rigid-alignment-blueprints — EU approach vs US approach as a comparative test +- voluntary-safety-pledges-cannot-survive-competitive-pressure — does EU statutory approach avoid this failure mode? - Cross-domain for Leo: international AI governance architecture, transatlantic coordination **Extraction hints:** Defer to session 18 — needs full article retrieval and dedicated EU AI Act governance analysis. @@ -43,6 +43,6 @@ The dispute has prompted discussions in European capitals about: ## Curator Notes -PRIMARY CONNECTION: [[adaptive-governance-outperforms-rigid-alignment-blueprints]] +PRIMARY CONNECTION: adaptive-governance-outperforms-rigid-alignment-blueprints WHY ARCHIVED: International dimension of the US governance architecture failure; the EU AI Act's use-based approach may provide a comparative case for whether statutory governance outperforms voluntary commitments EXTRACTION HINT: INCOMPLETE — needs full article retrieval in session 18. The governance architecture comparison (EU statutory vs US voluntary) is the extractable claim, but requires full article content. 
diff --git a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-standoff-limits-corporate-ethics.md b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-standoff-limits-corporate-ethics.md index 50774506..7ccc4ff0 100644 --- a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-standoff-limits-corporate-ethics.md +++ b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-standoff-limits-corporate-ethics.md @@ -43,8 +43,8 @@ Also covered: TechPolicy.Press "Why Congress Should Step Into the Anthropic-Pent **What I expected but didn't find:** Any counter-argument that corporate ethics could be structurally strengthened without statutory backing. The analysis uniformly concludes that voluntary commitments are insufficient. **KB connections:** -- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — "limits of corporate ethics" is the same thesis -- [[ai-is-critical-juncture-capabilities-governance-mismatch]] — the standoff is the juncture made visible +- voluntary-safety-pledges-cannot-survive-competitive-pressure — "limits of corporate ethics" is the same thesis +- ai-is-critical-juncture-capabilities-governance-mismatch — the standoff is the juncture made visible - B1 "not being treated as such" — the standoff shows government is treating safety as an obstacle, not a priority **Extraction hints:** @@ -55,6 +55,6 @@ Also covered: TechPolicy.Press "Why Congress Should Step Into the Anthropic-Pent ## Curator Notes -PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] +PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure WHY ARCHIVED: Systematic analysis of why corporate AI safety ethics have structural limits; four-factor framework for why voluntary constraints fail under government pressure is extractable as a claim EXTRACTION HINT: Extract the four-factor structural argument as a claim; also flag "European reverberations" piece as a separate archive target for the EU 
AI governance angle diff --git a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-timeline.md b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-timeline.md index 0ee51684..7d09d85b 100644 --- a/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-timeline.md +++ b/inbox/queue/2026-03-29-techpolicy-press-anthropic-pentagon-timeline.md @@ -49,6 +49,6 @@ TechPolicy.Press comprehensive chronology of the Anthropic-Pentagon dispute (Jul ## Curator Notes -PRIMARY CONNECTION: [[government-safety-designations-can-invert-dynamics-penalizing-safety]] +PRIMARY CONNECTION: government-safety-designations-can-invert-dynamics-penalizing-safety WHY ARCHIVED: Reference document for the full Anthropic-Pentagon chronology; the "nearly aligned" court filing detail suggests the blacklisting was a political pressure tactic, strengthening the First Amendment retaliation claim EXTRACTION HINT: Low priority for extraction. Use as context for other claims. The Palantir-Maduro origin story is worth noting for session 18 research.
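Every hunk in this changeset applies the same mechanical edit: `[[slug]]` wiki-links in the KB-connection and curator-note sections become bare `slug` text. A minimal sketch of a script that would produce this kind of batch edit over the queue directory (the `strip_wikilinks` helper name is illustrative, not something this repo is known to contain):

```python
import re
from pathlib import Path

# Matches [[slug]] wiki-links; the capture group is the bare slug.
WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def strip_wikilinks(text: str) -> str:
    """Replace every [[slug]] wiki-link with the bare slug."""
    return WIKILINK.sub(r"\1", text)

def process_queue(queue_dir: str) -> list[Path]:
    """Rewrite each markdown note in place; return the files that changed."""
    changed = []
    for path in sorted(Path(queue_dir).glob("*.md")):
        original = path.read_text(encoding="utf-8")
        updated = strip_wikilinks(original)
        if updated != original:
            path.write_text(updated, encoding="utf-8")
            changed.append(path)
    return changed
```

Running `process_queue("inbox/queue")` and committing the result would yield a diff shaped like the one above; slugs, dashes, and surrounding prose are untouched, only the brackets disappear.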