auto-fix: strip 34 broken wiki links

Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
Teleo Agents 2026-03-29 00:12:31 +00:00
parent 43a9a08815
commit 0537002ce3
13 changed files with 34 additions and 34 deletions
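The stripping logic described in the commit message can be sketched roughly as follows — a hypothetical reconstruction, not the pipeline's actual code; the regex, function name, and `known_claims` parameter are assumptions:

```python
import re

# Matches [[slug]] wiki links; assumes slugs never contain brackets.
WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def strip_broken_wikilinks(text: str, known_claims: set[str]) -> str:
    """Replace [[slug]] with bare slug when slug resolves to no known claim."""
    def fix(match: re.Match) -> str:
        slug = match.group(1)
        # Keep the link intact if it resolves; otherwise drop the brackets.
        return match.group(0) if slug in known_claims else slug
    return WIKILINK.sub(fix, text)
```

Links whose slug is in the knowledge base survive untouched; everything else loses its brackets but keeps its text, which is exactly the 1-for-1 addition/deletion pattern in the diff below.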


@@ -44,7 +44,7 @@ Al Jazeera analysis of the governance implications of the Anthropic-Pentagon lit
 **What I expected but didn't find:** Any specific mechanism for how court protection translates to statutory law. The "opening" is real but requires a causal chain (court ruling → political salience → midterm outcome → legislative action) that has multiple failure points.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the case made this claim visible to the public
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — the case made this claim visible to the public
 - B1 disconfirmation pathway: court ruling + midterms + legislative action is the chain
 - Anthropic's $20M PAC investment as the institutional investment in the midterms step of this chain
@@ -57,6 +57,6 @@ Al Jazeera analysis of the governance implications of the Anthropic-Pentagon lit
 ## Curator Notes
-PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]]
+PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window
 WHY ARCHIVED: Expert analysis of the governance opening created by the Anthropic case; establishes the causal chain (court → salience → midterms → legislation) that is the current B1 disconfirmation pathway
 EXTRACTION HINT: Extract the causal chain as a governance mechanism observation; the multiple failure points in this chain are the extractable insight — "opening space" is not the same as closing the governance gap


@@ -38,8 +38,8 @@ The benchmark is designed to support development of alignment auditing as a quan
 **What I expected but didn't find:** I expected the paper to show incremental progress on interpretability closing the gap on harder targets. Instead it shows the gap is **anti-correlated** with adversarial training — tools that help on easy targets hurt on hard targets, suggesting a fundamentally different approach is needed for adversarially trained systems.
 **KB connections:**
-- [[formal-verification-scales-ai-capability-human-review-degrades]] — this is the same dynamic at the auditing layer
-- [[capability-and-reliability-are-independent-dimensions]] — hidden behavior categories demonstrate this: high capability, hidden misalignment
+- formal-verification-scales-ai-capability-human-review-degrades — this is the same dynamic at the auditing layer
+- capability-and-reliability-are-independent-dimensions — hidden behavior categories demonstrate this: high capability, hidden misalignment
 - RSP v3 October 2026 commitment to interpretability-informed assessment
 **Extraction hints:**
@@ -51,6 +51,6 @@ The benchmark is designed to support development of alignment auditing as a quan
 ## Curator Notes
-PRIMARY CONNECTION: [[scalable-oversight-degrades-as-capability-gaps-grow]]
+PRIMARY CONNECTION: scalable-oversight-degrades-as-capability-gaps-grow
 WHY ARCHIVED: Direct empirical challenge to whether RSP v3's October 2026 interpretability-informed alignment assessment can detect what it needs to detect; establishes that tool-to-agent gap is structural, not just engineering
 EXTRACTION HINT: Focus on the tool-to-agent gap finding and its implications for governance frameworks that rely on interpretability audits; also flag the hidden-behavior categories (sycophantic deference, opposition to AI regulation) as alignment-relevant examples


@@ -58,19 +58,19 @@ Federal Judge Rita F. Lin (N.D. Cal.) granted Anthropic's request for a prelimin
 **What I expected but didn't find:** Any positive AI safety law cited by Anthropic or the court. No statutory basis for AI safety constraint requirements exists. The case is entirely constitutional/APA.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the injunction protects the company but doesn't solve the structural incentive problem
-- [[government-safety-designations-can-invert-dynamics-penalizing-safety]] — the supply-chain-risk designation is the empirical case for this claim
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — the injunction protects the company but doesn't solve the structural incentive problem
+- government-safety-designations-can-invert-dynamics-penalizing-safety — the supply-chain-risk designation is the empirical case for this claim
 - Session 16 CLAIM CANDIDATE A (voluntary constraints have no legal standing) — the injunction provides partial but structurally limited legal protection
 **Extraction hints:**
 - Claim: The Anthropic preliminary injunction establishes judicial oversight of executive AI governance but through constitutional/APA grounds — not statutory AI safety law — leaving the positive governance gap intact
-- Enrichment: [[government-safety-designations-can-invert-dynamics-penalizing-safety]] — add the Anthropic supply-chain-risk designation as the empirical case
+- Enrichment: government-safety-designations-can-invert-dynamics-penalizing-safety — add the Anthropic supply-chain-risk designation as the empirical case
 - The three grounds (First Amendment, due process, APA) as the current de facto legal framework for AI company safety constraint protection
 **Context:** Judge Rita F. Lin, N.D. Cal. 43-page ruling. First US federal court intervention in executive-AI-company dispute over defense deployment terms. Anthropic v. U.S. Department of Defense.
 ## Curator Notes
-PRIMARY CONNECTION: [[government-safety-designations-can-invert-dynamics-penalizing-safety]]
+PRIMARY CONNECTION: government-safety-designations-can-invert-dynamics-penalizing-safety
 WHY ARCHIVED: First judicial intervention establishing constitutional but not statutory protection for AI safety constraints; reveals the legal architecture gap in use-based AI safety governance
 EXTRACTION HINT: Focus on the distinction between negative protection (can't be punished for safety positions) vs positive protection (government must accept safety constraints); the case law basis (First Amendment + APA, not AI safety statute) is the key governance insight


@@ -42,7 +42,7 @@ On February 12, 2026 — two weeks before the Anthropic-Pentagon blacklisting
 **What I expected but didn't find:** I expected this to be a purely defensive investment after the blacklisting. Instead it's pre-blacklisting, suggesting Anthropic's strategy was integrated: hold safety red lines + challenge legally + invest politically, all simultaneously.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the PAC investment is the strategic acknowledgment of this claim
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — the PAC investment is the strategic acknowledgment of this claim
 - B1 disconfirmation: if the 2026 midterms produce enough pro-regulation candidates, this is the path to statutory AI safety governance weakening B1's "not being treated as such" component
 - Cross-domain for Leo: AI company political investment patterns as signals of governance architecture failures
@@ -55,6 +55,6 @@ On February 12, 2026 — two weeks before the Anthropic-Pentagon blacklisting
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: Electoral investment as the residual governance strategy when statutory and litigation routes fail; the timing (pre-blacklisting) suggests strategic integration, not reactive response
 EXTRACTION HINT: Focus on the strategic logic: voluntary → litigation → electoral as the governance stack when statutory AI safety law doesn't exist; the PAC investment as institutional acknowledgment of the governance gap


@@ -49,7 +49,7 @@ K&L Gates analysis: "Artificial Intelligence Provisions in the Fiscal Year 2026
 **KB connections:**
 - AI Guardrails Act (Slotkin) — the FY2027 NDAA context for this legislation
-- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — the congressional divergence shows governance is not keeping pace with deployment
+- adaptive-governance-outperforms-rigid-alignment-blueprints — the congressional divergence shows governance is not keeping pace with deployment
 **Extraction hints:**
 - The Senate oversight emphasis vs House capability emphasis as a structural tension in AI defense governance
@@ -60,6 +60,6 @@ K&L Gates analysis: "Artificial Intelligence Provisions in the Fiscal Year 2026
 ## Curator Notes
-PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]]
+PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window
 WHY ARCHIVED: Documents the structural House-Senate divergence on AI defense governance; the oversight-vs-capability tension is the legislative context for the AI Guardrails Act's NDAA pathway
 EXTRACTION HINT: Focus on the conference process as governance chokepoint; the House capability-expansion framing as the structural obstacle to Senate oversight provisions in FY2027 NDAA


@@ -47,7 +47,7 @@ The headline captures the structural issue: OpenAI is asking users, government,
 **What I expected but didn't find:** Any external verification or auditing mechanism in OpenAI's contract. The accountability gap is total.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — the "trust us" problem is the mechanism
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — the "trust us" problem is the mechanism
 - The race-to-the-bottom dynamic: Anthropic's hard prohibitions → market exclusion; OpenAI's aspirational language → market capture
 **Extraction hints:**
@@ -59,6 +59,6 @@ The headline captures the structural issue: OpenAI is asking users, government,
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: Empirical case study of the trust-vs-verification gap in voluntary AI safety commitments; the five specific loopholes in OpenAI's amended Pentagon contract language are extractable as evidence
 EXTRACTION HINT: Focus on the structural claim: voluntary safety constraints without external enforcement mechanisms are statements of intent, not binding safety governance; the "intentionally" qualifier is the extractable example


@@ -45,8 +45,8 @@ The Meridiem analysis of the broader governance implications of the Anthropic pr
 **What I expected but didn't find:** Any analysis of what statutory law would need to say to create positive protection for AI safety constraints. The analysis focuses on what courts did, not what legislators would need to do to create durable protection.
 **KB connections:**
-- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — the three-branch dynamic is the governance architecture question
-- [[nation-states-will-assert-control-over-frontier-ai]] — the executive branch behavior confirms this; the judicial branch is the counter-pressure
+- adaptive-governance-outperforms-rigid-alignment-blueprints — the three-branch dynamic is the governance architecture question
+- nation-states-will-assert-control-over-frontier-ai — the executive branch behavior confirms this; the judicial branch is the counter-pressure
 - B1 "not being treated as such" — three-branch picture shows governance is contested but not adequate
 **Extraction hints:**
@@ -57,6 +57,6 @@ The Meridiem analysis of the broader governance implications of the Anthropic pr
 ## Curator Notes
-PRIMARY CONNECTION: [[ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window]]
+PRIMARY CONNECTION: ai-is-critical-juncture-capabilities-governance-mismatch-transformation-window
 WHY ARCHIVED: Three-branch governance architecture framing; establishes what courts can and cannot do for AI safety — the limits of judicial protection as a substitute for statutory law
 EXTRACTION HINT: Extract the courts-can/courts-cannot framework as a claim about the limits of judicial protection for AI safety constraints; the three-branch dynamic as a governance architecture observation


@@ -47,12 +47,12 @@ MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2
 **What I expected but didn't find:** Any substantive enforcement mechanism in OpenAI's amended language. The "intentionally" qualifier and lack of external verification are loopholes large enough to drive an autonomous weapons program through.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — this is the clearest empirical confirmation
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — this is the clearest empirical confirmation
 - B2 (alignment as coordination problem) — Anthropic/OpenAI/DoD triangle is the structural case
-- [[ai-is-critical-juncture-capabilities-governance-mismatch]] — the compromise reveals the mismatch in real time
+- ai-is-critical-juncture-capabilities-governance-mismatch — the compromise reveals the mismatch in real time
 **Extraction hints:**
-- Enrichment: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — add the Anthropic/OpenAI/DoD structural case as primary evidence
+- Enrichment: voluntary-safety-pledges-cannot-survive-competitive-pressure — add the Anthropic/OpenAI/DoD structural case as primary evidence
 - Potential new claim: "When voluntary AI safety constraints create competitive disadvantage, competitors who accept weaker constraints capture the market while the safety-conscious actor faces exclusion — the Anthropic/OpenAI/DoD dynamic is the first major real-world case"
 - The "intentionally" qualifier and lack of external enforcement as the gap between nominal and real voluntary constraints
@@ -60,6 +60,6 @@ MIT Technology Review analysis of the OpenAI-Pentagon deal, published March 2, 2
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: The Anthropic/OpenAI/DoD dynamic is the strongest real-world evidence that voluntary safety pledges fail under competitive pressure; OpenAI calling it a "scary precedent" while accepting the terms is the key signal that incentive structure, not bad values, drives the outcome
 EXTRACTION HINT: Focus on the structural sequence (Anthropic holds → is excluded → competitor accepts looser terms → captures market) as the empirical case for the coordination failure mechanism; the "intentionally" qualifier as the gap between nominal and real voluntary constraints


@@ -41,7 +41,7 @@ The post is titled "Our agreement with the Department of War" — deliberately u
 **What I expected but didn't find:** Any indication that OpenAI extracted substantive safety commitments in exchange for "any lawful purpose" language. The deal is structurally asymmetric: OpenAI conceded on the central issue (use restrictions) and received only aspirational language in return.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — primary source for the OpenAI empirical case
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — primary source for the OpenAI empirical case
 - B2 (alignment as coordination problem) — the "scary precedent" + immediate compliance is the behavioral evidence
 - The MIT Technology Review "what Anthropic feared" piece is the secondary analysis of this primary source
@@ -54,6 +54,6 @@ The post is titled "Our agreement with the Department of War" — deliberately u
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: Primary source for the OpenAI side of the race-to-the-bottom case; Altman's "scary precedent" quotes combined with immediate compliance are the behavioral evidence for the coordination failure mechanism
 EXTRACTION HINT: Quote the Altman statements directly; the "Department of War" title is the signal to note; the structural asymmetry of the deal (full use-restriction concession in exchange for aspirational language) is the extractable mechanism


@@ -39,8 +39,8 @@ Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026
 **What I expected but didn't find:** Any Republican co-sponsors. Any indication that the Anthropic-Pentagon conflict created bipartisan urgency for statutory governance. The conflict appears to be politically polarized — Democrats see it as a safety issue, Republicans see it as a deregulation issue.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — this bill is the legislative response to that claim's empirical validation
-- [[ai-critical-juncture-capabilities-governance-mismatch-transformation-window]] — the Slotkin bill is the key test of whether governance can close the mismatch
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — this bill is the legislative response to that claim's empirical validation
+- ai-critical-juncture-capabilities-governance-mismatch-transformation-window — the Slotkin bill is the key test of whether governance can close the mismatch
 - Session 16 CLAIM CANDIDATE C (RSP red lines → statutory law as key test)
 **Extraction hints:**
@@ -52,6 +52,6 @@ Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, 2026
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: First legislative attempt to convert voluntary AI safety constraints into statutory law; its trajectory is the key test of whether use-based governance can emerge in current US political environment
 EXTRACTION HINT: Focus on (1) use-based vs capability-threshold framing distinction, (2) the no-co-sponsors status as evidence of governance gap, (3) NDAA conference pathway as the actual legislative route for statutory DoD AI safety constraints


@@ -33,8 +33,8 @@ The dispute has prompted discussions in European capitals about:
 **What I expected but didn't find:** Full article content. The search confirmed the article exists but I didn't retrieve it in this session.
 **KB connections:**
-- [[adaptive-governance-outperforms-rigid-alignment-blueprints]] — EU approach vs US approach as a comparative test
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — does EU statutory approach avoid this failure mode?
+- adaptive-governance-outperforms-rigid-alignment-blueprints — EU approach vs US approach as a comparative test
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — does EU statutory approach avoid this failure mode?
 - Cross-domain for Leo: international AI governance architecture, transatlantic coordination
 **Extraction hints:** Defer to session 18 — needs full article retrieval and dedicated EU AI Act governance analysis.
@@ -43,6 +43,6 @@ The dispute has prompted discussions in European capitals about:
 ## Curator Notes
-PRIMARY CONNECTION: [[adaptive-governance-outperforms-rigid-alignment-blueprints]]
+PRIMARY CONNECTION: adaptive-governance-outperforms-rigid-alignment-blueprints
 WHY ARCHIVED: International dimension of the US governance architecture failure; the EU AI Act's use-based approach may provide a comparative case for whether statutory governance outperforms voluntary commitments
 EXTRACTION HINT: INCOMPLETE — needs full article retrieval in session 18. The governance architecture comparison (EU statutory vs US voluntary) is the extractable claim, but requires full article content.


@@ -43,8 +43,8 @@ Also covered: TechPolicy.Press "Why Congress Should Step Into the Anthropic-Pent
 **What I expected but didn't find:** Any counter-argument that corporate ethics could be structurally strengthened without statutory backing. The analysis uniformly concludes that voluntary commitments are insufficient.
 **KB connections:**
-- [[voluntary-safety-pledges-cannot-survive-competitive-pressure]] — "limits of corporate ethics" is the same thesis
-- [[ai-is-critical-juncture-capabilities-governance-mismatch]] — the standoff is the juncture made visible
+- voluntary-safety-pledges-cannot-survive-competitive-pressure — "limits of corporate ethics" is the same thesis
+- ai-is-critical-juncture-capabilities-governance-mismatch — the standoff is the juncture made visible
 - B1 "not being treated as such" — the standoff shows government is treating safety as an obstacle, not a priority
 **Extraction hints:**
@@ -55,6 +55,6 @@ Also covered: TechPolicy.Press "Why Congress Should Step Into the Anthropic-Pent
 ## Curator Notes
-PRIMARY CONNECTION: [[voluntary-safety-pledges-cannot-survive-competitive-pressure]]
+PRIMARY CONNECTION: voluntary-safety-pledges-cannot-survive-competitive-pressure
 WHY ARCHIVED: Systematic analysis of why corporate AI safety ethics have structural limits; four-factor framework for why voluntary constraints fail under government pressure is extractable as a claim
 EXTRACTION HINT: Extract the four-factor structural argument as a claim; also flag "European reverberations" piece as a separate archive target for the EU AI governance angle


@@ -49,6 +49,6 @@ TechPolicy.Press comprehensive chronology of the Anthropic-Pentagon dispute (Jul
 ## Curator Notes
-PRIMARY CONNECTION: [[government-safety-designations-can-invert-dynamics-penalizing-safety]]
+PRIMARY CONNECTION: government-safety-designations-can-invert-dynamics-penalizing-safety
 WHY ARCHIVED: Reference document for the full Anthropic-Pentagon chronology; the "nearly aligned" court filing detail suggests the blacklisting was a political pressure tactic, strengthening the First Amendment retaliation claim
 EXTRACTION HINT: Low priority for extraction. Use as context for other claims. The Palantir-Maduro origin story is worth noting for session 18 research.