auto-fix: strip 7 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
parent fd8b935473
commit b0871bc831
6 changed files with 7 additions and 7 deletions
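The fixer's own source is not part of this commit, so the snippet below is only a minimal sketch of the behavior the description names. It assumes a link target is the literal text between the double brackets and that a link "resolves" when that text exactly matches an existing claim title; the names `WIKILINK`, `strip_broken_wikilinks`, and `known_claims` are illustrative, not the pipeline's actual API.

```python
import re

# Matches an Obsidian-style wiki link: [[claim title]]
WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")


def strip_broken_wikilinks(text: str, known_claims: set[str]) -> str:
    """Unwrap [[...]] links whose target is not an existing claim.

    Links that resolve are left untouched; broken ones keep their text
    but lose the brackets. Exact-title matching against `known_claims`
    is an assumption made for this sketch, not the pipeline's documented
    resolution rule.
    """
    def replace(match: re.Match) -> str:
        target = match.group(1)
        return match.group(0) if target in known_claims else target

    return WIKILINK.sub(replace, text)


# Example: an unresolved link loses its brackets, a resolved one keeps them.
# strip_broken_wikilinks("see [[missing claim]]", {"other claim"})
#   -> "see missing claim"
```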
@@ -54,8 +54,8 @@ Georgia Tech analysis (March 11, 2026): "the tech doesn't lessen the need for hu
 - [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — this source shows voluntary alignment constraints being penalized by government coercive instruments, not competitive pressure from other labs

 **Extraction hints:**
-1. **ENRICHMENT CANDIDATE:** [[voluntary safety pledges cannot survive competitive pressure]] — add government coercive instrument as a second mechanism for voluntary constraint failure, distinct from competitive pressure
-2. **ENRICHMENT CANDIDATE:** [[government designation of safety-conscious AI labs as supply chain risks]] — add Amodei's formal statement as primary evidence of what the supply chain designation was targeting
+1. **ENRICHMENT CANDIDATE:** voluntary safety pledges cannot survive competitive pressure — add government coercive instrument as a second mechanism for voluntary constraint failure, distinct from competitive pressure
+2. **ENRICHMENT CANDIDATE:** government designation of safety-conscious AI labs as supply chain risks — add Amodei's formal statement as primary evidence of what the supply chain designation was targeting

 ## Curator Notes (structured handoff for extractor)
@@ -50,7 +50,7 @@ intake_tier: research-task
 **Extraction hints:**
 1. **NEW CLAIM CANDIDATE:** "The Anthropic supply chain designation followed the Maduro capture operation in which Claude-Maven was used, revealing the designation as a retroactive coercive instrument to compel removal of alignment constraints rather than a prospective security enforcement measure" — strengthens Mode 2 with causal specificity
-2. **ENRICHMENT CANDIDATE:** [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic]] — add the Maduro → designation → Iran → DC Circuit "active military conflict" causal chain as evidence
+2. **ENRICHMENT CANDIDATE:** government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic — add the Maduro → designation → Iran → DC Circuit "active military conflict" causal chain as evidence

 ## Curator Notes (structured handoff for extractor)
@@ -59,7 +59,7 @@ By signing NVIDIA Nemotron and Reflection AI (pre-model, based on open-weight co
 **KB connections:**
 - [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — extends to: no research group is developing governance architecture that functions without centralized accountability
-- [[voluntary safety pledges cannot survive competitive pressure]] — open-weight deployment eliminates the entity that would make voluntary safety pledges
+- voluntary safety pledges cannot survive competitive pressure — open-weight deployment eliminates the entity that would make voluntary safety pledges
 - [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — extends: open-weight deployment eliminates even the structure in which an alignment tax could exist

 **Extraction hints:**
@@ -47,7 +47,7 @@ The danger of Mode 6 is not that it requires extraordinary conditions — it req
 **KB connections:**
 - [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — Mode 6 is what fills that window during emergencies
-- [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function]] — Mode 6 is the judicial expression of this claim
+- nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function — Mode 6 is the judicial expression of this claim

 **Extraction hints:**
 1. **NO EXTRACTION YET** — Mode 6 at experimental confidence (one case). Second case search negative. Flag for future sessions: if DC Circuit rules on May 19 with continued emergency rationale reliance, update Mode 6 confidence upward (now two data points — April 8 stay denial + May 19 ruling if consistent).
@@ -44,7 +44,7 @@ Anthropic has Claude (widely deployed, AISI-evaluated, highest benchmark perform
 - [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — the institutional gap now extends to procurement

 **Extraction hints:**
-1. **ENRICHMENT CANDIDATE:** The existing [[government designation of safety-conscious AI labs as supply chain risks]] claim — Reflection AI's deal is the positive-form corollary: government endorsement of non-safety-constrained labs.
+1. **ENRICHMENT CANDIDATE:** The existing government designation of safety-conscious AI labs as supply chain risks claim — Reflection AI's deal is the positive-form corollary: government endorsement of non-safety-constrained labs.
 2. **NEW CLAIM CANDIDATE (lower priority):** "DoD pre-committed to open-weight AI deployment at IL7 classification before any capability evaluation by signing Reflection AI (zero released models), revealing that procurement decisions are selecting governance architecture rather than assessed capabilities."

 ## Curator Notes (structured handoff for extractor)
@@ -49,7 +49,7 @@ This is a form of "compliance theater at the executive branch level." The EO cre
 **What I expected but didn't find:** I expected the EO to include specific language about Anthropic's status (re-admitting them to federal procurement). The pre-release review framing doesn't address the supply chain designation at all — it's a new regulatory instrument on top of the existing designation, not a replacement for it. B1 disconfirmation target (EO with red lines preserved) remains NOT DISCONFIRMED.

 **KB connections:**
-- [[voluntary safety pledges cannot survive competitive pressure]] — the EO is the government version of this: the review mechanism is designed around the politically salient Mythos cybersecurity crisis, not the structural alignment problems the KB has documented
+- voluntary safety pledges cannot survive competitive pressure — the EO is the government version of this: the review mechanism is designed around the politically salient Mythos cybersecurity crisis, not the structural alignment problems the KB has documented
 - [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — the EO is an example of governance responding to the wrong signal
 - EU AI Act compliance theater (Session 39-40 archives) — same structural pattern at federal executive level