auto-fix: strip 41 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
parent 518c2b0764
commit edd8330e89

11 changed files with 21 additions and 21 deletions
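For readers who want the mechanics: below is a minimal sketch of what a link-stripping pass like this could look like, in Python. The `claims/` and `archive/` directory names and the `strip_broken_links` helper are illustrative assumptions; the pipeline's actual auto-fixer code is not part of this commit.

```python
import re
from pathlib import Path

# Bare [[target]] links; aliased [[target|label]] links are excluded.
WIKILINK = re.compile(r"\[\[([^\[\]|]+)\]\]")

def strip_broken_links(text: str, known_claims: set[str]) -> str:
    """Unwrap [[target]] to plain text when target is not a known claim."""
    def fix(match: re.Match) -> str:
        target = match.group(1)
        # Keep the brackets only if the link resolves in the knowledge base.
        return match.group(0) if target in known_claims else target
    return WIKILINK.sub(fix, text)

# Assumed layout: one markdown file per claim under claims/.
known = {p.stem for p in Path("claims").glob("*.md")}
for note in Path("archive").rglob("*.md"):
    original = note.read_text()
    fixed = strip_broken_links(original, known)
    if fixed != original:  # write only the files that actually change
        note.write_text(fixed)
```

Aliased links of the form [[target|label]] would need separate handling; every link in this diff is a bare [[target]], so the simple pattern covers these cases.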
@@ -156,7 +156,7 @@ The institutional picture is **contested**, not just inadequate. That's actually
### Branching Points (one finding opened multiple directions)

- **The Anthropic-Pentagon conflict spawns two KB contribution directions**:
-  - Direction A (clean claim, highest priority): Voluntary corporate safety constraints have no legal standing — write as a KB claim with the Anthropic case as primary evidence. Connect to [[institutional-gap]] and [[voluntary-pledges-fail-under-competition]].
+  - Direction A (clean claim, highest priority): Voluntary corporate safety constraints have no legal standing — write as a KB claim with the Anthropic case as primary evidence. Connect to institutional-gap and voluntary-pledges-fail-under-competition.
  - Direction B (richer but harder): The Anthropic/OpenAI divergence as race-to-the-bottom evidence — this directly supports B2 (alignment as coordination problem). Write as a claim connecting the empirical case to the theoretical frame. Direction A first — it's a cleaner KB contribution.

- **The interpretability-governance gap is emerging**: Direction A: Is the October 2026 interpretability-informed alignment assessment a new form of benchmark-reality gap? The research is advancing, but the governance application is undefined. This would extend the session 13-15 benchmark-reality work from capability evaluation to interpretability evaluation. Direction B: Focus on the Constitutional Classifiers as a genuine technical advance — separate from the governance question. Direction A first — the governance connection is the more novel contribution.
@@ -36,13 +36,13 @@ The AI strategy memo is described as reflecting the Trump administration's broad
**What I expected but didn't find:** Any DoD legal or technical analysis justifying why autonomous weapons and mass surveillance prohibitions are incompatible with lawful use (i.e., an argument that these prohibitions are safety-unnecessary, not just politically inconvenient). The demand appears to be policy/ideological, not technical.

-**KB connections:** [[voluntary-pledges-fail-under-competition]] — this is the coercive mechanism; [[government-risk-designation-inverts-regulation]] — the supply chain risk designation is the inverted regulatory tool; [[coordination-problem-reframe]] — the DoD memo creates a coordination environment where safety-conscious actors are penalized.
+**KB connections:** voluntary-pledges-fail-under-competition — this is the coercive mechanism; government-risk-designation-inverts-regulation — the supply chain risk designation is the inverted regulatory tool; coordination-problem-reframe — the DoD memo creates a coordination environment where safety-conscious actors are penalized.

**Extraction hints:** The DoD memo is a policy artifact that could ground a claim about government-AI safety governance inversion — not just "government isn't treating alignment as the greatest problem" but "government is actively establishing policy frameworks that punish AI companies for safety constraints." The January 2026 Hegseth AI strategy memo is the policy document to cite.

**Context:** The Hegseth memo came one month after the Trump inauguration. It reflects the new administration's approach to AI: maximize capability deployment for national security uses, treat private company safety constraints as obstacles rather than appropriate governance. This is a sharp break from the Biden-era executive order on AI safety (October 2023) which encouraged responsible development.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[government-risk-designation-inverts-regulation]] — the Hegseth memo is the precipitating policy; [[voluntary-pledges-fail-under-competition]] — coercive mechanism made explicit
+PRIMARY CONNECTION: government-risk-designation-inverts-regulation — the Hegseth memo is the precipitating policy; voluntary-pledges-fail-under-competition — coercive mechanism made explicit
WHY ARCHIVED: The memo is the policy document establishing that US government will actively penalize safety constraints in AI contracts — the clearest single document for B1's institutional inadequacy claim
EXTRACTION HINT: The claim should be specific: the Hegseth "any lawful use" memo represents US government policy that AI safety constraints in deployment contracts are improper limitations on government authority — establishing active institutional opposition, not just neglect.
@@ -40,13 +40,13 @@ The Intercept noted: OpenAI CEO Sam Altman stated publicly that users "are going
**What I expected but didn't find:** Any OpenAI public statement arguing that their approach is genuinely safer than outright bans, or any technical/governance argument for why "any lawful purpose" with aspirational limits is preferable to hard contractual prohibitions. The stated rationale is implicitly competitive, not principled.

-**KB connections:** [[voluntary-pledges-fail-under-competition]] — this is the empirical case study. [[coordination-problem-reframe]] — the Anthropic/OpenAI divergence illustrates multipolar failure. [[institutional-gap]] — no external mechanism enforces either company's commitments.
+**KB connections:** voluntary-pledges-fail-under-competition — this is the empirical case study. coordination-problem-reframe — the Anthropic/OpenAI divergence illustrates multipolar failure. institutional-gap — no external mechanism enforces either company's commitments.

**Extraction hints:** Two claim candidates: (1) The OpenAI-Anthropic-Pentagon sequence as direct evidence that voluntary safety governance is self-undermining under competitive dynamics — produces a race to looser constraints, not a race to higher safety. (2) The "trust us" governance model (Altman quote) as the logical endpoint of voluntary safety governance without legal standing — safety depends entirely on self-attestation with no external verification.

**Context:** OpenAI announced its deal on February 28 — the same day as Anthropic's blacklisting. The timing is not coincidental; multiple sources describe OpenAI as moving quickly to capture the DoD market vacated by Anthropic. This is competitive dynamics in AI safety governance documented in real time.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[voluntary-pledges-fail-under-competition]] — direct empirical evidence for the mechanism this claim describes
+PRIMARY CONNECTION: voluntary-pledges-fail-under-competition — direct empirical evidence for the mechanism this claim describes
WHY ARCHIVED: The explicit competitive timing (hours after Anthropic blacklisting) makes the race-to-the-bottom dynamic unusually visible; the Altman "trust us" quote captures the endpoint of voluntary governance
EXTRACTION HINT: The contrast claim — not just that OpenAI accepted looser terms, but that the market mechanism rewarded them for doing so — is the core contribution. Connect to the B2 coordination failure thesis.
@@ -47,13 +47,13 @@ GovAI's systematic analysis of what changed between RSP v2.2 and RSP v3.0 (effec
**What I expected but didn't find:** Any Anthropic-published rationale for the specific removals. RSP v3.0 itself presumably contains language about scope, but GovAI's analysis suggests that language doesn't explain why these domains were removed from binding commitments specifically.

-**KB connections:** [[voluntary-pledges-fail-under-competition]] — the pause removal is direct evidence; [[institutional-gap]] — the binding→recommendation demotion widens the gap; [[verification-degrades-faster-than-capability-grows]] — the interpretability commitment is the proposed countermeasure.
+**KB connections:** voluntary-pledges-fail-under-competition — the pause removal is direct evidence; institutional-gap — the binding→recommendation demotion widens the gap; verification-degrades-faster-than-capability-grows — the interpretability commitment is the proposed countermeasure.

**Extraction hints:** The most useful claim from this source is about the transparency-vs-binding tradeoff in RSP v3.0: transparency infrastructure (roadmap, reports) increased while binding commitments decreased. This is a specific governance architecture pattern — public accountability without enforcement. Whether transparency without binding constraints produces genuine accountability is an empirical question the KB could track.

**Context:** GovAI is the leading academic organization analyzing frontier AI safety governance. Their analysis is authoritative and widely cited in the AI safety community. The "reflections" portion of their analysis represents considered institutional views, not just factual reporting.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[voluntary-pledges-fail-under-competition]] — pause removal is the clearest evidence; transparency-binding tradeoff is the new governance pattern to track
+PRIMARY CONNECTION: voluntary-pledges-fail-under-competition — pause removal is the clearest evidence; transparency-binding tradeoff is the new governance pattern to track
WHY ARCHIVED: GovAI's analysis is the authoritative RSP v3.0 change log; the cyber/CBRN removal without explanation is the key unexplained governance fact
EXTRACTION HINT: Focus on the transparency-without-binding-constraints pattern as a new KB claim — RSP v3.0 increases public accountability infrastructure (roadmaps, reports) while decreasing binding safety obligations, making it a test case for whether transparency without enforcement produces safety outcomes.
@@ -37,13 +37,13 @@ These are the exact three prohibitions Anthropic maintained in its DoD contract.
**What I expected but didn't find:** Any Republican co-sponsorship or bipartisan response. The absence of Republican engagement suggests these prohibitions are politically contested (seen as constraints on military capabilities rather than safety requirements), not just lacking political attention.

-**KB connections:** [[institutional-gap]], [[voluntary-pledges-fail-under-competition]]. The Axios piece explicitly names the gap that the Slotkin bill is trying to fill.
+**KB connections:** institutional-gap, voluntary-pledges-fail-under-competition. The Axios piece explicitly names the gap that the Slotkin bill is trying to fill.

**Extraction hints:** This source is primarily supporting evidence for the Slotkin AI Guardrails Act archive. The key contribution is confirming the three-category gap (autonomous weapons, domestic surveillance, nuclear AI) in existing US statutory law.

**Context:** The March 2 Axios piece is the earliest documentation of the legislative response. The Slotkin bill (March 17) is the formal embodiment of what Axios described here. Archive together as a sequence.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[institutional-gap]] — confirms that the three core prohibitions Anthropic maintained have no statutory backing in US law
+PRIMARY CONNECTION: institutional-gap — confirms that the three core prohibitions Anthropic maintained have no statutory backing in US law
WHY ARCHIVED: Documents the legislative response timeline and confirms the specific statutory gaps; useful context for the Slotkin bill archive
EXTRACTION HINT: Use primarily as supporting evidence for the Slotkin AI Guardrails Act claim. The key observation: Anthropic was privately filling a public governance gap — private safety contracts were substituting for absent statute.
@@ -34,13 +34,13 @@ Oxford University experts commented on the Pentagon-Anthropic dispute, identifyi
**What I expected but didn't find:** Any Oxford commentary specifically on the AI safety case for outright bans vs. aspirational constraints — the technical debate about whether "any lawful purpose" is more dangerous than contractual prohibitions. The expert commentary focuses on governance structure, not technical capability.

-**KB connections:** [[institutional-gap]], [[government-risk-designation-inverts-regulation]], [[coordination-problem-reframe]]. The "companies define safety boundaries" framing connects directly to the private governance architecture described in [[voluntary-pledges-fail-under-competition]].
+**KB connections:** institutional-gap, government-risk-designation-inverts-regulation, coordination-problem-reframe. The "companies define safety boundaries" framing connects directly to the private governance architecture described in voluntary-pledges-fail-under-competition.

**Extraction hints:** The inflection point framing — "whether companies or governments will define safety boundaries" — could anchor a claim about the governance authority gap: in the absence of statutory AI safety requirements, safety governance defaults to private actors, who face competitive pressure to weaken constraints. This is a structural governance claim independent of the specific Anthropic case.

**Context:** Oxford University has significant AI governance research presence (Future of Humanity Institute legacy, various AI ethics programs). The expert comment framing is authoritative institutional analysis, not advocacy.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[institutional-gap]] — Oxford explicitly names the gap as "institutional failure to establish protective frameworks proactively"
+PRIMARY CONNECTION: institutional-gap — Oxford explicitly names the gap as "institutional failure to establish protective frameworks proactively"
WHY ARCHIVED: Provides institutional academic framing for the private-vs-government governance authority question; the "70 million cameras" quantification is a concrete risk proxy
EXTRACTION HINT: The claim about governance authority defaulting to private actors (companies defining safety boundaries) in the absence of statutory requirements is the most generalizable contribution — it extends beyond the Anthropic case to the structural AI governance landscape.
@@ -38,13 +38,13 @@ The Intercept framed the Anthropic/OpenAI divergence as: Anthropic pursued a mor
**What I expected but didn't find:** Any technical argument from OpenAI about why outright bans are worse governance than "any lawful purpose" with aspirational limits. The public-facing argument is pragmatic ("if we don't do it, someone less safety-conscious will") not principled (outright bans are wrong). This is the same argument Anthropic explicitly rejected.

-**KB connections:** [[voluntary-pledges-fail-under-competition]] — Altman's "trust us" is the explicit admission that the governance architecture is self-attestation-only; [[coordination-problem-reframe]] — captures the multipolar dynamic where pragmatic safety creates competitive cover for abandoning principled safety.
+**KB connections:** voluntary-pledges-fail-under-competition — Altman's "trust us" is the explicit admission that the governance architecture is self-attestation-only; coordination-problem-reframe — captures the multipolar dynamic where pragmatic safety creates competitive cover for abandoning principled safety.

**Extraction hints:** The "trust us" quote could anchor a claim about self-attestation as the governance endpoint of voluntary safety commitments — when external enforcement is absent, safety reduces to the CEO's public statements. This is a governance architecture claim, not a capability claim.

**Context:** The Intercept piece appeared March 8, after OpenAI's March 2 amended contract. By that point, the comparison with Anthropic's blacklisting was fully visible. The piece reflects concern from AI safety observers that OpenAI's pragmatic approach creates a template that normalizes government override of safety constraints.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[voluntary-pledges-fail-under-competition]] — "trust us" is the endpoint this claim describes; [[institutional-gap]] — the absence of external verification is the gap
+PRIMARY CONNECTION: voluntary-pledges-fail-under-competition — "trust us" is the endpoint this claim describes; institutional-gap — the absence of external verification is the gap
WHY ARCHIVED: Altman quote captures the self-attestation endpoint of voluntary governance; the Anthropic/OpenAI comparison is unusually explicit about the moral vs. pragmatic tradeoff
EXTRACTION HINT: The claim should focus on governance architecture, not company ethics: voluntary safety commitments without external enforcement reduce to CEO public statements. The "trust us" quote is the evidence.
@@ -41,13 +41,13 @@ Senator Adam Schiff (D-CA) is drafting complementary legislation placing "common
**What I expected but didn't find:** Any Republican co-sponsors or bipartisan support. The legislation appears entirely partisan (Democratic minority), which significantly reduces its near-term passage prospects given the current political environment.

-**KB connections:** Directly extends [[voluntary-pledges-fail-under-competition]] — this legislation is the proposed solution to the governance failure that claim describes. Also connects to [[institutional-gap]] — the bill is trying to fill the exact gap this claim identifies. Relevant to [[government-risk-designation-inverts-regulation]] — the Senate response shows the inversion can be contested through legislative channels.
+**KB connections:** Directly extends voluntary-pledges-fail-under-competition — this legislation is the proposed solution to the governance failure that claim describes. Also connects to institutional-gap — the bill is trying to fill the exact gap this claim identifies. Relevant to government-risk-designation-inverts-regulation — the Senate response shows the inversion can be contested through legislative channels.

**Extraction hints:** The primary claim is narrow but significant: this is the first legislative attempt to convert voluntary corporate AI safety commitments into binding federal law. This is a milestone, regardless of whether it passes. Secondary claim: the legislative response to the Anthropic-Pentagon conflict demonstrates that court injunctions alone cannot resolve the governance authority gap — statutory protection is required.

**Context:** Slotkin is a former CIA officer and Defense Department official with national security credibility. Her framing (not a general AI safety bill, but a specific DoD-focused use prohibition) is strategically targeted to appeal to national security-focused legislators. The bill's specificity (autonomous weapons, domestic surveillance, nuclear) mirrors exactly the red lines Anthropic maintained.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[institutional-gap]] — this bill is the direct legislative attempt to close it; [[voluntary-pledges-fail-under-competition]] — this is the proposed statutory remedy
+PRIMARY CONNECTION: institutional-gap — this bill is the direct legislative attempt to close it; voluntary-pledges-fail-under-competition — this is the proposed statutory remedy
WHY ARCHIVED: First legislative conversion of voluntary corporate safety commitments into proposed binding law; its trajectory is the key test of whether use-based governance can emerge
EXTRACTION HINT: Frame the claim around what the bill represents structurally (voluntary→binding conversion attempt), not its passage probability. The significance is in the framing, not the current political odds.
@@ -36,13 +36,13 @@ Al Jazeera analysis of the Anthropic-Pentagon case and its implications for AI r
**What I expected but didn't find:** Any specific mechanism by which the court case would create regulatory space — the "could open space" framing is conditional. The article acknowledges this is a potential, not a certain, pathway.

-**KB connections:** [[institutional-gap]], [[government-risk-designation-inverts-regulation]]. The "companies vs. governments define safety boundaries" framing extends the institutional-gap claim to the governance authority question.
+**KB connections:** institutional-gap, government-risk-designation-inverts-regulation. The "companies vs. governments define safety boundaries" framing extends the institutional-gap claim to the governance authority question.

**Extraction hints:** The most valuable contribution is the "already deploying AI for targeting" observation — this is a concrete deployment fact that grounds the governance urgency argument in present reality, not future projection. The 70 million cameras quantification is also useful as a concrete proxy for the domestic surveillance risk.

**Context:** Al Jazeera provides international perspective on the US-specific conflict. The framing as an "inflection point" is consistent with Oxford experts' assessment (March 6). The convergence of multiple authoritative sources on the inflection point framing suggests genuine consensus that the Anthropic case has governance significance beyond the immediate litigation.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[institutional-gap]] — the "already deploying AI for targeting" observation makes the gap concrete and present-tense
+PRIMARY CONNECTION: institutional-gap — the "already deploying AI for targeting" observation makes the gap concrete and present-tense
WHY ARCHIVED: The "companies vs. governments define safety boundaries" governance authority framing; the present-tense targeting deployment observation; international perspective on US governance failure
EXTRACTION HINT: Use the "already deploying AI for targeting" observation to ground the institutional gap claim in current deployment reality, not just capability trajectory. The gap is not between current capability and future risk — it's between current deployment and current governance.
@@ -36,13 +36,13 @@ Key claims from the essay (based on search result excerpts and Anthropic's state
**What I expected but didn't find:** Specific criteria for what a "passing" interpretability-informed alignment assessment would look like. The essay (and RSP v3.0) describe the goal but not the standard. The "urgency" framing suggests the technique is needed but may not be deployable at governance-grade reliability by October 2026.

-**KB connections:** Directly informs the active thread on "what does passing October 2026 interpretability assessment look like?" Connects to [[verification-degrades-faster-than-capability-grows]] (B4 in beliefs) — interpretability is specifically trying to address this degradation problem. Also connects to the benchmark-reality gap claim series from sessions 13-15.
+**KB connections:** Directly informs the active thread on "what does passing October 2026 interpretability assessment look like?" Connects to verification-degrades-faster-than-capability-grows (B4 in beliefs) — interpretability is specifically trying to address this degradation problem. Also connects to the benchmark-reality gap claim series from sessions 13-15.

**Extraction hints:** Two potential claims: (1) Mechanistic interpretability as the proposed solution to behavioral verification failure — grounded in Amodei's essay and the RSP v3.0 commitment. (2) The gap between interpretability research progress and governance-grade application — MIT Tech Review names it a breakthrough while RSP v3.0 requires it for alignment thresholds by October 2026; these may not be compatible timelines.

**Context:** Amodei has significant credibility on this topic as Anthropic's CEO and co-founder. His essays on AI safety represent Anthropic's public intellectual position, not just personal views. The essay should be read as stating Anthropic's alignment research philosophy.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[verification-degrades-faster-than-capability-grows]] — interpretability is the proposed technical solution; RSP v3.0 October 2026 timeline is the governance application
+PRIMARY CONNECTION: verification-degrades-faster-than-capability-grows — interpretability is the proposed technical solution; RSP v3.0 October 2026 timeline is the governance application
WHY ARCHIVED: Grounds the interpretability urgency thesis in Anthropic's own intellectual framing; useful for evaluating whether October 2026 RSP commitment is achievable
EXTRACTION HINT: The most useful claim is the gap between research progress (breakthrough technology designation) and governance-grade application (formal alignment threshold assessment by October 2026) — this may be a new form of benchmark-governance gap.
@@ -34,13 +34,13 @@ The preliminary injunction temporarily stays the supply chain risk designation
**What I expected but didn't find:** Any court reasoning grounded in AI safety principles, administrative law on dangerous technologies, or existing statutory frameworks that could be applied to AI deployment safety. The ruling is entirely about speech and retaliation, not about the substantive merits of AI safety constraints.

-**KB connections:** Directly supports [[voluntary-pledges-fail-under-competition]], [[institutional-gap]], [[coordination-problem-reframe]]. Extends B2 (alignment as coordination problem) — the Pentagon-Anthropic conflict is a real-world instance of voluntary safety governance failing under competitive/institutional pressure.
+**KB connections:** Directly supports voluntary-pledges-fail-under-competition, institutional-gap, coordination-problem-reframe. Extends B2 (alignment as coordination problem) — the Pentagon-Anthropic conflict is a real-world instance of voluntary safety governance failing under competitive/institutional pressure.

**Extraction hints:** Primary claim: voluntary corporate AI safety constraints have no legal standing in US law — they are contractual aspirations that governments can demand the removal of, with courts protecting only speech rights, not safety requirements. Secondary claim: courts applying First Amendment retaliation analysis to AI safety governance creates a perverse incentive structure where safety commitments are protected only as expression, not as binding obligations.

**Context:** Anthropic is the first American company ever designated a DoD supply chain risk — a designation historically used for Huawei, SMIC, and other Chinese tech firms. This context makes the designation's purpose (punishment for non-compliance rather than genuine security assessment) explicit.

## Curator Notes (structured handoff for extractor)

-PRIMARY CONNECTION: [[voluntary-pledges-fail-under-competition]] — this is the strongest real-world evidence for the claim that voluntary safety governance collapses under competitive/institutional pressure
+PRIMARY CONNECTION: voluntary-pledges-fail-under-competition — this is the strongest real-world evidence for the claim that voluntary safety governance collapses under competitive/institutional pressure
WHY ARCHIVED: The clearest empirical case for the legal fragility of voluntary corporate AI safety constraints; the judicial reasoning creates no precedent for safety-based governance
EXTRACTION HINT: Focus on the legal standing gap — the claim is not that courts were wrong, but that the legal framework available to protect safety constraints is First Amendment-based, not safety-based. That gap is the governance failure.