From 18190084a3f25f96240836703155d2fbd3f1e4ff Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Fri, 20 Mar 2026 00:53:27 +0000
Subject: [PATCH] auto-fix: strip 4 broken wiki links

Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve
to existing claims in the knowledge base.
---
 ...s and removed safety language from mission statements.md | 6 +++---
 ...titutional-governance-built-on-unreliable-foundations.md | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md b/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md
index 41535962..0ca0eab7 100644
--- a/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md
+++ b/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md
@@ -31,19 +31,19 @@ The alignment implication: transparency is a prerequisite for external oversight
 
 ### Additional Evidence (extend)
 
-*Source: [[2024-12-00-uuk-mitigations-gpai-systemic-risks-76-experts]] | Added: 2026-03-19*
+*Source: 2024-12-00-uuk-mitigations-gpai-systemic-risks-76-experts | Added: 2026-03-19*
 
 Expert consensus identifies 'external scrutiny, proactive evaluation and transparency' as the key principles for mitigating AI systemic risks, with third-party audits as the top-3 implementation priority. The transparency decline documented by Stanford FMTI is moving in the opposite direction from what 76 cross-domain experts identify as necessary.
 
 ### Additional Evidence (extend)
 
-*Source: [[2025-08-00-mccaslin-stream-chembio-evaluation-reporting]] | Added: 2026-03-19*
+*Source: 2025-08-00-mccaslin-stream-chembio-evaluation-reporting | Added: 2026-03-19*
 
 STREAM proposal identifies that current model reports lack 'sufficient detail to enable meaningful independent assessment' of dangerous capability evaluations. The need for a standardized reporting framework confirms that transparency problems extend beyond general disclosure (FMTI scores) to the specific domain of dangerous capability evaluation where external verification is currently impossible.
 
 ### Additional Evidence (confirm)
 
-*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19*
+*Source: 2026-03-16-theseus-ai-coordination-governance-evidence | Added: 2026-03-19*
 
 Stanford FMTI 2024→2025 data: mean transparency score declined 17 points. Meta -29 points, Mistral -37 points, OpenAI -14 points. OpenAI removed 'safely' from mission statement (Nov 2025), dissolved Superalignment team (May 2024) and Mission Alignment team (Feb 2026). Google accused by 60 UK lawmakers of violating Seoul commitments with Gemini 2.5 Pro (Apr 2025).
 
diff --git a/domains/ai-alignment/pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations.md b/domains/ai-alignment/pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations.md
index 3fbabf67..784ae5dd 100644
--- a/domains/ai-alignment/pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations.md
+++ b/domains/ai-alignment/pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations.md
@@ -52,7 +52,7 @@ METR and UK AISI evaluations as of March 2026 focus primarily on sabotage risk a
 
 ### Additional Evidence (confirm)
 
-*Source: [[2026-02-23-shapira-agents-of-chaos]] | Added: 2026-03-19*
+*Source: 2026-02-23-shapira-agents-of-chaos | Added: 2026-03-19*
 
 Agents of Chaos demonstrates that static single-agent benchmarks fail to capture vulnerabilities that emerge in realistic multi-agent deployment. The study's central argument is that pre-deployment evaluations are insufficient because they cannot test for cross-agent propagation, identity spoofing, and unauthorized compliance patterns that only manifest in multi-party environments with persistent state.
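
Note for reviewers: the patch itself does not include the auto-fixer's code. Below is a minimal sketch of what a broken-wiki-link stripper of this kind might look like, under stated assumptions: the `domains` knowledge-base root, the `strip_broken_links` name, and the rule that a `[[target]]` link resolves iff some `target.md` note exists are all assumptions, not the pipeline's actual implementation.

```python
#!/usr/bin/env python3
"""Minimal sketch of a broken-wiki-link stripper. Hypothetical: the actual
pipeline code is not part of this patch; paths and names are assumptions."""
import re
from pathlib import Path

# Matches Obsidian-style [[target]] wiki links (no nested brackets).
WIKI_LINK = re.compile(r"\[\[([^\[\]]+)\]\]")


def strip_broken_links(text: str, known: set[str]) -> tuple[str, int]:
    """Unwrap [[target]] -> target for every link whose target is unknown."""
    count = 0

    def repl(match: re.Match) -> str:
        nonlocal count
        if match.group(1) in known:
            return match.group(0)   # link resolves: keep the brackets
        count += 1
        return match.group(1)       # broken link: keep the text, drop [[ ]]

    return WIKI_LINK.sub(repl, text), count


if __name__ == "__main__":
    kb = Path("domains")            # assumed knowledge-base root
    known = {p.stem for p in kb.rglob("*.md")}  # every note name, sans .md
    stripped = 0
    for note in sorted(kb.rglob("*.md")):
        fixed, n = strip_broken_links(note.read_text(encoding="utf-8"), known)
        if n:
            note.write_text(fixed, encoding="utf-8")
            stripped += n
    print(f"stripped {stripped} broken wiki links")
```

On a knowledge base laid out like the one in this patch, running the sketch from the repository root would rewrite the four `*Source: [[...]]*` lines shown above to their bracket-free form, which is exactly the transformation the diff records.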