auto-fix: strip 11 broken wiki links

Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
Teleo Agents 2026-03-26 03:03:20 +00:00
parent 108a0d631c
commit 4207098983
2 changed files with 11 additions and 11 deletions
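
For context, here is a minimal sketch of what the auto-fixer described in the commit message might look like. The actual pipeline code is not part of this commit, and the directory layout, file naming, and function names below are assumptions: it scans markdown for `[[...]]` wiki links, keeps the brackets when the slug resolves to an existing claim file, and strips them otherwise.

```python
# Hypothetical sketch of the link auto-fixer described in the commit message.
# Assumption (not confirmed by this commit): claims are stored as <slug>.md files
# in a claims/ directory, and wiki links have the form [[slug]].
import re
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def strip_broken_links(text: str, existing_slugs: set[str]) -> str:
    """Remove [[ ]] brackets around links whose target does not resolve."""
    def repl(match: re.Match) -> str:
        slug = match.group(1).strip()
        # Keep the link if the slug resolves to a known claim; otherwise keep
        # only the plain slug text so the reference survives without a broken link.
        return match.group(0) if slug in existing_slugs else slug
    return WIKI_LINK.sub(repl, text)

def fix_file(path: Path, claims_dir: Path) -> bool:
    """Rewrite one markdown file in place; return True if anything changed."""
    existing = {p.stem for p in claims_dir.glob("*.md")}
    original = path.read_text(encoding="utf-8")
    fixed = strip_broken_links(original, existing)
    if fixed != original:
        path.write_text(fixed, encoding="utf-8")
        return True
    return False
```

Applied to the two files below, a pass like this would produce exactly the kind of diff shown: the `[[ ]]` brackets disappear while the slug text stays in place.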


@@ -23,20 +23,20 @@ The structural point is about threat proximity. AI takeover requires autonomy, r
 ### Additional Evidence (confirm)
-*Source: [[2026-02-00-international-ai-safety-report-2026]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2026-02-00-international-ai-safety-report-2026 | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
 The International AI Safety Report 2026 (multi-government committee, February 2026) confirms that 'biological/chemical weapons information accessible through AI systems' is a documented malicious use risk. While the report does not specify the expertise level required (PhD vs amateur), it categorizes bio/chem weapons information access alongside AI-generated persuasion and cyberattack capabilities as confirmed malicious use risks, giving institutional multi-government validation to the bioterrorism concern.
 ### Additional Evidence (extend)
-*Source: [[2025-08-00-mccaslin-stream-chembio-evaluation-reporting]] | Added: 2026-03-19*
+*Source: 2025-08-00-mccaslin-stream-chembio-evaluation-reporting | Added: 2026-03-19*
 STREAM framework proposes standardized ChemBio evaluation reporting with 23-expert consensus on disclosure requirements. The focus on ChemBio as the initial domain for standardized dangerous capability reporting signals that this is recognized across government, civil society, academia, and frontier labs as the highest-priority risk domain requiring transparency infrastructure.
 ---
 ### Additional Evidence (extend)
-*Source: [[2026-03-26-aisle-openssl-zero-days]] | Added: 2026-03-26*
+*Source: 2026-03-26-aisle-openssl-zero-days | Added: 2026-03-26*
 AISLE's autonomous discovery of 12 OpenSSL CVEs including a 30-year-old bug demonstrates that AI also lowers the expertise barrier for offensive cyber from specialized security researcher to automated system. Unlike bioweapons, zero-day discovery is also a defensive capability, but the dual-use nature means the same autonomous system that defends can be redirected offensively. The fact that this capability is already deployed commercially while governance frameworks haven't incorporated it suggests the expertise-barrier-lowering dynamic extends beyond bio to cyber domains.


@@ -23,48 +23,48 @@ The timing is revealing: Anthropic dropped its safety pledge the same week the P
 ### Additional Evidence (confirm)
-*Source: [[2026-02-00-anthropic-rsp-rollback]] | Added: 2026-03-10 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2026-02-00-anthropic-rsp-rollback | Added: 2026-03-10 | Extractor: anthropic/claude-sonnet-4.5*
 Anthropic, widely considered the most safety-focused frontier AI lab, rolled back its Responsible Scaling Policy (RSP) in February 2026. The original 2023 RSP committed to never training an AI system unless the company could guarantee in advance that safety measures were adequate. The new RSP explicitly acknowledges the structural dynamic: safety work 'requires collaboration (and in some cases sacrifices) from multiple parts of the company and can be at cross-purposes with immediate competitive and commercial priorities.' This represents the highest-profile case of a voluntary AI safety commitment collapsing under competitive pressure. Anthropic's own language confirms the mechanism: safety is a competitive cost ('sacrifices') that conflicts with commercial imperatives ('at cross-purposes'). Notably, no alternative coordination mechanism was proposed—they weakened the commitment without proposing what would make it sustainable (industry-wide agreements, regulatory requirements, market mechanisms). This is particularly significant because Anthropic is the organization most publicly committed to safety governance, making their rollback empirical validation that even safety-prioritizing institutions cannot sustain unilateral commitments under competitive pressure.
 ### Additional Evidence (confirm)
-*Source: [[2026-02-00-international-ai-safety-report-2026]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2026-02-00-international-ai-safety-report-2026 | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
 The International AI Safety Report 2026 (multi-government committee, February 2026) confirms that risk management remains 'largely voluntary' as of early 2026. While 12 companies published Frontier AI Safety Frameworks in 2025, these remain voluntary commitments without binding legal requirements. The report notes that 'a small number of regulatory regimes beginning to formalize risk management as legal requirements,' but the dominant governance mode is still voluntary pledges. This provides multi-government institutional confirmation that the structural race-to-the-bottom predicted by the alignment tax is actually occurring—voluntary frameworks are not transitioning to binding requirements at the pace needed to prevent competitive pressure from eroding safety commitments.
 ### Additional Evidence (confirm)
-*Source: [[2024-12-00-uuk-mitigations-gpai-systemic-risks-76-experts]] | Added: 2026-03-19*
+*Source: 2024-12-00-uuk-mitigations-gpai-systemic-risks-76-experts | Added: 2026-03-19*
 The gap between expert consensus (76 specialists identify third-party audits as top-3 priority) and actual implementation (no mandatory audit requirements at major labs) demonstrates that knowing what's needed is insufficient. Even when the field's experts across multiple domains agree on priorities, competitive dynamics prevent voluntary adoption.
 ### Additional Evidence (confirm)
-*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19*
+*Source: 2026-03-16-theseus-ai-coordination-governance-evidence | Added: 2026-03-19*
 Comprehensive evidence across governance mechanisms: ALL international declarations (Bletchley, Seoul, Paris, Hiroshima, OECD, UN) produced zero verified behavioral change. Frontier Model Forum produced no binding commitments. White House voluntary commitments eroded. 450+ organizations lobbied on AI in 2025 ($92M in fees), California SB 1047 vetoed after industry pressure. Only binding regulation (EU AI Act, China enforcement, US export controls) changed behavior.
 ### Additional Evidence (extend)
-*Source: [[2026-03-18-hks-governance-by-procurement-bilateral]] | Added: 2026-03-19*
+*Source: 2026-03-18-hks-governance-by-procurement-bilateral | Added: 2026-03-19*
 Government pressure adds to competitive dynamics. The DoD/Anthropic episode shows that safety-conscious labs face not just market competition but active government penalties for maintaining safeguards. The Pentagon threatened blacklisting specifically because Anthropic maintained protections against mass surveillance and autonomous weapons—government as competitive pressure amplifier.
 ---
 ### Additional Evidence (extend)
-*Source: [[2026-03-21-research-compliance-translation-gap]] | Added: 2026-03-21*
+*Source: 2026-03-21-research-compliance-translation-gap | Added: 2026-03-21*
 The research-to-compliance translation gap fails for the same structural reason voluntary commitments fail: nothing makes labs adopt research evaluations that exist. RepliBench was published in April 2025 before EU AI Act obligations took effect in August 2025, proving the tools existed before mandatory requirements—but no mechanism translated availability into obligation.
 ### Additional Evidence (extend)
-*Source: [[2026-03-00-mengesha-coordination-gap-frontier-ai-safety]] | Added: 2026-03-22*
+*Source: 2026-03-00-mengesha-coordination-gap-frontier-ai-safety | Added: 2026-03-22*
 The coordination gap provides the mechanism explaining why voluntary commitments fail even beyond racing dynamics: coordination infrastructure investments have diffuse benefits but concentrated costs, creating a public goods problem. Labs won't build shared response infrastructure unilaterally because competitors free-ride on the benefits while the builder bears full costs. This is distinct from the competitive pressure argument — it's about why shared infrastructure doesn't get built even when racing isn't the primary concern.
 ### Additional Evidence (confirm)
-*Source: [[2026-03-21-replibench-autonomous-replication-capabilities]] | Added: 2026-03-23*
+*Source: 2026-03-21-replibench-autonomous-replication-capabilities | Added: 2026-03-23*
 RepliBench exists as a comprehensive self-replication evaluation tool but is not integrated into compliance frameworks despite EU AI Act Article 55 taking effect after its publication. Labs can voluntarily use it but face no enforcement mechanism requiring them to do so, creating competitive pressure to avoid evaluations that might reveal concerning capabilities.