extract: 2025-08-00-mccaslin-stream-chembio-evaluation-reporting
Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
This commit is contained in:
parent e12e22498b
commit 695b8542bb
4 changed files with 49 additions and 1 deletion

@@ -27,6 +27,12 @@ The structural point is about threat proximity. AI takeover requires autonomy, r

The International AI Safety Report 2026 (multi-government committee, February 2026) confirms that 'biological/chemical weapons information accessible through AI systems' is a documented malicious use risk. While the report does not specify the expertise level required (PhD vs amateur), it categorizes bio/chem weapons information access alongside AI-generated persuasion and cyberattack capabilities as confirmed malicious use risks, giving institutional multi-government validation to the bioterrorism concern.
### Additional Evidence (extend)
*Source: [[2025-08-00-mccaslin-stream-chembio-evaluation-reporting]] | Added: 2026-03-19*
The STREAM framework proposes standardized ChemBio evaluation reporting, backed by a 23-expert consensus on disclosure requirements. The choice of ChemBio as the initial domain for standardized dangerous-capability reporting signals that government, civil society, academia, and frontier labs all recognize it as the highest-priority risk domain requiring transparency infrastructure.
---
Relevant Notes:
@@ -35,6 +35,12 @@ The alignment implication: transparency is a prerequisite for external oversight

Expert consensus identifies 'external scrutiny, proactive evaluation and transparency' as the key principles for mitigating AI systemic risks, with third-party audits as the top-3 implementation priority. The transparency decline documented by Stanford FMTI is moving in the opposite direction from what 76 cross-domain experts identify as necessary.
### Additional Evidence (extend)
*Source: [[2025-08-00-mccaslin-stream-chembio-evaluation-reporting]] | Added: 2026-03-19*
The STREAM proposal identifies that current model reports lack 'sufficient detail to enable meaningful independent assessment' of dangerous capability evaluations. The need for a standardized reporting framework confirms that transparency problems extend beyond general disclosure (FMTI scores) to the specific domain of dangerous-capability evaluation, where external verification is currently impossible.
---
Relevant Notes:
@@ -0,0 +1,24 @@

{
  "rejected_claims": [
    {
      "filename": "ai-model-reports-lack-standardized-dangerous-capability-disclosure-preventing-independent-assessment.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    }
  ],
  "validation_stats": {
    "total": 1,
    "kept": 0,
    "fixed": 1,
    "rejected": 1,
    "fixes_applied": [
      "ai-model-reports-lack-standardized-dangerous-capability-disclosure-preventing-independent-assessment.md:set_created:2026-03-19"
    ],
    "rejections": [
      "ai-model-reports-lack-standardized-dangerous-capability-disclosure-preventing-independent-assessment.md:missing_attribution_extractor"
    ]
  },
  "model": "anthropic/claude-sonnet-4.5",
  "date": "2026-03-19"
}
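A log in this shape can be checked mechanically for internal consistency. A minimal sketch, assuming the schema shown above; the function name, the consistency rules, and the abbreviated filenames are illustrative, not part of the actual pipeline:

```python
import json

def check_validation_log(log: dict) -> list[str]:
    """Return a list of consistency problems found in a validation log.

    Assumes the schema shown above; the rules here are illustrative.
    """
    problems = []
    stats = log.get("validation_stats", {})
    # Each rejection string should correspond to an entry in rejected_claims.
    if len(log.get("rejected_claims", [])) != len(stats.get("rejections", [])):
        problems.append("rejected_claims and rejections lengths differ")
    # Counters should match the lists they summarize.
    if stats.get("rejected") != len(stats.get("rejections", [])):
        problems.append("rejected count does not match rejections list")
    if stats.get("fixed") != len(stats.get("fixes_applied", [])):
        problems.append("fixed count does not match fixes_applied list")
    # A claim is either kept or rejected, so the two cannot exceed the total.
    if stats.get("kept", 0) + stats.get("rejected", 0) > stats.get("total", 0):
        problems.append("kept + rejected exceeds total")
    return problems

log = json.loads("""
{
  "rejected_claims": [{"filename": "note.md", "issues": ["missing_attribution_extractor"]}],
  "validation_stats": {
    "total": 1, "kept": 0, "fixed": 1, "rejected": 1,
    "fixes_applied": ["note.md:set_created:2026-03-19"],
    "rejections": ["note.md:missing_attribution_extractor"]
  },
  "model": "anthropic/claude-sonnet-4.5",
  "date": "2026-03-19"
}
""")
print(check_validation_log(log))  # an empty list means the log is internally consistent
```

Note that "fixed" and "rejected" are not treated as mutually exclusive here, matching the log above, where the same file received a date fix and was then rejected for missing attribution.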
@@ -7,9 +7,13 @@ date: 2025-08-01

domain: ai-alignment
secondary_domains: []
format: paper
status: unprocessed
status: enrichment
priority: medium
tags: [evaluation-infrastructure, dangerous-capabilities, standardized-reporting, ChemBio, transparency, STREAM]
processed_by: theseus
processed_date: 2026-03-19
enrichments_applied: ["AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk.md", "AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---

## Content

@@ -53,3 +57,11 @@ PRIMARY CONNECTION: [[AI lowers the expertise barrier for engineering biological

WHY ARCHIVED: Provides evidence of emerging standardization for dangerous capability evaluation reporting. The multi-stakeholder process (government, academia, AI companies) signals potential for eventual adoption.
EXTRACTION HINT: Focus on the disclosure gap: labs currently report their own dangerous capability evaluations without standardized format, preventing independent assessment of rigor.
## Key Facts

- STREAM (Standard for Transparently Reporting Evaluations in AI Model Reports) proposed August 2025
- STREAM developed by 23 experts from government, civil society, academia, and frontier AI companies
- STREAM includes a 3-page reporting template and gold-standard examples
- Initial STREAM focus is chemical and biological (ChemBio) dangerous capability evaluations
- STREAM has two stated purposes: practical guidance for AI developers and enabling third-party assessment of evaluation rigor