theseus: extract claims from 2026-01-29-metr-frontier-ai-safety-regulations-reference

- Source: inbox/queue/2026-01-29-metr-frontier-ai-safety-regulations-reference.md
- Domain: ai-alignment
- Claims: 0, Entities: 2
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
This commit is contained in:
Teleo Agents 2026-05-11 04:22:24 +00:00
parent e03015f06f
commit 12c7b94233
4 changed files with 70 additions and 1 deletions


@@ -42,3 +42,10 @@ The White House AI EO represents a shift from voluntary commitments (CAISI volun
**Source:** Breaking Defense, March 26, 2026 - Pentagon maintains ban despite injunction
The administration's apparent defiance of a federal court preliminary injunction demonstrates that even judicial enforcement mechanisms may be circumvented through jurisdictional challenges and institutional inertia. Federal contracting officers may continue treating the Anthropic ban as operative despite the court order, preserving the de facto ban through bureaucratic compliance resistance rather than formal legal authority.
## Supporting Evidence
**Source:** METR Frontier AI Safety Regulations Reference, January 2026
California SB 53 makes external evaluation voluntary (not mandatory) and accepts ISO/IEC 42001 as compliance evidence. METR's reference document identifies this as a 'self-reporting architecture' and notes the limitation was 'identified in prior Sessions as inadequate.' The voluntary third-party evaluation structure confirms that even statutory requirements can preserve voluntary compliance theater.


@@ -0,0 +1,33 @@
# California SB 53
**Type:** State AI safety legislation
**Status:** Effective January 1, 2026
**Jurisdiction:** California, United States
## Overview
California SB 53 is state-level frontier AI safety legislation that applies to developers of frontier AI models. The law establishes requirements for incident reporting, safety and security model evaluations, internal governance practices, and whistleblower protections.
## Key Provisions
**Scope:** Applies to developers of frontier AI models operating in California.
**Requirements:**
- Incident reporting obligations
- Safety and security model evaluations
- Internal governance practices
- Whistleblower protections
**External Evaluation:** Voluntary (not mandatory) under SB 53. The law accepts ISO/IEC 42001 (management system standard) as compliance evidence.
## Limitations
METR's regulatory reference identifies two key limitations:
1. Voluntary third-party evaluation structure (identified as inadequate)
2. ISO/IEC 42001 acceptance creates self-reporting architecture
Both limitations were flagged in prior analysis as inadequate for meaningful safety governance.
## Timeline
- **2026-01-01** — SB 53 becomes effective, establishing California as first US state with frontier AI safety requirements


@@ -0,0 +1,26 @@
# New York RAISE Act
**Type:** State AI safety legislation
**Status:** Legislative status unclear as of January 2026
**Jurisdiction:** New York, United States
## Overview
The New York RAISE Act is proposed state-level AI safety legislation with a scope similar to California SB 53. The act has had a contested legislative history.
## Key Provisions
**Scope:** Similar to California SB 53, targeting frontier AI model developers.
**Requirements:**
- Incident reporting
- Model evaluation requirements
- (Additional provisions not detailed in available reference)
## Status
As of January 2026, METR's regulatory reference notes the act's existence and its contested legislative history, but does not clarify its current passage status or implementation timeline.
## Timeline
- **2025-2026** — Contested legislative process, status unclear as of January 2026


@@ -7,10 +7,13 @@ date: 2026-01-29
domain: ai-alignment
secondary_domains: []
format: article
status: unprocessed
status: processed
processed_by: theseus
processed_date: 2026-05-11
priority: medium
tags: [metr, frontier-ai, safety-regulations, eu-ai-act, gpai, california-sb53, new-york-raise, regulatory-reference]
intake_tier: research-task
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
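The frontmatter hunk above flips `status` from `unprocessed` to `processed` and adds pipeline provenance fields (`processed_by`, `processed_date`, `extraction_model`). As a minimal sketch of how a downstream stage might read such a block, here is a stdlib-only parser for simple `key: value` frontmatter; the function name and parsing approach are illustrative assumptions, not the pipeline's actual code:

```python
def parse_frontmatter(text: str) -> dict:
    """Parse simple 'key: value' lines from a '---'-delimited frontmatter block.

    Assumes the document starts with '---\n' and that values fit on one line;
    a real pipeline would likely use a YAML parser instead.
    """
    # Split off the frontmatter block between the first two '---' delimiters.
    _, block, _ = text.split("---\n", 2)
    meta = {}
    for line in block.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Strip surrounding whitespace and optional double quotes.
            meta[key.strip()] = value.strip().strip('"')
    return meta

# Example document mirroring the fields added in the diff above.
doc = """---
status: processed
processed_by: theseus
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
"""
meta = parse_frontmatter(doc)
```

A stage gated on `meta["status"] == "processed"` would then skip already-ingested queue items.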