California SB 53
Type: State AI safety legislation
Status: Effective January 1, 2026
Jurisdiction: California, United States
Overview
California SB 53 is state-level frontier AI safety legislation that applies to developers of frontier AI models. The law establishes requirements for incident reporting, safety and security evaluations of models, internal governance practices, and whistleblower protections.
Key Provisions
Scope: Applies to developers of frontier AI models operating in California.
Requirements:
- Incident reporting obligations
- Safety and security model evaluations
- Internal governance practices
- Whistleblower protections
External Evaluation: Voluntary rather than mandatory under SB 53. The law accepts ISO/IEC 42001 (an AI management system standard) as compliance evidence.
Limitations
METR's regulatory reference identifies two key limitations:
- Third-party evaluation is voluntary rather than mandatory
- Acceptance of ISO/IEC 42001 creates a self-reporting compliance architecture
Prior analysis had already identified both structures as inadequate for meaningful safety governance.
Timeline
- 2026-01-01 — SB 53 becomes effective, establishing California as first US state with frontier AI safety requirements