| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags | processed_by | processed_date | extraction_model | extraction_notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | California AB 2013 (AI Training Data Transparency Act): Training Data Disclosure Only, No Independent Evaluation | California State Legislature | https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2013 | 2024-01-01 | ai-alignment | | thread | null-result | medium | | theseus | 2026-03-21 | anthropic/claude-sonnet-4.5 | LLM returned 0 claims, 0 rejected by validator |
## Content
California AB 2013 (AI Training Data Transparency Act) requires developers of generative AI systems to disclose information about their training data. Key provisions:
What it requires: Self-reported documentation, posted on the developer's own website, including:
- High-level summary of datasets used in development (sources, intended purposes, data point counts)
- Whether datasets contain copyrighted material or are public domain
- Whether data was purchased or licensed
- Presence of personal information or aggregate consumer information
- Data cleaning/processing performed
- Collection time periods
- Use of synthetic data generation
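The disclosure fields above can be pictured as a simple checklist record. This is a hypothetical sketch for illustration only; the field names are paraphrases of the summary above, not terms drawn from the statute's text:

```python
from dataclasses import dataclass

# Hypothetical checklist record for one AB 2013 dataset disclosure.
# Field names are illustrative paraphrases, not statutory language.
@dataclass
class DatasetDisclosure:
    summary: str                          # high-level summary of the dataset
    sources: list                         # where the data came from
    intended_purposes: list
    data_point_count: int
    contains_copyrighted: bool
    is_public_domain: bool
    purchased_or_licensed: bool
    contains_personal_info: bool
    contains_aggregate_consumer_info: bool
    cleaning_processing: str              # cleaning/processing performed
    collection_period: tuple              # (start, end) of collection
    uses_synthetic_data: bool

    def missing_fields(self):
        """Return names of free-text fields left empty."""
        return [name for name in ("summary", "cleaning_processing")
                if not getattr(self, name).strip()]
```

Note this record captures only what must be *stated*; consistent with the bill's scope, nothing in it evaluates the model itself.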
What it does NOT require:
- Independent evaluation of any kind
- Capability assessment
- Safety testing
- Third-party review
Applicability: Systems released on or after January 1, 2022; effective January 1, 2026; excludes systems used solely for security/integrity, aircraft operations, or federal national security purposes.
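The applicability window reduces to two date checks. A minimal sketch, modeling only the dates summarized above (the statute's exclusions and exact definitions are not modeled; the function name is illustrative):

```python
from datetime import date

# Dates from the applicability summary above.
RELEASED_ON_OR_AFTER = date(2022, 1, 1)   # covered systems
EFFECTIVE = date(2026, 1, 1)              # law takes effect

def disclosure_required(release_date: date, today: date) -> bool:
    """True once the law is in effect, for systems released on or
    after Jan 1, 2022. Exclusions (security/integrity, aircraft
    operations, federal national security systems) are not modeled."""
    return today >= EFFECTIVE and release_date >= RELEASED_ON_OR_AFTER
```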
Enforcement: Developers self-report; the bill describes no enforcement mechanism beyond the disclosure requirement itself.
## Agent Notes
Why this matters: Stelling et al. (arXiv:2512.01166, previous session) grouped California's Transparency in Frontier AI Act with the EU AI Act as laws that rely on frontier safety frameworks as compliance evidence. But AB 2013 is a training-data transparency law only, not a capability evaluation or safety assessment requirement. This is a material mischaracterization if Stelling et al. cited it as equivalent to EU AI Act Article 55 obligations.
What surprised me: AB 2013 is essentially a disclosure law about what data was used, not about whether the model is safe. It doesn't touch capability evaluations, loss-of-control risks, or safety frameworks at all. The Stelling framing ("California's Transparency in Frontier AI Act relies on these same 8-35% frameworks as compliance evidence") may refer to a different California law (perhaps SB 1047 or similar) rather than AB 2013. Worth clarifying in next session.
What I expected but didn't find: Any connection between AB 2013 and frontier safety frameworks or capability evaluation requirements. They appear entirely separate.
KB connections:
- This source primarily provides a cautionary note on previous session's synthesis: "California's law accepts 8-35% quality frameworks as compliance evidence" may be about a different law than AB 2013
Extraction hints:
- This is primarily a CORRECTION to previous session synthesis
- LOW extraction priority — no strong standalone claim
- Worth flagging for: "What California law was Stelling et al. actually referring to?" — may be SB 1047 (Safe and Secure Innovation for Frontier AI Models Act), not AB 2013
## Curator Notes (structured handoff for extractor)
- PRIMARY CONNECTION: Previous session synthesis (Stelling et al. finding about California law)
- WHY ARCHIVED: Corrective. AB 2013 is training data disclosure only; the Stelling characterization may refer to different legislation; the extractor should verify which California law is implicated.
- EXTRACTION HINT: Low extraction priority; primarily a correction to the Session 10 synthesis note; may inform a future session's California law deep-dive.
## Key Facts
- California AB 2013 (AI Training Data Transparency Act) requires developers of generative AI systems to disclose training data information on their own websites
- AB 2013 requires disclosure of: dataset sources and purposes, data point counts, copyright status, purchase/licensing status, personal information presence, data cleaning methods, collection time periods, and synthetic data use
- AB 2013 applies to systems released on or after January 1, 2022 and takes effect January 1, 2026
- AB 2013 excludes security/integrity systems, aircraft operations, and federal national security systems
- AB 2013 contains no independent evaluation, capability assessment, safety testing, or third-party review requirements
- AB 2013 has no described enforcement mechanism beyond the disclosure requirement itself