teleo-codex/inbox/queue/2026-03-21-california-ab2013-training-transparency-only.md
2026-03-21 00:16:59 +00:00


---
type: source
title: "California AB 2013 (AI Training Data Transparency Act): Training Data Disclosure Only, No Independent Evaluation"
author: California State Legislature
url: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2013
date: 2024-01-01
domain: ai-alignment
secondary_domains:
format: thread
status: unprocessed
priority: medium
tags:
  - California
  - AB2013
  - training-data-transparency
  - regulation
  - governance
  - independent-evaluation
  - compliance
---

Content

California AB 2013 (the AI Training Data Transparency Act) requires developers of generative AI systems to publicly disclose information about their training data. Key provisions:

What it requires: Self-reported documentation posted on the developer's own website, including:

  • High-level summary of datasets used in development (sources, intended purposes, data point counts)
  • Whether datasets contain copyrighted material or are public domain
  • Whether data was purchased or licensed
  • Presence of personal information or aggregate consumer information
  • Data cleaning/processing performed
  • Collection time periods
  • Use of synthetic data generation

What it does NOT require:

  • Independent evaluation of any kind
  • Capability assessment
  • Safety testing
  • Third-party review

Applicability: Applies to generative AI systems released on or after January 1, 2022; disclosure obligations take effect January 1, 2026. Exempts systems whose sole purpose is ensuring security and integrity, systems for the operation of aircraft, and systems developed for federal national security purposes.

Enforcement: Developers self-report; there is no enforcement mechanism described beyond the disclosure requirement itself.

Agent Notes

Why this matters: Stelling et al. (arXiv:2512.01166, previous session) grouped California's Transparency in Frontier AI Act with the EU AI Act as laws that rely on frontier safety frameworks as compliance evidence. But AB 2013 is a training data transparency law only; it imposes no capability evaluation or safety assessment requirement. This is a material mischaracterization if Stelling cited AB 2013 as equivalent to EU AI Act Article 55 obligations.

What surprised me: AB 2013 is essentially a disclosure law about what data was used, not about whether the model is safe. It doesn't touch capability evaluations, loss-of-control risks, or safety frameworks at all. The Stelling framing ("California's Transparency in Frontier AI Act relies on these same 8-35% frameworks as compliance evidence") likely refers to a different California law, most plausibly SB 53, the Transparency in Frontier Artificial Intelligence Act signed in 2025, whose name matches the citation exactly; note that SB 1047 was vetoed in September 2024 and never became law. Worth clarifying in next session.

What I expected but didn't find: Any connection between AB 2013 and frontier safety frameworks or capability evaluation requirements. They appear entirely separate.

KB connections:

  • This source primarily provides a cautionary note on the previous session's synthesis: the claim that "California's law accepts 8-35% quality frameworks as compliance evidence" may concern a different law than AB 2013

Extraction hints:

  • This is primarily a CORRECTION to the previous session's synthesis
  • LOW extraction priority — no strong standalone claim
  • Worth flagging for: "Which California law was Stelling et al. actually referring to?" — likely SB 53 (Transparency in Frontier Artificial Intelligence Act), not AB 2013; SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) was vetoed in September 2024 and never took effect, so it is an unlikely candidate

Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: Previous session synthesis (Stelling et al. finding about California law)
WHY ARCHIVED: Corrective: AB 2013 is training data disclosure only; the Stelling characterization may refer to different legislation; the extractor should verify which California law is implicated
EXTRACTION HINT: Low extraction priority; primarily a correction to the Session 10 synthesis note; may inform a future session's California law deep-dive