The 2025 Rulebook for AI Compliance and Governance

Policymakers are busy codifying how AI gets built, labeled, used, and monitored. This issue dives into both recently passed laws and bills to watch, with disclosure, documentation, and governance as the common baseline. Here’s the roundup, with links and frameworks you can act on now in preparation for the inevitable.


California: Scott Wiener’s SB 53 heads to Newsom

After last year’s SB 1047 veto, Senator Scott Wiener came back with SB 53 (Transparency in Frontier AI Act) – co-authored with Senator Susan Rubio – requiring the largest frontier model developers to publish safety frameworks and file incident summaries, plus standing up “CalCompute,” a public cloud cluster concept. It cleared both chambers and now sits on Governor Gavin Newsom’s desk.

Insight: California has shifted from “kill-switch/stress-test” mandates (SB 1047, vetoed by Governor Newsom) toward disclosure-heavy governance with SB 53. If signed, expect public safety frameworks, incident summaries, and developer-side documentation to become table stakes for top labs operating in the state. SB 53 seems to follow a broader regulatory template emerging globally: risk-based, documentation-first rules with phased timelines – mirroring the EU AI Act’s staged rollout, as well as established privacy/security regimes like the California Consumer Privacy Act (CCPA) and the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule.

Framework for action:

  1. AI risk management programs using frameworks like NIST AI RMF.
  2. Security & privacy controls (WISP, access restrictions, vendor oversight, breach-readiness).
  3. Content provenance & labeling (C2PA/Content Credentials).
  4. Incident logging & reporting workflows tied to model cards and monitoring (see the sketch after this list).
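To make item 4 concrete, here’s a minimal sketch of incident logging keyed to model card IDs, using only the Python standard library. The class and field names (Incident, IncidentLog, Severity) are illustrative assumptions, not terms from SB 53 or the NIST AI RMF.

```python
# Minimal sketch of incident logging tied to model cards (all names hypothetical).
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g., a safety-framework deviation worth an incident summary


@dataclass
class Incident:
    model_id: str                  # links the incident back to a model card
    description: str
    severity: Severity
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    reported: bool = False         # flipped once a summary is filed


class IncidentLog:
    """Append-only log that regulator-facing summaries can be generated from."""

    def __init__(self) -> None:
        self._entries: list[Incident] = []

    def record(self, incident: Incident) -> None:
        self._entries.append(incident)

    def pending_reports(self) -> list[Incident]:
        # High-severity incidents that still need a filed summary.
        return [i for i in self._entries
                if i.severity is Severity.HIGH and not i.reported]

    def export(self) -> str:
        # Serialize for audit trails or a disclosure portal.
        return json.dumps(
            [{**asdict(i), "severity": i.severity.value} for i in self._entries],
            indent=2,
        )


log = IncidentLog()
log.record(Incident("frontier-model-v2", "Unexpected jailbreak pattern", Severity.HIGH))
print(log.export())
```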

California: Buffy Wicks’ AI Transparency push (AB 853)

Assemblymember Buffy Wicks’ AB 853 (California AI Transparency Act) – co-authored by Senator Josh Becker and Assemblymember Rick Chavez Zbur – targets platform-level provenance and labeling: think conspicuous notices when C2PA/content credentials are detected, and free detection tools from very large GenAI providers. It advanced through the Assembly, Senate committees, and appropriations, and would delay the operative date of the already enacted California AI Transparency Act (SB 942, signed September 19, 2024) from January 1, 2026 to August 2, 2026, aligning with the EU AI Act’s timeline.

Insight: Companies will get more time to design a plan, and certain large online platforms – internet providers, telecom services, and advertising networks – may be excluded. This mirrors how existing privacy legislation scopes compliance based on whether your company is a “data controller” or “data processor.” For everyone else within the definition, provenance + consumer-visible labels are becoming the near-term compliance surface. It’s a lighter lift than frontier-model safety, but with real product work for platforms and toolmakers.

Framework for action:

  1. Technology – Integrate content credential standards like C2PA into media workflows.
  2. Policy – Draft labeling policies for generated vs. human content.
  3. Governance – Assign an AI provenance lead or working group.
  4. Training & Communication – Educate staff and customers on labeling meaning.
  5. Audit & Reporting – Track labeling coverage, exceptions, and compliance metrics (see the sketch after this list).
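As a rough illustration of item 5, the sketch below computes labeling coverage over a media inventory. The MediaAsset fields and the has_credentials flag are hypothetical; in practice that flag would come from a C2PA/Content Credentials verifier rather than being set by hand.

```python
# Sketch of a labeling-coverage audit. The has_credentials flag stands in for
# real detection via a C2PA/Content Credentials SDK or tool.
from dataclasses import dataclass


@dataclass
class MediaAsset:
    asset_id: str
    ai_generated: bool       # what our pipeline says produced it
    has_credentials: bool    # whether a content-credential manifest was found


def labeling_coverage(assets: list[MediaAsset]) -> dict:
    """Report how much AI-generated media carries a provenance label."""
    generated = [a for a in assets if a.ai_generated]
    labeled = [a for a in generated if a.has_credentials]
    exceptions = [a.asset_id for a in generated if not a.has_credentials]
    return {
        "generated": len(generated),
        "labeled": len(labeled),
        "coverage_pct": round(100 * len(labeled) / len(generated), 1)
        if generated else 100.0,
        "exceptions": exceptions,  # assets to remediate before an audit
    }


assets = [
    MediaAsset("img-001", ai_generated=True, has_credentials=True),
    MediaAsset("img-002", ai_generated=True, has_credentials=False),
    MediaAsset("img-003", ai_generated=False, has_credentials=False),
]
print(labeling_coverage(assets))  # 50% coverage; img-002 is the exception
```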

Tennessee: The ELVIS Act (it’s real, and it’s live)

Signed March 21, 2024 and effective July 1, 2024, Tennessee’s ELVIS Act expands the right of publicity to cover voice explicitly and is aimed squarely at AI voice clones. The law creates new liability exposure for services that use or host AI-generated voice likenesses without authorization.

Insight: Expect more “name, image, likeness, and voice” statutes plus takedown and damages playbooks for AI voice and likeness misuse.

Framework for action:

  1. Technology – Adopt voice fingerprinting and watermarking tools to flag AI-generated audio.
  2. Policy – Update terms of service to ban unauthorized AI voice use.
  3. Governance – Build a DMCA-style takedown workflow managed by trust & safety/legal (sketched after this list).
  4. Licensing – Formalize contracts for synthetic voice use, modeled after music licensing.
  5. Training & Communication – Educate employees, creators, and users about compliance.
  6. Audit & Reporting – Maintain logs of flagged, reviewed, and removed voice content.
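One way to structure item 3 is a small state machine with an audit trail, which also feeds item 6. The states and transitions below are illustrative assumptions, not a prescribed legal process.

```python
# Sketch of a DMCA-style takedown workflow for flagged voice content
# (states and transitions are illustrative, not legal advice).
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    FLAGGED = "flagged"            # voice fingerprinting/watermark check fired
    UNDER_REVIEW = "under_review"
    REMOVED = "removed"
    RESTORED = "restored"          # counter-notice accepted or false positive


VALID = {
    Status.FLAGGED: {Status.UNDER_REVIEW},
    Status.UNDER_REVIEW: {Status.REMOVED, Status.RESTORED},
    Status.REMOVED: {Status.RESTORED},
    Status.RESTORED: set(),
}


@dataclass
class VoiceTakedown:
    content_id: str
    status: Status = Status.FLAGGED
    audit_trail: list[str] = field(default_factory=list)  # supports item 6

    def transition(self, new_status: Status, note: str) -> None:
        if new_status not in VALID[self.status]:
            raise ValueError(f"{self.status.value} -> {new_status.value} not allowed")
        self.audit_trail.append(f"{self.status.value} -> {new_status.value}: {note}")
        self.status = new_status


case = VoiceTakedown("audio-4521")
case.transition(Status.UNDER_REVIEW, "Trust & safety triage")
case.transition(Status.REMOVED, "Unauthorized AI voice clone confirmed")
print(case.audit_trail)
```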

EU: AI Act – what’s in force now vs. later

The EU AI Act entered into force Aug 1, 2024 with phased application:

  • Feb 2, 2025: bans on prohibited uses + AI literacy obligations kick in.
  • Aug 2, 2025: GPAI (foundation-model) obligations and governance rules apply.
  • Aug 2, 2026: full applicability for most; Aug 2, 2027 for high-risk systems embedded in regulated products.
  • Official EU page: AI Act explainer
  • Timeline tracker: AI Act timeline

Insight: If you serve EU users, GPAI transparency + governance hit Aug 2025, regardless of your domestic posture. Harmonizing disclosures early will save refactors next summer.

Framework for action:

  1. Model Inventory & Classification – Create a registry of all AI systems and map them against EU risk categories (see the sketch after this list).
  2. Risk Management – Adopt NIST AI RMF or ISO/IEC 42001 for continuous assessments.
  3. Documentation & Model Cards – Produce detailed cards outlining training data, limits, and intended use.
  4. Conformity Assessments – Rehearse technical testing and risk mitigations for high-risk systems.
  5. Governance – Appoint an EU AI compliance officer or cross-functional team.
  6. Monitoring & Incident Reporting – Build post-market monitoring workflows and event logging.
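For item 1, a minimal registry mapped to EU AI Act risk tiers might look like the sketch below. The tier assignments, field names, and example systems are illustrative; real classification decisions need legal review.

```python
# Sketch of a model inventory mapped to EU AI Act risk tiers
# (tier assignments here are illustrative, not a legal determination).
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned uses (applied Feb 2, 2025)
    HIGH = "high"              # e.g., systems embedded in regulated products
    LIMITED = "limited"        # transparency obligations (labeling, disclosure)
    MINIMAL = "minimal"


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier
    gpai: bool = False         # foundation-model obligations from Aug 2, 2025


REGISTRY: list[AISystem] = [
    AISystem("support-chatbot", "customer service", RiskTier.LIMITED),
    AISystem("resume-screener", "hiring decisions", RiskTier.HIGH),
    AISystem("in-house-llm", "general text generation", RiskTier.LIMITED, gpai=True),
]


def compliance_queue(registry: list[AISystem]) -> list[str]:
    """Surface systems needing conformity assessments or GPAI documentation first."""
    return [s.name for s in registry if s.tier is RiskTier.HIGH or s.gpai]


print(compliance_queue(REGISTRY))  # ['resume-screener', 'in-house-llm']
```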

Your Takeaway This Week

Companies have two tracks to watch when it comes to AI governance: (1) frontier model governance (California’s SB 53; EU GPAI rules next August) and (2) downstream application-layer legislation (AB 853; entertainment-sector rights via the ELVIS Act), plus many more largely state-led initiatives. If you operate globally, design once to satisfy the EU’s 2025 GPAI obligations, reuse those disclosures for California transparency, then layer provenance/labeling into your product pipeline to satisfy emerging U.S. state laws.

The enterprise playbook looks familiar across consumer and data protection legislation:

  • Build an AI risk management program (NIST AI RMF: govern, map, measure, manage; see the sketch below).
  • Stand up security & privacy controls (WISP, access restrictions, vendor oversight, breach-readiness).
  • Implement content provenance & labeling via C2PA/Content Credentials.
  • Operationalize incident logging & reporting workflows tied to model cards and monitoring.
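As a quick sanity check on the first bullet, here’s a tiny sketch that flags NIST AI RMF functions with no mapped controls. The control names are placeholders, not prescribed by the framework.

```python
# Sketch mapping controls to the four NIST AI RMF functions; the control
# names are placeholders, not language from the framework itself.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

controls = {
    "govern": ["AI policy & accountable owner", "vendor oversight"],
    "map": ["model inventory", "use-case risk classification"],
    "measure": ["bias/robustness evals", "labeling-coverage metrics"],
    "manage": [],  # gap: no incident response runbook yet
}


def rmf_gaps(control_map: dict[str, list[str]]) -> list[str]:
    """Return RMF functions with no implemented controls."""
    return [fn for fn in RMF_FUNCTIONS if not control_map.get(fn)]


print(rmf_gaps(controls))  # ['manage'] -> prioritize incident workflows
```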

Think of it as investing in your AI compliance OS now, so when regulators flip the switch in California, Brussels, or perhaps one day Washington, D.C., you’re already ahead of the curve.
