A practical guide to the new regulatory landscape for businesses using or deploying AI
Here’s the change that should get your attention: as of January 1, 2026, if your AI causes harm and you get sued, you can no longer argue that the AI acted on its own. That defense is gone. AB 316 eliminated it, and it applies to every business in California using AI in any capacity.
That’s just one of eight AI-related laws (plus new CCPA regulations) taking effect in 2026. California has enacted a wave of artificial intelligence regulation covering everything from training data disclosure to frontier model safety reporting. Some laws target only the largest AI developers. Others – like the liability change above – apply to every organization deploying AI tools, regardless of size or revenue.
For IT leaders and MSPs, the challenge is cutting through the noise to identify which laws actually require action. This guide breaks down each law by who must comply, with specific thresholds and penalties. (And since I live and operate in California, I’m doing one article on California now and a later one on other states’ AI laws – except maybe New York, which may warrant its own.)
Quick Reference: Who Must Comply?
The following table breaks down each major AI law by the type of organization it affects. Use this to quickly identify which regulations require your attention.
| Law | Effective | Small Business (Under $5M) | Mid-Market ($5M-$500M) | Enterprise ($500M+) | Key Threshold |
|---|---|---|---|---|---|
| AB 2013 Training Data Disclosure | Jan 1, 2026 | If developing GenAI | If developing GenAI | If developing GenAI | Any developer of generative AI available to CA residents |
| SB 942 AI Transparency Act | Jan 1, 2026 | No | Unlikely | If >1M users | >1 million monthly users of GenAI system (images/video/audio only) |
| AB 489 Healthcare AI Restrictions | Jan 1, 2026 | If healthcare AI | If healthcare AI | If healthcare AI | Any AI suggesting licensed healthcare professional oversight |
| AB 316 AI Liability (No Autonomous Defense) | Jan 1, 2026 | Yes | Yes | Yes | All defendants in civil actions involving AI-caused harm |
| AB 325 Algorithmic Pricing | Jan 1, 2026 | If using pricing AI | If using pricing AI | If using pricing AI | Any business using algorithms with competitor data for pricing |
| AB 621 Deepfake Protections | Jan 1, 2026 | Yes | Yes | Yes | Anyone creating or distributing non-consensual deepfakes |
| SB 53 Frontier AI Safety | Jan 1, 2026 | No | No | If frontier developer | >10²⁶ FLOPs training + $500M revenue for full requirements |
| SB 243 Chatbot Safety Reporting | Jul 1, 2026 | If operating chatbots | If operating chatbots | If operating chatbots | AI chatbot operators must report self-harm concerns |
| CCPA ADMT Automated Decision-Making | Jan 1, 2026 | Unlikely | If CCPA applies | Yes | Businesses meeting CCPA thresholds using automated decisions |
The Laws in Detail
AB 2013: Training Data Disclosure – Effective January 1, 2026
This law requires developers of generative AI systems to publicly disclose information about their training datasets. Unlike most other AI laws, AB 2013 has no revenue or user thresholds – if you develop GenAI and make it available to Californians, you must comply.
What must be disclosed:
Developers must post a “high-level summary” on their website covering the sources of training data, whether copyrighted material was used, whether personal information as defined by the CCPA was included, and any licensing arrangements. The law applies retroactively to any GenAI system released or substantially modified since January 1, 2022.
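The statute doesn’t prescribe a format for the summary. Purely as an illustration, the required topics could be captured in a structure like this (field names are my shorthand, not statutory language):

```python
# Illustrative only: AB 2013 requires a high-level summary, not a schema.
training_data_summary = {
    "system_name": "ExampleGen v2",        # hypothetical GenAI system
    "release_date": "2025-09-01",
    "data_sources": ["web crawl snapshot", "licensed news archive"],
    "contains_copyrighted_material": True,
    "contains_ccpa_personal_information": False,
    "licensing": "News archive used under a commercial license",
}
```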
Who this affects: Any company that designs, codes, produces, or substantially modifies a generative AI system available to California residents – including free tools and internal systems made publicly accessible.
Who this doesn’t affect: Companies that only use third-party AI tools (like Microsoft Copilot, Claude, or ChatGPT) without modifying them. You’re a user, not a developer.
SB 942: California AI Transparency Act – Effective January 1, 2026
This law targets large-scale generative AI providers, requiring them to offer free detection tools and content labeling capabilities. The threshold is specific: more than one million monthly users of a system that generates images, video, or audio content.
Requirements for covered providers:
Providers must offer a free, publicly accessible AI detection tool that can identify content created by their system. They must also enable users to add visible labels to AI-generated content and embed hidden watermarks containing the provider’s name, timestamp, and unique identifier.
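SB 942 doesn’t dictate how the hidden disclosure gets embedded. As a minimal sketch of the idea – plain PNG text metadata via Pillow, which is trivially stripped and much weaker than the robust watermarking or C2PA-style provenance a real provider would use – it could look like this:

```python
# Minimal sketch: embed SB 942-style provenance fields as PNG text metadata.
# Plain metadata is trivially stripped; a real provider would use robust
# watermarking or a provenance standard such as C2PA.
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_provenance(img: Image.Image, path: str, provider: str) -> str:
    """Save an image with provider name, timestamp, and a unique ID embedded."""
    meta = PngInfo()
    meta.add_text("ai_provider", provider)
    meta.add_text("ai_generated_at", datetime.now(timezone.utc).isoformat())
    content_id = str(uuid.uuid4())
    meta.add_text("ai_content_id", content_id)
    img.save(path, pnginfo=meta)
    return content_id

# Usage with a hypothetical generated image:
# save_with_provenance(generated_img, "out.png", "ExampleGen Inc.")
```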
Important limitation: SB 942 explicitly excludes text-only AI systems. Chatbots, writing assistants, and code generators are not covered – only systems producing multimedia content.
AB 489: Healthcare AI Misrepresentation – Effective January 1, 2026
This law prohibits AI systems from implying they provide services overseen by licensed healthcare professionals unless such oversight actually exists. The prohibition applies to both marketing materials and in-product functionality.
What’s prohibited: Using titles, icons, post-nominal letters (like “MD” or “RN”), or design elements that could suggest a licensed professional oversees the AI’s output when no such oversight exists. Each misleading representation can constitute a separate offense.
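One low-effort safeguard – my suggestion, not something the law requires – is linting product copy and UI strings for protected titles before release. A rough sketch with a small sample of terms:

```python
# Illustrative lint for AB 489 risk: flag healthcare titles and post-nominal
# letters in product copy. The term list is a small sample, not the full set
# of titles protected under California licensing law.
import re

PROTECTED_TERMS = [
    r"\bM\.?D\.?\b", r"\bR\.?N\.?\b", r"\bD\.?O\.?\b",
    r"\bphysician\b", r"\bnurse\b", r"\bdoctor\b",
]
PATTERN = re.compile("|".join(PROTECTED_TERMS), re.IGNORECASE)

def flag_healthcare_titles(copy_text: str) -> list[str]:
    """Return protected-title matches found in UI or marketing copy."""
    return [m.group(0) for m in PATTERN.finditer(copy_text)]

print(flag_healthcare_titles("Chat with our AI nurse, reviewed by an MD!"))
# ['nurse', 'MD'] -> escalate for licensed-oversight review
```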
Who this affects: Developers and deployers of healthcare-related AI tools, including symptom checkers, mental health chatbots, diagnostic assistance tools, and patient communication systems.
AB 316: No Autonomous Harm Defense – Effective January 1, 2026
This law prevents defendants in civil lawsuits from claiming that AI acted autonomously as a defense against liability. If you develop, modify, or use AI that allegedly causes harm, you cannot shift blame to the technology’s independent decision-making.
Practical impact: Companies using AI in customer-facing applications, automated decision systems, or any context where AI outputs could cause harm should review their liability exposure. Standard practice should include human oversight checkpoints and clear documentation of AI involvement in business processes.
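AB 316 removes a defense rather than mandating specific controls, so what “clear documentation” looks like is up to you. One illustrative pattern is an audit-logged checkpoint that records the AI’s role in each decision and forces human sign-off on low-confidence actions – the threshold and log fields below are assumptions of mine, not anything in the statute:

```python
# Sketch of a human-oversight checkpoint for AI-assisted decisions. The
# risk threshold and log format are illustrative assumptions; AB 316
# removes a defense, it does not mandate a specific control.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

REVIEW_THRESHOLD = 0.7  # assumed cutoff for mandatory human review

def record_ai_decision(action: str, model: str, confidence: float,
                       human_reviewer: str | None = None) -> dict:
    """Log the AI's role in a decision; require sign-off on risky ones."""
    if confidence < REVIEW_THRESHOLD and human_reviewer is None:
        raise ValueError(f"Human review required before acting on: {action!r}")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "model": model,
        "confidence": confidence,
        "human_reviewer": human_reviewer,
    }
    log.info(json.dumps(entry))  # in production, ship to a durable audit store
    return entry

# Usage: record_ai_decision("deny refund", "gpt-4o", 0.62, human_reviewer="j.doe")
```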
Who this affects: Every organization that uses AI in any capacity. This is a litigation defense issue, not a compliance requirement – but it changes the risk calculus for AI deployment.
AB 325: Algorithmic Pricing Restrictions – Effective January 1, 2026
This law amends California’s antitrust statute to address AI-driven price coordination. It prohibits using “common pricing algorithms” – methodologies that use competitor data to recommend, align, or influence prices – as part of anticompetitive conduct.
Two categories of liability: First, using a common pricing algorithm as part of a contract or conspiracy to restrain trade. Second, coercing others to adopt algorithm-recommended prices. The law targets the growing concern that AI pricing tools could enable tacit collusion without explicit agreements.
Who this affects: Retailers, hospitality businesses, and any company using third-party pricing optimization tools that incorporate competitor data. Review your pricing tool contracts and understand what data sources they use.
AB 621: Deepfake Protections – Effective January 1, 2026
This law strengthens legal protections against non-consensual, sexually explicit AI-generated content. It expands the definition of “digitized sexually explicit material” and creates new causes of action against those who create or distribute such content.
Key provisions: The law clarifies that minors cannot consent to the creation or distribution of deepfake pornography depicting them. It also creates liability for anyone who “knows or reasonably should know” that material depicts a minor, even if the material is entirely AI-generated rather than based on real images.
Who this affects: This applies broadly to individuals and organizations involved in creating or distributing non-consensual intimate imagery, including platforms that host such content. While most legitimate businesses won’t directly trigger this law, it’s relevant for content moderation policies and acceptable use terms.
SB 53: Frontier AI Safety Requirements – Effective January 1, 2026
This law establishes the first U.S. state-level safety framework for frontier AI models. It applies only to developers training models at extraordinary computational scales – specifically, models using more than 10²⁶ floating-point operations (FLOPs).
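For a sense of scale, a common back-of-envelope approximation from the scaling-law literature – not anything in the statute – estimates training compute at roughly 6 × parameters × training tokens:

```python
# Back-of-envelope training-compute estimate using the common ~6*N*D
# approximation (a scaling-literature heuristic, not part of SB 53).
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

THRESHOLD = 1e26  # SB 53's frontier-model compute threshold

# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
flops = training_flops(1e12, 20e12)
print(f"{flops:.1e} FLOPs -> over the frontier threshold? {flops > THRESHOLD}")
# 1.2e+26 FLOPs -> over the frontier threshold? True
```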
Two tiers of requirements:
All frontier developers must publish transparency reports before deploying models, report critical safety incidents to the California Office of Emergency Services within 15 days (24 hours for imminent threats), and maintain whistleblower protections.
“Large frontier developers” – those with annual revenue exceeding $500 million – face additional requirements including publishing comprehensive safety frameworks, documenting catastrophic risk assessments, and implementing third-party evaluations.
Who this affects: Currently, approximately five to eight companies globally meet the threshold, including OpenAI, Anthropic, Google DeepMind, Meta, and Microsoft. However, companies approaching these thresholds should begin building compliance infrastructure early.
SB 243: Chatbot Safety Reporting – Effective July 1, 2026
This law requires companies operating AI-powered chatbots to report safety concerns to appropriate authorities when users express thoughts of self-harm or harm to others. It takes effect July 1, 2026 – six months after the other laws on this list.
What’s required: Chatbot operators must have mechanisms in place to identify concerning user communications and processes for reporting them. The specific reporting requirements and thresholds are still being clarified through regulatory guidance.
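Since the detection method is unspecified, a reasonable starting point is a flagging layer that routes concerning messages to a human escalation queue before the bot responds. A deliberately simple sketch – the phrase list is illustrative, and a production system would pair a trained classifier with human review:

```python
# Minimal sketch of a chatbot safety-flagging layer for SB 243-style duties.
# The phrase list is illustrative only; real systems would combine a trained
# classifier with human review, and reporting duties depend on final guidance.
SELF_HARM_SIGNALS = ["hurt myself", "kill myself", "end my life", "suicide"]

def needs_escalation(message: str) -> bool:
    """Flag messages expressing possible self-harm for human review."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

def handle_message(message: str, escalation_queue: list[str]) -> None:
    """Route concerning messages to humans before generating a reply."""
    if needs_escalation(message):
        escalation_queue.append(message)   # human team reviews and reports
        return                             # respond with crisis resources instead
    # ...otherwise proceed with the normal chatbot completion
```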
Who this affects: Any organization operating customer-facing chatbots, mental health apps, companion AI, or other conversational AI systems where users might disclose personal distress. This includes both purpose-built mental health tools and general-purpose chatbots that users might confide in.
CCPA Automated Decision-Making Technology Regulations – Effective January 1, 2026
New regulations under the California Consumer Privacy Act create specific requirements for businesses using automated decision-making technology (ADMT). These apply to businesses already subject to CCPA thresholds.
CCPA thresholds reminder: Annual gross revenue over $25 million, or buying/selling/sharing personal information of 100,000+ California consumers annually, or deriving 50%+ of revenue from selling California consumer data.
New ADMT requirements: Businesses must provide consumers with meaningful information about their use of automated decision-making, including the logic involved and likely outcomes. Additional requirements may include access, opt-out, and human review rights for certain high-impact decisions.
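One way to operationalize this – purely illustrative, since the regulations don’t prescribe a data model – is keeping a structured record for each ADMT use that can feed both pre-use notices and consumer access requests (the field names are mine):

```python
# Illustrative record backing ADMT pre-use notices and access requests.
# Field names are my own shorthand, not terms from the regulations.
from dataclasses import dataclass

@dataclass
class ADMTDisclosure:
    purpose: str                   # what decision the ADMT supports
    logic_summary: str             # plain-language description of the logic
    likely_outcomes: list[str]     # the range of outcomes a consumer may see
    human_review_available: bool   # whether a human appeal path exists
    opt_out_available: bool        # whether consumers can opt out

screening = ADMTDisclosure(
    purpose="Resume screening for warehouse roles",
    logic_summary="Ranks applications by keyword and experience match",
    likely_outcomes=["advance to interview", "rejection notice"],
    human_review_available=True,
    opt_out_available=True,
)
```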
The CCPA ADMT regulations deserve their own deep dive – we’ll cover the specific requirements, opt-out mechanisms, and implementation considerations in a separate post.
What MSPs Should Do Now
1. Audit your AI tooling. Document every AI system you deploy or recommend to clients. Distinguish between using third-party tools and developing or modifying AI systems (see the inventory sketch after this list).
2. Review client AI exposure. Identify which clients develop GenAI, use healthcare AI, or deploy algorithmic pricing tools. These clients may need compliance assistance.
3. Update service agreements. Consider liability provisions related to AB 316’s elimination of the autonomous harm defense. Clarify responsibilities for AI-related compliance.
4. Monitor CCPA applicability. Clients approaching CCPA thresholds need to understand the new ADMT regulations taking effect.
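For item 1, a structured inventory record per tool keeps the use-versus-develop distinction explicit – the fields below are my own suggestion, not a regulatory requirement:

```python
# Sketch of an AI-tooling inventory record for MSP audits.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                # e.g. "Microsoft Copilot"
    vendor: str
    role: str                # "use" vs. "develop" vs. "modify" (AB 2013 hinges on this)
    client: str | None       # which client deployment, if any
    data_handled: list[str]  # e.g. ["PII", "PHI", "pricing data"]
    laws_in_scope: list[str] # e.g. ["AB 316", "CCPA ADMT"]

inventory = [
    AIToolRecord("Microsoft Copilot", "Microsoft", "use", None,
                 ["internal documents"], ["AB 316"]),
]
```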
Looking Ahead
California’s 2026 AI laws represent just the beginning. Additional requirements take effect in July 2026, including SB 243’s chatbot safety reporting requirements. New York’s RAISE Act may follow a similar path to SB 53. And federal legislation, while stalled, remains a possibility that could preempt or supplement state requirements.
For IT leaders, the practical approach is building flexibility into compliance programs. Document your AI usage, maintain clear audit trails, and establish review processes for new AI deployments. The regulatory landscape will continue evolving, but organizations with strong AI governance foundations will adapt more easily than those scrambling to catch up.
This article provides general information about California AI laws taking effect in 2026. It is not legal advice. Consult with qualified legal counsel for guidance specific to your organization’s situation.
Want a PDF copy to send out to your team? (Or, in my case, to load onto a reMarkable or Kindle Scribe for easier reading?) Click here for the PDF version: California’s 2026 AI Laws: What IT Leaders Need to Know