California Governor Gavin Newsom speaks during the 2025 Clinton Global Initiative (CGI) in New York City, U.S., September 24, 2025.
Kylie Cooper | Reuters
California Governor Gavin Newsom on Monday signed into state law a requirement that ChatGPT developer OpenAI and other big players disclose how they plan to mitigate potentially catastrophic risks from their cutting-edge AI models.
California is home to top AI companies including OpenAI, Alphabet’s Google, Meta Platforms, Nvidia and Anthropic, and with this law it seeks to lead on regulation of an industry crucial to its economy, Newsom said.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a press release.
Newsom’s office said the law, known as SB 53, fills a gap left by the U.S. Congress, which so far has not passed broad AI legislation, and provides a model for the U.S. to follow.
If federal standards are put in place, Newsom said, the state legislature should “ensure alignment with those standards – all while maintaining the high bar established by SB 53.”
Last year, Newsom vetoed California’s first attempt at AI legislation, which had faced fierce industry pushback. That bill would have required companies that spent more than $100 million on their AI models to hire third-party auditors each year to evaluate risk assessments, and it would have allowed the state to levy penalties in the hundreds of millions of dollars.
The new law requires companies with more than $500 million in revenue to assess the risk that their cutting-edge technology could break free of human control or aid the development of bioweapons, and to disclose those assessments to the public. It allows for fines of up to $1 million per violation.
Jack Clark, co-founder of AI company Anthropic, called the law “a strong framework that balances public safety with continued innovation.”
The industry still hopes for a federal framework that would replace the California law, as well as others like it enacted recently in Colorado and New York. Last year, a bid by some Republicans in the U.S. Congress to block states from regulating AI was voted down in the Senate 99-1.
“The biggest danger of SB 53 is that it sets a precedent for states, rather than the federal government, to take the lead in governing the national AI market – creating a patchwork of 50 compliance regimes that startups don’t have the resources to navigate,” said Collin McCune, head of government affairs at Silicon Valley venture capital firm Andreessen Horowitz.
U.S. Representative Jay Obernolte, a California Republican, is working on AI legislation that would preempt some state laws, his office said, though it declined to comment further on pending legislation.
Some Democrats are also discussing ways to enact a federal standard.
“It’s not whether we’re gonna regulate AI, it’s do you want 17 states doing it, or do you want Congress to do it?” U.S. Representative Ted Lieu, a Democrat from Los Angeles, said at a recent hearing on AI legislation in the U.S. House of Representatives.