11 September 2024

California AI Bill Would Make Companies Liable For "Critical Harms" To Humanity


California's legislature has passed AI-regulating legislation that, if signed, would directly take on developers of large AIs. The bill awaits the governor's signature; it is not clear whether he will sign it. The bill would create liability for developers of AI systems above a certain size (measured by computing power, among other factors) whose systems are used in incidents that cause "critical harm" to humanity, such as mass casualties or widespread losses from a cyberattack.

WHY IT MATTERS

The bill takes a different tack than most existing AI regulatory frameworks. The EU and other jurisdictions that have passed AI laws so far rely mostly on self-enforcing, risk-based models (with more internal risk controls required for higher-risk AIs). The California bill would set a size threshold and require covered AIs to implement, measure, and report on the safety measures they employ. It would also create a safety regulator for AI. Effectively, there would be an oversight agency ensuring that AIs of a certain size operate safely, just as there are agencies that promote physical workplace safety for workers in hazardous industries.

In addition to the oversight angle, the bill includes a direct liability piece. Liability for downstream AIs built on open technology would attach to the original developer unless the developer of the downstream AI spends more than $10 million to develop its model. Thus, the bill would require large developers whose models are likely to serve as the basis for spin-off AI models to build safety measures into their systems that can flow downstream.

As of early September, the governor had not signed the bill.

For open source models and their derivatives, the bill holds the original developer responsible unless another developer spends more than $10 million creating a derivative of the original model. The bill also requires a safety protocol to prevent misuse of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address the risks posed by their AI models, and must hire third-party auditors annually to assess their AI safety practices.

techcrunch.com/...

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
