ChatGPT maker OpenAI has announced its support for a US state bill that would protect artificial intelligence (AI) labs from liability in cases where AI models are used to cause mass harm or significant property damage. According to a report by Wired, the proposed Illinois bill, SB 3444, would shield AI developers if incidents result in the death or serious injury of 100 or more people or at least $1 billion in property damage, provided the company did not act intentionally or recklessly and has published safety and transparency reports. The move signals a shift from OpenAI's earlier stance of pushing regulations that could increase liability for AI developers.

The bill defines "critical harms" to include scenarios such as the use of AI to create chemical, biological, radiological, or nuclear weapons, or cases where an AI system independently engages in actions that would be considered criminal if performed by a human. Under SB 3444, companies behind such systems, including those like ChatGPT, may not be held liable if they meet the outlined conditions.
What OpenAI said about the new US bill that will protect AI labs
In a statement to Wired, OpenAI spokesperson Jamie Radice said, "We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."

In testimony supporting the bill, OpenAI's Caitlin Niedermeyer argued for a broader federal approach to AI regulation, stating the need to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety."

"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," she added.

The bill applies to "frontier models," defined as systems trained with more than $100 million in computational resources, potentially covering companies such as Google, Anthropic, Meta, and others.

However, the proposal has drawn criticism. Scott Wisor of the Secure AI project told Wired, "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90% of people oppose it. There's no reason existing AI companies should be facing reduced liability."

The controversy comes amid an unresolved debate over who bears responsibility for injuries arising from AI technology. While the bill addresses large-scale instances of AI harm, some companies have already been taken to court over smaller cases in which people were harmed through their interactions with AI systems.

