•OpenAI is supporting Illinois Senate Bill 3444, which seeks to limit the liability of 'frontier AI developers' for 'critical harms' caused by their models.
•The bill defines 'critical harms' as events like death/serious injury to 100+ people, $1 billion+ in property damage, or AI-facilitated creation of CBRN weapons.
•Developers are exempt from liability if the harm was neither intentional nor reckless and they have published safety, security, and transparency reports.
•OpenAI argues this approach reduces serious risks, avoids a patchwork of state laws, and preserves US leadership in AI innovation, marking a shift in its legislative strategy.
•This move highlights the industry's push for a federal regulatory framework to standardize AI liability, though the bill's passage is considered unlikely by some experts.