OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

Why it matters: AI labs could avoid lawsuits over incidents causing 100 or more deaths or $1 billion in damage, shifting financial risk from developers to victims and insurers.
- OpenAI backs SB 3444, saying it reduces serious‑harm risk and prevents a patchwork of state regulations.
- SB 3444 defines a “frontier model” as any AI system costing over $100 million in compute. It shields developers from liability for “critical harms” (100+ deaths or $1 billion in damage), provided the harms were not intentional or reckless and the developer publishes safety, security, and transparency reports.
- AI policy experts warn the bill goes further than previous proposals and could become an industry-wide liability shield, a reversal of OpenAI’s earlier opposition to liability legislation.
- Other AI labs such as Google, xAI, Anthropic, and Meta would also fall under the bill’s protection, extending the shield to most major U.S. AI developers.

OpenAI has thrown its weight behind Illinois’ SB 3444, a bill that would give liability protection to developers of “frontier” AI models for mass‑death or billion‑dollar harms, provided they publish safety reports and aren’t reckless. The move marks a strategic pivot from opposing liability bills and could set a new industry standard, a shift AI policy experts view as extreme.
