Families of Tumbler Ridge shooting victims sue OpenAI over ChatGPT role

OpenAI is facing new lawsuits over the mass shooting in Tumbler Ridge, British Columbia, alleging that the artificial intelligence company could have prevented the attack but failed to act on warnings surfaced by its ChatGPT chatbot. The lawsuits, filed Wednesday in federal court in San Francisco, target OpenAI and its CEO, Sam Altman.

One case was brought by a 12-year-old girl who was shot during the incident and remains in intensive care, along with her mother. Another lawsuit was filed by the mother of a girl killed in the shooting. The complaints claim that OpenAI knew from her use of ChatGPT that Jesse Van Rootselaar, the 18-year-old suspect behind the February massacre at Tumbler Ridge Secondary School, was planning the attack, but chose not to alert authorities.

OpenAI accused of prioritizing profits over safety

According to the lawsuits, OpenAI made a “conscious decision not to warn authorities” to avoid having to contact police each time its safety team detected a user planning violence. “ChatGPT played a role in the mass shooting and OpenAI could have, and should have, prevented it,” the complaints state.


On February 10, Van Rootselaar allegedly killed eight people, including her mother, her stepbrother, and five children at the school, and injured more than two dozen others. Van Rootselaar was found dead from a self-inflicted wound. OpenAI said it banned Van Rootselaar last June for violating usage policies, after messages from her account were flagged as showing potential for violence, but police were not notified. The Wall Street Journal reported that concerned employees urged the company to report the situation.

OpenAI responds with safety improvements

“The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence,” OpenAI said in a statement. The company added that it has strengthened safeguards, including improving how ChatGPT responds to signs of distress, connecting users with mental health resources, and enhancing detection of repeat policy violators.

Since 2024, a series of lawsuits has targeted chatbot makers, mostly OpenAI and ChatGPT, alleging that the technology harms children and adults by fostering delusions and despair and by contributing to suicide or murder-suicide. This latest case highlights ongoing concerns about AI safety and accountability.
