Victims of the mass shooting in Tumbler Ridge, British Columbia, and their families have filed a wrongful death lawsuit against Sam Altman and his company, OpenAI. The lawsuit, filed by lawyers representing seven people affected by the February 10 shooting, accuses Altman and the artificial intelligence company of failing to warn authorities and of aiding and abetting the shooting, among other allegations.
Plaintiffs seek landmark damages in California
The plaintiffs include 12-year-old Maya Gebala, who was severely injured after being shot in the head, and the father of Abel Mwansa Jr., also 12, who was killed in the shooting. Jesse Van Rootselaar, 18, shot and killed her mother and half-brother at their home before fatally shooting five children and a teacher at the secondary school. Numerous others were injured. Van Rootselaar died of a self-inflicted gunshot.
Lawyers with the firm Rice Parsons Leoni & Elliot LLP said Van Rootselaar's ChatGPT account was banned prior to the shooting for "disturbing content," which allegedly included planning violent scenarios. Despite some 12 different OpenAI employees imploring the company to notify Canadian law enforcement of the shooter's plans, no further action was taken, the firm said in a media release.
Why the lawsuit was filed in California
The law firm presented seven lawsuits filed by shooting victims. Litigating the case in Canada would be challenging because Canadian damages for pain and suffering are capped at about $470,000; the plaintiffs are instead suing OpenAI in California "to pursue landmark damage awards." The firm also noted that a lawsuit filed in British Columbia by Maya Gebala's family has been discontinued.
OpenAI's response to the shooting
After the shooting, OpenAI commented on the incident, stating that its staff had flagged Van Rootselaar's worrisome behavior while using ChatGPT, but they did not go to the police with their concerns. The artificial intelligence company admitted that Van Rootselaar got around the chatbot ban by creating a second account.
In a statement, OpenAI said it has a "zero-tolerance policy for using our tools to assist in committing violence." The company added that it has strengthened its safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how it assesses and escalates potential threats of violence, and improving detection of repeat policy violators.
Altman's apology
Last week, Altman apologized for OpenAI not alerting authorities about Van Rootselaar's behavior and account bans. "The pain your community has endured is unimaginable," Altman wrote in a letter. "I am deeply sorry we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."