OpenAI Confirms Enhanced Policies Would Have Triggered Police Referral for Shooting Suspect
In a significant disclosure to Canadian legislators, OpenAI Inc. stated that under its newly revised protocols, it would have proactively referred a banned ChatGPT user to law enforcement. That user later emerged as the primary suspect in the mass shooting in Tumbler Ridge, British Columbia, one of the most devastating in Canadian history.
Undetected Second Account and Missed Opportunities
The artificial intelligence giant further revealed on Thursday that the suspected perpetrator of the Tumbler Ridge tragedy, 18-year-old Jesse Van Rootselaar, maintained a secondary ChatGPT account that remained undetected by company systems until after police publicly released her identity. This admission follows OpenAI's previous acknowledgment that it had flagged another account belonging to Van Rootselaar in June 2025—approximately eight months before the February killings occurred.
The company employs automated scanning systems designed to identify potential misuse, including indications of violent activity. While OpenAI considered referring the June 2025 account to law enforcement at the time, company officials determined the content did not present a credible or imminent threat and therefore did not meet their established threshold for reporting.
Political Fallout and Demands for Accountability
The revelation about Van Rootselaar's ChatGPT usage has ignited substantial anger and prompted serious questions from senior Canadian politicians, who summoned OpenAI executives to Ottawa this week to scrutinize the company's reporting policies and procedures.
In a formal letter to AI Minister Evan Solomon following their meeting, Ann O'Leary, OpenAI's vice-president of global policy, wrote: "With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today."
The massacre claimed nine lives, including Van Rootselaar, who appears to have died by suicide after the attack.
Commitments to Improved Detection and Communication
OpenAI has committed to strengthening its detection capabilities to better identify attempts to circumvent its safeguards. The company additionally pledged to establish direct communication channels with Canadian law enforcement agencies to ensure rapid information sharing when potential real-world violence is identified.
An OpenAI spokesperson confirmed via email that CEO Sam Altman has offered to meet with both AI Minister Evan Solomon and British Columbia Premier David Eby to address concerns directly.
Political Leaders Express Grave Concerns
Premier Eby expressed profound disappointment with OpenAI's handling of the situation, telling reporters late Thursday: "Clearly, they tragically missed the mark in bringing this information forward. These are not small stakes, and it illustrates why these companies cannot be trusted to set their own reporting thresholds, especially when there are no apparent consequences for not meeting them."
Eby voiced particular concern about the information retention and reporting standards across other artificial intelligence companies, advocating for the establishment of a national reporting standard. He compared this potential requirement to the duty-of-care obligations that professionals like doctors and social workers must uphold regarding information disclosure to authorities.
"We don't want an AI company operating in Canada that says 'Hey, sign up with us, we're the company that doesn't tell the cops if you're planning a violent attack,'" Eby emphasized.
The Premier noted uncertainty about what OpenAI's rule modifications will ultimately achieve, as details about the alleged shooter's interactions with ChatGPT remain unclear—including whether the platform assisted in planning the attack. He indicated that such information would eventually become public through either a coroner's inquest or a formal public inquiry.
Eby concluded with a pointed assessment: "OpenAI had this information, and they could have prevented this incident if they had the right standard in place."
A spokesperson for Minister Solomon confirmed his office is "reviewing OpenAI's letter carefully and will have more to say in the coming days" as the government determines its response to these critical revelations about artificial intelligence safety protocols and their real-world implications.
