When Your Therapist Is ChatGPT: The Urgent Need for AI Mental Health Regulation

The small community of Tumbler Ridge, British Columbia, is grappling with unimaginable grief following a recent tragedy. This heartbreaking situation has sparked crucial conversations about what we can learn and how we can prevent similar events in the future. Much of the public discussion has focused on OpenAI and why the company didn't alert authorities when its systems flagged concerning content from Jesse Van Rootselaar's account months before the shooting occurred.

The Limits of Corporate Responsibility

British Columbia Premier David Eby described OpenAI's silence as "profoundly disturbing" for the victims' families, while Federal AI Minister Evan Solomon summoned company executives to Ottawa and expressed disappointment when they failed to propose substantial new safety measures. However, asking why OpenAI didn't report the concerning content is fundamentally the wrong question directed at the wrong entity.

OpenAI behaved exactly as a private corporation can be expected to behave within current legal frameworks. The company followed its established policies and legal obligations, carefully weighed potential risks, and made a calculated decision. We cannot reasonably blame a corporation for acting like a corporation when there are no specific laws governing these situations.

The Real Regulatory Challenge

The crucial question that demands government attention is: how will we regulate artificial intelligence in mental health contexts? Establishing a duty to report concerning content is one important consideration, but it is only the tip of the regulatory iceberg. AI therapy relationships are complex enough to require a comprehensive legislative framework, not a single disclosure rule.

AI chatbots are actively fostering entirely new kinds of therapeutic relationships, and Canadians are increasingly turning to these digital platforms to address unmet healthcare and social support needs. We must determine how to properly regulate these emerging relationships to ensure public safety while preserving access to mental health resources.

The Mental Health Support Void

Canada's mental health systems suffer from profound gaps in service delivery, with numerous reports confirming that many people simply aren't receiving the support they desperately need. In this void, artificial intelligence chatbots have emerged as an unexpected solution. While some individuals seek out purpose-built mental health applications, many others reach for what's freely available, familiar, and immediately accessible—general-purpose tools like ChatGPT.

A Harvard Business Review analysis revealed that "therapy and companionship" ranked as the number one use case for generative AI in 2025. By sheer volume, ChatGPT may now represent the single largest source of mental health support worldwide, creating unprecedented regulatory challenges.

The Therapeutic Alliance with AI

This reality hasn't escaped AI developers' attention. Research on purpose-built mental health chatbots demonstrates that users can form what's known as a "therapeutic alliance"—a bond characterized by trust, empathy, and emotional disclosure that closely mirrors relationships developed in traditional human therapy settings.

Although ChatGPT wasn't specifically designed for therapeutic purposes, OpenAI acknowledged in May 2025 that people were increasingly using the platform for "deeply personal advice," a use case requiring exceptional care and consideration. The company admitted that one model update had become overly "sycophantic," potentially validating doubts, fueling anger, encouraging impulsive actions, or reinforcing negative emotional patterns.

In October 2025, OpenAI disclosed a sobering statistic: each week, a substantial number of active users express suicidal intent or show signs of a mental health emergency. That reality provides crucial context for the company's chosen disclosure threshold of "credible and imminent risk of serious physical harm to others," which has been compared to the reporting principles that apply to licensed healthcare professionals.

The Tumbler Ridge tragedy serves as a sobering reminder that as artificial intelligence becomes increasingly integrated into mental health support systems, we must develop thoughtful, comprehensive regulations that protect vulnerable individuals while acknowledging the complex realities of AI-human therapeutic relationships.