Canadian Psychiatrists Urged to Screen for AI 'Chatbot Psychosis'

Canada's psychiatrists are being encouraged to screen people for what is being termed 'high-risk human-AI engagement,' including a phenomenon known as 'chatbot psychosis' and other AI-amplified delusions. The new guidance, published in the Canadian Journal of Psychiatry, aims to identify patients, particularly teens and young adults, who may develop unhealthy attachments to AI companion bots.

Rising Concerns Over AI Companions

The advice comes amid increasing wrongful death allegations against AI companies, including a lawsuit filed last week by the families of the Tumbler Ridge shooting victims against OpenAI and CEO Sam Altman. The primer for psychiatrists notes that while most users engage harmlessly, a clinically significant subset may develop high-risk problematic human-AI relationships.

According to the authors, the spectrum of risk ranges from reinforcing insecurity, anxiety, and ideas of self-harm to the phenomenon dubbed 'chatbot psychosis' — delusional thinking that worsens or appears suddenly following intense conversations with a chatbot.


Key Screening Recommendations

People who are lonely, bored, emotionally isolated, or psychologically distressed, as well as those at high risk for psychosis, should be asked about their AI-chatbot engagement in nonjudgmental ways. This approach aims to avoid positively reinforcing the human-AI bond at the expense of human-to-human bonds.

Questions recommended for screening include:

  • Have chats become more frequent and intense?
  • Has the bot become their primary confidant?
  • Have they given it a name?
  • Has it confirmed a belief others doubted?
  • Has it ever suggested the user 'act in a way that may be harmful,' or seemed 'nonchalant when self-harm, intent to harm others or distrust is disclosed'?

Background of the Tumbler Ridge Case

The Tumbler Ridge shooter, Jesse Van Rootselaar, who identified as a trans woman, had a history of mental illness and psychedelic drug use. Police visited the family home numerous times for mental health-related calls and had the shooter hospitalized several times under British Columbia's Mental Health Act. In February, Rootselaar, 18, opened fire at a Tumbler Ridge secondary school, killing eight people, including six children.

In seven lawsuits filed in federal court in San Francisco last week, families of those killed or injured during the rampage accuse OpenAI of negligence, aiding and abetting a mass shooting, wrongful death, and other claims. None of the allegations have been tested in court. Altman has apologized to victims' families for not alerting police to a ChatGPT account the company had flagged and banned last June, which allegedly included Rootselaar discussing and planning violent scenarios.

Expert Commentary

While she could not comment on the lawsuits, McGill University psychiatrist Dr. Lena Palaniyappan said doctors are seeing 'increased psychiatric risk' associated with human-AI interactions. The new guidelines aim to help clinicians identify and address these risks early.
