Ottawa Doctor Embraces AI as Tool for Better Patient Discussions

When the doctor-patient relationship also includes "Dr. Artificial Intelligence," one Ottawa physician says he accepts, and even embraces, AI as a tool for better discussions with his patients.

Dr. Mark Nassim: A Pro-Technology Stance

Dr. Mark Nassim, a family physician in Ottawa, accepts — even embraces — patients' use of AI as a way to open helpful conversations that can build trust. He acknowledges that many patients now consult AI tools before seeking medical help, and he sees this as an opportunity rather than a threat.

The Rise of Patient AI Use

It is the first thing many patients do before seeking medical help: Consult Dr. AI. According to a recent Canadian Medical Association (CMA) survey, 90 per cent of Canadians use AI for health information, although only 27 per cent trust it to provide accurate health advice. The CMA warns that while AI represents the future of medicine, it comes with risks, particularly misinformation that can harm patients.


Risks of Misinformation

The CMA has actively raised awareness about medical misinformation and its potential harms. A survey from the CMA and Abacus Data found that 97 per cent of doctors had to intervene to prevent harm or address consequences after a patient followed false or misleading health information found online, including AI-generated advice.

Recent research in the journal Nature highlighted these concerns. A scientist in Gothenburg, Sweden, invented a fake skin condition called bixonimania, characterized by itchy eyes and pinkish eyelids, and uploaded two fake studies about it. Within weeks, major AI systems began repeating the invented condition as if it were real, though some later expressed skepticism.

Building Trust Despite Risks

Amid these concerns, there is positive news: 85 per cent of Canadians trust doctors to help them navigate health information. Dr. Nassim builds on this trust by being "pro technology." He accepts that patients will seek AI information before appointments and may arrive armed with that knowledge, and he sees the conversations that follow as a chance to correct errors and deepen understanding.

"Like any tool that you pick up to try and fix a problem, it can cause more problems," Nassim said. "But, if used properly, it can also leverage and facilitate a person's ability to get information and have informed discussions."

Conclusion

Dr. Nassim's approach reflects a balanced view of AI in healthcare: acknowledging its potential for harm while embracing its role in enhancing patient-doctor communication. As AI continues to evolve, physicians like Nassim are finding ways to integrate this technology into their practice to improve patient outcomes.
