A joint investigation by Canadian federal and provincial privacy watchdogs has concluded that OpenAI violated privacy laws in the development of its ChatGPT artificial intelligence chatbot. The probe, which examined how OpenAI collected and used personal data to train its AI models, found the company failed to obtain proper consent and did not comply with transparency requirements under Canadian privacy legislation.
Key Findings of the Investigation
The report, released on May 6, 2026, highlights several breaches, including the collection of personal information without adequate disclosure and the lack of meaningful consent mechanisms. The privacy commissioners determined that OpenAI's practices contravened the Personal Information Protection and Electronic Documents Act (PIPEDA) and similar provincial laws.
Impact on Users
The investigation found that ChatGPT stored and used personal data from users, including the contents of conversations, without clear policies on data retention or deletion. This raised concerns about the security of sensitive information shared with the AI system.
Recommendations and Next Steps
The privacy watchdogs have recommended that OpenAI implement comprehensive privacy measures, including transparent data collection policies, user consent options, and robust data deletion protocols. The company has been given a deadline to comply with the recommendations or face potential fines and legal action.
OpenAI has acknowledged the findings and stated its commitment to addressing the concerns. In a statement, the company said it is reviewing its data practices and working to enhance privacy protections for Canadian users.
The case underscores growing global scrutiny of AI companies and their handling of personal data, as regulators worldwide push for stricter enforcement of privacy laws in the digital age.