AI in Police Reporting Raises Public Safety Concerns Amid Accuracy Issues
The Royal Canadian Mounted Police recently concluded a six-month pilot project that used artificial intelligence to draft police reports from audio captured by officers' body-worn cameras. The initiative, which ran from approximately August 2024 to January 2025 across eight British Columbia detachments, generated nearly 800 reports, each drafted in a matter of seconds. While presented as a way to reduce administrative burdens, the technology introduces significant risks to public safety and judicial integrity that demand immediate scrutiny.
Questionable Justifications for AI Implementation
Police administrators argue that AI-generated reports free officers from paperwork, allowing them to focus on patrol work and other frontline duties. This rationale looks tenuous when measured against the realities of policing: much of an officer's shift already consists of downtime and routine tasks such as traffic duty and patrols. Furthermore, evidence indicates that calls for service to RCMP detachments actually declined in some jurisdictions during 2025, suggesting officers already have adequate capacity for their current responsibilities.
The importance of report writing in modern law enforcement cannot be overstated. These documents serve as crucial records when officers testify in court months or years after incidents occur. Officers rely on their reports to recall specific details accurately during testimony, and failures in this process can expose officers to allegations of perjury or obstruction of justice and, ultimately, produce unjust legal outcomes. Research consistently demonstrates that police reports play a central role in criminal cases, with errors potentially derailing justice entirely.
Documented Accuracy Problems with AI Systems
Across multiple industries, artificial intelligence systems have demonstrated concerning error rates, and policing applications are no exception. In a particularly illustrative case from Utah earlier this year, an AI-generated police report matter-of-factly stated that an officer had transformed into an amphibian. The bizarre error occurred because the system misinterpreted background audio from Disney's The Princess and the Frog, which was playing during the incident.
Beyond such obvious absurdities, subtler inaccuracies in AI-generated reports could have devastating consequences for legal proceedings. What of the unremarkable detail that an officer skims past when reviewing an AI draft but that proves crucial in court? Evidence indicates that AI-generated police reports have already been used in plea bargaining processes in the United States, and since plea deals resolve most criminal cases in Canada, the stakes for accuracy could not be higher.
Proprietary Systems and Accountability Gaps
The RCMP has contracted with Axon Enterprise, a U.S.-based company, to supply both body cameras and the Draft One AI report-generating software used in its pilot project. This proprietary system raises further concerns about transparency and accountability: when AI systems generate potentially flawed reports that influence legal outcomes, it becomes difficult to determine who is responsible. The Canadian Department of Justice emphasizes that for courts to accept guilty pleas, "the facts alleged by the prosecutor must be accepted by the accused as being substantially accurate," a standard that AI-generated reports may struggle to meet consistently.
Given these substantial concerns about accuracy, accountability, and the integrity of the justice system, the expanding use of artificial intelligence in police reporting requires immediate reevaluation. While technological innovation offers potential benefits, public safety must remain the paramount consideration in law enforcement practices. The RCMP's pilot project suggests that current AI applications in policing create more problems than they solve, potentially undermining the very justice system they are meant to serve.