A public servant recently expressed concerns about the federal government's push to integrate artificial intelligence into daily tasks. The hesitation is understandable and shared by many across the public service. Pressure to adopt AI is real, coming from top-level mandate letters that quickly translate into departmental plans and expectations. However, caution is not a sign of being behind; it reflects attentiveness to the stewardship of sensitive information and public trust.
The Balance Between Innovation and Risk
The tension between moving faster and being cautious is a central governance challenge. Poorly governed adoption creates risk regardless of pace, but ignoring AI leads to shadow use—employees turning to unapproved tools without safeguards. The riskiest AI use often occurs quietly in browser tabs, not within approved systems.
Ideal AI adoption should resemble controlled experimentation: testing low-risk use cases, learning where tools add value, and building governance as implementation evolves. This requires time, capacity, patience, and investment—resources that can feel scarce in an environment of doing more with less.
Concerns About Private Companies
Most AI tools are developed by a small number of large, foreign private-sector firms, raising questions about data sovereignty, vendor lock-in, procurement integrity, and long-term control. The federal government's AI strategy and Treasury Board guidance attempt to establish guardrails, but efforts remain uneven and may lag behind ambition.
It is important to distinguish between using a public chatbot for sensitive information and employing enterprise AI tools within secured government systems. The latter typically includes stricter contractual, technical, and policy safeguards designed to prevent data from being repurposed for training public-facing models.
Practical Principles for Responsible AI Use
Responsible public sector AI use follows common-sense principles: do not input sensitive information into unapproved systems, keep humans accountable for outputs, verify results carefully, be transparent about AI use, and treat AI as a support tool, not a decision-maker.
AI is less like an oracle and more like a self-assured intern: useful for drafting, summarizing, and brainstorming, but prone to confident-sounding errors and shallow reasoning. Judgment, accountability, and final decisions must remain with humans.
Advice for Public Servants
- Stick to approved tools.
- Avoid entering sensitive information into unvetted platforms.
- Start with low-risk administrative tasks.
- Treat outputs as first drafts, not final answers.
- Keep skepticism intact: not every AI use case adds value.
The public service does not need blind adopters of AI. It needs thoughtful professionals willing to engage carefully, understand limitations, and speak up when governance, privacy, or public trust may be compromised. Thoughtful skepticism may be one of the public service's greatest strengths.
Jacob Danto-Clancy is a senior policy analyst at the Institute on Governance, focusing on public sector governance and institutional performance. He has written and advised governments on AI, digital modernization, and emerging technology issues.
Public Service Confidential is an advice column for the Ottawa Citizen by guest contributors. The information is not legal advice.