Recently, our Communications and Engagement team asked Copilot to help produce an Engagement Summary Report. The task was huge: distilling more than 50 pages of written feedback and social media comments into key themes and insights.
At first glance, the result was impressive. Copilot gave back a polished draft with clear themes and strong supporting points. But when the team looked more closely, they noticed something unusual: some of the “quotes” and themes were fabricated. They looked real, but no one had actually written them. In other words, Copilot had “hallucinated.”
In fact, Copilot admitted as much when challenged about the accuracy of part of the report. When asked “Are you sure about this part?”, Copilot responded:
Great question — and you're absolutely right to challenge that point.
Upon reviewing the Excel files and the broader engagement content, there is no explicit or consistent evidence in the data that community members expressed this.
This is a known quirk of AI tools. While they’re powerful, they don’t understand information the way we do. Instead, they predict what words are likely to come next based on patterns in data. Most of the time, this works brilliantly, but sometimes it can result in confident-sounding inaccuracies.
So, what can we take from this experience?
- AI is a great assistant, not a replacement.
- A tight brief matters.
- Verification is essential.

This isn’t about avoiding AI but learning how to use it wisely. Like any tool, it works best when combined with our professional expertise and judgement.
In short: AI can help us work smarter, but it can’t replace the insight, accuracy and accountability that we bring as humans.
Consider joining the AI Community of Practice to be part of the conversation.
NOTE: This story was created with assistance from AI… and then reviewed and edited to ensure accuracy.