Deloitte Accused of Citing AI-Generated Research in Million-Dollar Report for Canadian Government
A $1.6 million report prepared by Deloitte for a Canadian province allegedly cited non-existent, AI-generated studies. Can generative AI be trusted in high-stakes government work?
A recent report by Deloitte, prepared for the provincial government of Newfoundland and Labrador (NL), has come under fire. Investigators found that the report includes several citations to academic papers that appear not to exist. The alarming discovery suggests parts of the analysis may have relied on AI-generated research, a blow to credibility in a domain that depends on rigorous evidence.
The report cost the government roughly CAD $1.6 million. Now critics are asking a hard question: can consulting firms safely mix generative AI tools with high-stakes government work without compromising trust?
What went wrong: The false citations in the NL health-workforce report
The 526-page “Health Human Resources Plan” by Deloitte was commissioned to help NL address post-pandemic staffing challenges in the healthcare sector. It recommended strategies on virtual care, recruitment incentives, retention, and resource allocation.
But a review by media outlet The Independent found at least four major citations in the report referencing academic papers that do not appear in any journal archives. Some referenced researchers told investigators they had never authored the cited works. For example, a claimed paper titled “The cost-effectiveness of local recruitment and retention strategies for health workers in Canada” was nowhere to be found.
Another citation pointed to a paper in the Canadian Journal of Respiratory Therapy, yet the journal’s database showed no such publication.
Such errors raise a red flag and an uncomfortable question: were these clerical oversights, or AI-generated hallucinations?
Deloitte’s response and AI’s blurred role in consulting
Deloitte Canada responded that the firm “firmly stands behind” the report’s overall findings, even as it reviews and corrects the erroneous citations. The company clarified it used AI “selectively” to support a small number of research citations, but insisted AI did not write the report itself.
This is not the first time Deloitte has faced such scrutiny. In 2025, its Australian arm produced a government report containing fabricated academic papers and even a non-existent court quote. That episode led to a partial refund and a revised version of the report acknowledging the use of a generative AI tool.
When firms combine complex public-policy analysis with AI-assisted research, the line between efficiency and risk becomes thin. The NL incident underscores how weak oversight of AI-generated content can undermine entire reports, no matter how polished the presentation.
Broader implications: Why the scandal matters beyond Canada
Eroding Trust in Consulting and Public Policy Advice
Government agencies and taxpayers expect due diligence when commissioning million-dollar reports. When firms skip rigorous verification in favor of speed, they jeopardize trust. Other governments may now demand stricter vetting or avoid third-party consulting altogether.
AI Governance and the Need for Oversight
Generative AI tools, while powerful, are known to hallucinate plausible-looking but false content. Without human verification and governance standards, mistakes can easily slip through. In sectors like healthcare or public policy, such risks are amplified.
Reputational Risk for Firms and Institutions
Recurring missteps with AI-generated errors could damage the brand value of even top consultancy firms. For clients, the fear of hidden flaws may outweigh the appeal of faster, cheaper reports.
What should clients, governments and consulting firms do next?
- Demand Full Transparency: Contracts should mandate clear disclosure whenever AI is used, and detailed logs of sources and citations must be provided.
- Institute Rigorous Human Review: AI-generated content must be treated like a first draft — every citation, quote and fact must be manually verified before being included in final deliverables.
- Establish Industry-wide AI Standards: Consulting associations and governments should define minimum standards for AI use in research, including audit trails, version control and peer-review practices.
- Invest in AI Literacy for Clients: Clients commissioning AI-assisted work must understand both the benefits and risks, and insist on accountability.
If consulting firms follow these steps, AI can indeed speed up research without compromising integrity.
Conclusion
The allegations against Deloitte in Newfoundland and Labrador reveal a serious weakness in how generative AI is being employed for high-stakes government consulting. Even if AI was used selectively, the presence of fabricated citations undermines the credibility of the entire document.
As AI becomes more deeply embedded in consulting and policy work, firms and clients alike must demand greater transparency, stronger oversight, and human-in-the-loop verification. Otherwise, the promise of AI-driven efficiency could be overshadowed by damaging errors and loss of public trust.
Fast Facts: Deloitte AI-Citation Scandal Explained
What happened to the Deloitte report?
The Deloitte report commissioned by Newfoundland and Labrador contained at least four citations referencing academic papers that appear not to exist, raising questions over its legitimacy.
Did Deloitte admit AI was used?
Deloitte Canada said AI was used selectively to support some citations, but insisted it did not write the report itself. The firm is now revising the faulty references.
Why does this matter broadly?
The scandal shows how unchecked use of generative AI in official reports can erode trust, weaken factual integrity, and raise ethical and governance concerns in consulting and public policy.