AI in Sustainability Reporting: Use Cases Auditors Will Challenge
Introduction
In the evolving landscape of sustainability reporting, where expectations are as high as the stakes, companies must navigate the dual reality of AI. Its power to enhance sustainability reporting, to collect and reconcile data, and to drive efficiencies comes with risks that must be managed. And so the question is no longer "How can we use AI?" It is now, "Where does AI increase audit, legal, or reputational risk, and how do we manage those risks?"
Herein lies the critical role of transparency, a persistent challenge for AI. As corporate leaders embed AI into their operations, they must simultaneously lead with accountability and articulate their strategies with clarity.
And sustainability leaders are right to be cautious. As sustainability disclosures become regulated, assured, and scrutinised, AI shifts from being merely a productivity tool to being a governance issue.
At a time when both investors and regulators expect relevant and accurate data on par with financial disclosure, understanding which AI use cases fail under audit is now part of your responsibility.
Why AI Is Now an Audit Topic in Sustainability Reporting and Disclosure
Across regulated reporting regimes, disclosure is increasingly assessed as decision-relevant, auditable information. As a result, disclosures are now expected to be:
Explainable
Traceable
Governed
Defensible
AI directly challenges each of these expectations. For organizations that are not already disciplined in explaining how decisions are made, the use of AI can reduce transparency rather than enhance it. This makes it harder for auditors, regulators, and other stakeholders to understand where accountability sits or how unexpected outcomes were reached. Conversely, when leaders can clearly articulate how AI is used to support oversight, rather than replace judgment, it can reinforce credibility and build trust.
Auditors themselves are not opposed to AI. However, they are becoming increasingly sceptical of uncontrolled or opaque AI use, particularly where outputs influence regulated disclosures. In these situations, auditors expect decision criteria to be explicit and documented, especially when AI is shaping or informing human judgment.
That said, here is the catch-22. As of summer 2025, most Big Four audit firms were found not to be systematically monitoring how automated tools and AI were affecting audit quality in their own work, despite using AI extensively across audit activities. I put this down to the pressure to rapidly adopt AI.
At the same time, regulators such as the UK’s Financial Reporting Council have begun urging audit firms to evaluate the impact of their own AI tools, signalling growing regulatory pressure for explainable, well-governed AI. And I expect that this will quickly extend to the organizations being audited.
Why AI Introduces Audit Risk in Sustainability Reporting
AI introduces audit risk because it can obscure how judgments are made. In regulated reporting, auditors are not primarily testing outputs. They are testing process integrity: how data was generated, who reviewed it, what assumptions were applied, and where accountability sits. AI becomes a risk factor when it interrupts that chain of evidence.
Common audit concerns include:
Inability to trace AI outputs back to accountable owners
Lack of documentation showing how AI-informed decisions were reviewed and approved
Opaque models that cannot explain how conclusions were reached
Inconsistent use of AI for reporting across business units, without central governance
From an audit readiness perspective, the question is whether AI use can be clearly explained, consistently applied, and credibly governed. Where organizations can demonstrate this, AI strengthens audit readiness. Where they cannot, it becomes a source of exposure.
Failed AI Use Cases for ESG Reporting
AI use cases that fail under audit do so because they substitute automation for judgment, speed for governance, or output for accountability. What follows are four common AI use cases that create risk when used in regulated reporting and disclosure.
1. AI-Generated Forward-Looking Disclosures
This is the highest-risk failure point.
Examples include:
AI-drafted transition plans
AI-generated targets or commitments
AI-written scenario narratives
Why This Fails Under Audit
Forward-looking statements require management judgment
AI cannot evidence intent, approval, or feasibility
Outputs cannot be tied to board decisions
Where forward-looking disclosures are management representations, AI cannot own those representations.
2. AI-Determined Materiality Assessments
Some organizations use AI to rank or prioritise ESG topics automatically.
Why This Fails Under Audit
Double materiality is a judgment process, not an algorithmic one
Auditors expect to see:
How thresholds were chosen
How trade-offs were resolved
Who made final decisions
AI may support analysis, but cannot determine materiality.
3. AI-Written Narrative Disclosures
This includes:
Automated policy descriptions
AI-written governance explanations
AI-drafted impact narratives
Why This Fails Under Audit
Language cannot be traced to internal owners
Outputs may not reflect actual controls or processes
Impact errors are difficult to detect and explain
Auditors are testing process reality, not writing quality.
4. Opaque AI Tools Without Traceability
This applies to AI tools that cannot explain their data sources, logic, version control, or human review steps.
Why This Fails Under Audit
Regulations require transparency of assumptions
Assurance depends on traceability
“The system generated it” is not evidence
Explainability is no longer optional. Traceability must be demonstrable.
As the cases above show, AI fails under audit when it replaces judgment, obscures decision-making, or lacks traceability. In regulated reporting, governance, not model sophistication, determines whether AI strengthens credibility or creates exposure.
Where AI Does Survive Audit Scrutiny
Used correctly, AI can strengthen reporting and disclosure. Audit-tolerant AI use cases share a common feature: they support human judgment without replacing it.
Low-risk, high-value applications typically include:
Mapping existing documentation and policies to reporting or disclosure requirements
Identifying data gaps and inconsistencies across business units, geographies, reports, and marketing assets
Supporting internal reviews, cross-checks, and consistency analysis
Monitoring regulatory updates and changes in reporting expectations
These use cases are generally accepted because they do not introduce new judgments, claims, or interpretations. Instead, AI operates as an analytical and organisational layer, improving visibility and efficiency while leaving accountability firmly with management.
Assurance-led guidance from audit firms and regulators consistently emphasises this distinction: AI is acceptable where it enhances oversight, traceability, and control, but problematic where it substitutes for decision-making or obscures responsibility.
AI can also be used, with care, to support narrative coherence in annual or integrated reporting. When applied downstream of validated data, AI can help connect quantitative performance with qualitative context, improving clarity and readability. However, this use must remain tightly governed. In my view, AI-assisted storytelling should not introduce new claims, forward-looking commitments, or interpretations that management has not explicitly reviewed and approved.
To reiterate: AI survives audit scrutiny when it clarifies how decisions are made, rather than becoming part of the decision itself.
What Sustainability Leaders Should Do Now
To use AI responsibly for ESG Reporting, leaders should:
Define where AI is explicitly prohibited
Craft a positioning statement, and document where AI is decision-support only
Ensure human review is mandatory and recorded
Align AI use with existing governance and controls
Be able to explain AI use to an auditor in plain language
Effective governance demands a structured understanding of AI-related risks, clear accountability across functions, and alignment with strategic priorities, including sustainability objectives. Responsible leaders map how AI governance for sustainability reporting links back to company AI policy, and they disclose dedicated governance structures, oversight mechanisms, and AI-specific risk processes. If you're looking for more information on what makes AI responsible, check out my article here.
Final Thought
The takeaway is simple: poorly governed AI will undermine sustainability reporting and introduce risk for your company. If you cannot explain it, you cannot defend it.
As reporting and disclosure become more regulated, assured, and decision-relevant, AI cannot be treated as a neutral productivity tool, because it is part of your governance system. Leaders who succeed will be those who design its use deliberately, with clear boundaries, documented judgment, human oversight, and explicit accountability. That is the standard AI will increasingly be held to, by auditors, regulators, investors and broader stakeholders.