Let’s bridge the AI-human trust gap on ESG reporting
There’s a way to overcome sceptical stakeholder reactions to AI-generated ESG reports
are, respectively, an independent director on corporate boards, and a policy researcher and corporate advisor.
'For every AI action, there is awe, along with an equal and opposite reaction of scepticism.' This is how Newton's third law might be rewritten for artificial intelligence (AI). That a trust gap exists on all things AI is a reality; equally, there is blind acceptance of much that AI delivers.
Among the early adopters of AI are organizations using it for Environmental, Social and Governance (ESG) reports. As regulatory demands rise to report detailed ESG metrics with qualitative management commentary, there is a scramble to train human talent in those aspects. However, much of the ESG assessment, by way of data collation and analysis for example, can be automated. This explains the debate on whether AI-generated reports can match the quality and trustworthiness of those made by humans.
The pro-AI point: Advocates of AI-generated ESG reports argue that AI offers unparalleled efficiency, accuracy and scalability. AI algorithms can process vast amounts of data, detect patterns and identify relevant ESG metrics with minimal human intervention. Further, AI-powered analytics can uncover hidden correlations and predictive insights, enabling organizations to address emerging ESG issues proactively. Moreover, AI can reduce biases inherent in human decision-making, enhancing objectivity and consistency in report generation. Of course, AI will carry any bias that human-made learning material has pre-loaded it with. Still, AI-driven insights enable organizations to identify ESG risks and opportunities more effectively, thereby enhancing transparency and accountability.
There is also potential for standardization and harmonization across industries and regions. AI-generated reports can adhere to pre-defined criteria and benchmarks, thereby allowing comparability of ESG disclosures. By automating routine tasks, AI also frees up human resources to focus on value-added activities such as strategy development and stakeholder engagement.
Counter views: Opponents argue that AI lacks the nuanced understanding and contextual insights that humans bring to ESG reporting. Human analysts can interpret complex data points, discern subtle nuances and provide qualitative assessments that AI algorithms may overlook. Also, human involvement adds a layer of accountability and credibility, as stakeholders may trust reports prepared by individuals with domain expertise and ethical judgement.
Critics raise concerns about the ‘black box’ nature of AI tools, which may obscure decision-making processes and erode trust among stakeholders. They also caution against a one-size-fits-all approach to ESG reporting, emphasizing the importance of cultural, social and contextual factors. Additionally, the dynamic nature of ESG challenges requires high levels of adaptability and creativity, qualities that AI may struggle to emulate without human guidance and intuition.
Creation of trust through transparency and explainability: Organizations should provide clear explanations of the AI algorithms and data sources used in ESG reporting. Transparency generates trust by demystifying how decisions are made and enabling stakeholders to assess the reliability and validity of AI-generated observations and insights.
Human oversight and validation: While leveraging AI for efficiency, organizations should maintain human oversight to validate results and ensure alignment with ethical standards. Human experts can review AI-generated reports, identify anomalies and provide contextual insights that enhance the credibility and relevance of ESG disclosures.
Stakeholder engagement: Engaging stakeholders in the ESG reporting process fosters trust and accountability. Organizations can solicit feedback, address concerns and co-create meaningful ESG narratives that reflect diverse perspectives and priorities. Collaborative approaches help build consensus and legitimacy, enhancing the perceived quality and relevance of AI-generated information.
Continuous learning and improvement: Embracing such a culture enables organizations to refine AI algorithms, enhance data quality and adapt to new ESG challenges.
Trust is key: While the latest AI tools offer unprecedented capabilities that have been widely discussed, the human touch remains indispensable to interpreting complex ESG issues and engaging diverse stakeholders. Building trust in AI-generated information requires a multi-dimensional approach that emphasizes the elements mentioned earlier. Of these, transparency is paramount, requiring organizations to provide clear explanations of the AI algorithms used, including their training, validation processes and limitations. Similarly, disclosing the sources of data used in ESG measurement, along with their quality and potential biases, enables stakeholders to assess the reliability of what AI has to say on matters of importance.
Engaging stakeholders in defining ESG metrics, setting reporting standards and interpreting results is a recommended way forward for the benefits it delivers. Ethical considerations, such as data privacy and algorithmic biases, must be integrated into AI frameworks to ensure fairness and accountability. As for continuous improvement and learning, which is no less essential, it is heartening to note that leading organizations are investing in research and development to refine AI algorithms and address emerging ESG challenges.