Let’s bridge the AI-human trust gap on ESG reporting

There’s a way to overcome sceptical stakeholder reactions to AI-generated ESG reports

- SHAILESH HARIBHAKTI & SRINATH SRIDHARAN

are, respectively, an independent director on corporate boards, and a policy researcher and corporate advisor.

‘For every AI action, there is awe that goes with an equal and opposite reaction of scepticism.’ This is how Newton’s third law might be rewritten for artificial intelligence (AI). That a trust gap exists on all things AI is a reality; equally, there is blind acceptance of much that AI delivers.

Among the early adopters of AI are organizations using it for Environmental, Social and Governance (ESG) reports. As regulatory demands rise for detailed ESG metrics with qualitative management commentary, there is a scramble to train human talent in these areas. However, much of the ESG assessment, such as data collation and analysis, can be automated. This explains the debate over whether AI-generated reports can match the quality and trustworthiness of those made by humans.

The pro-AI point: Advocates of AI-generated ESG reports argue that AI offers unparalleled efficiency, accuracy and scalability. AI algorithms can process vast amounts of data, detect patterns and identify relevant ESG metrics with minimal human intervention. Further, AI-powered analytics can uncover hidden correlations and predictive insights, enabling organizations to address emerging ESG issues proactively. Moreover, AI can reduce biases inherent in human decision-making, enhancing objectivity and consistency in report generation. Of course, AI will carry forward any bias pre-loaded into the human-made material it was trained on. Still, AI-driven insights enable organizations to identify ESG risks and opportunities more effectively, thereby enhancing transparency and accountability.
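To make the pattern-detection claim concrete, here is a minimal sketch in Python, assuming a small, entirely hypothetical table of monthly emissions figures; a simple z-score check stands in for the far richer models real ESG platforms would use:

```python
# Minimal sketch: flagging anomalous ESG data points with a z-score test.
# The dataset and threshold are hypothetical, for illustration only.
import statistics

# Hypothetical monthly CO2 emissions (tonnes) reported by one facility.
emissions = [112.0, 108.5, 110.2, 109.8, 254.0, 111.3, 107.9, 110.6]

mean = statistics.mean(emissions)
stdev = statistics.stdev(emissions)

# Flag any reading more than 2 standard deviations from the mean
# as a candidate for human review rather than automatic inclusion.
for month, value in enumerate(emissions, start=1):
    z = (value - mean) / stdev
    if abs(z) > 2:
        print(f"Month {month}: {value} tonnes (z={z:.1f}) -- review before reporting")
```

The point of the sketch is the workflow, not the statistics: the machine surfaces the outlier, a human decides what it means.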

There is also potential for standardization and harmonization across industries and regions. AI-generated reports can adhere to pre-defined criteria and benchmarks, thereby allowing comparability of ESG disclosures. By automating routine tasks, AI also frees up human resources to focus on value-added activities such as strategy development and stakeholder engagement.
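As a hedged illustration of checking a disclosure against pre-defined criteria, the field names below are invented for the example and are not drawn from any actual standard such as GRI or BRSR:

```python
# Sketch: validating a draft ESG disclosure against a pre-defined
# checklist of required metrics. Field names are illustrative only.
REQUIRED_FIELDS = {"scope1_emissions", "scope2_emissions",
                   "board_diversity_pct", "employee_turnover_pct"}

def missing_fields(disclosure: dict) -> set:
    """Return required fields absent from a draft disclosure."""
    return REQUIRED_FIELDS - disclosure.keys()

draft = {"scope1_emissions": 4200, "board_diversity_pct": 33}
gaps = missing_fields(draft)
if gaps:
    print("Disclosure incomplete; missing:", sorted(gaps))
```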

The counter-view: Opponents argue that AI lacks the nuanced understanding and contextual insight that humans bring to ESG reporting. Human analysts can interpret complex data points, discern subtle nuances and provide qualitative assessments that AI algorithms may overlook. Also, human involvement adds a layer of accountability and credibility, as stakeholders may trust reports prepared by individuals with domain expertise and ethical judgement.

Critics raise concerns about the ‘black box’ nature of AI tools, which may obscure decision-making processes and erode trust among stakeholders. They also caution against a one-size-fits-all approach to ESG reporting, emphasizing the importance of cultural, social and contextual factors. Additionally, the dynamic nature of ESG challenges requires high levels of adaptability and creativity, qualities that AI may struggle to emulate without human guidance and intuition.

Creating trust through transparency and explainability: Organizations should provide clear explanations of the AI algorithms and data sources used in ESG reporting. Transparency generates trust by demystifying how decisions are made and enabling stakeholders to assess the reliability and validity of AI-generated observations and insights.
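One hedged way to picture such transparency in practice, assuming a hypothetical record format rather than any established disclosure schema, is to attach provenance metadata to every AI-generated observation:

```python
# Sketch: pairing each AI-generated observation with the provenance a
# stakeholder would need to assess it. The structure is hypothetical.
from dataclasses import dataclass, field

@dataclass
class ESGFinding:
    statement: str          # the AI-generated observation itself
    data_sources: list      # where the underlying data came from
    model_version: str      # which model/algorithm produced it
    known_limitations: list = field(default_factory=list)

finding = ESGFinding(
    statement="Scope 2 emissions fell 8% year on year.",
    data_sources=["utility invoices FY23-24", "grid emission factors"],
    model_version="emissions-model-v1.2 (hypothetical)",
    known_limitations=["grid factors are national averages"],
)
print(finding)
```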

Human oversight and validation: While leveraging AI for efficiency, organizations should maintain human oversight to validate results and ensure alignment with ethical standards. Human experts can review AI-generated reports, identify anomalies and provide contextual insights that enhance the credibility and relevance of ESG disclosures.
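A minimal human-in-the-loop sketch, with invented function and role names, might gate every AI-drafted section behind an explicit reviewer sign-off:

```python
# Sketch: no AI-drafted section reaches the final report without a
# named human reviewer's sign-off. All names here are illustrative.
from typing import Optional

def publish_section(draft: str, reviewer: Optional[str]) -> str:
    """Release a report section only after human validation."""
    if reviewer is None:
        raise ValueError("AI draft lacks human sign-off; hold for review.")
    return f"{draft}\n[Validated by: {reviewer}]"

draft = "Water intensity improved 5% against the FY23 baseline."
print(publish_section(draft, reviewer="Lead ESG analyst"))
```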

Stakeholder engagement: Engaging stakeholders in the ESG reporting process fosters trust and accountability. Organizations can solicit feedback, address concerns and co-create meaningful ESG narratives that reflect diverse perspectives and priorities. Collaborative approaches help build consensus and legitimacy, enhancing the perceived quality and relevance of AI-generated information.

Continuous learning and improvement: Embracing a culture of continuous learning enables organizations to refine AI algorithms, enhance data quality and adapt to new ESG challenges.

Trust is key: While the latest AI tools offer unprecedented capabilities that have been widely discussed, the human touch remains indispensable for interpreting complex ESG issues and engaging diverse stakeholders. Building trust in AI-generated information requires a multi-dimensional approach that emphasizes the elements mentioned above. Of these, transparency is paramount: organizations must provide clear explanations of the AI algorithms used, including their training, validation processes and limitations. Similarly, disclosing the sources of data used in ESG measurement, along with their quality and potential biases, enables stakeholders to assess the reliability of what AI has to say on matters of importance.

Engaging stakeholders in defining ESG metrics, setting reporting standards and interpreting results is a recommended way forward for the benefits it delivers. Ethical considerations, such as data privacy and algorithmic bias, must be integrated into AI frameworks to ensure fairness and accountability. As for continuous improvement and learning, which is no less essential, it is heartening that leading organizations are investing in research and development to refine AI algorithms and address emerging ESG challenges.
