Business Day

Boards need to make their brains work on artificial intelligence

- Parmi Natesan & Prieur du Plessis ● Natesan and Du Plessis are respectively CEO and facilitator of the Institute of Directors SA.

Artificial intelligence (AI) has emerged as a crucial new oversight area, but just understanding what the issues are is a challenge.

The launch of ChatGPT late in 2022 initiated a particularly spectacular version of Gartner’s hype cycle. The hullabaloo, rich with exaggerated claims about AI in general, nevertheless raises significant issues for boards and organisations in general.

Those issues derive directly from principles 11 and 12 of the King IV Report on Corporate Governance, which require the governance of risk, and of technology and information, in line with the setting and achievement of strategic objectives.

As a first step, boards must step back from the overblown narrative about AI, and undertake a thorough and clear-headed investigation into what it actually is, and what it can do.

AI is a product of human programmers, and as such it will contain a set of biases, inconsistencies and downright faults. Its conclusions are also the product of the quality of the data it uses: “garbage in, garbage out” remains true. AI is also constantly evolving, so the board needs to keep up to date with developments.

These caveats aside, AI is a genuinely exciting technology that is already generating useful insights for companies. The benefits and potential benefits are legion, and include enhanced efficiency and productivity via better identification of bottlenecks in business processes.

Customers benefit from greater efficiencies and the company’s ability to recognise what they want more rapidly; employees benefit because AI can take on a lot of the drudge work and make their jobs more fulfilling. AI can also pinpoint potential innovations.

As an aside, many companies argue that AI-driven chatbots enhance the customer experience. In reality, though, anybody who has used them knows that the day an interaction with one of these infuriating tech tools is useful or pleasant is a long way off — the benefit is all the company’s!

As the AI bandwagon continues to roll and adoption rates grow, how should boards approach their oversight role? Here are some suggestions:

● Focus on upskilling. King IV makes it clear that governing technology and data is a board responsibility, but boards remain relatively uninformed in both areas. AI is a particularly complex and constantly developing technological area, and the board should ensure it includes individuals with deep understanding, but all board members need to be helped to become more proficient, not only in the technology itself but also in data management;

● Be proactive and put AI oversight on the board agenda. An important part of the discussion is how AI is being used in the organisation, and how it might be used. This needs a thorough discussion with management, and integration into the organisation’s strategy and operational planning. The board needs to distinguish between short-term and long-term AI benefits and strategies;

● Understand the risks. It is important to understand that AI is not a single thing, but a complex and shifting ecosystem of programmers, third-party technology vendors and employees. The resulting risk profile is equally complex and is changing all the time. An alarm bell: McKinsey’s 2019 global AI survey indicates that while AI adoption is increasing apace, under half (41%) of respondents said their organisations “comprehensively identify and prioritise” the risks of AI. The 2022 survey indicates that the dial has hardly moved on the mitigation of AI risk. We imagine the picture is probably even worse in SA.


The specific risks of AI identified in the McKinsey survey are cybersecurity, regulatory compliance, personal privacy, equity and fairness, “explainability” (the ability to explain how AI came to its decisions), organisational reputation, physical safety, workforce displacement, national security and political stability.

It is particularly important to emphasise that AI is something of a “black box”; its inner workings are extremely difficult to fathom and yet may ultimately expose anybody using it to unexpected risks, many of them deriving from bias in relation to gender, race and other “protected characteristics”.

Another risk is that AI systems rely on many third parties, which threatens business continuity. Cybersecurity is also a major concern — where business goes, cybercriminals will follow.

AI is here to stay, but boards need to be wary of the hype and understand precisely its changing role in the organisation, and thus the risks. Eternal vigilance, it turns out, is the price of more than just liberty.
