Business Day

Case law lacking when robots cause harm

JOHAN STEYN ● Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net

A person is injured or killed by a self-driving vehicle. A building is damaged when an autonomous drone crashes into it. A software platform wrongly diagnoses and treats medical conditions.

A computer powered by artificial intelligence (AI) that reviews mortgage applications may be biased if it weighs demographic factors. A robotic surgery system augmented with AI could make a decision during an operation that endangers the patient.

The question of who will be held accountable when an AI platform causes harm is an essential one raised by the expanding use of such platforms across all industries, including production, manufacturing, transportation, agriculture, modelling and forecasting, education, and cybersecurity. AI is not risk-free: there will be instances in which these systems make errors.

This is a crucial conversation to have, and it raises intriguing questions. Why does an AI system sometimes behave erratically? Are the system's creators or administrators responsible for its mistakes? In the case of the drone, is the manufacturer of the aircraft liable, the operators, or those who created the algorithms? Do intelligent machines require legal representation?

I was recently training members of the legal team at a large local bank. They expressed concern: the bank is increasingly implementing AI systems, and they need to understand the reach of the law in case things go wrong. What happens when a chatbot provides inaccurate financial advice? Will biases in the data sets cause discrimination against some people applying for credit? Who is to be held liable: the bank, its employees or third-party vendors?

I think legal teams in all industries are beginning to grapple with these issues. Autonomous systems are bound to cause errors, and in some cases the damaging effects can be far-reaching. The sad truth is that there is little to go on, as the case law is sparse. In SA, as far as I can tell, such case law does not exist, and the bank's team concurred; regulation of this technology is also lacking.

It is possible that no-one will be held liable for damage caused by an AI system operating in a wholly unexpected manner. In the absence of legislation specifically dealing with AI, people whose lives have been adversely affected by its errors may have no option but to launch a negligence suit.

Under emerging global standards, the user of an AI system is less likely to be blamed than the system's developer. There may be further disagreements over the source of the AI system's knowledge, whether the programmer, the designer or the subject matter expert, as well as over the degree of damage caused.

To insulate themselves from possible legal action, organisations that sell AI software and implementation services are likely to include contract clauses that exclude liability for malfunction. Since the legality of these clauses has not been tested, the courts will have to determine what constitutes a reasonable exclusion clause. Without precedent it is difficult to predict how a court would strike this balance, which poses a risk for suppliers seeking to rely on such clauses.

Business leaders should be aware of the potential legal risks when considering AI technology. Our government should move swiftly to establish regulatory frameworks that both encourage innovation and protect citizens from harm.
