SA needs laws to regulate self-driving vehicles

Artificial intelligence should be embedded with decision-making values that are in line with our own


LAST WEEK, I presented a talk in Parliament about artificial intelligence (AI), ethics and the law. For starters, AI is increasingly being used to perform tasks previously done by human beings. Doctors, for example, look at the electroencephalography (EEG) signal to detect epilepsy. For a variety of medical, interpretive and social reasons, epilepsy is often misdiagnosed. The task of reading an EEG signal to detect epilepsy can now be performed by an AI doctor.
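For readers curious about what sits behind such an AI doctor, here is a minimal sketch in Python. The data is entirely synthetic and the model deliberately simple; this is not the actual system referred to above, only an illustration of the idea of learning a diagnosis from EEG-derived features.

```python
# Illustrative sketch only: a toy classifier in the spirit of an "AI doctor"
# that flags epilepsy from EEG recordings. The synthetic features below
# stand in for real EEG measurements (e.g. band power); the shift applied
# to the "epileptic" class is an assumption made purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake training data: 200 recordings, 4 summary features per recording.
# Recordings labelled 1 (epileptic) are shifted to mimic abnormal activity.
X_normal = rng.normal(0.0, 1.0, size=(100, 4))
X_epileptic = rng.normal(1.5, 1.0, size=(100, 4))
X = np.vstack([X_normal, X_epileptic])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# A new recording is screened the same way a clinician would read an EEG.
new_recording = rng.normal(1.4, 1.0, size=(1, 4))
print("epilepsy suspected" if model.predict(new_recording)[0] else "normal")
```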

The AI doctor cannot become tired or biased and has been found to be more consistent than a human doctor. Despite these impressive results, some serious ethical issues need to be tackled.

The company Uber makes more than R80 billion a year. Companies like Uber are managed services providers (MSPs) that connect customers to suppliers.

Uber runs a taxi business, but does not own any taxis. Because these MSPs are based on an IT platform and are domiciled in the cloud, they can easily avoid local regulations and control. When one uses these taxis, for instance, a customer in Joburg often pays a taxi driver through a platform based in San Francisco.

Uber has been making huge investments in self-driving cars. These cars drive on our roads and are therefore subject to our rules and regulations, such as speed limits. So who is responsible for the fine if a self-driving car runs a red light or exceeds the speed limit? Under our laws, if a driver is caught speeding, he or she – and not the owner of the car – is liable for the fine. Given that a self-driving car operates autonomously of its human owner, do we still charge the owner?

A few weeks ago, a self-driving Uber killed a pedestrian in Arizona. According to preliminary investigation reports, the car actually detected her before running her over. If the car had had a human driver, he or she would have been charged with involuntary manslaughter, but as this was a self-driving car, no one was arrested for the crime.

For Uber to have released the car on to the roads, the chief technical officer (CTO) would have had to give permission. Is Uber’s CTO liable for this alleged crime?

It is time the South African Parliament created laws to govern autonomous robots. Suppose a self-driving car is carrying four passengers. If it reaches a point where it must either hit a pedestrian or go over a cliff to avoid them, killing all of the passengers, what should the car do?

The philosopher Jeremy Bentham came up with the theory of utilitarianism. If the self-driving car applies utilitarianism, it will do whatever brings the “greatest amount of happiness to the greatest number of people”. If it saves the four passengers and kills the pedestrian, four people are happy to be alive; if it saves the pedestrian and kills the passengers, only one person is. So, to make the most people happy, it should kill the pedestrian and save the passengers.

Now, if we move away from utilitarianism to ubuntu philosophy, and the pedestrian is a 3-year-old child while the four passengers are all over 60, then the car should kill the four passengers and save the child. Our legislature should enact laws to ensure that these self-driving cars, and any intelligent machine in our factories, operate according to our values, which are based on the principles of ubuntu.
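To make the contrast concrete, here is a minimal sketch of how the same car could reach different decisions under different value systems. The numbers and weights are hypothetical assumptions for illustration only; neither utilitarianism nor ubuntu is reducible to a few lines of code.

```python
# Hypothetical sketch: the same dilemma scored under two value functions.
# The weights are assumptions for illustration, not settled philosophy.

def utilitarian_value(survivors):
    # Bentham's rule, crudely: every life counts equally, so the
    # happiest outcome is simply the one that saves the most people.
    return len(survivors)

def ubuntu_value(survivors):
    # An assumed ubuntu-inspired weighting that gives extra weight
    # to protecting a child (any survivor under 18).
    return sum(10 if age < 18 else 1 for age in survivors)

passengers = [62, 65, 70, 68]  # ages of the four passengers
pedestrian = [3]               # age of the child pedestrian

# Each possible outcome is described by the list of survivors' ages.
outcomes = {"save the passengers": passengers,
            "save the pedestrian": pedestrian}

for name, value in [("utilitarian", utilitarian_value),
                    ("ubuntu", ubuntu_value)]:
    choice = max(outcomes, key=lambda k: value(outcomes[k]))
    print(f"{name}: {choice}")
# The utilitarian rule chooses the passengers; this ubuntu weighting, the child.
```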

To do this, our legislators will need to understand the principles of AI and their implications. Our engineers will have to develop the capability to remodel these robots so that they are embedded with decision-making capabilities in line with our values.

Another example of ethics in AI is the area of social networking. The business model of Facebook, Twitter and Instagram is based on the principle that they give you an account in exchange for your data (where you go, what information you search for, and so on). This data is sold to advertisers. When Mark Zuckerberg was asked about this in the US Congress, he replied that he believed people should have the right to do what they want with their data.

The problem with his response is that many people have no idea where their data ends up when they sign up for these applications. In South Africa, there is an added security issue in that much of the data collected in the country is warehoused in California.

Parliament should consider introducing a law that guarantees a basic right to privacy which cannot be taken away by a legal contract, just as the right to life cannot be waived by contract. On May 25, 2018, the EU’s General Data Protection Regulation (GDPR) came into force, and it will have an impact on South African businesses that collaborate with European entities.

One of the major inventions in biometric security is face recognition technology, based on AI. Face recognition databases and algorithms are trained with many facial images of people and their names. The AI algorithm then learns the relationships between the faces and the names. These systems are now in our phones. It turns out that the faces used to train these AI machines were predominantly of Caucasian people, and those least represented were people of African descent.
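The effect of such skewed training data can be made visible with a simple audit. The sketch below uses invented numbers purely to illustrate the kind of per-group accuracy check a regulator might require; it is not data from any real system.

```python
# Minimal sketch (hypothetical numbers): auditing a face recognition model
# for per-group accuracy. The results below are made up to illustrate the
# kind of disparity the author describes, not real benchmark figures.

from collections import defaultdict

# Each record: (group the subject belongs to, was the match correct?)
results = [
    ("caucasian", True), ("caucasian", True), ("caucasian", True),
    ("caucasian", True), ("caucasian", False),
    ("african", True), ("african", False), ("african", False),
    ("african", False), ("african", True),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += ok

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# A large gap between groups is the discrimination the law would need to test for.
```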

The consequence is that the face recognition system then discriminates against African people. This, of course, is not ethical, and our laws must intervene to make sure that such discrimination does not persist. Our legislature should develop laws to ensure that products imported into South Africa comply with our constitution and do not discriminate.

The other example with a significant ethical dimension is the issue of clinical trials. A few years ago, we registered a patent in the US on a “robot voice”. This voice is used by a person who has had her voice box surgically removed because of cancer. A big international medical appliances company took an interest in our patent.

As we were discussing this patent, the question of where these devices would be tested arose. Our international counterpart indicated that their clinical trial laws were stricter than ours, and that the trials would therefore have to be done in South Africa, despite the fact that the device was to be sold on the international market. Studies have shown that Africa is becoming a home for clinical trials.

However, the South African regulatory framework is strong and, perhaps through the Pan African Parliament, it should help other African countries to develop a robust policy to protect human lives.

To be able to regulate technology so that we can protect human lives and human dignity, we need to understand it.


■ Marwala is the Vice-Chancellor and Principal of the University of Johannesburg and the co-author of the book Smart Computing Applications in Crowdfunding. He writes in his personal capacity.

ETHICAL MINEFIELD: If a self-driving car breaks the law or causes a person’s death, who is liable? asks the writer. Picture: AP Photo/Eric Risberg/Archive

Tshilidzi Marwala
