As businesses incorporate artificial intelligence into their practices, ethical – and legal – dilemmas arise. Simpson Grierson senior associate Louise Taylor answers some of our burning AI-related questions.


Idealog: Law is an area set to be changed by AI and machine learning. Is it a good thing or a bad thing for your business?

Louise Taylor: A good thing! There are some very exciting AI and machine learning products already available, which are having a major impact on the way law firms operate and provide advice to their clients. These include document automation, contract analysis and information extraction products like RAVN and Kira. By automating time-consuming, lower-level tasks, such as searching through large volumes of documentation for certain phrases or clauses, these products improve speed, accuracy and cost-efficiency. These benefits can improve client service significantly, which is why so many law firms are either using or evaluating them.

These technologies also look set to have a positive impact on public access to legal advice. For instance, chatbots – powered by machine learning – are being used to provide free advice: LawBot provides initial guidance to victims of crime through a chatbot interface, and DoNotPay is a "robot lawyer" which – among other things – helps refugees complete immigration applications and obtain financial support from the government.

What happens if AI goes wrong? Whose fault will it be?

It depends on the product, how the technology is used, and who is responsible for providing the data from which the machine learns. For B2B applications, the allocation of liability will generally be focused on commercial risk and an assessment of the quantum of financial loss potentially suffered by the user. In these cases, liability will be negotiated, where possible, or will form a part of the customer's assessment of the business case.

However, where a failure of the product has the potential to harm individuals, the question of who should be liable is entirely different. For example, autonomous vehicles are touted as being far safer than human drivers, but there has been a lot of discussion overseas about who takes the rap if one causes an accident and someone is harmed as a result. Some car makers, but not all, have announced that they would accept all liability in any crash involving their autonomous cars.

AI is not at the stage where it is "thinking" for itself, and it is predicted that this could still be 25 years away. However, the EU is already considering potential ways to mitigate the risk of defective or rogue machines (for example, by requiring a mandatory "kill switch" for robots). Debate on these issues should be welcomed, but concrete steps to control rogue machines acting autonomously are perhaps premature.

AI is set to remove the need for humans in many other sectors. What happens when you lose your job to a machine?

There's no question that the so-called "automation bomb" could lead to massive job displacement and disruption. The question is not whether our jobs will be affected, but how.

With the predicted exponential speed of technology development, "lifelong learning" is going to be critical. Employers will need to provide continuing education opportunities for staff to mitigate the effects of job displacement. We will also need to shift our mindset to ensure that we are receptive to change and that our skills can adapt.

Taylor is a senior associate in Simpson Grierson's commercial group, specialising in all aspects of technology law. For answers to your legal questions, head to www.simpsongrier
