Idealog

THE RULE OF LAW

As businesses incorporate artificial intelligence into their practices, ethical – and legal – dilemmas arise. Simpson Grierson senior associate Louise Taylor answers some of our burning AI-related questions.

Idealog: Law is an area set to be changed by AI and machine learning. Is it a good thing or a bad thing for your business?

Louise Taylor: A good thing! There are some very exciting AI and machine learning products already available, which are having a major impact on the way law firms operate and provide advice to their clients. These include document automation, contract analysis and information extraction products like RAVN and Kira. By automating time-consuming and lower-level tasks, such as searching through large volumes of documentation for certain phrases or clauses, these products improve speed, accuracy and cost-efficiency. These benefits can help improve client service significantly, which is why so many law firms are either using or evaluating them.

These technologies also look set to have a positive impact on public access to legal advice. For instance, chatbots – powered by machine learning – are being used to provide free advice: LawBot provides initial guidance to victims of crime using a chatbot interface, and DoNotPay is a "robot lawyer" which, among other things, helps refugees complete immigration applications and obtain financial support from the government.

What happens if AI goes wrong? Whose fault will it be?

It depends on the product, how the technology is used, and who is responsible for providing the data from which the machine learns. For B2B applications, the allocation of liability will generally be focused on commercial risk and an assessment of the quantum of financial loss potentially suffered by the user. In these cases, liability will be negotiated where possible, or will form part of the customer's assessment of the business case.

However, where a failure of the product has the potential to harm individuals, the question of who should be liable is entirely different. For example, autonomous vehicles are touted as being far safer than human drivers, but there has been a lot of discussion overseas about who takes the rap if one causes an accident and someone is harmed as a result. Some car makers, but not all, have announced that they would accept all liability in any crash involving their autonomous cars.

AI is not at the stage where it is "thinking" for itself, and it is predicted that this could still be 25 years away. However, the EU is already considering potential ways to mitigate the risk of defective or rogue machines (for example, by requiring a mandatory "kill switch" for robots). Debate on these issues should be welcomed, but concrete steps to control rogue machines acting autonomously are perhaps premature.

AI is set to remove the need for humans in many other sectors. What happens when you lose your job to a machine?

There's no question that the so-called "automation bomb" could lead to massive job displacement and disruption. The question is not whether our jobs will be affected, but how.

With the predicted exponential speed of technology development, "lifelong learning" is going to be critical. Employers will need to provide continuing education opportunities for staff to mitigate the effects of job displacement. We will also need to shift our mindset to ensure that we are receptive to change, and that our skills can be adapted.

Taylor is a senior associate in Simpson Grierson's commercial group, specialising in all aspects of technology law. For answers to your legal questions, head to www.simpsongrierson.com.
