THE RULE OF LAW
As businesses incorporate artificial intelligence into their practices, ethical – and legal – dilemmas arise. Simpson Grierson senior associate Louise Taylor answers some of our burning AI-related questions.
Idealog: Law is an area set to be changed by AI and machine learning. Is it a good thing or a bad thing for your business?
Louise Taylor: A good thing! There are some very exciting AI and machine learning products already available, which are having a major impact on the way law firms operate and provide advice to their clients. These include document automation, contract analysis and information extraction products like RAVN and Kira. By automating time-consuming and lower-level tasks, such as searching through large volumes of documentation for certain phrases or clauses, these products improve speed, accuracy and cost-efficiency. These benefits can help improve client service significantly, which is why so many law firms are either using or evaluating them.

These technologies also look set to have a positive impact on public access to legal advice. For instance, chatbots – powered by machine learning – are being used to provide free advice: LawBot provides initial guidance to victims of crime using a chatbot interface, and DoNotPay is a "robot lawyer" which – among other things – helps refugees complete immigration applications and obtain financial support from the government.
What happens if AI goes wrong? Whose fault will it be?
It depends on the product, how the technology is used, and who is responsible for providing the data from which the machine learns. For B2B applications, the allocation of liability will generally be focused on commercial risk and an assessment of the quantum of financial loss potentially suffered by the user. In these cases, liability will be negotiated, where possible, or will form part of the customer's assessment of the business case.
However, where a failure of the product has the potential to harm individuals, the question of who should be liable is entirely different. For example, autonomous vehicles are touted as being far safer than human drivers, but there has been a lot of discussion overseas about who takes the rap if one causes an accident and someone is harmed as a result. Some car makers, but not all, have announced that they would accept all liability in any crash involving their autonomous cars.
AI is not at the stage where it is "thinking" for itself, and it is predicted that this could still be 25 years away. However, the EU is already considering potential ways to mitigate the risk of defective or rogue machines (for example, by requiring a mandatory “kill switch” for robots). Debate on these issues should be welcomed, but concrete steps to control rogue machines acting autonomously are perhaps premature.
AI is set to remove the need for humans in many other sectors. What happens when you lose your job to a machine?
There's no question that the so-called “automation bomb” could lead to massive job displacement and disruption. The question is not whether our jobs will be affected, but how.
With the predicted exponential speed of technology development, "lifelong learning" is going to be critical. Employers will need to provide continuing education opportunities for staff to mitigate the effects of job displacement. We will also need to shift our mindset to ensure that we are receptive to change, and that our skills can be adapted.

Taylor is a senior associate in Simpson Grierson's commercial group, specialising in all aspects of technology law. For answers to your legal questions, head to www.simpsongrierson.com.