Effective oversight of AI essential for public acceptance and trust
With regulation reform imminent, it’s a chance to future-proof the effects of AI, says Vicky Crichton
‘Computer systems able to perform tasks normally requiring human intelligence’. That’s the commonly used definition of artificial intelligence (AI), and it’s both a familiar part of our world today and a leap into an unknown future.
How confident would any of us feel in identifying when decisions might have been made about us using AI? How comfortable are we with that?
For the legal sector, AI offers clear potential benefits to both clients and lawyers. It could open up new legal advice and support services for consumers, leading to improved access, choice and cost. Inequalities could be reduced through smarter translation support. For lawyers and firms, bringing AI to tasks like document management could reduce costs and improve due diligence. This could
give lawyers more time to focus on client contact, legal reasoning, negotiation, and all the parts of the role where human intelligence and human contact are vital.
In its consultation on an AI strategy for Scotland, the Scottish Government asks how AI could benefit Scotland’s people. We think the legal sector has a good and ever-developing story to tell here.
Public benefit isn’t the only part of the puzzle. The consultation also asks how public confidence can be built in AI as a trusted, responsible and ethical tool. That’s a tougher challenge.
The public will expect there to be effective oversight of AI and of its application in the services and products they use. Confidence in that oversight – or regulation – will be a key part of believing AI can be trusted, responsible and ethical.