Jamaica Gleaner

AI creating fake legal cases, making way into real courtrooms, with disastrous results

- Michael Legg and Vicki McNamara/Contributors

WE’VE SEEN deepfake, explicit images of celebrities, created by artificial intelligence (AI). AI has also played a hand in creating music, driverless race cars and spreading misinformation, among other things.

It’s hardly surprising, then, that AI also has a strong impact on our legal systems.

It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore highly concerning that fake law, invented by AI, is being used in legal disputes.

Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

HOW DO FAKE LAWS COME ABOUT?

There is little doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by artificial intelligence.

AI models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as ‘hallucination’.

In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

But if AI hallucinates or creates inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.

IT’S HAPPENING ALREADY

The best known generative AI ‘fake case’ is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact-check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court.

Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.

WHAT’S BEING DONE ABOUT IT?

Around the world, legal regulators and courts have responded in various ways.

Several US state bars and courts have issued guidance, opinions or orders on generative AI use, ranging from responsible adoption to an outright ban.

Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the public, will have some understanding of generative AI and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.

In Australia, courts should adopt practice notes or rules that set out expectations when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.

Michael Legg is professor of law, UNSW Sydney; Vicki McNamara is senior research associate, Centre for the Future of the Legal Profession, UNSW Sydney. This article is republished from https://theconversation.com under a Creative Commons licence. Read the full article here: https://theconversation.com/ai-is-creating-fake-legal-cases-and-making-its-way-into-real-courtrooms-with-disastrous-results-225080

