The Guardian (USA)

Canada lawyer under fire for submitting fake cases created by AI chatbot

- Leyland Cecco in Toronto

A lawyer in Canada is under fire after the artificial intelligence chatbot she used for legal research created “fictitious” cases, in the latest episode to expose the perils of using untested technologies in the courtroom.

The Vancouver lawyer Chong Ke, who now faces an investigation into her conduct, allegedly used ChatGPT to develop legal submissions during a child custody case at the British Columbia supreme court.

According to court documents, Ke was representing a father who wanted to take his children overseas on a trip but was locked in a separation dispute with the children’s mother. Ke is alleged to have asked ChatGPT for instances of previous case law that might apply to her client’s circumstances. The chatbot, developed by OpenAI, produced three results, two of which she submitted to the court.

The lawyers for the children’s mother, however, could not find any record of the cases, despite multiple requests.

When confronted with the discrepancies, Ke backtracked.

“I had no idea that these two cases could be erroneous. After my colleague pointed out the fact that these could not be located, I did research of my own and could not detect the issues either,” Ke wrote in an email to the court. “I had no intention to mislead the opposing counsel or the court and sincerely apologize for the mistake that I made.”

Despite the popularity of chatbots, which are trained on extensive troves of data, the programs are also prone to errors, known as “hallucinations”.

Lawyers representing the mother called Ke’s conduct “reprehensible and deserving of rebuke” because it led to “considerable time and expense” to determine if the cases she cited were real.

They asked for special costs to be awarded, but the judge overseeing the case rejected the request, saying such an “extraordinary step” would require “a finding of reprehensible conduct or an abuse of process” by the lawyer.

“Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court,” wrote Justice David Masuhara. “Unchecked, it can lead to a miscarriage of justice.”

He found that opposing counsel was “well-resourced” and had already produced “volumes” of materials in the case. “There was no chance here that the two fake cases would have slipped through.”

Masuhara said Ke’s actions produced “significant negative publicity” and she was “naive about the risks of using ChatGPT”, but he found she took steps to correct her errors.

“I do not find that she had the intention to deceive or misdirect. I accept the sincerity of Ms Ke’s apology to counsel and the court. Her regret was clearly evident during her appearance and oral submissions in court.”

Despite Masuhara’s refusal to award special costs, the Law Society of British Columbia is now investigating Ke’s conduct.

“While recognizing the potential benefits of using AI in the delivery of legal services, the Law Society has also issued guidance to lawyers on the appropriate use of AI, and expects lawyers to comply with the standards of conduct expected of a competent lawyer if they do rely on AI in serving their clients,” a spokesperson, Christine Tam, said in a statement.

The judge found Ke took steps to correct her errors. Photograph: Michael Dwyer/AP
