A lawyer in Canada is facing criticism for presenting fictional cases generated by an AI chatbot.

A Canadian lawyer is facing backlash after using an AI chatbot for legal research that produced fabricated cases, the latest incident to highlight the risks of deploying untested technologies in the courtroom.

The conduct of Vancouver lawyer Chong Ke is under investigation after she reportedly used ChatGPT to prepare legal filings for a child custody hearing at the British Columbia supreme court.

Court records state that Ke was representing a father who wanted to take his children on a trip abroad but was locked in a dispute with the children’s mother. Ke is alleged to have asked ChatGPT for examples of prior case law that could apply to her client’s situation. The chatbot, developed by OpenAI, suggested three cases, two of which Ke submitted to the court.

Despite numerous attempts, the lawyers representing the children’s mother were unable to locate any record of the cases.

When confronted with the discrepancies, Ke retracted her position.

“I was unaware of the potential errors in these two cases. Upon my colleague’s observation that they could not be found, I conducted my own investigation and was unable to identify any issues,” Ke explained in an email to the court. “My intention was not to deceive the other party’s attorney or the court, and I deeply apologize for my error.”

Chatbots, which are trained on vast amounts of data, have surged in popularity, but they are also prone to errors known as “hallucinations”.

The mother’s lawyers described Ke’s conduct as “disgraceful and deserving of reprimand”, saying it caused considerable time and expense to be spent establishing whether the cases she cited were real.

The judge declined to award special costs, ruling that they would be justified only in cases of serious wrongdoing or misconduct by a lawyer.

Justice David Masuhara said that citing fake cases in legal submissions and other materials put before the court is an act of misconduct tantamount to making a false statement to the court. Left unchecked, he said, it can lead to a miscarriage of justice.

He found that the opposing counsel was well resourced and had already filed extensive materials in the case, concluding there was no real chance the two fake cases would have slipped through unnoticed.

Masuhara noted that Ke’s conduct had attracted significant negative publicity and that she had been naive about the risks of using ChatGPT, but he acknowledged the steps she took to correct her errors.

“Based on Ms Ke’s apology to counsel and the court, I do not believe she intended to deceive or mislead. Her remorse was evident in her oral submissions and appearance in court,” he said.

Although Masuhara declined to award special costs, the Law Society of British Columbia is investigating Ke’s conduct.

“In recognizing the potential benefits of incorporating AI into the provision of legal services, the Law Society has also issued guidance for lawyers on the appropriate use of AI, and expects lawyers to meet the standards of conduct required of a competent lawyer when relying on AI to serve their clients,” a spokesperson, Christine Tam, said in a statement.

Source: theguardian.com