JAKARTA - A lawyer in New York, United States of America (USA), used ChatGPT for legal research, and is now facing problems in court due to false information provided by the chatbot.

The case began when a man sued an airline over an alleged personal injury. The plaintiff was represented by attorney Peter LoDuca, whose colleague at the law firm Levidow, Levidow & Oberman, Steven A Schwartz, used ChatGPT for the research.

The two then submitted a brief citing several previous court cases, found using ChatGPT, to establish legal precedent supporting their claims.

However, the airline's lawyers found that some of the cases cited did not exist. Schwartz has been an attorney for more than 30 years.

He expressed regret at using the OpenAI tool, saying he had "never used AI for legal research before and was not aware that the content might be false."

ChatGPT does generate original text on request, but it comes with a warning that it can produce inaccurate information.

The judge presiding over the case, Kevin Castel, expressed his shock at the situation, describing it as "unprecedented." In his order, he demanded an explanation from the plaintiff's legal team.

"Six of the cases filed appear to be bogus court decisions with fake citations and fake internal citations," Judge Castel said.

Screenshots attached to the filing show the conversation between Schwartz and ChatGPT. In one prompt, Schwartz asks whether certain cases, such as Varghese v. China Southern Airlines Co Ltd, are authentic.

ChatGPT confirmed their authenticity, stating that the case could be found in legal reference databases such as LexisNexis and Westlaw.

However, subsequent investigation found that the case did not exist, casting further doubt on the other cases provided by ChatGPT.

As a result of the incident, both LoDuca and Schwartz have been summoned to a disciplinary hearing on June 8 to explain their actions.

"(We) will never use AI to complete legal research in the future without absolute verification of its authenticity," LoDuca said.

According to IndiaToday, Monday, May 29, millions of people have used ChatGPT since its launch in November 2022. Even so, this is not the first time ChatGPT has caused problems by fabricating events that never happened.

Exactly a month ago, the chatbot falsely accused a professor of sexually harassing a student and cited a news article about it. In fact, that news article was never written.
