A US lawyer used ChatGPT to find precedents in support of a lawsuit. The answers did not convince the judge.
A lawyer in the United States, Steven A. Schwartz, and his colleague Peter LoDuca got into an embarrassing situation after searching for legal precedents with the help of the ChatGPT artificial intelligence. The New York Times reported on the incident.
The trial concerned a lawsuit that Roberto Mata had brought against the airline Avianca. According to the complaint, he had injured his knee when a serving cart struck him.
In support of the claim, the lawyers Mata had hired presented a large number of precedents in court; Schwartz had compiled a long list of them, as many as ten pages.
The case was heard in federal court. Schwartz is not admitted to practice at the federal level, so his colleague LoDuca acted as Mata's representative in court.
Judge P. Kevin Castel asked in court whether Mata's representatives had examined the precedents more closely and verified that they were genuine. Their answer was short but telling.
– We had not.
It turned out that the precedents Schwartz found were generated by the ChatGPT AI.
Quotes OK – facts not
Schwartz thought ChatGPT was a kind of super search engine. In reality, ChatGPT generates text by combining pieces of the information it has absorbed, and its claims may have no factual basis at all.
“I didn’t know ChatGPT could falsify precedents,” Schwartz told Judge Castel.
The judge was left to consider whether Mata's lawyers would face sanctions over the case. The lawyers took full responsibility for the matter and pleaded that their client should not be punished for their blunder.
Schwartz presented the answers given by the artificial intelligence to the judge; with their quotations, they seemed quite authentic. At the end of one answer, the AI had even added an encouraging remark:
– Hopefully this will help!