A lawyer used ChatGPT in a brief and the AI invented non-existent precedents

An American lawyer faces possible sanctions after using the popular ChatGPT to draft a brief, only to discover that the Artificial Intelligence application had invented a whole series of supposed legal precedents. “The Court is facing an unprecedented situation,” said Judge Kevin Castel.

As published this Saturday by The New York Times, lawyer Steven Schwartz represents the plaintiff in a case being heard in a New York court: a lawsuit against the airline Avianca brought by a passenger who claims he was injured when he was struck by a service cart during a flight. The plaintiff’s representative used ChatGPT to prepare the brief opposing a defense motion to have the case dismissed.

In the 10-page document, the lawyer cited several judicial decisions to support his arguments, but it did not take long to discover that the well-known chatbot from the company OpenAI had made them up. “The Court is facing an unprecedented situation. A filing submitted by the plaintiff’s attorney in opposition to a motion to dismiss the case is replete with citations to non-existent cases,” the trial judge said.

The magistrate issued an order calling a hearing on June 8, at which Schwartz must explain why he should not be sanctioned after having cited entirely fictitious precedents. The American lawyer justified his actions by saying that he had never used a tool of this type before, and he promised not to use ChatGPT again without fully verifying the authenticity of the cases.

When confronted, the lawyer himself presented an affidavit in which he admitted to having used ChatGPT to prepare the brief and acknowledged that the only verification he had carried out was to ask the application if the cases he cited were real. “I was not aware of the possibility that its content could be false,” Schwartz confessed.

The lawyer also stressed that he had no intention of misleading the court and fully exonerated another lawyer from the firm who also faces possible sanctions. The document, seen by the EFE news agency, closes with an apology in which Schwartz deeply regrets having used artificial intelligence to support his research and promises never to do so again without fully checking its authenticity.
