A lawyer admits he used ChatGPT for a brief and the program invented the precedents

An American lawyer faces possible sanctions after using the popular ChatGPT to write a brief and discovering that the artificial intelligence (AI) application had made up a whole series of supposed legal precedents.

As reported this Saturday by The New York Times, the lawyer in trouble is Steven Schwartz, counsel in a case before a New York court: a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a service cart during a flight.

Schwartz represents the plaintiff and used ChatGPT to write a brief opposing a defense request to have the case dismissed. In the ten-page document, the lawyer cited several judicial decisions to support his argument, but it was soon discovered that the well-known chatbot from OpenAI had invented them.

Discovered by the judge

“The Court is faced with an unprecedented situation. A filing submitted by the plaintiff’s attorney in opposition to a motion to dismiss (the case) is replete with citations to non-existent cases,” Judge Kevin Castel wrote this month.

This Friday, Castel issued an order setting a hearing for June 8 at which Schwartz must explain why he should not be sanctioned for having tried to rely on entirely false precedents. The order came a day after the lawyer himself submitted an affidavit in which he admitted to having used ChatGPT to prepare the brief and acknowledged that the only verification he had carried out was asking the application whether the cases it cited were real.


Schwartz defended himself by saying that he had never used a tool of this kind before and that, therefore, “he was not aware of the possibility that its content could be false.” The lawyer stressed that he had no intention of misleading the court and fully exonerated another lawyer from the firm who is also exposed to possible sanctions.

The document, seen by EFE, closes with an apology in which Schwartz deeply regrets having used artificial intelligence to support his research and promises never to do so again without fully verifying the authenticity of its output.
