Donald Trump’s former lawyer revealed what he did on his computer – and was laughed at

The former lawyer fell completely for the fabrications of artificial intelligence. Since not even his own lawyer questioned them, the AI-generated nonsense was ultimately presented to the judge as well.

Michael Cohen admitted to confusing an artificial intelligence bot with a search engine. MICHAEL REYNOLDS

According to the news agency Reuters, among others, President Donald Trump’s former lawyer Michael Cohen has been caught in an embarrassing blunder related to the use of artificial intelligence.

Cohen was sentenced to prison in 2018 after admitting to tax and bank fraud and campaign finance violations, but has since been released. He remains under court supervision, however.

Cohen, who lost his law license, recently asked a judge to end the supervision early, but made a mistake that may become increasingly common in the future.

Misused AI

Cohen tried to support his motion with three precedents he had found online. The problem was that these precedents did not actually exist: they had been invented by artificial intelligence.

When the judge demanded an explanation for the invented precedents, Cohen said he had enlisted the help of Google’s Bard AI bot while doing background research for his motion. Cohen said he had thought Bard was an “enhanced search engine” and admitted he was wrong.

According to Cohen, he has not kept up with technological developments and the risks they pose, since he no longer practices law.

– I didn’t know that Google Bard is a generative text service that can produce content that looks authentic, but isn’t, Cohen said.

In addition, Cohen defended himself by saying he had not believed that his own lawyer would submit AI-generated content to the court without first checking the facts.

Trump’s ex-lawyer said he had used Bard successfully in the past and assured the court that he had not intended to mislead it.

Similar mistakes before

We reported last summer how lawyers Stephen A. Schwartz and Peter LoDuca got into an embarrassing situation because of ChatGPT. They had thought they were researching precedents with the help of artificial intelligence, not realizing that the chatbot was feeding them a line.

The lawyers presented the judge with six “precedents,” all of which had been invented by artificial intelligence.

The lawyers admitted that they had not studied the precedents in more detail or verified their correctness.

– I didn’t know that ChatGPT could fabricate precedents, Schwartz said at the time.
