Judge lets ChatGPT write verdict

Why make things hard for yourself when – thanks to artificial intelligence – they can be easy? That, at least, seems to have been the thinking of a judge in Colombia who had the text generator ChatGPT write a verdict for him. And he intends to keep doing so in the future. This is legally permissible, but there are also arguments against it. Read all about it at TECHBOOK.

The case, heard in the port city of Cartagena, involved an autistic child and the question of whether his insurance company should cover the costs of his medical treatment. The judge put the question to ChatGPT, OpenAI’s chatbot, and received a well-formed response based on the relevant legal text: accordingly, minors diagnosed with autism in Colombia are exempt from paying therapy fees. The case was thus decided. But the judge had opened a whole different can of worms: is it really okay for judges to let ChatGPT write their verdicts?

Judge wants to continue using ChatGPT for verdicts in the future

The judge was transparent about his actions and published the “conversation” with the chatbot. In an interview with the radio station “Bluradio,” he made the case for using ChatGPT in his job – to him, it is only logical. Normally, he explains, his secretary has to search for legal texts. With ChatGPT it is faster, and you get the added advantage of a finished verdict formulated in clear, understandable language. He is convinced that colleagues will follow suit.


The law not only allows, but demands the use of AI

Legally, the judge appears to be on safe ground: under a Colombian law from 2022, it is permissible to use artificial intelligence in reaching verdicts. Its use is even required where it makes work more efficient. The advantages seem obvious, since ChatGPT can call up the exact wording of laws and precedents within seconds, which is supposed to ensure a fair judgment. But the matter is not without its catches.

Don’t overestimate ChatGPT’s abilities

For ChatGPT to have answers ready for so many questions, OpenAI fed its specially developed language model with vast amounts of data. But this “training” was completed at the end of 2021 – no new content has been added since 2022. It may well be that the text generator is drawing on outdated knowledge.

Incorrect information 80 percent of the time

According to the service “NewsGuard,” ChatGPT is not only less reliable than often claimed; it is a “superspreader of misinformation.” After spot-checking the tool’s answers, the analysts speak of an error rate of at least 80 percent. The delivery, however, is always eloquent and correspondingly convincing.

ChatGPT itself is critical of the use of AI for judges

When TECHBOOK asked, even the bot itself recommended its use for judicial verdicts with reservations at best. There is “no direct rule prohibiting judges from using an AI model like ChatGPT to assist in their decisions. However, it may be important that such a process meets the requirements of impartiality and independence in the judiciary and that decisions are taken on a fair and lawful basis.” According to the tool, it would therefore make sense to carry out a legal review before an AI-generated verdict is read out.
