‘Intelligent software does not see your tears in the courtroom’

Will artificial intelligence help the judge decide? And partly replace the judge? That chance seems to have increased slightly now that a criminal judge has herself conducted scientific research into machine justice in daily practice. With a positive outcome. Her conclusion is that software producing a draft decision can play a constructive role in a high-quality judiciary. And she also expects it to happen.

An earlier initiative to settle online disputes with self-learning software, in the form of digital arbitration, encountered strong resistance from the judiciary and from critical media in 2018. The verdicts were not transparent, were kept secret and were 'commercial'. That debacle has been the subject of litigation ever since. An attempt by the judiciary to digitize one of the larger areas of law had already failed in 2017 – since then, IT has not been a popular topic with judges.

Simple traffic matters

At the end of May, however, senior criminal judge Manuella van der Put (57) of Den Bosch received her doctorate at Tilburg University, supervised by Corien Prins, professor of law and computerization, for research on artificial intelligence in the judiciary. She used her own court in East Brabant as a testing ground. Van der Put chose this theme because her then president was looking for solutions to the growing workload, 'as opposed to hiring more people'. Can intelligent software make the administration of justice more efficient: in what type of cases, and which legal, political, philosophical and management questions do you run into? And would such a system work?

Van der Put conducted literature research, interviewed experts and built a test system in six months: a tool that prepared simple traffic cases for subdistrict court judges. It checked on its own whether the appeal had been submitted on time and whether the fine had been paid. It recognized the place, date and type of violation. And it retrieved similar cases and the 'most common decisions'. She then tested this system with judges and judicial assistants at the courts of Den Bosch and Eindhoven.
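The article does not describe the pilot's internals, but the checks it mentions are simple, rule-based ones. A minimal sketch, with an invented record layout and an assumed appeal term purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record layout; the real pilot's data model is not described in the article.
@dataclass
class Appeal:
    decision_date: date      # date of the original fine
    filed_date: date         # date the appeal was lodged
    fine_paid: bool
    place: str
    violation_type: str      # e.g. "speeding", "red light"

APPEAL_WINDOW = timedelta(weeks=6)  # assumed statutory term, for illustration only

def formal_checks(a: Appeal) -> dict:
    """Formal admissibility checks of the kind the tool performed automatically."""
    return {
        "filed_in_time": a.filed_date - a.decision_date <= APPEAL_WINDOW,
        "fine_paid": a.fine_paid,
    }

def similar_cases(a: Appeal, archive: list[Appeal]) -> list[Appeal]:
    """Retrieve earlier cases of the same violation type, as the tool did."""
    return [c for c in archive if c.violation_type == a.violation_type]
```

The point of such a sketch is only that these steps are mechanical: no judgment is involved until a judge reads the draft.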

I’m not afraid that we’re going to become some kind of robots

"The judicial assistants were enthusiastic about it," says Van der Put, mainly because it freed up time. "Searching for comparable cases – the judicial assistant does not always have time for that. As a judge, you often do that yourself." And if citizens argue that they were speeding because they were on their way to the hospital? "Yes, then you can enter the search terms 'hospital' and 'speeding'." After which it is clear whether, and how often, judges have taken this into account before. Isn't the temptation great to simply rubber-stamp the draft?
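The 'hospital' plus 'speeding' example amounts to a conjunctive keyword search over past decisions. A minimal sketch, with invented decision texts for illustration:

```python
# Hypothetical mini-archive of past decisions; the texts are invented.
decisions = [
    "Speeding on the way to hospital; fine reduced given the emergency.",
    "Speeding without special circumstances; fine upheld.",
    "Red light violation; fine upheld.",
]

def search(terms: list[str], archive: list[str]) -> list[str]:
    """Return decisions that mention every search term (case-insensitive)."""
    lowered = [t.lower() for t in terms]
    return [d for d in archive if all(t in d.lower() for t in lowered)]

hits = search(["hospital", "speeding"], decisions)
# only the first decision matches both terms
```

A real system would use more robust retrieval than substring matching, but the judge's workflow – query, inspect prior decisions, decide – stays the same.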

"I have not experienced it that way, and I am not afraid that we will now become some kind of robots. In the end, the system's 'most likely decision' turned out to be the least valued feature. The judicial assistant used it too, but ultimately preferred his own knowledge and experience. It is very useful to know what is usually decided. We already look at case law now; only here it was delivered automatically. Administering justice remains custom work: who is in front of you, what was the situation? That is what it is about."

Extra motivation obligation

Is there no risk that artificial intelligence will freeze legal development? "Certainly, that risk is simply there. In knowledge systems this is easy to correct. If the Supreme Court changes its position, we adapt the knowledge base and thus incorporate the new reality. But with machine learning that is more difficult, because it is fed with historical data. It is important to keep that in mind. Judges also need to learn how such a system works."

And what is the point of an appeal if the same predictive algorithm is running at the appeal court? That can only produce the same result, right? "You can agree to use no system on appeal, or a different one. But in principle, if all goes well, the outcome is the same. Only, we humans do not always decide the same either. So if you do want to decide differently, you should regard such an identical outcome as an extra obligation to motivate your decision. That is also quality."

Judges do experience emotions, that’s important, it’s part of good justice

Artificial intelligence (AI) is without emotions, has no conscience, has no empathy. Should the litigant be happy with that? "Judges do experience emotions; that is important, it is part of good justice. But it can also be a pitfall. Machines simply do not have that. Everything is the same for an AI system. It does not see your tears. That is also good, because it cannot be influenced either. If we as judges realize that we are under the influence of who we are, of emotions, of the subconscious, then we can be alert to this. A machine without emotion says: this is the outcome. A judge can then say – fine, thank you, but I will do it differently."

Van der Put has been working on her research since 2017, initially with a one-day-a-week exemption, and later with a subsidized researcher to build the pilot system. Now that it is finished, she wants to take it into the judiciary to open the debate about it.

Weaknesses and strengths

Her dissertation confronts the human judge with the machine, analyzes the weaknesses and strengths of each, and lists dozens of questions, from practical-legal to ethical-philosophical. Can a decision with AI be cheaper for the litigant than without? Is a decision algorithm really 'explainable' to the judge and to the litigant? Is legal representation still mandatory in AI cases? Can a citizen consult the system in advance to see how his type of case is usually decided? Who is allowed to build and maintain the system – the judiciary itself, or a (foreign) supplier? Can citizens refuse the application of AI? Can the court demand or refuse support from AI? Should AI cases always be limited to a 'small interest'? Such a system also makes the judges themselves transparent: who is often challenged, and whose verdicts are often overturned? Many dogmas, principles and sacred cows are being challenged.

Van der Put recommends small steps, a growth process that citizens must be able and willing to join. Above all, it has now been shown that a working system can be built in a short time for a well-delineated area of law. After the parking, red-light and speeding cases, other traffic fines could be supported with a decision algorithm. Van der Put considers it conceivable that for some 'bulk cases' the computer might one day be allowed to decide on its own. "If we want an autonomous decision from a machine that is deaf and blind, that might be fine for some things. Then you still have to decide whether it needs a judge's signature, or whether you only see a judge on appeal. But then we first have to agree on that within the judiciary."
