The death penalty for a robot? That’s not just science fiction anymore, says lawyer Alice Giannini

The suspect in a crime may soon no longer necessarily be a human being of flesh and blood. Rapidly advancing artificial intelligence is often seen as super-efficient and goal-oriented, but it also turns out to be fallible and capable of harming people. Think of self-driving cars that cause accidents. Criminal liability for artificial intelligence then seems conceivable. Or is that exaggerated, and is it unthinkable that robots and systems will ever end up in the dock?

Italian lawyer Alice Giannini (30) has spent the past few years researching this subject. It resulted in her dissertation Criminal Behavior and Accountability of Artificial Intelligence Systems, after which she received her PhD on November 24 from Maastricht University, where she has been a university lecturer since this year.

The theme presented itself naturally, she says: “After my law studies, I combined an internship at the Public Prosecution Service with a job at a start-up that developed AI.” Her childhood also played a role: “In my younger years I watched a lot of science fiction films with my father. It’s funny to see that scenarios that were thought impossible at the time are now becoming reality. Maybe also a little scary, although I’m not afraid myself.”

In what areas has AI already progressed to such an extent that criminal law issues could come into play?

“I would say: traffic, health care. And don’t forget aviation.”

And where is the thinking in criminal law on this?

“When I started my research in 2019, it was really a niche, but in recent years a lot of papers and articles about AI and criminal law have been published.”

How did you proceed?

“It was a lot to take in. A little technical knowledge about AI is also essential. And I had to define AI. That’s like trying to nail pudding to a tree: not only lawyers, even technicians cannot agree on it. After finally establishing a definition, my research also had a significant philosophical component: can AI be a legal entity? Does it have free will? Can it feel guilty? It’s almost endless: you start with a hundred questions and you end up with a hundred thousand questions.”

Yet you eventually arrived at the main topic: criminal liability.

“Roughly three schools of thought can be distinguished: ‘expansionists’ who strongly advocate liability and changes to criminal law, skeptics who consider this unrealistic nonsense and the moderates. They are in the middle and say: ‘It is still too early, but when the time comes, criminal law can provide a solution’.”


And what do you think?

“I started out as a skeptic, as someone who thought it was nonsense. After all my research, I count myself among the moderates. In the long term it could really get to the point where it is necessary.”

Why not now?

“The existing law is sufficient for the time being. If something happens now, you can prosecute the developers or companies behind the AI. And if you want to regulate too much in advance, you will stop innovation at an early stage. No one wants to invent anything if they immediately risk enormous liability.”

Don’t you run the risk that legislators will keep lagging behind developments?

“Criminal law should not be among the first to arrive at the party. It is intended as a last resort. Research the topic and debate it, but take your time. Moreover, it is impossible to predict what could go wrong with AI a few years from now, because we cannot predict what will be possible then. For current applications, for example in healthcare and self-driving cars, the existing law is still sufficient.”


When is new legislation necessary?

“As soon as we start developing machines that we also consider part of our community. At that moment they can also threaten our community and commit crimes in the legal sense of the word. To prevent this, they must be able to learn moral and legal rules.”

That’s not possible yet?

“Work is being done on the morality of AI, but it is currently impossible to think of every dilemma in advance, or to have a machine make the trade-offs that we sometimes make in the blink of an eye. A classic example: a friend has to be rushed to the emergency room with acute, dangerous symptoms. It is then not simply a matter of either driving with the utmost urgency or strictly obeying the traffic rules. No, most people will try to strike a balance between getting to the emergency room quickly and keeping the risk of a traffic accident as small as possible.”

What could the penalties for AI look like?

“The death penalty is the most drastic: destroying a robot or shutting down a system. But in many places we have moved past the stage of punishing people with death, and AI cannot really suffer anyway. Fines are also difficult: AI currently cannot own money of its own. Sentencing it to reprogramming seems to me to make the most sense. With people, too, you hope that after serving a sentence they will emerge as a better person and not make the same mistake again.”

Will human judges then pass judgment or will AI soon judge AI?

“The latter seems very likely. AI judges and AI lawyers are coming much faster than AI criminal liability. Artificial intelligence already has such applications, not as extreme as robot judges, but for example in calculating the chance of recidivism and the most appropriate punishment based on the crime, the situation and the circumstances.”
