With AI and killer robots in our armies, a new, terrifying era is upon us

An unmanned drone flies over Ukraine in September 2022, looking for enemy activity from Russia. Image: Getty Images

During a visit to the United Nations, Foreign Minister Wopke Hoekstra recently called the use of artificial intelligence (AI) in the military domain one of the ‘greatest future challenges in the field of international security and arms control’. If we do not take responsibility for the development and deployment of this now, according to the minister, ‘we will regret it later’. This attitude is welcome, but there is still work to be done, including by the Netherlands. Hopefully the House of Representatives, where the Defense Committee will be discussing this subject this week, will insist on this.

About the authors:

Daan Kayser is project leader for autonomous weapons and Jip van Dort is a political advisor, both at PAX.

The trend is clear: armies are applying more and more autonomy to weapon systems. A new, terrifying era – where we leave the selection and attack of targets to sensors and algorithms, without human intervention – is upon us. We must hurry to set boundaries and get things right while we still can.

Prohibitions and Safeguards

First of all, a ban on fully autonomous weapons, so-called killer robots, is urgently needed. These are unacceptable weapons that cannot be controlled by human beings. In addition, the government must regulate the use of partially autonomous weapons.

Two government advisory committees, the Advisory Council on International Affairs (AIV) and the Advisory Committee on Issues of Public International Law (CAVV), already stated at the end of last year that fully autonomous weapons cannot be used under the law of war. The international community should therefore prohibit their use. Both committees also believe that human judgment must be guaranteed in the deployment of partially autonomous weapons.

After all, the law of war places the obligations governing the use of weapons on the person who plans an attack, decides on it, or carries it out. The government is therefore advised to adopt the concept of meaningful human control as the starting point for regulating partially autonomous weapon systems. In its response to the advisory committees' report, the government fully adopted both recommendations and indicated that the Netherlands will advocate these principles in an international context.

A dead-end road

So far so good, you would think. But there is a catch. To realize the policy outlined above, the Netherlands appears to be focusing mainly on the Convention on Certain Conventional Weapons (CCW). This is a standing United Nations forum aimed at curbing weapon systems that cause disproportionate injury. The big problem, however, is that within the CCW a single country (such as Russia, India or Israel) can block the will of the majority. This route is therefore a dead end.

In view of the major risks of unregulated use of autonomous weapons, the Netherlands must waste no time and should work on a separate treaty outside the CCW. This is possible together with the many countries that also want new international rules, including Belgium, Switzerland, Ireland, Brazil, Mexico and New Zealand.

Preventing civilian suffering

It is encouraging that the world has previously managed to bypass obstacles at the CCW by drawing up important, standard-setting treaties via a different route. This is how bans on landmines and cluster munitions were agreed – two treaties that the Netherlands has signed and which have prevented much civilian suffering.

It is also important that a treaty on autonomous weapons states that humans may not be targets. This is a fundamental ethical principle, also endorsed by the Red Cross. It is unethical to reduce the decision to kill a human being to zeros and ones. Killer robots undermine human dignity.

The law of war stipulates that civilians may never be targeted in acts of war. Soldiers may be targeted, but not once they have been put out of action. This shifting legal status of people in a conflict zone is another reason not to leave the choice between life and death to machines.

Warning

It is striking that the AIRCW, an advisory committee for the Ministry of Defense that assesses the use of weapons against international law, is also critical of the use of autonomous weapons against human targets. According to the committee, it is doubtful whether autonomous weapons will be able to independently distinguish between civilians and military personnel, and between active soldiers and soldiers who have been put out of action (hors de combat). It is a clear warning.

Finally, a note about investing in these weapons. The Netherlands, which is increasing the Defense budget by an unprecedented five billion euros per year, seems very interested in the application of autonomous weapons. But without regulation, investing is problematic.

The Advisory Council on International Affairs (AIV) and the Advisory Committee on Issues of Public International Law (CAVV) also warn against this. By focusing on research and development – as other NATO countries, but also China, for example, are currently doing – governments give a new arms race free rein, contribute to the proliferation of these weapons and lower the threshold to warfare.

The urgency and the risks are clear. Especially now that the Netherlands is organizing a conference on military AI in February next year, it is essential that Minister Hoekstra takes responsibility for preventing autonomous weapons from being deployed without regulation.
