What limits should be placed on AI to protect fundamental rights?

On April 7, the National Consultative Commission on Human Rights (CNCDH) published an opinion (PDF), relayed by Next INpact, bringing together several recommendations to minimize the impact of artificial intelligence on fundamental rights. These risks are already a central concern: the European Commission has been working on the Artificial Intelligence Act since 2021.

“Social scoring”, the great enemy of individual freedoms

Remote biometric identification, social scoring, automated medical diagnoses: the CNCDH wants to ban certain uses of AI that it considers dangerous. In its view, these uses could directly infringe fundamental rights. It issues some twenty recommendations covering cases where an AI system would contravene human dignity, respect for privacy, equality and non-discrimination, or access to justice and social rights.


The Artificial Intelligence Act already classifies uses of AI according to the risks they represent. The introduction of “social scoring” by public authorities, for example, is considered unacceptable. The CNIL had already pointed out the dangers of such a system in 2021; its use in China frightens citizens. On this point, the CNCDH wants to go further by also prohibiting its use by private companies. The same applies to voice recognition tools found in toys, which can exploit children’s vulnerability.

The institution recommends banning AI for remote biometric identification in public spaces. Security applications and use by the police raise questions about the limits to be set. In the event of a serious and imminent threat to the life or safety of persons, such technology could be used in an appropriate and proportionate manner. Still, the risk of these systems being misused cannot be ruled out, as when the Russian authorities used them in 2021 to repress demonstrators in Moscow.

The CNCDH also agrees with the European Commission on the dangers that AI can pose in the justice sector. It specifies that, beyond the risks of having the law applied by an algorithm, further reflection is needed on the contributions and limits of AI in judicial proceedings. In France, the justice system experimented with the DataJust algorithm to calculate compensation amounts for victims of damage. According to La Croix, its operation was not considered satisfactory.

Prevent and educate on AI issues

Emphasis is also placed on prevention and information. The opinion highlights the importance for National Education of strengthening students’ training on the technical, societal and political issues of artificial intelligence. Teachers should have the teaching materials needed to address these specific subjects.

Citizens should be systematically informed when they are exposed to an AI system and when it makes decisions concerning them based on algorithmic processing. The Artificial Intelligence Act cites, for example, the danger of automatic CV-sorting tools in recruitment procedures. The CNCDH also recommends using the expression “algorithmic decision support system” rather than “artificial intelligence”.

Finally, the CNCDH’s opinion proposes promoting public investment to inform and train individuals through tools accessible to as many people as possible. It also mentions holding national consultations modeled on the Estates General of Bioethics organized by the National Consultative Ethics Committee. AI will undoubtedly be a central issue in Emmanuel Macron’s new five-year term. France, which holds the presidency of the Council of the European Union, wishes to defend its position on the use of AI by law enforcement.
