Provisional political agreement on EU requirements for the development of artificial intelligence

News item | 09-12-2023 | 12:34

The EU has reached a provisional political agreement in Brussels on the AI Act, which sets European requirements and frameworks for artificial intelligence: a digital technology that plays an increasingly important role in society and the economy. This will give a significant boost in the EU and the Netherlands to the sound development of AI, aimed at economic opportunities and the safeguarding of public values.

The AI Act offers clear opportunities for developers and entrepreneurs, as well as guarantees for European consumers. It includes basic agreements on how AI may operate in products and services, requirements for potentially risky applications, and support for developers such as SMEs. In this way, Europeans can rely on AI.

The AI Act also gives developers direction on how to meet the requirements. In so-called regulatory sandboxes, they can experiment and innovate within those rules. In this way, AI applications can be developed in the EU that both strengthen our technological position on the global market and guarantee high-quality AI. AI in design programs, applications in the manufacturing industry and video games, for example, can be stimulated in this way.

Agreements in the provisional political agreement

The political agreement also defines what ‘high-risk’ AI systems are and what requirements are imposed on them. Certain types of undesirable AI, such as social scoring or manipulative AI techniques, will also soon be banned. Providers of chatbots in online stores, deepfake techniques or generative AI systems such as the popular ChatGPT must make clear to people that they are interacting with an AI or that content was created by AI. AI systems that pose little or no risk to people do not have to meet additional requirements; examples include AI systems for customer service, inventory management or low-risk data.

Minister Micky Adriaansens (Economic Affairs and Climate): “Artificial intelligence is developing at lightning speed worldwide. It is important that we in Europe position ourselves better, so that we can keep pace with this technology, benefit more economically from the opportunities that AI offers, and focus on the development of AI systems that work according to the standards our society considers desirable. We must therefore continue to strongly encourage innovation by researchers and entrepreneurs. At the same time, we must ensure well-functioning and safe AI systems in order to create trust. The future AI Act promises to strike the right balance in this regard.”

State Secretary Alexandra van Huffelen (Kingdom Relations and Digitalization): “Dealing with AI is about striking a good balance between seizing opportunities and addressing concerns. There are many sectors in which the Netherlands is strong and in which AI will play a major role. Think of agriculture, education, health care, and peace and security. Internationally, the Netherlands is committed to ensuring that we can trust this technology: for example, by preventing discrimination when AI is used to grant benefits, or by ensuring the transparency of algorithms in the financial sector. I am pleased with the EU agreement, but it does not absolve us from the obligation to keep looking at the opportunities and risks of using AI technology and to enforce the rules.”

Clarity for developers and users

The AI Act provides clarity for AI developers who want to offer their products on the European market. Users of AI are also bound by rules to prevent risks. Legal obligations, such as a risk management system, will soon apply to high-risk AI systems that, for example, influence people’s opportunities in the labor market, education, financial services or medical devices. About 15% of all AI systems fall into this category. Separate requirements and European supervision will apply to the very large AI models.

Consumers, governments and companies that purchase or come into contact with AI products can assume that these systems are safe. Supervision must ensure that unsafe systems are removed from the market and that fines can be imposed on developers who do not meet the requirements. At the same time, the European rules ensure a level playing field: non-European providers of AI products and services must also comply with them if they sell their products here. Organizations that use AI, such as SMEs, will soon be able to trust that it works well and that they will receive support from the developer.

The agreement must still be approved by all EU member states and the entire European Parliament. The law will come into effect after two years.
