If the effort it took to reach the AI agreement is any indication of the political and commercial interests behind it, one conclusion is inescapable: they are enormous.
After marathon negotiating sessions totalling 36 hours, the European Union reached an agreement on the so-called AI Act on Friday evening.
It means that Europe is now at the forefront of regulating artificial intelligence, something prominent AI experts worldwide have been calling for for some time. In a statement, European Commission President Ursula von der Leyen praised the law, which “provides people-centred, transparent and responsible […] AI in the EU” and also “substantially contributes to the development of global guardrails for trustworthy AI”.
Not all details of the concluded agreement are yet known, and the European Parliament and the EU member states must still formally approve the final text. But the framework has been clearly defined, showing how Europe intends to restrict AI in the coming years and thereby set a worldwide standard.
The greater the risks, the stricter the rules
The basic principle of the law is that the greater the risks associated with an AI system, the stricter the rules. That means a ban on systems posing an unacceptable risk, such as so-called ‘social scoring’ systems used by governments to categorize citizens based on personal characteristics or behavior, and the use of emotion-recognition software by employers or educational institutions. At the insistence of the European Parliament, it will also be prohibited to predict on the basis of personal data whether an individual will commit an offence – the kind of practice that played a role in the childcare benefits scandal in the Netherlands.
But the law also contains exceptions, and how far-reaching some of them are remains unclear. During the negotiations, EU member states insisted strongly on broad scope for security and investigative services to use AI. For this reason, the absolute ban on real-time facial recognition that the European Parliament wanted will not materialize: government services may use such systems in, among other things, tracking down murder suspects and preventing terrorist attacks.
Among those disappointed is Amnesty International, which emphasized in a statement that an “absolute ban is really necessary” because “no guarantee can prevent the damage that facial recognition causes to human rights”.
‘It was a huge battle’
MEP Kim van Sparrentak, one of the main negotiators for GroenLinks, speaks of an “enormous battle” that had to be fought with EU governments on this point. “I really would have liked more, but at the same time I am happy that we have been able to severely limit its use.” She points out that facial recognition may, for example, only be used in very specific cases targeting a single suspect, and that a judge must always give permission in advance.
At least as great was the battle over regulating the most powerful AI models, known as ‘general purpose AI’ (GPAI), which includes the technology behind the well-known chatbot ChatGPT. Under pressure from a strong tech lobby, Germany, France and Italy had advocated in recent weeks for removing these models from the law and imposing only ‘self-regulation’ on them. Their main argument: overly strict rules would hinder innovation in Europe.
That lobby was not (completely) successful: the GPAI models will soon be subject to the law as well. This means, among other things, that the companies behind them must be transparent about the data on which the models were trained. They must also comply with European copyright law and, in the case of text, images or sound, make clear that the content was created with AI.
For the largest systems, including ChatGPT, the rules will be stricter still: among other things, they must regularly assess their possible social impact, mitigate any risks and report on their energy consumption.
At the same time, there are exceptions here too, specifically for systems that are ‘open source’, meaning anyone can inspect the technology and use it to train their own model. Because these systems are inherently more transparent, they face fewer obligations – unless they are classified as ‘high risk’, a definition that is not yet very clear.
This means that the European AI companies that lobbied hardest against the regulation in recent weeks – Germany’s Aleph Alpha and France’s Mistral AI – are likely to remain partly unaffected as open-source companies.
That did not stop the tech lobby from reacting tepidly to the agreement. The main lobby group, DigitalEurope, expressed concerns in a statement about how difficult it will be for companies to comply with the rules, forcing them to “spend more on lawyers, instead of hiring new AI developers”.
Not a ‘rulebook’, but a ‘springboard’
This touches on a sensitive point in the discussion about AI legislation: the accusation that laws and regulations hinder innovation. In an initial statement on X, European Commissioner Thierry Breton (Internal Market) preempted that criticism by emphasizing that the law is not only a “rulebook” but also “a springboard for EU start-ups and researchers to lead the global AI race”.
The hope in Europe is that the rules, as happened previously with privacy regulation, will eventually be (partly) adopted globally. Whether that will happen is uncertain. It is already clear that the EU lags far behind the United States and China in the global AI race, so the largest models – those facing the strictest regulations – may well have been developed exclusively outside Europe. That could fuel suspicions of protectionism. At the same time, the view in Brussels is that the additional administrative burden for such large-scale models is dwarfed by the millions that large tech companies such as OpenAI or Google are already investing.
Next spring, MEPs and EU member states must vote once more on the final proposal, after which the implementation phase will begin. The bans will apply six months later, but the vast majority of the remaining legislation will only come into effect two years after that: probably in 2026.