We wrote, in an article dated Thursday 7 December, that after 22 hours of talks, unanimity of views had still not been reached, and the AI Act had not yet been approved.
We had arranged to meet again on Friday the 8th. In the end it took 36 hours to iron out all the differences, after which the AI Act, the European law on artificial intelligence, finally received the green light from Brussels.
Need for regulation
The AI Act was born from the increasingly pressing need to regulate artificial intelligence, especially since generative AI forcefully entered our lives.
The first sign was the document, signed by 18 countries including Italy, setting out some (non-binding) rules on the use of AI. Then came the appeal signed by 34 Italian cultural associations in response to our government’s idea (shared by France and Germany) of letting software producers rely on self-regulation.
The agreement on the AI Act arrived on Saturday 9 December. The trilogue (i.e. the joint meeting of representatives of the Council, Parliament and Commission) resolved the points of divergence, and there is now unanimity of views on the regulation.
The AI Act, as we read on the official website, is “a proposed European law on artificial intelligence, the first law in the world on AI by a major regulatory body. The law assigns AI applications to three risk categories.”
The points of divergence
The agreement on the AI Act was reached after 36 hours of talks, because several points divided the stricter European Parliament from the Council (which represents the member states).
Above all, positions were not aligned on two fronts: real-time biometric recognition and predictive policing, i.e. the use of AI tools to determine probabilistically by whom, how and where a crime could be committed.
The harder line prevailed: no predictive policing, and the use of artificial intelligence based on sensitive characteristics such as political or religious beliefs and race will be banned.
Law enforcement agencies will be able to use biometric recognition systems only in exceptional cases: an imminent terrorist threat, the search for victims of serious crimes, or the prosecution of serious crimes.
And facial recognition may be “rigorously used in the targeted search for a person convicted or suspected of having committed a serious crime”.
Emotion recognition in workplaces and educational institutions is prohibited, as are techniques that aim to manipulate human behavior.
For high-impact AI systems, such as GPT-4, manufacturers will need to ensure maximum transparency on training processes.
Levels of attention to copyright and privacy will increase with the introduction of an invisible digital label certifying whether a work was created by artificial intelligence.
An ad hoc office on artificial intelligence will then be created, based in Brussels.
Companies that fail to comply with their obligations under the AI Act will face fines of €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI applications; €15 million or 3% for other violations; and €7.5 million or 1.5% for supplying incorrect information.
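The tiered penalty logic can be sketched as a minimal calculation. This is purely illustrative: the amounts and percentages come from the figures above, and the “whichever is higher” rule, stated explicitly only for the top tier, is assumed here to apply to all three.

```python
# Illustrative sketch of the AI Act fine tiers described above.
# Assumption: "whichever is higher" applies to every tier, not just the first.
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),    # violations of prohibited AI applications
    "other": (15_000_000, 0.03),            # other violations
    "incorrect_info": (7_500_000, 0.015),   # supplying incorrect information
}

def applicable_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the higher of the fixed amount and the turnover percentage."""
    fixed_amount, turnover_pct = FINE_TIERS[violation]
    return max(fixed_amount, global_annual_turnover * turnover_pct)

# A company with €1 billion in global turnover violating a prohibition:
# 7% of €1bn (€70M) exceeds the €35M floor, so the percentage applies.
print(applicable_fine("prohibited_ai", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates: with €100 million in turnover, 3% is only €3 million, so an “other” violation would still be fined at €15 million under this reading.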
What was reached at the Brussels trilogue was a general agreement on the AI Act. Now we will have to work on the final text, which should come into force within two years.
Within six months, however, the most dangerous uses of AI will already be prohibited. In the meantime, the Commission will launch an AI Pact between European and global developers, who will commit to implementing the obligations of the AI Act ahead of the legal deadlines.
We remind you that the European regulation on AI, proposed by the European Commission in April 2021, received the OK from the European Parliament in June, which led to the trilogue of recent days.