We would like to inform you that on June 14, 2023, the EU Parliament approved the Artificial Intelligence Act (“AI Act”), which introduces uniform regulations on the development, commercialization, and use of artificial intelligence systems in compliance with the rights and values of the European Union.
The EU legislator's aims include the safety, transparency, traceability and non-discriminatory nature of activities carried out through AI systems, as well as respect for the environment.
THE LEVELS OF RISK
The AI Act adopts a risk-based approach: each level of risk posed by an AI system entails specific obligations, both for manufacturers and for user companies. The Regulation provides for three different levels:
Unacceptable risk: this refers to AI systems considered a threat to individuals, for which a general prohibition on placing on the EU market and circulation within it has been introduced.
This includes AI systems that:
- use manipulation techniques designed to distort behavior;
- exploit the vulnerabilities of individuals or specific groups;
- carry out remote and real-time biometric identification;
- classify people according to their social behavior or personal characteristics (e.g. social scoring).
High risk: this includes AI systems that negatively impact safety or fundamental rights, such as those used in products regulated by EU product safety law (e.g. toys, cars, medical devices) and systems designed to assess credit risk or to analyze CVs for recruitment purposes.
The main obligations to be followed for the release of the above systems are:
- the provision of an adequate risk assessment and mitigation process;
- the provision of clear and adequate disclosure to the user, together with detailed documentation providing all the necessary information about the system and its purpose, so as to facilitate the supervisory authorities' activities.
Remote biometric identification systems are classified as high risk and subject to severe restrictions.
Low or minimal risk: minimum transparency requirements must be ensured for these AI systems, in order to allow users to make informed decisions. Such systems include video games, spam filters and systems that generate or manipulate image, audio or video content (so-called deepfakes).
THE AI ACT COMPLIANCE ACTIVITY
According to recent research, the use of AI systems by Italian companies is increasing significantly, so such companies will be required to set up processes to ensure compliance with the Regulation.
First of all, companies will have to verify that their AI systems are not among those posing an unacceptable risk; otherwise, the systems would not be admissible on the EU market. Then, through a compliance assessment, companies will have to determine whether they are subject to the requirements for high-risk AI systems.
Finally, once compliance with the AI Act has been verified, the AI system will be allowed to circulate in the market after the CE marking of conformity has been affixed.
We point out that a conformity assessment will be necessary not only when the AI system is placed on the EU market, but also in the case of substantial modifications.
It will also be necessary to adapt the data governance process for data management in accordance with the non-discrimination obligations in the Regulation, as well as to adjust privacy policies in accordance with the principle of transparency and to implement procedures ensuring the exercise of data subjects' rights.
THE FINAL STEPS
We are now awaiting the final version of the Regulation, which is expected to be approved by the end of this year as a result of the interinstitutional negotiations. The first Regulation in the world on artificial intelligence will enter into force, indicatively, between 2024 and 2025, and it will apply 24 months after its entry into force, subject to specific exceptions.
We remind you that our firm's professionals offer assistance on integrated compliance and AI, and remain available for any clarifications or further insights.