Artificial Intelligence Act: an update
On Thursday 11 May 2023, the Internal Market Committee and the Civil Liberties Committee of the European Parliament adopted a draft negotiating mandate on the Artificial Intelligence Act (the AI Act). The AI Act is the European Union's first attempt at comprehensive regulation of artificial intelligence and follows a risk-based approach: risk is assessed on the basis of the intended purpose of the technology and the sector in which it is deployed. Depending on the risk posed by an AI system, providers and users are subject to certain obligations. See our earlier blog on the European Commission's proposal in 2021.
Changes and additions
The changes proposed by the European Parliament aim to ensure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly. They also introduce a uniform, technology-neutral definition of artificial intelligence, so that future forms of AI will be covered as well.
The AI Act prohibits the use of AI systems that pose an unacceptable risk. The European Parliament significantly amended the list of AI practices deemed to carry an unacceptable risk, in part to expand the ban on discriminatory uses of AI systems. An example is the use of AI systems for social scoring: evaluating or classifying individuals based on their social behaviour, socioeconomic status, or known or predicted personality traits. The European Parliament added a number of applications to the list, including biometric categorisation systems that use sensitive data to categorise natural persons, the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases, and AI systems that infer a person's emotions in contexts such as law enforcement, border management, the workplace and educational institutions.
Moreover, under the original proposal an AI system was classified as high risk solely on the basis of its intended purpose. The European Parliament has added the requirement that the system must also pose a significant risk to health, safety or fundamental rights in order to be considered high risk. Recommender systems of social media platforms designated as very large online platforms under the Digital Services Act (DSA) have also been added to the list of high-risk AI systems.
General Purpose AI
The amendments made by the European Parliament address a gap that had already become apparent in the draft regulatory framework: generative AI. The widespread use of ChatGPT and other generative AI tools has prompted European lawmakers to reconsider the legislative proposal, which was geared towards the professional use of AI systems. Consumer-facing generative AI is often deployed broadly and across multiple sectors, which does not sit entirely comfortably with the risk-based approach of the original proposal of the AI Act, in which factors such as purpose and sector determine the level of risk. Generative AI models designed for general use were not covered by the original proposal because of its tiered approach. The compromise amendments now classify these models as high-risk AI systems and, consequently, subject them to additional transparency requirements. One of those requirements is to clearly indicate when content has been generated by AI. Another notable change is the obligation to record all copyrighted data used to train generative AI systems and to disclose its use in a detailed overview.
What’s next?
The European Parliament will hold a plenary vote in mid-June, where the mandate needs to be endorsed by the Parliament as a whole. Following this, negotiations will begin between the European Commission, the Council of the European Union and the European Parliament to agree on a final text. These negotiations are expected to continue under the upcoming Spanish Presidency of the Council, which runs from July to December 2023. Once the final regulation enters into force, it will apply after a period of 24 months, which is currently expected to be in the first half of 2026.