The Current Status of the AI Act: Navigating the Future of AI Regulation in the EU

This article is part of a series of articles we will publish on AI and the AI Act. In the following episodes in this AI series, we will delve deeper into specific topics, aspects of the AI Act and their interaction with other rules, sectors and practices. In this episode, we discuss the current status of the AI Act since the ban on unacceptable AI systems and the AI literacy obligation in February 2025, and the challenges that are already arising in light of the balance between the rapid advancement of technologies and the slow pace of regulation.

As the European Union (EU) continues to take significant strides in regulating emerging technologies, the Artificial Intelligence Act (AI Act) stands out as a landmark legislative effort aimed at regulating AI systems. As the EU moves forward with the AI Act – supervisory authorities are already stating that preparation for compliance should start now – several key areas of contention have emerged. These debates highlight the challenge of balancing innovation with ethical and legal considerations, particularly in light of recent political developments (e.g. pressure from the US administration). Reportedly, the European Commission (EC) is already considering making more of the AI Act's requirements voluntary, a proposal that is facing significant pushback from the European Parliament (EP).

Timeline for Implementation

The AI Act is set to become fully applicable on August 2, 2026, following a two-year implementation period. However, as with all complex legislation, there are exceptions, with some rules coming into force earlier.

Since February 2, 2025, AI systems categorised as posing an 'unacceptable risk' (such as AI systems that enable social scoring or untargeted scraping of the internet to create facial recognition databases) have been banned, marking a significant step towards safeguarding fundamental rights. Since that same date, organisations developing or using AI systems must also ensure that their employees are AI-literate, meaning that they must foster a sufficient level of knowledge of AI among their staff.

By May 2, 2025, codes of practice – intended to help providers of general-purpose AI models demonstrate compliance with the AI Act's requirements – must be ready.

Moreover, certain high-risk AI systems will have additional time to comply, with the deadline extended to August 2, 2027, allowing stakeholders to adapt to the new regulatory landscape.

Current Status and Developments

Since 2021, the (then proposed) AI Act has undergone scrutiny and debate within the EU legislative process. The EP and the Council of the EU were actively engaged in discussions to refine the AI Act's provisions and address concerns raised by various stakeholders. The AI Act was formally adopted on 13 March 2024. The EC now aims to provide guidance in the ongoing discussions: in February 2025, it published draft guidelines on prohibited AI practices and on the definition of AI systems – with critics stating that the documents create more confusion than clarity. Currently, key areas of discussion include the definition of high-risk AI systems, the scope of transparency requirements, and the balance between innovation and regulation.

Definition of High-Risk AI Systems

One of the most significant areas of debate revolves around the definition and categorisation of high-risk AI systems. The AI Act seeks to impose stringent requirements on systems deemed high-risk, such as those used in law enforcement, critical infrastructure, and employment. However, stakeholders have raised concerns about the criteria used to determine what constitutes high risk. For instance, some argue that the current definitions may be too broad, potentially stifling innovation by imposing excessive regulatory burdens on technologies that do not in fact pose significant risks. Others advocate for more precise criteria to ensure that high-risk applications are adequately regulated while allowing less risky technologies to flourish.

Transparency and Accountability

Transparency and accountability are central tenets of the AI Act, yet they remain contentious issues. The AI Act mandates that AI systems, particularly those classified as high risk, must be transparent in their operations and subject to human oversight. However, the specifics of these requirements are under debate. Industry representatives express concerns that overly prescriptive transparency obligations could hinder the development of proprietary technologies and compromise competitive advantage. Conversely, consumer advocacy groups emphasise the need for robust transparency measures to protect users and ensure ethical AI deployment.

Copyright Legal Gap

In February 2025, in a letter to the EC, 15 cultural organisations highlighted the need for new legislation to protect writers, musicians, and other creatives who are vulnerable due to an alleged "legal gap" in the AI Act. According to a copyright expert, the AI Act does not adequately address the copyright challenges posed by generative AI models. The text and data mining exemption relied on under the AI Act, originally intended for limited private use, has allegedly been interpreted in a way that could allow large tech companies to process vast amounts of intellectual property. This has sparked alarm, as well as lawsuits from authors and musicians. The EC has acknowledged these challenges and is considering additional measures to balance innovation with the protection of human creativity.

Hungary's Use of Facial Recognition Technology

A concrete example of the challenges faced in implementing the AI Act is Hungary's use of facial recognition technology: Hungary is proposing to use AI-based facial recognition to fine participants of Budapest Pride. Reports indicate that Hungary's deployment of this technology may violate the provisions of the AI Act, with an EC spokesperson stating that the assessment of its legality would depend on whether the facial recognition is applied in real time or retrospectively. Members of the EP are urging the EC to look into the issue. The case underscores the difficulties in enforcing the AI Act's requirements across member states – the liability dimension of the AI Act remains unclear, especially since the withdrawal of the AI Liability Directive – and highlights the need for clear guidelines and enforcement mechanisms.

Protection of Minors

The AI Act also addresses the protection of minors, yet this area remains fraught with challenges. Ensuring that AI systems do not exploit or harm minors is a priority, but the guidelines for effectively achieving this are still being refined. The complexity of regulating AI in contexts involving minors, such as educational technologies and social media platforms, requires careful consideration to balance protection with access to beneficial technologies.

Implications for Stakeholders

Although the AI Act is still quite theoretical and its liability dimension remains unclear, it is important that organisations making use of AI systems are aware of the rules and start preparing for compliance with the AI Act. For further insights into the implications of the AI Act and the necessary steps developers and users must take towards compliance, the Dutch and Belgian Data Protection Authorities provide resources and guidance on their websites, respectively here and here. This includes conducting thorough risk assessments, implementing transparency measures, and enhancing AI literacy.

Conclusion

All in all, the AI Act is poised to have far-reaching – and often unforeseen – implications for businesses, developers, and users of AI technologies across the EU. The AI Act represents a pivotal moment in the regulation of AI within the EU. As has become clear, however, it is no "silver bullet" legislation that resolves all uncertainties for organisations using and developing AI systems. The EC's guidelines, intended to provide clarity, partly fail to do so. Moreover, the AI Act is relevant across all sectors and practice areas, and AI systems will be regulated by other legislation besides the AI Act, such as the GDPR and anti-discrimination laws. These complexities highlight the need for comprehensive legal guidance. In upcoming episodes of this AI series, we will delve deeper into specific topics and aspects of the AI Act, navigating between these sectors and practice areas, aiming to create awareness and provide hands-on cross-disciplinary guidance.