The Forefront of AI Legislation: A Guide to the New EU AI Act


In December 2023, the EU Parliament’s negotiators and the Council reached an agreement on the future EU AI Act. Although the scope of the regulation is limited, each company should carefully assess whether the Act applies to its own activities. In October 2023, our experts spoke at Luxid’s webinar ‘The legal landscape of generative AI’ on the future EU AI Act, the AI Liability Directive and AI copyright issues. You can watch the recording here. In the blog post below, we have summarized the changes brought by the AI Act and what to expect from it.

The EU AI Act Is Coming – Are You Ready?

In a rapidly evolving digital landscape, the European Union (EU) has introduced the world’s first comprehensive legislation on Artificial Intelligence (AI) – the EU AI Act. The Council and the Parliament have reached a provisional agreement on the Commission’s proposal for harmonized rules on AI. Although the final regulation has not yet been adopted, this provisional agreement means businesses can be almost certain that regulation on AI is coming.

The EU AI Act – A Summary

No official document is available yet for the final text of the EU AI Act, which is currently being finalized on the basis of the provisional agreement reached. However, based on the official notices from the EU institutions from December 2023, the EU AI Act will follow the proportionate, risk-based regulatory approach proposed earlier, where the level of obligations depends on the level of risk. The Act classifies AI systems into the following risk levels:

  1. Unacceptable risk – concerns AI systems that pose excessive risks, such as cognitive behavioral manipulation, untargeted scraping of facial images from the internet or CCTV footage, social scoring and real-time facial recognition. These systems are banned and cannot be used on the EU market.
  2. High risk – concerns AI systems that affect the fundamental rights of users, for example systems used in law enforcement, critical infrastructure, or the management of applications. Systems falling under the high-risk category will face a vast number of new obligations, such as affixing of the CE marking, risk management systems, rules for the collection of data, and various documentation requirements. However, the co-legislators have emphasized that these requirements have been brought to a more technically feasible and less burdensome level. In practice, this means that the requirements on data quality and technical documentation will be lighter for SMEs.
  3. Limited and minimal risk – concerns other AI-based systems. The AI Act will introduce new transparency requirements for systems interacting with humans. Additional obligations will also be put in place for foundation models, such as large language models. Systems such as ChatGPT will have to comply with specific obligations before they are placed on the market. Providers of other AI systems may follow voluntary codes of conduct built on the obligations under the AI Act.

EU AI Liability Directive – Next Up in Line?

The preparation of the EU AI Liability Directive (AILD) has followed a path parallel to the AI Act. While the AI Act has been proposed to protect the users of AI systems, not all risk can be eliminated. For this purpose, the EU has been drafting the AILD alongside the AI Act. The AILD aims to guarantee that users who have been harmed by AI systems receive reasonable compensation for the harm caused. Without the AILD, the AI Act would only be an additional burden on businesses without offering real protection to individuals. It can therefore be expected that the AILD will be next in line to be finalized.

The AILD is built on a similar idea to the product liability laws in the EU, and it aims to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies. Providers of AI systems will be discharged from the liability to pay damages only where they can prove that the harm was not caused by the AI system.

Expected Timeline

Although the AI Act has taken a step forward, it is unlikely to take effect before 2025. The agreement reached so far is only provisional, meaning that the details are currently being hammered out in technical meetings. The Act will be voted on once more – likely in spring, before the Parliament elections – and after that, national transition periods will still follow. However, businesses are advised to use the waiting time for preparation: for each company, it is essential to determine whether or not the regulation will concern its business.

As businesses navigate this new regulatory terrain, adapting and complying becomes paramount. For a more in-depth look at and discussion of the proposed regulation on AI, as well as the copyright rules concerning AI-created material, we encourage you to watch the webinar recording ’The legal landscape of generative AI’ here.

Our experts are closely monitoring the proposed regulation and will update you on its progress.

For more information, please contact