How to be compliant with the AI Act?

BI & Data Science Analyst at 10 Senses

The European Union Artificial Intelligence Act (EU AI Act) is the world’s first concrete initiative to regulate AI. The AI Act aims to turn Europe into a global hub for trustworthy and responsible AI.

In the digital age, with AI models increasingly interfering in our lives, a legal framework for regulating AI is essential to protecting the fundamental rights of EU citizens and democratic processes.

To accomplish this, the EU AI Act focuses on providing harmonized rules for the development, placing on the market, and use of Artificial Intelligence in the European Union.

All these rules will ensure that AI systems in the EU:

  • are safe to use,
  • follow fundamental values and rights,
  • foster investment and innovation in AI,
  • enhance enforcement and governance,
  • encourage a single EU market for AI.


Consequently, the Artificial Intelligence Act is a milestone in the regulation of Artificial Intelligence and its impact on organizations globally.

In fact, negotiations are currently under way between the European Council, the European Parliament, and the European Commission to reach common ground on the final form of the Act. Nevertheless, certain requirements are already clear, and they provide guidance on how to prepare for the AI Act, which will come into force soon.

In this article, we will check:

  1. Who is affected by the EU AI Act?
  2. What are the requirements of the EU AI Act?
  3. How to be compliant with the AI Act?
  4. When is the EU AI Act coming?


Who is affected by the EU AI Act?

The Act clearly defines the various actors involved in Artificial Intelligence development, usage, import, distribution, and manufacturing. These include not only providers of AI systems but also deployers, importers, distributors, and product manufacturers.

What is more, the Act also takes into account providers and users of AI systems who reside outside the European Union if their AI systems are intended to be used within the borders of the EU.

What are the requirements of the EU AI Act?

Preparing for the AI legislation can be broken down into a few steps:

  1. Understanding the current state by performing an AI model inventory.
  2. Risk classification of AI models.
  3. Application of rules to the specific risk class.


1: Understanding the current state by performing an AI model inventory

First, organizations should assess their inventories: check whether they have AI systems in use, in deployment, or procured from third-party providers, and scrupulously list such AI models.

You should therefore start by assessing your potential exposure to AI. Even if your organization doesn’t use AI currently, it is likely to start in the upcoming years. Remember that the Artificial Intelligence market is growing at an accelerating pace; according to IDC, 90% of commercial enterprise apps will use AI by 2025.

To begin, conduct an initial identification by looking into your existing software catalog or surveying the various business units.
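As a rough illustration, the inventory from this step can be kept as a simple structured list, so that each model’s owner, vendor, and (later) risk class are recorded in one place. The record fields and example entries below are hypothetical, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    """One entry in an organization's AI model inventory."""
    name: str
    business_unit: str
    vendor: str            # "internal" for in-house models
    purpose: str
    in_production: bool
    risk_class: str = "unclassified"  # filled in during step 2

# Assembled from a software catalog review and business-unit surveys
inventory: list[AIModelRecord] = [
    AIModelRecord("resume-screener", "HR", "internal",
                  "filter job applications", True),
    AIModelRecord("support-chatbot", "Customer Service", "AcmeAI",
                  "answer customer questions", True),
]

# Models procured from third parties still belong in the inventory
third_party = [m for m in inventory if m.vendor != "internal"]
```

Keeping third-party systems in the same inventory matters because, as noted above, the Act covers deployers and importers, not only providers.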

2: Risk classification of AI models in the EU AI Act

Once you have investigated a model repository, extended it, or created it, you should classify each AI model by the risk it carries. The Act distinguishes four categories of risk:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

(Figure: How to be compliant with the AI Act - risk categories. Source: Regulatory framework proposal on artificial intelligence | Shaping Europe’s digital future)
Let’s get into more details on each category of the EU approach to the use of AI.

Unacceptable risk = prohibited systems

The unacceptable risk category refers to Artificial Intelligence systems that involve:

  • subliminal techniques aimed at distorting behavior and causing users significant harm,
  • exploitation of users’ vulnerabilities with the aim of distorting behavior in a way that causes or is likely to cause significant harm,
  • biometric categorization systems categorizing individuals according to sensitive or protected attributes or characteristics,
  • social scoring, evaluation, or classification of natural persons based on their social behavior or personality characteristics leading to detrimental or unfavorable treatment,
  • facial recognition databases using the untargeted scraping of facial images from the internet or CCTV footage,
  • inferring emotions in the areas of law enforcement, border control, and workplace and education institutions,
  • analyzing recorded footage of public spaces through post-remote biometric identification unless subject to a pre-judicial authorization and strictly necessary for the targeted search connected to specific serious crimes.


Such systems are illegal in the EU. Organizations that use or develop them will face the heftiest fines available: up to EUR 40,000,000 or 7% of global annual turnover, whichever is higher. (Source: Key Issue 1: Fines/Penalties – EU AI Act).

High-risk = additional obligations

High-risk categories refer to AI systems entailing:

  • safety element of a product or falling under EU health and safety harmonization legislation (for example, toys, medical devices, lifts),
  • real-time and post-remote biometric identification systems of human beings,
  • management and operation of critical infrastructure (road traffic, electricity, supply of water),
  • education and vocational training systems used to assign individuals to educational institutions or to evaluate participants in tests,
  • recruitment for advertising vacancies, filtering applications, evaluating candidates, promoting or terminating work-related relationships, employee management (for example, evaluating performance in tests or interviews),
  • evaluating the eligibility of humans for public assistance benefits and services, creditworthiness, or priorities in the dispatching of emergency first response services (firefighters and medical aid),
  • making individual risk assessments of natural persons for criminal offenses, using polygraphs and similar tools to detect the emotional state of natural persons, detecting deep fakes,
  • evaluating the reliability of evidence during investigation, predicting the occurrence of an actual criminal offense, profiling individuals, or searching large datasets to identify unknown patterns by law enforcement authorities,
  • detecting the emotional state of people, using polygraphs, assessing security, irregular immigration, the health risk of entering the territory of a member state, verifying the authenticity of travel documents, detecting non-authentic documents, assisting public authorities to examine applications for asylum, visa, and residence permits,
  • assisting a judicial authority in researching and interpreting facts and the law through the administration of justice.


AI models classified as high-risk must comply with strict requirements, including risk assessment, data quality, documentation, transparency, human oversight, and accuracy. The EU AI Act lists specific obligations for providers, deployers, distributors, importers, authorized representatives, and foundation model providers.

Failure to comply with these obligations can result in fines of up to EUR 20,000,000 or 4% of global annual turnover.

Limited risk = transparency obligations

Limited-risk classes entail:

  • interaction with people (for example, chatbots),
  • emotion recognition systems,
  • biometric categorization,
  • image, audio, or video content generation.


Limited-risk systems must only adhere to specific transparency obligations so that users know they are interacting with a machine, not a human.

Minimal risk = no obligations

Finally, all the other AI applications have insignificant risks and can be freely used. This category entails, for example, games or spam filters.

3: Application of rules to the specific risk class

As you can see, the strict requirements are imposed on high-risk systems only, whereas limited-risk systems have only transparency obligations, and those with the lowest risk have none.

Therefore, to be compliant with the EU Act, you need to go through all the above steps and apply specific rules depending on the risk category your AI system falls into.
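Steps 2 and 3 can be sketched together as a simple lookup: map each use case to a risk tier, then look up the obligations that tier triggers. The keyword sets and obligation summaries below are illustrative placeholders, not the Act’s legal definitions:

```python
# Illustrative use-case keywords per tier; the real classification
# requires reading the Act's annexes, not string matching.
UNACCEPTABLE = {"social scoring", "subliminal manipulation"}
HIGH = {"biometric identification", "credit scoring", "recruitment"}
LIMITED = {"chatbot", "emotion recognition", "deepfake generation"}

OBLIGATIONS = {
    "unacceptable": "prohibited - may not be placed on the EU market",
    "high": "strict requirements: risk management, documentation, oversight",
    "limited": "transparency obligations only",
    "minimal": "no mandatory obligations (voluntary codes of conduct)",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a given use case (step 2)."""
    if use_case in UNACCEPTABLE:
        return "unacceptable"
    if use_case in HIGH:
        return "high"
    if use_case in LIMITED:
        return "limited"
    return "minimal"

# Step 3: apply the rules matching each system's tier
for uc in ["recruitment", "chatbot", "spam filter"]:
    tier = classify(uc)
    print(f"{uc}: {tier} -> {OBLIGATIONS[tier]}")
```

Anything not caught by a higher tier falls through to minimal risk, which mirrors the Act’s structure: the default category carries no mandatory obligations.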

Do you need help with AI projects?

Let’s check if we can help you

How to be compliant with the AI Act?

Let’s start with the most extensive category, which is high-risk AI models.

If you are a provider, deployer, distributor, importer, authorized representative, or foundation model provider (for example, of generative AI systems), you need to ensure that AI practices in your organization are in line with the requirements concerning high-risk models.

The EU AI Act requirements for the high-risk category

Obligations of providers

Firstly, providers in the high-risk category are required to:

  • establish, implement, document, and maintain a risk management system,
  • develop AI systems on the basis of training, validation, and testing datasets that meet appropriate quality criteria, if they use techniques involving the training of models with data,
  • prepare technical documentation and relevant conformity assessment procedures before the system is on the market,
  • design and develop AI models enabling:
    • the automatic recording of events, traceability of functioning, and automatically generated logs,
    • transparent operations to allow users to interpret the system’s output and use it appropriately,
    • effective oversight by human beings during the period in which the AI system is in use,
    • reaching an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle,
  • establish a quality management system,
  • comply with the registration obligations,
  • take necessary corrective actions and inform the national competent authorities of non-compliance and any corrective actions taken,
  • affix the CE marking to their AI systems to indicate conformity,
  • demonstrate the conformity of AI systems upon request of a national competent authority.
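One of the provider obligations above, the automatic recording of events and logs, can be sketched as wrapping every prediction in a structured audit entry so its inputs and output stay traceable. The field names and the toy scoring function are assumptions for illustration only:

```python
import time

def predict_with_logging(model_version, features, predict_fn, log):
    """Run a prediction and append a traceable audit record (a minimal
    sketch of the Act's 'automatic recording of events' obligation)."""
    output = predict_fn(features)
    log.append({
        "timestamp": time.time(),     # when the event occurred
        "model_version": model_version,
        "input": features,            # what the system was asked
        "output": output,             # what it decided
    })
    return output

audit_log = []
# Hypothetical creditworthiness check, logged automatically
score = predict_with_logging("v1.2", {"income": 42_000},
                             lambda f: f["income"] > 30_000, audit_log)
```

In a real deployment the log would go to durable, tamper-evident storage rather than an in-memory list, so that conformity can be demonstrated to a national competent authority on request.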


Obligations of deployers

In brief, deployers of AI systems must take appropriate technical and organizational measures to ensure they use such systems in accordance with the instructions for use. In particular, they must:

  • implement human oversight and ensure that humans are assigned,
  • ensure that relevant and appropriate robustness and cybersecurity measures are regularly monitored for effectiveness and are regularly adjusted.


Obligations of distributors

Before releasing high-risk systems, distributors need to verify that they:

  • bear the required CE conformity marking,
  • have required documentation and instructions for use,
  • are compliant with requirements for importers and providers.


Obligations of importers

Importers of high-risk systems need to verify that such systems are in conformity by checking that they:

  • have gone through relevant conformity assessment procedures by the provider,
  • have technical documentation in accordance with the EU AI Act,
  • bear the required conformity marking, documentation, and instructions of use.


Obligations of authorized representatives

Providers established outside the European Union must, before launching their AI technology in the EU, appoint, by written mandate, an authorized representative established in the Union.

The mandate shall empower authorized representatives to:

  • ensure that providers have done an appropriate conformity assessment and prepared the technical documentation,
  • upon a request, provide a national competent authority with all the information and documentation necessary to demonstrate the conformity of the AI system,
  • cooperate with national supervisory authorities on any action they take to reduce and mitigate the risks posed by high-risk AI systems.


Obligations of foundation model providers

Finally, providers of foundation models need to:

  • demonstrate through appropriate design, testing, and analysis the identification, reduction and mitigation of foreseeable risks to health, safety, fundamental rights, the environment, and democracy,
  • process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, especially measures examining the suitability of data sources and possible biases, and prevent the model from producing illegal content,
  • design and develop such models to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity throughout their lifecycle,
  • design and develop such models, making use of applicable standards to reduce energy use, resource use, and waste,
  • create extensive technical documentation and intelligible instructions for use,
  • establish a quality management system,
  • register the foundation model in the EU database.


As you can see, for each actor connected to high-risk AI systems, there are specific requirements. The above points are just the AI Act summary. It is advisable that you read the full AI Act text concerning certain obligations if you fall into any of the above categories.

The EU AI Act requirements for the limited-risk category

As already mentioned, limited-risk AI system providers don’t have any strict requirements imposed. Nevertheless, the Artificial Intelligence Act imposes a few transparency obligations on them.

As a result, they need to ensure that AI systems that interact with people are designed so that users are informed, in a timely, clear, and intelligible manner, that they are exposed to AI (unless this is obvious from the circumstances or context of use). It is also advisable to include the following information:

  • which AI functions are enabled,
  • if there is human oversight,
  • who is responsible for the decision-making process.


The above rules also apply to providers of emotion recognition and biometric categorization systems.
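The transparency obligations above can be sketched as a disclosure banner shown before a chatbot session starts, covering the three advisory points from the list. The wording and parameter names are illustrative assumptions, not text mandated by the Act:

```python
def disclosure_banner(functions, human_oversight, responsible_party):
    """Build a user-facing notice that the counterpart is an AI system,
    plus the advisory details (functions, oversight, responsibility)."""
    lines = [
        "You are interacting with an AI system, not a human.",
        f"Enabled AI functions: {', '.join(functions)}",
        f"Human oversight: {'yes' if human_oversight else 'no'}",
        f"Responsible for decisions: {responsible_party}",
    ]
    return "\n".join(lines)

# Hypothetical customer-support chatbot disclosure
print(disclosure_banner(["text generation"], True,
                        "Example Corp support team"))
```

Surfacing this notice at the start of the interaction, rather than burying it in terms of service, is what makes the disclosure "timely, clear, and intelligible."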

Moreover, AI systems that generate or manipulate image, text, audio, or video content that would falsely appear authentic and that depicts people appearing to say or do things they did not, or without their consent (deepfakes), shall disclose in a timely, clear, and visible manner that the content has been artificially generated or manipulated. Where possible, the name of the natural or legal person who generated or manipulated it should also be added.

Minimal-risk category

It is also worth mentioning that if you provide an AI system that falls into the low-risk category, you can voluntarily comply with the requirements of the European Union AI Act.

The European Commission, the AI Office, and Member States shall encourage voluntary codes of conduct that will provide technical solutions for how AI systems can meet the requirements according to their intended purpose. What is more, they will take into consideration other goals, like environmental sustainability, accessibility, stakeholder participation, and diversity of development.

When is the EU AI Act coming?

In fact, the AI Act timeline doesn’t provide exact dates but rather estimates of when certain actions will take place. What is certain is that:

  • in April 2021, the European Commission presented its proposal for the AI Act;
  • in December 2022, the European Council adopted a general approach to the AI Act;
  • in June 2023, the European Parliament members adopted their negotiating position on the AI Act.

EU institutions are expected to reach political agreement on the final form of the AI Act in late 2023 and finalize it in early 2024. Taking the transition period into account, the AI Act will likely come into force in late 2025 or early 2026.

Although these dates may seem distant, it is high time to take action now. Understanding the current state of AI models, raising awareness within the organization, designing ethical AI technology, assigning responsibility, and establishing formal governance all take time to settle in, even at massive technology companies.

It is advisable to start right away. You can help yourself and your organization by implementing the right AI governance tool. Check out the top 5 AI governance online platforms that will help you prepare for the EU law; the article will be published in the coming days.

All in all, the European Union Artificial Intelligence Act is a significant milestone in regulating AI and will have a significant impact on AI development worldwide. It will affect many AI systems, including highly popular generative AI technologies like ChatGPT or Bard.

Therefore, it is crucial for companies to act now, assess their risks, and start preparing for the changes in the coming AI regulation. By taking action, they can move towards a more responsible and trustworthy AI environment and foster investment in the private sector.

Talk to our expert

Are you looking for expert skills for your next data project?

Or maybe you need seasoned data scientists to extract value from data?

Fill out the contact form and we will respond as soon as possible.