EU – Introduction to the new Artificial Intelligence Regulation

On 12 July 2024, the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (the “EU AI Regulation”), was published in the Official Journal of the EU. It amends a number of existing EU Regulations and Directives and aims to create a unified legal framework for AI systems across the EU Member States.

The EU AI Regulation enters into force across all Member States on 1 August 2024, and the majority of its provisions will apply from 2 August 2026.

It is worth noting that the new Regulation takes a risk-based approach to regulating the entire lifecycle of different types of AI systems, and non-compliance with its provisions may result in a financial penalty of up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.

Scope

As to what represents an “AI System”, Article 3(1) of the said Regulation provides the following definition:

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The Regulation also establishes obligations for providers, deployers, importers, distributors, and product manufacturers of AI systems who have links to the EU market. This applies to:

  • providers which place on the EU market or put into service AI systems, or place on the EU market general-purpose AI models;
  • deployers of AI systems who have a place of establishment/are located in the EU; and
  • providers and deployers of AI systems in third countries, if the output produced by the AI system is being used in the EU.

There are exceptions to the above scope (e.g. the Regulation does not apply to certain open-source AI systems, or to systems whose sole purpose is scientific research and development). Even these exceptions have limitations, however: if, for example, such a system is classified as a high-risk AI system, the exception would not apply.

High-Risk AI Systems

The EU has established a risk-based approach to the Regulation and has as such categorised AI systems based on the intensity and scope of the risks each AI system can generate.

“High-Risk AI Systems”, which are systems that present a “high” risk, fall within two categories:

  1. AI systems used as a safety component of a product (or otherwise subject to EU health and safety harmonisation legislation); and
  2. AI systems deployed in eight specific areas, including (among others) education, employment, access to essential public and private services, law enforcement, migration, and the administration of justice.

With reference to AI systems falling within the eight areas indicated in (2) above, the Regulation allows for exceptions under which the AI system would not be deemed “High-Risk”. Such exceptions are connected to the intended use of the AI system, which may be exempt from the “high-risk” designation if its intended use is limited to:

  • Performing narrow procedural tasks;
  • Making improvements to the results of previously completed human activities;
  • Detecting decision-making patterns or deviations from prior decision-making patterns without replacing or influencing human assessments;
  • Performing mere preparatory tasks to a risk assessment.

A noteworthy point is the specific reference made in Article 6(3) of the Regulation, which indicates that an AI system performing profiling of natural persons will always be deemed to fall within the high-risk designation.

The Regulation imposes a wide range of obligations on the various actors in the lifecycle of a high-risk AI system, including requirements on training data and data governance, technical documentation, record-keeping, technical robustness, transparency, human oversight, and cybersecurity. These matters will need to be taken into account and procedures modified accordingly to meet the new obligations.

Prohibited AI Systems

AI systems falling within this category are completely banned under the Regulation. Their characterisation as prohibited reflects the EU’s determination that certain AI practices are harmful, abusive and contrary to EU values.

The prohibited AI practices include the deployment of subliminal AI techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting human behaviour.

The Regulation, however, provides for a few exceptions to this rule for law enforcement purposes relating to the use of ‘real-time’ remote biometric identification in publicly accessible spaces.

Deep Fakes

With the recent prominence of issues caused by so-called deep fakes, the Regulation seeks to establish relevant controls over such AI systems.

Deep fakes are defined in the Regulation as:

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful

Under the Regulation, deployers who use AI systems to create deep fakes are required to clearly disclose that the content has been artificially created or manipulated by labelling the AI output as such and disclosing its artificial origin (unless the use is authorised by law to detect, prevent, investigate, and prosecute a criminal offence). Where the content forms part of evidently artistic work, the transparency obligations are limited to disclosure of the existence of such generated or manipulated content in a way that does not hamper the display or enjoyment of the work.

General Purpose AI Models

The Regulation devotes considerable length to the General Purpose AI models, and as such this matter will be considered in a separate Circular.

Penalties

As discussed above, the maximum penalty for non-compliance with the said Regulation’s rules on prohibited uses of AI is the higher of an administrative monetary fine of up to EUR 35,000,000 or 7% of worldwide annual turnover. However, it should be noted that penalties for breaches of certain other provisions of the Regulation are subject to a maximum monetary fine of EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.

The maximum penalty for the provision of incorrect, incomplete, or misleading information to notified bodies or national competent authorities is EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher. For SMEs and start-ups, each of the above fines is capped at the same amounts or percentages, but whichever is lower.

There is also a separate penalty clause for providers of General Purpose AI Models under Article 101 of the Regulation, which will be discussed in more detail in a separate circular relating to the General Purpose AI Models specifically.

For any further guidance regarding this procedure or if you require an initial consultation, please do not hesitate to contact our Law Firm at [email protected], +357 22 251 777 or +357 25 261 777 or please visit our office in Nicosia or Limassol.
