AI legislation in Europe and prohibited AI practices: Commission’s new guidelines


At the beginning of February 2025, the Commission published guidelines providing an overview of AI practices that are regarded as unacceptable and prohibited due to their potential risks to EU-wide values and fundamental rights. Because the prohibited practices can be complicated to comprehend, the Commission has prepared draft guidelines, which will become effective after completion of the formal legislative procedures.

Background
The guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act (adopted in June 2024) across the EU-27. While they offer valuable insights into the Commission’s interpretation of the prohibitions, they are non-binding, with authoritative interpretations reserved for the Court of Justice of the European Union (CJEU). The guidelines provide legal explanations and practical examples to help stakeholders understand and comply with the AI Act’s requirements. This initiative underscores the EU’s commitment to fostering a safe and ethical AI landscape.
More on the issue in: https://www.integrin.dk/2025/03/17/ai-in-europe-and-the-world-legal-and-executive-factors/

The AI Act is the first-ever legal framework on AI; it addresses the risks of AI and positions Europe to play a leading role globally. Regulation (EU) 2024/1689 lays down harmonised rules on artificial intelligence as the world’s first region-wide comprehensive legal framework on AI, with the general aim of fostering trustworthy AI in Europe.
More on the AI law in: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

More on the AI Act
The European AI Act sets out risk-based rules for AI developers and deployers regarding specific uses of AI. It is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package, the launch of AI Factories and the Coordinated Plan on AI. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU.
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. For example, it is often not possible to find out why an AI system has made a particular decision or prediction, which can make it difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that supports the Act’s future implementation, engages with stakeholders, and invites AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.
On AI Pact in: https://digital-strategy.ec.europa.eu/en/policies/ai-pact

The AI Pact
The AI Act entered into force on August 1, 2024, and some of its provisions are already fully applicable. However, the requirements on high-risk AI systems and certain other provisions will only become applicable at the end of a transitional period (i.e., the time between entry into force and the date of applicability).
In this context, the Commission is promoting the AI Pact, to help stakeholders prepare for the implementation of the AI Act.
The AI Pact rests on two pillars:
1. Pillar I is open to all stakeholders (including, but not limited to, companies, not-for-profit organisations, academics, civil servants, etc.). Under this pillar, participants contribute to the creation of a collaborative community, sharing their experiences and knowledge. This includes webinars organised by the AI Office, which give participants a better understanding of the AI Act, their responsibilities under it, and how to prepare for its implementation. In turn, the AI Office gathers insights into best practices and challenges faced by the participants. In this context, participants can share best practices and internal policies that may be of use to others in their compliance journey. Depending on participants’ preferences, these best practices may also be published on an online platform where the AI Office will share information on the AI Act’s implementation process.
2. Pillar II provides a framework to foster the early implementation of some of the measures of the AI Act. This initiative encourages organisations to proactively disclose the processes and practices they are implementing to anticipate compliance. Specifically, companies providing or deploying AI systems can demonstrate and share their voluntary commitments towards transparency and high-risk requirements and prepare early on for their implementation. These commitments take the form of pledges which are “declarations of engagement”; these pledges contain concrete actions (planned or underway) to meet the AI Act’s distinct requirements and include a timeline for their adoption.
Source and references, as well as signatories of the pledges at: https://digital-strategy.ec.europa.eu/en/policies/ai-pact

Prohibited practices
Chapter II of the regulation, “Prohibited AI practices” (Article 5(1), points (a) to (h)), has been applicable since February 2025 and specifies the prohibited AI practices. Below are just two examples of such prohibited practices:
= First example: point (a) of Article 5(1) prohibits AI practices that include “placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm”.
= Second example: point (b) of Article 5(1) prohibits AI practices such as “placing on the market, putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm”.
The other prohibited practices are set out in the regulation: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AL_202401689
