The world’s first comprehensive regulation on artificial intelligence will become fully applicable in Europe by August 2026; some of its provisions already apply from February and August 2025. The AI Act is designed to ensure that AI models are developed and used in a trustworthy manner, with safeguards to protect people’s fundamental rights. The regulation aims to establish a harmonised EU-wide internal market for AI to encourage the uptake of modern digital technology and create supportive conditions for further innovation and investment.
Background
In December 2023, the EU co-legislators reached a political agreement on the AI legislation; then, in January 2024, the Commission launched a package of measures to support European startups and SMEs in the development of trustworthy AI.
The January 2024 AI innovation package aimed to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Both the ‘GenAI4EU’ initiative and the AI Office were part of this package: together, they contribute to the development and application of AI across 14 industrial ecosystems.
At the end of May 2024 the Commission unveiled the AI Office; and in July 2024 the amended EuroHPC JU Regulation entered into force, thus allowing the set-up of AI factories. The European AI Office supports the development and use of trustworthy AI, while protecting against AI risks; it is established within the European Commission as the centre of AI expertise and forms the foundation for a single European AI governance system.
More on the AI Office: https://digital-strategy.ec.europa.eu/en/policies/ai-office
Both the AI Office and the regulation provide fertile ground for using AI supercomputers to train general-purpose AI (GPAI) models. The independent, evidence-based work of the Joint Research Centre (JRC) has also been of great assistance: it has been fundamental in shaping the EU’s AI policies and ensuring their effective implementation.
The purpose of the new AI regulation is to improve the functioning of the internal market by laying down a uniform legal framework for the development, placing on the market, putting into service and use of artificial intelligence systems in the EU, in accordance with the Union’s values. It promotes the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights across the EU, including democracy, the rule of law and environmental protection. Finally, the regulation protects against the harmful effects of AI systems in the EU-27 and supports innovation; it also ensures the free movement of AI-based goods and services within the EU, preventing member states from imposing restrictions on the development, marketing and use of AI systems.
The AI Pact
The European AI Pact encourages and supports organisations in planning ahead for the implementation of the AI regulation’s measures.
The AI Act entered into force in August 2024, and some of its provisions became applicable soon thereafter. However, the requirements on high-risk AI systems and certain other provisions will only become applicable at the end of a transitional period, i.e. the time between entry into force and the date of applicability.
In this context, the Commission has been promoting the AI Pact, seeking the industry’s voluntary commitment to anticipate the AI Act and to start implementing its requirements ahead of the legal deadline. To gather participants, the first call for interest was launched in November 2023, obtaining responses from over 550 organisations of various sizes, sectors, and countries. The AI Office has since initiated the development of the AI Pact, which is structured around two pillars:
= Pillar I, as a gateway to engage the AI Pact network (those organisations that have expressed an interest in the Pact), encourages the exchange of best practices and provides practical information on the AI Act implementation process;
= Pillar II encourages AI system providers and deployers to prepare early and take actions towards compliance with requirements and obligations set out in the AI legislation.
Reference to: https://digital-strategy.ec.europa.eu/en/policies/ai-pact
Commission’s opinion: citations
= “AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy. The European approach to technology puts people first and ensures that everyone’s rights are preserved. With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe”.
Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age
= “This regulation marks a major milestone in Europe’s leadership in trustworthy AI. With the entry into force of the AI Act, European democracy has delivered an effective, proportionate and world-first framework for AI, tackling risks and serving as a launch pad for European AI startups”.
Thierry Breton, Commissioner for Internal Market
Reference and source: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_4123
Main features of the AI regulation
The regulation introduces a forward-looking definition of AI, based on a product safety and risk-based approach in the EU:
= Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category; they face no obligations under the AI Act due to their minimal risk to citizens’ rights and safety. Companies can voluntarily adopt additional codes of conduct.
= Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion-recognition systems are being used. In addition, providers will have to design systems in such a way that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
= High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Such high-risk AI systems include, for example, AI systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.
= Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion-recognition systems used in the workplace and some systems for categorising people, or real-time remote biometric identification for law-enforcement purposes in publicly accessible spaces (with narrow exceptions).
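The machine-readable marking obligation for synthetic content described above can be sketched as a minimal provenance label. The field names and schema here are purely illustrative assumptions, not an official format mandated by the AI Act:

```python
import json

def label_synthetic_content(payload: bytes, generator: str) -> dict:
    """Wrap generated content in a machine-readable provenance record.

    Illustrative sketch only: the schema is hypothetical, chosen to show
    the idea of marking content as artificially generated.
    """
    return {
        "content_length": len(payload),
        "provenance": {
            "artificially_generated": True,  # the detectable marker
            "generator": generator,          # which system produced it
        },
    }

record = label_synthetic_content(b"...generated image bytes...", "example-model-v1")
print(json.dumps(record["provenance"]))
```

In practice, providers would embed such a marker in the content itself or its metadata so that downstream tools can detect generated material automatically.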
To complement this system, the AI regulation also introduces rules for so-called general-purpose AI models, which are highly capable AI models that are designed to perform a wide variety of tasks like generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and addresses possible systemic risks of the most capable models.
More in AI regulation at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
AI rules: application and enforcement
The EU member states have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities. The European Commission’s AI Office will be the key implementation body for the AI regulation at EU level, as well as the enforcer of the rules for general-purpose AI models.
Three advisory bodies will support the implementation of the rules:
= the European Artificial Intelligence Board will ensure a uniform application of the AI Act among the EU-27 member states and will act as the main body for cooperation between the Commission and the member states;
= a scientific panel of independent experts will offer technical advice and input on enforcement; in particular, this panel can issue alerts to the AI Office about risks associated with general-purpose AI models;
= the AI Office can also receive guidance from an advisory forum, composed of a diverse set of stakeholders.
Companies not complying with the rules will be fined: up to 7% of global annual turnover for violations of banned AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
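The turnover-based ceilings above translate into a simple percentage calculation. The sketch below applies only the percentages cited in the text; the Act also sets fixed-sum ceilings (whichever is higher), which are omitted here:

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound of the turnover-based fine for a violation class,
    using the percentages cited in the text (fixed-sum ceilings omitted)."""
    rates = {
        "banned_practice": 0.07,         # up to 7% of global annual turnover
        "other_obligation": 0.03,        # up to 3%
        "incorrect_information": 0.015,  # up to 1.5%
    }
    return turnover_eur * rates[violation]

# A company with EUR 2 billion global annual turnover:
print(max_fine(2_000_000_000, "banned_practice"))  # 140000000.0, i.e. EUR 140 million
```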
Conclusion
The majority of rules of the AI Act will start applying on 2 August 2026. However, prohibitions of AI systems deemed to present an unacceptable risk will already apply after six months, while the rules for so-called General-Purpose AI models will apply after 12 months.
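The staggered timeline above can be laid out explicitly. The dates below follow the text (entry into force in August 2024, prohibitions after six months, GPAI rules after twelve, the bulk on 2 August 2026); exact days are assumptions for illustration:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force in August 2024

# Key applicability milestones, as described in the text
milestones = {
    "prohibitions (unacceptable risk)": date(2025, 2, 2),  # ~6 months later
    "general-purpose AI model rules": date(2025, 8, 2),    # ~12 months later
    "bulk of the Act applies": date(2026, 8, 2),
}

for name, d in sorted(milestones.items(), key=lambda kv: kv[1]):
    months = (d.year - ENTRY_INTO_FORCE.year) * 12 + d.month - ENTRY_INTO_FORCE.month
    print(f"{d.isoformat()}: {name} (~{months} months after entry into force)")
```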
To bridge the transitional period before full implementation, the Commission is promoting the AI Pact: this initiative invites AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented and facilitating co-regulatory instruments like standards and codes of practice.
The Commission is also steering the drafting of the first general-purpose AI Code of Practice, alongside multi-stakeholder consultations that give all stakeholders an opportunity to have their say on this first Code of Practice under the AI regulation.