The European AI Act is the world's first comprehensive set of rules aimed at regulating artificial intelligence, covering, among other things, virtual assistants, large language models and generative AI tools such as ChatGPT. Although the regulation entered into force in August 2024, it will apply gradually: the prohibitions on certain AI practices already from February 2025, the general-purpose AI rules a year after entry into force, and some rules on high-risk AI systems only in 2027. Some 700 companies are expected to prepare for this legal transformation.
Background: the EU AI legislation
Preparations for the present regulation were started by the Commission in 2018 with the creation of the High-Level Expert Group on Artificial Intelligence (AI HLEG) and the European AI Alliance.
The EU Artificial Intelligence Act (AI Act) sets harmonised rules for the development, placing on the market and use of AI systems in the EU, following a proportionate risk-based approach. The Act lays down a solid risk methodology to define “high-risk” AI systems, i.e. those that pose significant risks to health, safety and/or fundamental rights: such systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before they can be placed on the EU market.
There are clear obligations in the law for providers of AI systems: namely, to ensure safety and respect for existing legislation protecting fundamental rights throughout the whole AI system lifecycle. The rules will be enforced through a governance system at national level and an EU-wide cooperation mechanism built around the newly established European Artificial Intelligence Board.
Measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures to reduce the regulatory burden and support SMEs and start-ups.
It is important to mention that the placing on the market, putting into service or use of certain AI systems, such as those intended to distort human behaviour, is prohibited. In parallel, the European standardisation organisations CEN and CENELEC have been tasked with developing the harmonised technical standards supporting the Act.
There are clear obligations foreseen in the law for high-risk AI systems, due to their significant potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.
Note: the basic regulation on artificial intelligence (Regulation (EU) 2024/1689) was published in July 2024 in the Official Journal of the European Union. Source: https://www.artificial-intelligence-act.com/
Legal deadlines
The Artificial Intelligence Act shall apply from 2 August 2026, with the following exceptions, which take into account the unacceptable risk associated with certain uses of AI:
The chapters on general provisions and prohibited AI practices shall apply from 2 February 2025.
The chapters on notifying authorities and notified bodies, on general-purpose AI models, on governance, on penalties and on confidentiality shall apply from 2 August 2025, with the exception of Article 101 on fines for providers of general-purpose AI models.
Article 6(1) and the corresponding obligations shall apply from 2 August 2027.
According to Article 77 (Powers of authorities protecting fundamental rights), by 2 November 2024 each EU state shall identify the public authorities or bodies “which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems”.
The European Commission shall, no later than 2 February 2026, provide guidelines specifying the practical implementation of Article 6 (Classification rules for high-risk AI systems) in line with Article 96 (Guidelines from the Commission on the implementation of this Regulation) together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
Source: https://www.artificial-intelligence-act.com/Artificial_Intelligence_Act_Links.html; and https://www.artificial-intelligence-act.com/
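As a convenience for compliance planning, the staggered application dates listed above can be restated as a simple lookup table. The following is a minimal, purely illustrative Python sketch; the labels and the helper function are assumptions introduced for this example and are not defined anywhere in the Regulation.

```python
from datetime import date

# Staggered application dates of the AI Act, restated from the text above.
# Labels are informal and chosen only for this illustration.
AI_ACT_APPLICATION_DATES = {
    "general provisions and prohibited AI practices": date(2025, 2, 2),
    "notified bodies, general-purpose AI, governance, penalties, confidentiality": date(2025, 8, 2),
    "the Regulation in general": date(2026, 8, 2),
    "Article 6(1) and corresponding obligations": date(2027, 8, 2),
}

def provisions_applicable_on(day: date) -> list[str]:
    """Return the labels of the provisions that already apply on a given day."""
    return [label for label, start in AI_ACT_APPLICATION_DATES.items() if day >= start]

# Example: which parts of the Act already apply on 1 September 2025?
print(provisions_applicable_on(date(2025, 9, 1)))
```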
The standardization process
National data protection authorities, which already oversee compliance with the General Data Protection Regulation (GDPR), are set to have “shared competence” to check companies' compliance with the AI Act. The European Artificial Intelligence Act (AI Act), which aims to foster responsible artificial intelligence development and deployment in the EU-27, entered into force in August 2024, and member states must appoint their national regulators in charge of AI alongside the national data protection authorities.
The standardisation process covers numerous areas, including setting standards for artificial intelligence systems and other digital products under the AI Act; these standards are supposed to create certainty for companies and help them “demonstrate compliance”. However, there is still a lot to be done before those standards are ready: agreeing standards for algorithms normally takes many years, and the EU community believes the process should be stepped up.
Reference to: https://www.euronews.com/next/2024/12/30/times-running-out-on-ai-standardisation-process-dutch-watchdog-warns?utm_source=newsletter&utm_campaign=feed_next_ia&utm_medium=referral&insEmail=1&insNltCmpId=2012&insNltSldt=10080&insPnName=euronewsfr&isIns=1&isInsNltCmp=1
More on the European Data Protection Board at: https://www.edpb.europa.eu/edpb_en; and on harmonised rules on artificial intelligence at: https://www.edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf
AI factories to assist digital transition
The main goal of the Europe-wide AI factories is to activate the digital revolution, i.e. to generate revenue and intelligence for those creating, for example, chatbots and/or generative AI. To optimise this digital process, digital agents and AI providers will need to use one of the seven AI factories created so far.
Each of these “strong AI factories” has four main development components: a) a data “pipeline” that prepares data for the AI, b) facilities for creating algorithms, c) software infrastructure built around modern supercomputers that can, for example, support AI training, and d) an experimentation platform to test existing and future AI systems.
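To make these four components easier to picture, here is a minimal, purely illustrative Python sketch that models an AI factory as a simple data structure; the class name, field names and example values are assumptions introduced for this text and do not correspond to any official AI Factory schema or API.

```python
from dataclasses import dataclass, field

# Illustrative model of the four development components described above.
# Names and values are assumptions for this sketch, not an official schema.
@dataclass
class AIFactory:
    name: str
    data_pipeline: list[str] = field(default_factory=list)         # a) steps that prepare data for the AI
    algorithm_facilities: list[str] = field(default_factory=list)  # b) facilities for creating algorithms
    compute_infrastructure: str = ""                                # c) supercomputer-backed software infrastructure
    experimentation_platform: str = ""                              # d) platform for testing existing and future AIs

# Hypothetical example instance (values invented for illustration only).
example = AIFactory(
    name="Example AI Factory",
    data_pipeline=["ingest", "clean", "label"],
    algorithm_facilities=["model development", "fine-tuning"],
    compute_infrastructure="EuroHPC-class supercomputer for AI training",
    experimentation_platform="sandbox for piloting AI applications",
)
print(example)
```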
Source: https://www.integrin.dk/2024/12/13/european-ai-factories-using-massive-computing-facilities/
The first AI Factories (AIFs), deployed during 2025-2026, will receive a €1.5 billion investment combining national and EU funding: half of this is funded by the EU, through the Digital Europe Programme for AI infrastructure and the Horizon Europe programme for AIF services. The seven selected AIFs are being established at leading research and technology hubs in the following EU states: Barcelona, Spain (with the Barcelona Supercomputing Centre); Bologna, Italy; Kajaani, Finland; Bissen, Luxembourg; Linköping, Sweden (at the University of Linköping); Stuttgart, Germany (at the University of Stuttgart); and Athens, Greece.
More on seven AIFs in: https://eurohpc-ju.europa.eu/selection-first-seven-ai-factories-drive-europes-leadership-ai-2024-12-10_en
The seven AI Factories will involve 15 EU member states and two participating states of the European High Performance Computing Joint Undertaking (EuroHPC): Portugal, Romania and Turkey will join the Spanish computing facilities; Austria and Slovenia will join the Italian facilities; and Czechia, Denmark, Estonia, Norway and Poland have joined the Finnish AIF.
Regardless of their location in the EU, European scientists and users from the public sector and industry can benefit from the EuroHPC supercomputers via the EuroHPC Access Calls. This access enables them to advance science and support the development of a wide range of applications with industrial, scientific and societal relevance for Europe.
Other EU states interested in either joining the newly selected AIFs or creating new AI factories are invited to submit their proposals by February 2025.
Source: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_6302
General info on the EU legislation “boosting trustworthy artificial intelligence” at:
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401732