The G7 has adopted International Guiding Principles and a voluntary Code of Conduct to safeguard the safety and fundamental rights of people and businesses using AI applications. The agreement is intended to strengthen AI uptake, investment and innovation, reflecting the EU values that promote trustworthy AI.
The G7 leaders have agreed on International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI. These principles and the voluntary Code of Conduct will complement, at the international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
The Hiroshima Artificial Intelligence Process was established at the G7 summit in May 2023 to promote safe and trustworthy advanced AI systems at the global level. The initiative is part of a wider range of international discussions on AI, including at the OECD, the Global Partnership on Artificial Intelligence (GPAI) and in the context of the EU-US Trade and Technology Council (TTC) and the EU's Digital Partnerships.
European approach
There are huge potential benefits of artificial intelligence (AI) for citizens, businesses and the economy in general. However, the acceleration in the capabilities of AI applications also brings new challenges. Already a regulatory frontrunner with the AI Act, the EU is also contributing to AI guardrails and governance at the global level. I am pleased to welcome the G7 International Guiding Principles and the voluntary Code of Conduct, reflecting EU values to promote trustworthy AI. I call on AI developers to sign and implement this Code of Conduct as soon as possible.
The Commission proposed in April 2021 new rules and actions aiming to turn Europe into the global hub for trustworthy artificial intelligence (AI). The combination of the first-ever legal framework on AI and a new coordinated plan with the member states will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. The new rules will complement this approach by adapting safety rules to increase users' trust in the new, versatile generation of products.
More on the proposal in: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
High-risk AI systems will be subject to strict obligations before they can be put on the market, which include:
- adequate risk assessment and mitigation systems;
- high quality of the datasets feeding the system, to minimise risks and discriminatory outcomes;
- logging of activity to ensure traceability of results;
- detailed documentation providing all information on the system and its purpose that authorities need to assess its compliance;
- clear and adequate information to users;
- appropriate human oversight measures to minimise risks; and
- a high level of robustness, security and accuracy.
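As a purely illustrative aid (not official compliance guidance), the Python sketch below shows how the logging and traceability obligation might look in practice: a hypothetical wrapper that records every prediction of a high-risk system so that its results can later be reconstructed. All class, function and field names are assumptions made for this example.

```python
# Illustrative sketch only: a hypothetical audit-logging wrapper, not official
# EU AI Act compliance tooling. It shows what "logging of activity to ensure
# traceability of results" could look like for a high-risk system.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)


class AuditedModel:
    """Wraps a prediction function and records each call for traceability."""

    def __init__(self, predict_fn, model_name, model_version):
        self.predict_fn = predict_fn      # the underlying (hypothetical) model
        self.model_name = model_name
        self.model_version = model_version

    def predict(self, features, operator_id):
        record_id = str(uuid.uuid4())
        result = self.predict_fn(features)
        # Log enough context that an auditor can later reconstruct which model
        # produced which output, when, from which input, and for which operator.
        logging.info(json.dumps({
            "record_id": record_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": self.model_name,
            "version": self.model_version,
            "operator_id": operator_id,
            "input": features,
            "output": result,
        }))
        return result


# Example usage with a dummy scoring function standing in for a real model.
model = AuditedModel(lambda f: {"score": 0.42}, "credit-scoring-demo", "0.1.0")
model.predict({"income": 35000, "age": 41}, operator_id="analyst-007")
```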
First published in 2018 to define actions and funding instruments for the development and uptake of AI, the Coordinated Plan on AI enabled a vibrant landscape of national strategies and EU funding for public-private partnerships and research and innovation networks. The plan’s comprehensive update suggests concrete joint actions for collaboration to ensure all efforts are aligned with the European Strategy on AI and the European Green Deal, while taking into account new challenges brought by the coronavirus pandemic. It puts forward a vision to accelerate investments in AI, which can benefit the recovery. It also aims to spur the implementation of national AI strategies, remove fragmentation, and address global challenges.
Following the publication of the European Strategy on AI in 2018 and after extensive stakeholder consultation, the High-Level Expert Group on Artificial Intelligence developed Guidelines for Trustworthy AI in 2019 and an Assessment List for Trustworthy AI in 2020. In parallel, the first Coordinated Plan on AI was published in December 2018 as a joint commitment with the member states.
More in: https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
Since first announcing its intention to work on a Code of Conduct at the TTC Ministerial meeting in May 2023, the European Commission has actively worked with key G7 international partners to develop the Guiding Principles and the Code of Conduct on AI. These international commitments are consistent with the legally binding rules currently being negotiated as part of the more comprehensive EU Artificial Intelligence Act (EU AI Act), which will apply across the EU-27.
The proposal for the EU AI Act will not only guarantee the safety and fundamental rights of people and businesses, but will also strengthen AI uptake, investment and innovation in the EU-27. The AI Act will provide risk-based, legally binding rules for AI systems that are placed on the market or put into service across the EU.
More in: https://ec.europa.eu/commission/presscorner/detail/en/IP_23_5379
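To make the risk-based approach more concrete, here is a minimal, purely illustrative Python sketch of how an organisation might categorise its own AI systems into risk tiers of the kind associated with the proposed AI Act, from prohibited practices down to minimal risk. The tier names and the example inventory are simplified assumptions for illustration, not legal text.

```python
# Purely illustrative sketch of a risk-based classification, loosely inspired by
# the tiers discussed around the proposed EU AI Act. Not legal guidance.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practices"
    HIGH = "strict obligations before market placement"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"


# Hypothetical inventory an organisation might maintain for its own systems.
SYSTEM_INVENTORY = {
    "social-scoring-prototype": RiskTier.UNACCEPTABLE,
    "cv-screening-tool": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in SYSTEM_INVENTORY.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```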
Ensuring the safety and trustworthiness of AI technology
The G7 leaders adopted eleven Guiding Principles which provide orientation for organisations that are developing, deploying and using advanced AI systems, such as foundation models and generative AI, to promote the safety and trustworthiness of the technology. The principles include commitments to mitigate risks and misuse and to identify vulnerabilities, to encourage responsible information sharing, reporting of incidents and investment in cybersecurity, as well as a labelling system to enable users to identify AI-generated content.
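The labelling commitment is expressed at the policy level only; as a rough, hypothetical sketch of what labelling AI-generated content could involve in practice, the snippet below attaches simple provenance metadata to a piece of generated text. The schema and field names are assumptions for illustration, not an agreed G7 or EU technical standard.

```python
# Hypothetical sketch of labelling AI-generated content with provenance metadata.
# The schema is an assumption for illustration, not an agreed standard.
import hashlib
import json
from datetime import datetime, timezone


def label_generated_content(text, model_name, model_version):
    """Bundle generated text with a machine-readable 'AI-generated' label."""
    return {
        "content": text,
        "ai_generated": True,
        "generator": {"model": model_name, "version": model_version},
        "created_at": datetime.now(timezone.utc).isoformat(),
        # A digest lets downstream tools check the label still matches the text.
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }


labelled = label_generated_content(
    "Sample output produced by a generative model.", "demo-model", "1.0"
)
print(json.dumps(labelled, indent=2))
```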
These principles have been jointly developed by the EU and the other G7 members under the Hiroshima Artificial Intelligence Process. The Guiding Principles will in turn serve as the basis for the Code of Conduct, which will provide detailed and practical guidance for organisations developing AI.
The voluntary Code of Conduct will also promote responsible governance of AI globally. Both documents will be reviewed and updated where necessary, including through inclusive multi-stakeholder consultations, to ensure they remain fit for purpose and responsive to this rapidly evolving technology. The G7 leaders have called on organisations developing advanced AI systems to commit to applying the International Code of Conduct; the first signatories will be announced in the near future.
Commission’s opinion
- The potential benefits of artificial intelligence for citizens and the economy are huge; however, the acceleration in the capacity of AI also brings new challenges. Already a regulatory frontrunner with the AI Act, the EU is also contributing to AI governance at the global level. The G7 International Guiding Principles and the voluntary Code of Conduct reflect EU values to promote trustworthy AI. The EU wants all AI developers to sign and implement the Code of Conduct as soon as possible.
Ursula von der Leyen, President of the European Commission
- The EU wants trustworthy, ethical, safe and secure generative artificial intelligence; with the internationally agreed principles and the Code of Conduct, the EU and like-minded partners can lead the way in making sure AI brings benefits while addressing its risks. The EU calls on developers of generative AI to commit to the application of the Code of Conduct.
Věra Jourová, Vice-President for Values and Transparency
- With the AI Act, the EU is becoming a global frontrunner in setting clear and proportionate rules on AI to tackle risks and promote innovation. The risk-based approach of the EU AI Act reflects the international Guiding Principles that have now been agreed.
Thierry Breton, Commissioner for Internal Market
More information on the following Commission websites:
- G7 Leaders' Joint Statement;
- G7 AI International Guiding Principles;
- G7 AI International Code of Conduct;
- Proposal for a European Artificial Intelligence Act.