Modern concepts of national growth tend to include ESG approaches, composed of environmental, social and governance components. Within this triangle, governance often becomes a priority, particularly in the business sector, in the digital transition and in data management. For the whole process to succeed, managing the existing volumes and complexity of information becomes vital. Within the digital transition, AI models are important for cross-border data flows and compliance requirements.
Background
Worldwide digital regulation is presently in the making; hence, several national and regional regulations are becoming so fundamental that they define the global scene too: e.g. the EU’s General Data Protection Regulation (GDPR), China’s Cybersecurity Law (CSL) and the US Cloud Act, all of which significantly influence national and global digital transition and data management. National governance is also shaping data-center localization, operational compliance, international data flows and the use of supercomputers.
In the national domain, there is a continuing trend towards developing and regulating national data management, which at present is closely associated with data-sovereignty frameworks. Specifically, countries in Asia, South America and Africa are increasingly adopting localized data-governance policies to ensure that data remain within their borders, promoting national security and economic growth.
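As a minimal illustration of how such data-localization policies translate into practice, the sketch below pins each record to a storage region inside the data subject’s jurisdiction. All jurisdiction codes, region names and rules here are hypothetical examples, not taken from any actual regulation:

```python
from typing import Optional

# Illustrative sketch: enforcing a data-residency policy in application
# code by restricting each record to storage regions inside the data
# subject's jurisdiction. All names below are hypothetical examples.
RESIDENCY_RULES = {
    "EU": ["eu-west", "eu-central"],  # e.g. GDPR-style: keep EU data in the EU
    "CN": ["cn-north"],               # e.g. CSL-style: keep Chinese data in China
    "BR": ["sa-east"],                # e.g. localized storage in South America
}

def pick_storage_region(jurisdiction: str, preferred: Optional[str] = None) -> str:
    """Return a storage region permitted for the data subject's jurisdiction."""
    allowed = RESIDENCY_RULES.get(jurisdiction)
    if allowed is None:
        raise ValueError(f"No residency rule defined for {jurisdiction!r}")
    # Honor the caller's preference only if it satisfies the residency rule.
    if preferred in allowed:
        return preferred
    return allowed[0]

print(pick_storage_region("EU"))             # eu-west
print(pick_storage_region("CN", "eu-west"))  # cn-north: preference overridden
```

In a real deployment such rules would come from legal review rather than a hard-coded table, but the design point stands: residency is decided centrally per jurisdiction, not left to each caller’s preference.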
Harmonization of data-governance standards across regions, however, presents both challenges and opportunities: the process encourages international cooperation to establish common principles such as safety, security and trust.
For example, in November 2024 the EU co-legislators (the Parliament and the Council) discussed a new regulatory draft concerning transparency and the integrity of ESG rating activities in the member states’ political economy.
ESG concept
Although in existence for about two decades, the environmental, social and governance concept, the so-called ESG, has gained importance only recently, both for national and regional political economy and for the corporate sphere.
The ESG triangle also tracks other urgent global challenges, such as sustainability, climate mitigation and digitalisation; hence the ESG concept goes beyond a purely environmental connotation: emphasis is placed on social and governance issues too.
The ESG concept’s importance for modern political economy rests on the following presumptions of the triangle:
– Environmental criteria explain the ways public-private entities and corporate units are to safeguard the human environment and nature’s equilibrium;
– Social criteria examine how companies and states manage relationships with employees, suppliers, customers and communities;
– Governance criteria aim at revealing the public/private instruments available for evolving a national political economy’s basics, including the role of political leadership, corporate executive abilities and legislation, as well as audits, internal controls and shareholder rights.
Global challenges are fundamentally changing the patterns of the present political economy in most states around the world, as well as in the European Union.
For example, attention to some of the most vital aspects of EU-wide political-economy transformations, in spheres such as the “green deal”, competitiveness, digitalisation and renewable energy, has been quite fundamental.
More in: https://www.integrin.dk/2023/09/23/transformations-in-the-eus-political-economy-facing-modern-crises/
Regulatory aspects in digital transition
Since the transition’s inception about a decade ago, regulatory frameworks have defined the process. The three most globally influential frameworks are the EU General Data Protection Regulation (GDPR), China’s Cybersecurity Law (CSL) and the US Clarifying Lawful Overseas Use of Data Act (the so-called Cloud Act). While these three are the most globally recognized models, emblematic of three distinct approaches to data governance, they represent only a portion of the world. Expanding the scope of analysis to a wider range of national approaches to data governance can provide a more comprehensive picture of global and national trends and best practices.
At the start of 2025 the EU published the second draft of the General-Purpose AI Code of Practice. Some EU co-legislators (i.e. MEPs) note a lack of enforcement capacity that could put the EU at risk globally; some even warn that the European AI Office is severely understaffed for implementing the new AI rules. Currently, the office has approximately 85 staff members, with only 30 working specifically on AI Act implementation.
This contrasts sharply with the UK AI Safety Institute, which has over 150 staff focused solely on AI safety, despite the UK lacking formal AI legislation.
In January 2025 South Korea passed the “Basic Act on the Development of Artificial Intelligence and the Establishment of Trust”, becoming the second jurisdiction in the world, after the EU, to enact comprehensive AI legislation.
More in: https://substack.com/@artificialintelligenceact/note/c-84702684?utm_source=feed-email-digest. Additionally in: https://substack.com/home/post/p-154070979
AI strategies in business
Corporate sectors are increasingly adapting various AI strategies in order to comply with regional and global data-governance laws, leading to the construction of more localized data centers and the development of innovative solutions for managing data within the evolving digital legal frameworks.
The strategic importance of AI infrastructure in the digital transition, with supercomputers and data centers, drives a dramatically increased investment process: e.g. already in 2023, global investment in AI reached almost $843 billion, with projections indicating continued growth in national investments.
On a national level, the US leads in supercomputers (35% of the global total) and in the number of data centers (40% of the global total).
There are huge potential benefits of artificial intelligence infrastructure for citizens, businesses and the economy in general. However, the accelerating capacity of AI applications also brings new challenges: as a regulatory frontrunner with the AI Act, the EU is also contributing to AI guardrails and governance at the global level.
The EU welcomed the G7 international Guiding Principles and the voluntary Code of Conduct (adopted in October 2023), which reflect EU values in promoting trustworthy AI; hence, the EU called on AI developers to sign and implement this Code of Conduct as soon as possible.
Already in April 2021, the Commission proposed new rules and actions aimed at turning Europe into the global hub for trustworthy AI.
The combination of the first-ever EU-wide legal framework on AI and a new coordinated plan with the member states is intended to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU states. The new rules complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of AI products and services.
More on proposals and new rules in: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
AI: risk categories
The EU AI Act imposes a wide range of obligations on the various actors in the lifecycle of a high-risk AI system, including requirements on training data and data governance, technical documentation, record-keeping, technical robustness, transparency, human oversight and cybersecurity. As part of the phased compliance process, the Act also emphasizes AI literacy among employees to ensure safe and compliant AI usage.
Starting in February 2025, the EU AI Act requires organizations operating in the European market to ensure that employees involved in the use and deployment of AI have adequate AI literacy.
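For an organization tracking these obligations internally, a minimal sketch might look as follows. The obligation names follow those listed in the text; the checklist structure, system name and method names are illustrative assumptions, not an official compliance tool or legal advice:

```python
from dataclasses import dataclass, field

# Obligation areas for high-risk AI systems, as listed in the text above.
OBLIGATIONS = [
    "data training and data governance",
    "technical documentation",
    "record-keeping",
    "technical robustness",
    "transparency",
    "human oversight",
    "cybersecurity",
]

@dataclass
class ComplianceChecklist:
    """Illustrative internal tracker for high-risk AI Act obligations."""
    system_name: str
    done: set = field(default_factory=set)

    def mark_done(self, obligation: str) -> None:
        if obligation not in OBLIGATIONS:
            raise ValueError(f"Unknown obligation: {obligation!r}")
        self.done.add(obligation)

    def outstanding(self) -> list:
        # Preserve the canonical ordering of the remaining obligations.
        return [o for o in OBLIGATIONS if o not in self.done]

checklist = ComplianceChecklist("cv-screening-tool")  # hypothetical system
checklist.mark_done("technical documentation")
print(len(checklist.outstanding()))  # 6 obligation areas still open
```

Such a tracker only mirrors the Act’s obligation headings; the actual legal requirements behind each heading would need to be assessed against the regulation’s text.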
Besides, the AI Act is an EU-wide regulation on artificial intelligence: the first comprehensive AI regulation by a major global regulator. The legislation assigns AI applications to three risk categories:
– First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.
– Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
– Third, applications not explicitly banned or listed as high-risk are largely left unregulated.
More on the AI Act in: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689