
Convergence Despite Differences

- By Thorsten Jelinek

This year is significant for establishing guardrails around artificial intelligence (AI), with major regions such as the European Union, the United States and China converging on a risk-based regulatory approach, albeit with distinct differences. This convergence reflects a broader trend toward digital sovereignty, with governments seeking greater control over their digital markets and technologies to ensure safety and security, while aiming to boost their AI competitiveness.

The pursuit of digital sovereignty, while both necessary and legitimate, carries the risk of erecting new barriers. Striking a balance between maintaining control and fostering collaboration and openness will therefore require global effort.

Regulatory initiatives

From a regulatory standpoint, the EU is at the forefront with the first comprehensive AI legislation, the AI Act, which was adopted by the European Parliament in March. The act establishes a four-tiered risk framework that prohibits certain AI applications, enforces strict regulations and conformity assessments for high-risk uses, mandates transparency for limited-risk applications, and proposes guidelines for minimal-risk applications.

Stringent obligations apply to generative AI only if it is categorized as a high-risk application. The act exempts open-source models unless they are deployed in high-risk contexts. A new oversight mechanism has also been established, including two EU-level institutions, the AI Office and the AI Board, which are tasked with ensuring compliance and facilitating the development of codes of practice, but without directly overstepping the national supervisory authorities of the member states, which still need to be established.

While the act is a landmark achievement, it remains controversial. Critics within the bloc argue that it could stifle innovation and competition; others counter that strong guardrails spur innovation because they provide not only safety and security but also legal certainty. Additionally, some note that since most applications are expected to fall into the lowest risk category, they will not face any mandatory obligations.

However, the act’s extraterritoriality clause, under which the act governs both AI systems operating in the EU and foreign systems whose output enters the EU market, is likely to cause friction, especially with the U.S., where it is perceived as protectionist. This is the flip side of all new guardrails, as exemplified by the EU’s comprehensive landscape of privacy, cybersecurity and digital market regulations.

The U.S., in contrast, has taken a different approach. Rather than enacting a comprehensive law, the U.S. Government introduced a presidential executive order on AI on October 30, 2023, encompassing a broad array of guidelines, recommendations and specific actions. This strategy aligns with the U.S. precedent of not having federal laws in other pivotal areas of digital governance, including cybersecurity and privacy protection.

Despite growing recognition of the need for a more comprehensive risk-based approach, bipartisan support in these areas remains elusive. While the absence of federal legislation introduces legal uncertainty, it also allows for flexibility and an issue-focused approach to AI safety and security, notably for high-risk applications such as dual-use foundation models.

The executive order is not only a set of targeted restrictions with sector-specific policies, as in transportation and healthcare, but also aims to foster AI talent, education, research and innovation, thus enhancing the U.S.’ competitiveness. This competitive dimension is not part of the EU AI Act. Some argue that the contrast is symptomatic of the EU’s regulatory focus versus the U.S.’ liability-oriented and competition-driven approach.

Nevertheless, security concerns are paramount, as evidenced by proposed mandates requiring U.S. cloud companies to vet and potentially limit foreign access to AI training data centers, or by provisions ensuring government access to AI training and safety data. This strategy underscores a deliberate effort to protect U.S. interests amid the dynamic AI domain and intense competition with China for global AI dominance. The U.S. strategy faces a significant drawback, however: the lack of legislative permanence. This precariousness means a new administration could easily revoke the Joe Biden-Kamala Harris administration’s executive order, undermining its stability and long-term impact.

China is likely to be the next major country to introduce a dedicated AI law by 2025, a path already signaled in the government’s new-generation AI development plan released in 2017. The plan proposed the initial establishment of AI laws and regulations, ethical norms and policy systems, with the aim of forming AI security assessment and control capabilities by 2025, and a more complete system of laws, regulations, ethics and policies on AI by 2030. The 2030 objective indicates that AI governance is an ongoing pursuit.

For now, the Chinese Government follows an issue-focused approach, regulating the specific aspects of AI deemed most urgent. It is a centralized approach that successively introduces a regulatory framework of provisions, interim measures and requirements designed to balance innovation and competitiveness with social stability and security.

On the regulatory side, over the past three years, the Cyberspace Administration of China and other departments have issued three key regulations explicitly guiding AI development and use: the Provisions on the Administration of Algorithm-Generated Recommendations for Internet Information Services passed in 2021, the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services issued in 2022, and the Interim Measures for the Administration of Generative AI Services issued in 2023.

The legal discourse in China covers not only ethics, safety and security, but also issues concerning AI liability, intellectual property and commercial rights. These areas have ignited significant debate, especially in relation to China’s Civil Code, which came into force in 2021, a pivotal piece of legislation aimed at substantially enhancing the protection of a wide range of individual rights.

Importantly, China’s legislators use public consultations and feedback mechanisms to find a suitable balance between safety and innovation. To boost AI innovation and competitiveness, the government has approved more than 40 AI models for public use since November 2023, including large models from tech giants such as Baidu, Alibaba and ByteDance.

Global consensus

In parallel to those national measures, there have been significant efforts to forge AI collaboration at the international and multilateral levels, given that no country or region alone can address the disruptions that advanced and widespread AI will bring in the future. Frameworks that promote responsible AI include the first global yet non-binding agreement on AI ethics, the Recommendation on the Ethics of Artificial Intelligence, which was adopted by 193 UNESCO member countries in 2021.

AI safety was also addressed for the first time by the UN Security Council, in July 2023. Most recently, the UN Secretary General’s AI advisory body released its interim report, Governing AI for Humanity. Its final version will be presented at the UN’s Summit of the Future in September.

High-level consensus was also reached at the level of the Group of 20, which represents around 85 percent of global GDP, in support of the “principles for responsible stewardship of trustworthy AI,” which were drawn from the Organization for Economic Cooperation and Development’s AI principles and recommendations.

Another significant step forward in bridging the divide between the Western world and the Global South was achieved during the UK-hosted AI Safety Summit. For the first time, in November 2023, the EU, the U.S., China and other countries jointly signed the Bletchley Declaration, pledging to collectively manage the risks from AI. Adding to this positive momentum, AI dialogues have been initiated between China and the EU and between China and the U.S.

Despite such advancements, a lack of international collaboration remains, particularly with countries in the Global South. The exclusivity of the Global North is evident in initiatives like the Group of Seven’s Hiroshima AI Process Comprehensive Policy Framework and the Council of Europe’s efforts, which led to agreement on the first international treaty on AI in March. This convention, awaiting adoption by the council’s 46 member countries, marks a significant step as it encompasses government and private sector cooperation, but predominantly promotes Western values.

In response to this notable lack of international collaboration with the Global South, China has stepped up its efforts by unveiling the Global AI Governance Initiative during the Third Belt and Road Forum for International Cooperation in Beijing in October 2023. This move aims to promote a more inclusive global discourse on AI governance. At a press conference on the sidelines of the annual session of China’s top legislature in March, Foreign Minister Wang Yi highlighted the significance of the initiative, underlining its three core principles: viewing AI as a force for good, ensuring safety and promoting fairness.

Amid the various major international initiatives and frameworks, it is essential to establish and nurture communication channels among these different efforts. Those channels must aim to bridge differences and gradually reduce them over time. Developing governance interoperability frameworks could serve as a practical approach to addressing these differences.

The Q Family humanoid robots developed by the Chinese Academy of Sciences

Qiao Hong, a member of the Chinese Academy of Sciences, poses with a humanoid robot developed by her research team, on January 31

An artificial intelligence-powered transparent display laptop developed by Chinese personal computer giant Lenovo on show at the Mobile World Congress in Barcelona, Spain, on February 28
