Pakistan Today (Lahore)

The Geopolitics of Artificial Intelligence

Fragmentation, Conflict, and Cybersecurity

- Rimsha Malik. The writer is a researcher at the Center for International Strategic Studies, AJK, and can be reached at rimsham@gmail.com.

THE original idea behind artificial intelligence (AI) was to simulate the workings of the human brain and approach real-world problems from a human perspective. Creative literary and cinematic works have made AI globally renowned. Its applications are numerous, spanning the military, space exploration, and healthcare. In healthcare, it assists with diagnosis, treatment recommendations, and health-system management, including financial projections.

Cybersecurity, which has its roots in cybernetics, aims to protect networks, devices, and data from damage or unauthorized access. AI greatly strengthens cybersecurity efforts by automating procedures to identify and address cyber risks. This is especially true of machine learning, which gives computers the ability to learn from experience and adapt accordingly. Several cybersecurity frameworks, including those from NIST and ISO, offer recommendations for protecting various domains, reflecting the breadth of cybersecurity concerns, from infrastructure security to human security.

China, the USA, and the EU signed an unprecedented joint communiqué in November 2023, pledging to work together globally to address the problems posed by cutting-edge artificial intelligence (AI) technologies, especially "frontier" AI such as generative models like ChatGPT. The communiqué raised concerns about the possible use of AI for misinformation and the significant threats it poses in biotechnology and cybersecurity. Officials from the USA and China have since held further bilateral discussions on potential collaboration over AI risk management and regulation. Notably, recent regulatory initiatives by these key actors show considerable convergence: China's AI rules, the EU's AI Act, and US President Joe Biden's executive order on AI all share the objective of preventing AI exploitation while encouraging innovation.

Ian Bremmer, Eric Schmidt, and Mustafa Suleyman are among those who have proposed closer international management of AI, including an international panel, modeled on the UN's Intergovernmental Panel on Climate Change, that would advise governments on AI capabilities and emerging trends.

One major area of AI-related conflict is the ongoing dispute between China and the USA over the global semiconductor industry. In October 2022, the US Commerce Department released its first comprehensive licensing regime for the export of advanced chips and chip-making technology, the hardware needed to run cutting-edge AI models from OpenAI, Anthropic, and other companies on the technological frontier. China responded in August 2023 by imposing export restrictions on germanium and gallium, rare materials required for the production of semiconductors. Because international trade law under the World Trade Organization does not sufficiently restrain states from imposing export controls, a tit-for-tat rivalry over chips is feasible. And because the administration of former US President Donald Trump crippled the WTO's Appellate Body by blocking new appointments to it, there is minimal chance of new formal rules that a credible international organization could legitimately enforce.

The result has been reduced trade and heightened geopolitical tensions.

Technical standards, which have long underpinned the adoption of any significant technology, represent another area of contention. China has been pushing its preferred standards in the technical committees of international standard-setting bodies, where it has assumed ever greater leadership responsibilities. As of 2019, China had standardization agreements in place with 39 nations and territories.

Geopolitical strife is reshaping global AI regulation and deepening disagreements over the intangible resources the technology requires. AI tools need both large data repositories and highly specialised, smaller data pools. Businesses and nations will vie for access to various types of data, and international conflict over data flows is likely to grow. The emerging legal patchwork around AI will frustrate collective solutions of broad scope. Driven by its commitment to open markets and national security, the USA has pushed a model of unrestricted international data transfers, while European legislation has been more circumspect about data protection. China and India have passed national laws requiring "data localization," imposing stricter controls on cross-border data transfers.

The question of whether and when states might require disclosure of the algorithms underlying AI tools is beginning to spark rivalry on a global scale. Under the EU's planned AI Act, big businesses must give government authorities access to some models' inner workings to ensure that people are not harmed by them. The US approach is more convoluted and less cohesive, with Biden's executive order requiring disclosures about "dual-use foundation models" while trade agreements forbid disclosure of "proprietary source code and algorithms." As the significance of technical design decisions gains recognition, states are likely to compel companies to reveal those decisions while simultaneously forbidding them from disclosing the same information to other governments.

Having initially agreed that AI could be harmful, the great powers are now fighting over the technology's foundations, fragmenting the legal landscape. Besides undermining national attempts to control AI, this fragmented legal order can enable autocracies to manipulate public opinion and exploit information flows. It can even spark global conflict. If a global effort to govern AI is never truly realized, the loss could be great.


