China Daily Global Edition (USA)

Talk of working on AI together questioned

By HENG WEILI in New York and YIFAN XU in Washington. Contact the writers at hengweili@chinadailyusa.com.

While a White House official recently said that the US was willing to cooperate with China on artificial intelligence, an American expert is skeptical that US technology policies are conducive to such cooperation.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, told the Financial Times of London in an interview published Thursday that despite the two nations’ trade tensions, particularly over sensitive technology, they could work together to “lessen [the] risks and assess [the] capabilities” of AI.

“Steps have been taken to engage in that process,” Prabhakar said of collaborating with China on AI. “We have to try to work [with Beijing].”

“We are at a moment where everyone understands that AI is the most powerful technology … every country is bracing to use it to build a future that reflects their values,” said Prabhakar. “But I think the one place we can all really agree is we want to have a technology base that is safe and effective.”

Sourabh Gupta, a senior fellow at the Institute for China-America Studies, is skeptical about how such cooperation on AI would unfold.

“The US’ desire to work on AI safety policy with China, and its drive to compete vigorously against China on AI hardware, including chips, are proceeding on entirely separate tracks,” he said.

“The scope for trade-offs is minimal and probably non-existent. As such, the policy conversation between the two will gravitate towards a lowest common denominator approach on preventing fundamental AI-related harms, especially in the military sphere,” he said.

“On the other hand, the AI hardware and software innovation and development side will see bitter competition between the two sides, with the US using its technology controls repeatedly to undercut China’s progress in this area,” Gupta predicted.

AI was one of the topics discussed when Chinese President Xi Jinping met with US President Joe Biden on Nov 15 on the sidelines of the APEC summit in California.

The White House issued an executive order in August 2023 that restricted US investments in Chinese technologies or products, stating that “countries of concern are engaged in comprehensive, long-term strategies that direct, facilitate, or otherwise support advancements in sensitive technologies and products that are critical to such countries’ military, intelligence, surveillance, or cyber-enabled capabilities”.

China, along with the US and more than two dozen other countries, signed the Bletchley Declaration on standards for AI at the world’s first AI Safety Summit in the UK in November.

At the conclusion of the Nov 1-2 summit, Elon Musk thanked British Prime Minister Rishi Sunak for inviting China, saying, “If they’re not participants, it’s pointless.”

Prabhakar told the FT that while the US may disagree with China on how to approach AI regulation, “there will also be places where we can agree”, including on global technical and safety standards for software.

Gupta said that he was “afraid there will not be complementary cooperation. As the two sides roll out their respective governing and regulatory frameworks, though, both will have the opportunity to learn from the other side’s successes and mistakes.

“I would also submit that China’s guidance on the development of AI is more encompassing than just content control,” he said in reference to the FT article, which suggested that China was more concerned about regulation of domestic AI information while the US was focused on national security and consumer privacy.

Still, he said, “there is much for each side to learn by observing the development of the industry and its regulation on the counterpart’s soil”.

China’s AI industry is expected to accelerate over the next decade, with its market value reaching 1.73 trillion yuan ($241.3 billion) by 2035, according to research firm CCID Consulting.

Prabhakar said that the US “did not intend to slow down AI development, but to maintain oversight of the technology”.

“We are starting to have a global understanding that the tools to assess AI models — to understand how effective, how safe and trustworthy they are — are very weak today,” she told the FT.

On Jan 15, at an Axios forum on the sidelines of the recent World Economic Forum in Davos, Switzerland, Prabhakar discussed the social influences of AI technology.

“When we talk about artificial intelligence, we tend to talk about it as a technology. But the first thing to realize is that people choose what AI models to build,” she said.

“Often it’s data that’s about or created by human beings, and then they choose what applications to build, and then other people choose how to use those AI models and what to do with them,” Prabhakar said.

“So I think if we’re going to get to this future which we have to get to with better AI, we have to start by understanding that it’s a socio-technical system; it’s not just a technology by itself,” she said.
