China Daily Global Edition (USA)

Guidelines for proper use of AI explored

Experts at a conference discuss ways to monitor technology and minimize risks

- By CHANG JUN in San Francisco junechang@chinadailyusa.com

Artificial intelligence, the revolutionary, disruptive and diffuse technology that has sparked controversy and awe since its inception more than 50 years ago, has entered a stage that requires the global community, spanning academia, civil society, government and industry, to orchestrate regulations that guide it toward serving the common good.

At its two-day conference on AI ethics, policy and governance in late October, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) drew hundreds of experts from around the world to discuss how the major stakeholders can work together to supervise AI research, minimize risks and prohibit unethical AI-enhanced practices.

The attendees unanimously agreed that AI has transformed society profoundly. Major progress has been made thanks to the availability of massive data, powerful computing architectures and advances in machine learning. AI is playing an increasing role across domains such as healthcare, education, mobility and smart homes.

However, AI has also caused concern around the world, mainly because of a lack of ethical awareness and intrusions on individual privacy. Facial recognition applications are a prominent example.

Joy Buolamwini, a computer scientist at the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, presented findings of her research on intersectional accuracy disparities in commercial gender classification. In her research, Buolamwini showed facial recognition systems developed by tech companies such as Amazon, Microsoft and Google 1,000 faces and asked them to identify gender. The algorithms misidentified Michelle Obama, Oprah Winfrey and Serena Williams, three iconic dark-skinned women, as male.

The bias in code can lead to discrimination against underrepresented groups and the most vulnerable individuals, Buolamwini said.
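To make the kind of audit described above concrete, the short sketch below shows one way to tally a gender classifier's error rate by intersectional subgroup, which is how such accuracy disparities are typically measured. It is a hypothetical illustration under assumed field names and made-up data, not Buolamwini's actual code, benchmark or results.

```python
# Hypothetical sketch of an intersectional accuracy audit.
# The field names, data and classifier outputs below are illustrative
# assumptions, not material from Buolamwini's study.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """Return the misclassification rate for each (skin_type, gender) subgroup.

    Each record is a dict with 'skin_type', 'gender' (ground truth) and
    'predicted_gender' (the classifier's output).
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["skin_type"], r["gender"])
        totals[key] += 1
        if r["predicted_gender"] != r["gender"]:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Made-up benchmark results from an imaginary commercial classifier.
sample = [
    {"skin_type": "darker", "gender": "female", "predicted_gender": "male"},
    {"skin_type": "darker", "gender": "female", "predicted_gender": "female"},
    {"skin_type": "lighter", "gender": "male", "predicted_gender": "male"},
    {"skin_type": "lighter", "gender": "female", "predicted_gender": "female"},
]
print(error_rates_by_subgroup(sample))
# Large gaps between subgroups' error rates are the "accuracy disparities"
# the research describes.
```

In the results Buolamwini has reported, the gap showed up as darker-skinned women being misclassified far more often than other groups.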

She also founded the Algorithmic Justice League, a program through which she aims to highlight the collective and individual harms that AI can cause, such as loss of opportunities, social stigmatization, workplace discrimination and inequality, and to advocate for regulating big tech companies and checking governments' application of AI.

One of the key questions around AI governance and ethics, as a majority of attendees agreed, is how to regulate big tech companies.

This “nascent technology will help us build powerful new materials, understand the climate in new ways and generate far more efficient energy — it could even cure cancer,” said Eric Schmidt, former Google CEO and current technical advisor to Alphabet Inc.

This is all good, he continued. “I don’t want us, in these complicated debates about what we are doing, to forget that the scientists here at Stanford and other places are making progress on problems which were thought to be unsolvable… because (without AI) they couldn’t do the math at scale.”

However, Marietje Schaake, an HAI International Policy Fellow and former Dutch member of the European Parliament who worked to pass the European Union’s General Data Protection Regulation, argued that AI’s potential shouldn’t obscure its potential harms, which the law can help mitigate.

Large technology companies have a lot of power, Schaake said. “And with great power should come great responsibility, or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concerns that pauses are justified. I don’t think that everything that’s possible should also be put in the wild or into society as part of this often quoted ‘race for dominance’. We need to actually answer the question, collectively, ‘How much risk are we willing to take?’”

Like it or not, the age of AI is coming, and fast, and there is plenty to be concerned about, wrote Stanford HAI co-directors Fei-Fei Li and John Etchemendy.

The two believe the real threat lies in the fact that “Most of the world, including the United States, is unprepared to reap many of the economic and societal benefits offered by AI or mitigate the inevitable risks”.

Getting there will take decades, they said. “Yet, AI applications are advancing faster than our policies or institutions at a time in which science and technology are being underfunded, under-supported and even challenged. It’s a national emergency in the making.”

They asked the US government to commit $120 billion over the next decade to research, data and computing resources, education and startup capital to support a bold human-centered AI framework and retain America’s competitiveness and leading position in the field.

Open dialogue and collaboration among nations on AI research and governance are important, attendees said. Given the complexity of cultural differences and the differing motivations of international stakeholders, however, it is unrealistic to expect the whole world to agree on a single AI vision or a once-and-for-all solution to these problems.

Nevertheless, governments across the continents are taking action.

In Europe, the European Union issued its first draft of ethical guidelines for the development, deployment and use of AI in December 2018, an important step toward innovative and trustworthy AI “made in Europe”.

In February, the US president signed an executive order laying out the country’s plan for US leadership in AI development. “Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States,” he said.

In China, the National New Generation Artificial Intelligence Governance Committee, which is under the Ministry of Science and Technology, in June released the New Generation AI Governance Principles — Developing Responsible AI.

The first official document of its kind issued in China on AI governance ethics, the principles include harmony and friendship, fairness and justice, inclusiveness and sharing, privacy protection, safety and controllability, shared responsibility, open collaboration and agile governance.

“We want to ensure the reliability and safety of AI while promoting sustainable economic, social and ecological development,” said Zhang Xu, deputy director of the strategic planning department under the Ministry of Science and Technology.

“AI is advancing rapidly, but we still have time to get it right — if we act now,” said Fei-Fei Li.
