Global Times

New skills and ethical standards needed for AI era


Editor’s Note:

Artificial intelligence (AI) has become all the rage in today's tech-dominated world, with ample potential benefits from various AI-enabled applications, despite concerns that machines will take over from humans. AI-related topics were hotly debated at the World Economic Forum (WEF) Annual Meeting of the New Champions 2018, held in Tianjin last week. In an exclusive interview during the Tianjin event, Global Times reporter Li Qiaoyi (GT) talked to Abi Ramanan (AR), co-founder and CEO of US-based machine learning firm ImpactVision and co-chair of the WEF's Annual Meeting of the New Champions 2018, to seek her input on several major issues that industry watchers are concerned about.

GT: There have been frequent comparisons between China and the US in relation to AI competitiveness. What do you think about that?

AR: The China-US rivalry in the world of AI should not be a zero-sum game. It is not like the space race; there is not only one chance to get this right, so it would be wise to take a much more collaborative approach. That is of course challenging in the current context, especially with the US government's recent announcement of additional tariffs on $200 billion worth of Chinese imports, but cooperation between the two nations still prevails overall.

That being said, China is ahead of the US in many areas, particularly in relation to AI, and it is now becoming a global leader in the field. The only area in which the US still outcompetes China is talent, and that will change as well.

GT: Machine learning is a key aspect of developing intelligent applications. What are the main pitfalls to watch out for in machine learning?

AR: There are challenges regarding algorithm bias. Human beings are biased, and human beings build machine learning models, and you don't necessarily know how those models arrive at their decisions. Engineers might develop algorithms that have race or sex biases, so we need to be aware that data and algorithms are not neutral. There are plenty of examples of algorithms producing less than optimal outcomes. One area that everyone needs to focus on is the integrity of the training data. Just as doctors have the Hippocratic Oath, programmers and tech companies need some kind of machine learning code of ethics. Increasingly, algorithms determine everything from what we purchase, how we access healthcare, what home we can buy and how we access insurance to how we date, so I think there needs to be a code of conduct and an ethical framework. Engineers and companies using machine learning need to make sure they are using good-quality training data, so that their predictions are accurate and robust. Otherwise we will just replicate the biases that society already has.

GT: How will an ethical framework for AI development be built, and what role can China play in enabling better regulation of the sector?

AR: It won't be done in a coordinated way. Individual companies will have their own policies, but this needs to be a sector-wide global initiative stipulating what steps are needed to ensure ethical and technical standards. This is already going on in universities and individual companies, but I think there needs to be a global alliance.

China has some responsibility to take a leadership role in thinking about the ethical implications of AI.

But it cannot be purely government-led. It is difficult for regulations and policies to keep pace in every way with technological advancements. I don't think governments today understand enough to regulate the tech sector effectively, particularly when it comes to emerging issues such as the ethical risks of AI. There needs to be much greater representation of science and technology within governments so they can understand these issues better.

GT: Robots will perform more than half of current work tasks by 2025, almost twice as much as today, according to the findings of a new World Economic Forum study. Could this be catastrophic for job markets worldwide, and in China's case in particular?

AR: In terms of automation, I do think that we will create new jobs, and great jobs, in the future that are more suitable for knowledge-based economies, including China. For example, there will need to be positions to curate artificial intelligence. The younger generations will adapt to this technological transformation.

AI today is predominantly machine learning, and machine learning excels at very specific applications, such as identifying images of cars or beating the world's leading chess players. There will be a significant amount of work that goes into managing these new technologies and how they interact with human beings. A lot of industries we don't even know about today will flourish, and we just need to make sure people get the relevant new skills during the transition period.

I think there will be a 10-year period of transition, and that's where things will be very difficult. An example that's often used is truck drivers in the US: truck driving is one of the top five professions there. When you have autonomous vehicles and autonomous trucks, what's going to happen? I think China is leading the way on this by putting energy and effort into teaching these new skills.

We will create new jobs; we will also redefine how much we should work. There needs to be a package of measures to address the rate of automation.

Illustration: Xia Qing/GT

Abi Ramanan
