New skills and ethical standards needed for AI era
Editor’s Note:
Artificial intelligence (AI) has become all the rage in today's tech-dominated world, with ample potential benefits from various AI-enabled applications, despite concerns that machines will take over from humans. AI-related topics were hotly debated at the World Economic Forum (WEF) Annual Meeting of the New Champions 2018, which was held in Tianjin last week. In an exclusive interview during the Tianjin event, Global Times reporter Li Qiaoyi (GT) talked to Abi Ramanan (AR), co-founder and CEO of US-based machine learning firm ImpactVision, who is also co-chair of the WEF's Annual Meeting of the New Champions 2018, to seek her input on several major issues that industry watchers are concerned about.

GT: There have been frequent comparisons between China and the US in relation to AI competitiveness. What do you think about that?

AR: The China-US rivalry in the world of AI should not be a zero-sum game. It is not the same as the space race; there is not just one chance to do this right, so it would be wise to take a much more collaborative approach. This is of course challenging in the current context, especially with the US government's recent announcement of additional tariffs on $200 billion worth of Chinese imports, but cooperation between the two nations still prevails overall.
That being said, China is ahead of the US in many areas and is now becoming a global leader in AI. The only area in which the US still outcompetes China is talent, and that will change as well.

GT: Machine learning is a key aspect of developing intelligent applications. What are the main pitfalls to watch out for in machine learning?

AR: There are challenges regarding algorithmic bias. Human beings are biased, and human beings build machine learning models, yet you don't necessarily know how those models arrive at their decisions. Engineers might develop algorithms that carry racial or gender biases. So we need to be aware that data and algorithms are not neutral, and there are many examples of algorithms producing less than optimal outcomes.

One area that everyone needs to focus on is the integrity of the training data. Just as doctors have the Hippocratic Oath, programmers and tech companies need some kind of machine learning code of ethics. Algorithms increasingly determine everything from what we purchase, how we access healthcare and insurance, and what home we can buy to how we date, so I think there needs to be a code of conduct and an ethical framework. Engineers and companies using machine learning need to make sure they are using good-quality training data, so that the quality and accuracy of the predictions are really robust. Otherwise we will just replicate the biases that society already has.

GT: How will an ethical framework for AI development be built, and what role can China play in enabling better regulation of the sector?

AR: It won't be done in a coordinated way. Individual companies will have their own policies, but this needs to be a sector-wide global initiative, stipulating what steps are needed to ensure ethical and technical standards. This is already going on in universities and individual companies, but I think there needs to be a global alliance.
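The point about training-data integrity can be made concrete with a quick audit. The sketch below (hypothetical toy data and function name, not from the interview) compares positive-outcome rates across demographic groups in historical records before any model is trained; a large gap is a warning that the labels themselves may encode bias that a model would simply replicate.

```python
from collections import Counter

def group_balance(labels, groups):
    """Report the positive-label rate for each group in a training set.

    labels: 1 for a positive historical outcome (e.g. loan approved), else 0.
    groups: the demographic group each record belongs to.
    A large gap between groups suggests the data encodes bias.
    """
    totals = Counter(groups)
    positives = Counter(g for g, y in zip(groups, labels) if y == 1)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical records: group A approved far more often than group B.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(group_balance(labels, groups))  # {'A': 0.75, 'B': 0.25}
```

An audit like this is only a first step: equal base rates do not guarantee a fair model, but a 75-percent-versus-25-percent gap, as in this toy data, is exactly the kind of signal that should prompt scrutiny before training.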
China has some responsibility to take a leadership role in thinking about the ethical implications of AI.
But it cannot be purely government-led. It is difficult for regulations and policies to keep pace with technological advancements in every respect. I don't think governments today understand enough to regulate the tech sector effectively, particularly regarding emerging issues such as the ethical risks of AI. There needs to be much greater representation of science and technology within governments so they can understand these issues better.

GT: Machines will perform more than half of current work tasks by 2025, almost double the current level, according to the findings of a new World Economic Forum study. Could this be catastrophic for job markets worldwide, and in China's case in particular?

AR: In terms of automation, I do think that we will create new jobs, and great jobs, in the future that are better suited to knowledge-based economies, including China's. For example, there will need to be positions to curate artificial intelligence. The younger generations will adapt to this technological transformation.
AI today is predominantly machine learning and machine learning excels at very specific applications, such as identifying images of cars or beating the world’s leading chess players. There will be a significant amount of work that goes into managing these new technologies and
how they interact with human beings. A lot of industries we don’t even know about today will flourish and we just need to make sure people get the relevant new skills during the transition period.
I think there will be a 10-year period of transition, and that is where things will be very difficult. An example that's often used is truck drivers in the US: truck driving is among the top five professions there. When autonomous vehicles and autonomous trucks arrive, what's going to happen? I think China is leading the way on this by putting energy and effort into teaching these new skills.
We will create new jobs; we will also redefine how much we should work. There needs to be a package of measures to address the rate of automation.