How to make sure AI has the human touch
▶ The World Governments Summit has heard that inclusive conversations will be key
Artificial intelligence has been at the centre of the World Governments Summit in Dubai this week, where many from the Time 100 list of influential AI figures were joined by Nobel Prize winners as well as numerous heads of government and ministers. The sheer number of discussions about AI may reflect how, as a global community, we are still grappling with the implications of this revolution.
Some of the contributions make clear the challenges ahead. At the Arab Fiscal Forum, a pre-summit event, International Monetary Fund managing director Kristalina Georgieva said 40 per cent of jobs across the world would be exposed to AI in the next few years, a development she described as a “tsunami eating into labour markets”.
“Some jobs will disappear altogether; some jobs will no longer exist. Other jobs will be enhanced or diminished,” she added. “And we know that we can only take advantage of opportunities if we are ready for them.”
Indeed, this need for readiness characterises many discussions about AI, not just at the WGS. There is a sense that the technology will get smarter and more ubiquitous. If so, what can be done to channel it in the right direction?
Again, the WGS provided an important platform for exploring these issues. In a discussion with Omar Al Olama, Minister of State for AI, the Digital Economy and Remote Work Applications, OpenAI co-founder Sam Altman suggested there needed to be an international compact to regulate AI. "We are going to need, I believe, some sort of global system, such as the International Atomic Energy Agency, for what happens to the world's most powerful AI systems," Mr Altman said.
Although international consensus on regulating AI is desirable, achieving it is another thing entirely. In the meantime, national governments will have to develop policies and institutions that allow AI to thrive in a controlled way. There are several means to this end: auditing AI systems for fairness and security; developing "sandboxes" for the safe testing of new technologies; and requiring tech companies to disclose how their systems work. Education is a vital part of this approach, something the UAE has already embraced by opening the world's first AI research university in Abu Dhabi in 2019.
In that vein, Jensen Huang, head of the Nvidia Corporation, a US-based tech multinational, told the Dubai summit about what he called “sovereign AI” – national ownership over a country’s data and the intelligence it produces. Every government, Mr Huang suggested, ought to have “data sovereignty”.
There is also anxiety that AI is developing in a way that excludes human input. There are justifiable fears that people will lose their jobs, but fears of automation at the expense of human involvement may be overhyped. If, as Mr Huang suggested, it is in our power to make AI a technology that everyone can use, then we are entering a new paradigm.
Many more conversations, in addition to those we’ve heard at the WGS, will need to take place in the years ahead.