Khaleej Times

We need to control Artificial Intelligence before it’s too late

Creating a supranational entity to govern AI will be challenging, owing to conflicting political imperatives

- Alissa Amico, Tech for People

On the sidelines of the last World Economic Forum meeting in Davos, Singapore’s minister of communications and information quietly announced the launch of the world’s first national framework for governing artificial intelligence. While the global media have glossed over this announcement, its significance reaches well beyond the borders of Singapore or the Swiss town where it was made. It is an example that the rest of the world should urgently follow — and build upon.

Over the last few years, Singapore’s government, through the state-led AI Singapore initiative, has been working to position the country to become the world’s leader in the AI sector. And it is making solid progress: Singapore — along with Shanghai and Dubai — attracted the most AI-related investment in the world last year. According to one recent estimate, AI investment should enable Singapore to double the size of its economy in 13 years, instead of 22.

Of course, AI’s impact extends globally. According to a recent McKinsey report, AI could add up to 16 per cent to global GDP growth by 2030. Given this potential, the competition for AI investment and innovation is heating up, with the United States and China predictably leading the way. Yet, until now, no government or supranational body has sought to develop the governance mechanisms needed to maximise AI’s potential and manage its risks.

This is not because governments consider AI governance trivial, but because doing so requires policymakers and corporations to open a Pandora’s box of questions. Consider AI’s social impact, which is much more difficult to quantify — and mitigate, when needed — than its economic effects. Of course, AI applications in sectors like health care can yield major social benefits. However, the potential for the mishandling or manipulation of data collected by governments and companies to enable these applications creates risks far greater than those associated with past data-privacy scandals — and reputational risks that governments and corporations have not internalised.

As another McKinsey report notes, “realising AI’s potential to improve social welfare will not happen organically.” Success will require “structural interventions from policymakers combined with a greater commitment from industry participants.” As much as governments and policymakers may want to delay such action, the risks of doing so — including to their own reputation — must not be underestimated.

In fact, at a time when many countries face a crisis of trust and confidence in government, strengthening AI-related governance is in many ways as important as addressing failures in corporate or political governance. After all, as Google CEO Sundar Pichai put it in 2018, “AI is one of the most important things humanity is working on. It is more profound than, I don’t know, electricity or fire.”

The European Commission seems to be among the few actors that recognise this, having issued, at the end of last year, “draft ethics guidelines for a trustworthy AI.” Whereas Singapore’s guidelines are focused on building consumer confidence and ensuring compliance with data-treatment standards, the European model aspires to shape the creation of human-centric AI with an ethical purpose.

Yet neither Singapore’s AI governance framework nor the EU’s preliminary guidelines address one of the most fundamental questions about AI governance: where does ownership of the AI sector, and responsibility for it and its related technologies, actually lie? The answer will help determine whether AI delivers enormous social progress or introduces a Kafkaesque system of data appropriation and manipulation.

The EU guidelines promise that “a mechanism will be put in place that enables all stakeholders to formally endorse and sign up to the Guidelines on a voluntary basis.” Singapore’s framework, which also remains voluntary, does not address the issue at all, though the recommendations are clearly aimed at the corporate sector.

If AI is to deliver social progress, responsibility for its governance will need to be shared between the public and private sectors. To this end, corporations developing or investing in AI applications must develop strong linkages with their ultimate users, and governments must make explicit the extent to which they are committed to protecting citizens from potentially damaging technologies. Indeed, a system of shared responsibility for AI will amount to a litmus test for the broader “stakeholder capitalism” model under discussion today.

Public versus private is not the only tension with which we must grapple. As Francis Fukuyama once pointed out, “as modern technology unfolds, it shapes national economies in a coherent fashion, interlocking them in a vast global economy.” At a time when technology and data are flowing freely across borders, the power of national policies to manage AI may be limited.

As attempts at Internet governance have shown, creating a supranational entity to govern AI will be challenging, owing to conflicting political imperatives. In 1998, the US-based Internet Corporation for Assigned Names and Numbers (ICANN) was established to protect the Internet as a public good, by ensuring, through database maintenance, the stability and security of the network’s operation. Yet approximately half of the world’s Internet users still experience online censorship. The sky-high stakes of AI will compound the challenge of establishing a supranational entity, as leaders will need to address similar — and potentially even thornier — political issues.

Masayoshi Son, CEO of the Japanese multinational conglomerate SoftBank and an enthusiastic investor in AI, recently said that his company seeks “to develop affectionate robots that can make people smile.” To achieve that goal, governments and the private sector need to conceive robust collaborative models to govern critical AI today. The outcome of this effort will determine whether humankind will prevail in creating AI technologies that will benefit us without destroying us.

—Project Syndicate

Alissa Amico is Managing Director of GOVERN, the Economic and Corporate Governance Center.
