China Daily

We need a precautionary approach to AI


For policymakers in any country, the best way to make decisions is to base them on evidence, however imperfect the available data may be. But what should leaders do when facts are scarce or nonexistent? That is the quandary facing those who have to grapple with the fallout of "advanced predictive algorithms" — the binary building blocks of machine learning and artificial intelligence (AI).

In academic circles, AI-minded scholars are either "singularitarians" or "presentists". Singularitarians generally argue that while AI technologies pose an existential threat to humanity, the benefits outweigh the costs. But although this group includes many tech luminaries and attracts significant funding, its academic output has so far failed to prove its calculus convincingly.

On the other side, presentists tend to focus on the fairness, accountability, and transparency of new technologies. They are concerned, for example, with how automation will affect the labor market. But here, too, the research has been unpersuasive. For example, MIT Technology Review recently compared the findings of 19 major studies examining predicted job losses, and found that forecasts for the number of globally "destroyed" jobs vary from 1.8 million to 2 billion.

Simply put, there is no "serviceable truth" to either side of this debate. When predictions of AI's impact range from minor job-market disruptions to human extinction, clearly the world needs a new framework to analyze and manage the coming technological disruption.

Every so often, a "post-normal" scientific puzzle emerges: a problem that philosophers Silvio Funtowicz and Jerome Ravetz first defined in 1993 as one "where facts are uncertain, values in dispute, stakes high, and decisions urgent". For such challenges, of which AI is one, policy cannot afford to wait for science to catch up.

At the moment, most AI policymaking occurs in the "Global North", which de-emphasizes the concerns of less-developed countries and makes it harder to govern dual-use technologies. Worse, policymakers often fail to consider the potential environmental impact, and focus almost exclusively on the effects of automation, robotics and machines on humans.

One way forward is the precautionary principle: the idea that a lack of scientific certainty about a harm should not be used to justify inaction against it. The principle is not without its detractors, and its merits have been debated for years. But we need to accept that a lack of evidence of harm is not the same thing as evidence of a lack of harm.

For starters, applying the precautionary principle to the context of AI would help rebalance the global policy discussion, giving weaker voices more influence in debates that are currently monopolized by corporate interests. Decision-making would also be more inclusive and deliberative, and would produce solutions that more closely reflect societal needs. The Institute of Electrical and Electronics Engineers, and The Future Society at Harvard's Kennedy School of Government, are already spearheading work in this participatory spirit. Additional professional organizations and research centers should follow suit.

Moreover, by applying the precautionary principle, governance bodies could shift the burden of responsibility to the creators of algorithms. A requirement that algorithmic decision-making be explainable can change incentives, prevent "black-boxing", help make business decisions more transparent, and allow the public sector to catch up with the private sector in technology development. And, by forcing tech companies and governments to identify and consider multiple options, the precautionary principle would bring to the fore neglected issues, like environmental impact.

Rarely is science in a position to help manage an innovation long before the consequences of that innovation are available for study. But, in the context of algorithms, machine learning, and AI, humanity cannot afford to wait. The beauty of the precautionary principle lies not only in its grounding in international public law, but also in its track record as a framework for managing innovation in myriad scientific contexts. We should embrace it before the benefits of progress are unevenly distributed, or, worse, irreversible harm has been done.

The author is a policy fellow at the School of Transnational Governance at the European University Institute.

Project Syndicate

[Illustration] SONG CHEN / CHINA DAILY
