The Pak Banker

AI a force for social good

- Deepali Khanna

Artificial-intelligence systems are shaping the contours of our lives. With applications in agriculture, health care, education, transportation, manufacturing and the media, AI has become as pervasive as the Internet.

While it can significantly improve the well-being of humanity, it also has certain downsides - reinforcement of human biases, displacement of jobs and industries, and privacy risks. Thus, like all technologies, it requires governance to create an enabling environment and regulatory policies that maximize its benefits and reduce risks.

AI governance, however, poses many challenges. There is no single, universal idea of what its goals and outcomes should be. For instance, an aviation safety system seeks to prevent accidents. But AI regulators cannot have a similar exclusive aim.

Moreover, since AI is not a single application but an underlying technology with diverse uses, terms like "good AI" or "bad AI" are as meaningless as "good electricity" or "bad electricity." Thus governance must take into account the range of contexts and uses of AI.

Further complicating matters is the speed with which AI learns and evolves, often in ways that are not understood. As our current regulatory models cannot deal with these rapid changes, they might end up stifling innovation and failing to prevent harmful AI applications.

Confronted with these unprecedented challenges, what must we do?

To explore potential roadmaps, we build upon insights shared by industry experts, government officials, leading thinkers and practitioners at two critical events we at the Rockefeller Foundation were part of: the AI for Social Good Summit and the Innovating AI Governance Symposium.

Here's how policymakers can negotiate some of the challenges posed by AI governance and harness its transformative potential. While each pilot use case may require a bespoke approach until the consequences of that model are fully understood, here are a few examples.

Strategic regulation

To promote innovation, governments must create safe, enabling spaces for experimentation. They can do this by deploying AI applications at a limited scale under the observation of regulators. Such pilot tests can help determine the potential benefits and downsides, and fine-tune technologies through an effective feedback loop before releasing them in the public sphere.

AI localism, the governance of AI use within a community or city, can nurture a bottom-up regulatory approach. It allows policies to be adapted to local conditions and the needs of communities as opposed to a cookie-cutter approach. At the local level, citizens can also closely observe and have more of a say in how AI is used.

For such regulation to be effective, policymakers must understand the development process of AI systems, their strengths and weaknesses, and the types of data used. They must become more technology-literate and form working groups that enable collaboration among regulators, developers, and users.

In such collaborative efforts, however, policymakers must regulate acceptable outcomes of AI use rather than specific technologies and applications.

Take the case of AI systems that determine if applicants are eligible for a loan or a job. There have been incidents of algorithms discriminating against people based on their race, gender or address. In such cases, governance should find and stem biases rather than regulate the mechanism the AI system uses to make decisions.

Regulators can do this through peer reviews with diverse participants who can challenge each other's presumptions and ensure representation of different points of view.

While AI will create many jobs, it will also upend old ones, which has prompted a pushback against certain technologies. For instance, taxi drivers have lobbied against self-driving vehicles. While various studies show that job losses to automation could be in the millions, quite a few, including research by the World Economic Forum, also point out that AI will create more jobs than it displaces.

Thus policymakers need to address the concerns of those who might lose jobs and create alternatives such as reskilling, job transition support, and employment guarantees. They must also strengthen social safety nets to cushion the impact of job losses.

In the long run, they must overhaul the education system to focus on life-long learning and helping workers transition rather than preparing them for a single career. Moreover, with the demand for AI workers exceeding the supply, governments will have to develop and retain talent to capitalize on the AI revolution.

