AI: a force for social good
Artificial-intelligence systems are shaping the contours of our lives. With applications in agriculture, health care, education, transportation, manufacturing and the media, AI has become as pervasive as the Internet.
While it can significantly improve the well-being of humanity, it also has downsides: reinforcement of human biases, displacement of jobs and industries, and privacy risks. Thus, like all technologies, it requires governance to create an enabling environment and regulatory policies that maximize its benefits and reduce risks.
AI governance, however, poses many challenges. There is no single, universal idea of what its goals and outcomes should be. For instance, an aviation safety system seeks to prevent accidents. But AI regulators cannot have a similarly singular aim.
Moreover, since AI is not a single application but an underlying technology with diverse uses, terms like "good AI" or "bad AI" are as meaningless as "good electricity" or "bad electricity." Thus governance must take into account the range of contexts and uses of AI.
Further complicating matters is the speed with which AI learns and evolves, often in ways that are not well understood. Because our current regulatory models cannot keep pace with these rapid changes, they risk stifling innovation while failing to prevent harmful AI applications.
Confronted with these unprecedented challenges, what must we do?
To explore potential roadmaps, we build upon insights shared by industry experts, government officials, leading thinkers and practitioners at two critical events we at the Rockefeller Foundation were part of: the AI for Social Good Summit and the Innovating AI Governance Symposium.
Here's how policymakers can negotiate some of the challenges posed by AI governance and harness its transformative potential. Each use case may require a bespoke approach until the consequences of a given model are fully understood, but a few examples follow.

Strategic regulation
To promote innovation, governments must create safe, enabling spaces for experimentation. They can do this by deploying AI applications at a limited scale under the observation of regulators. Such pilot tests can help determine the potential benefits and downsides, and, through an effective feedback loop, fine-tune the technologies before releasing them into the public sphere.
AI localism, the governance of AI use within a community or city, can nurture a bottom-up regulatory approach. It allows policies to be adapted to local conditions and the needs of communities as opposed to a cookie-cutter approach. At the local level, citizens can also closely observe and have more of a say in how AI is used.
For such regulation to be effective, policymakers must understand the development process of AI systems, their strengths and weaknesses, and the types of data used. They must become more technology-literate and form working groups that enable collaboration among regulators, developers, and users.
In such collaborative efforts, however, policymakers must regulate acceptable outcomes of AI use rather than specific technologies and applications.
Take the case of AI systems that determine if applicants are eligible for a loan or a job. There have been incidents of algorithms discriminating against people based on their race, gender or address. In such cases, governance should find and stem biases rather than regulate the mechanism the AI system uses to make decisions.
They can do this through peer reviews with diverse participants who can challenge each other's presumptions and ensure representation of different points of view.
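The outcome-focused auditing described above can be sketched as a simple statistical check on a model's decisions, without inspecting the model's internal mechanism. Below is a minimal illustration in Python; the group labels, audit log, and the 0.8 threshold (borrowed from the common "four-fifths rule" used in employment-discrimination analysis) are assumptions for illustration, not a prescribed standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag a system whose lowest group approval rate falls below
    `threshold` times the highest rate (the 'four-fifths rule')."""
    rates = approval_rates(decisions)
    lo, hi = min(rates.values()), max(rates.values())
    return {"rates": rates, "ratio": lo / hi, "flagged": lo / hi < threshold}

# Hypothetical audit log of (applicant group, loan approved) outcomes:
# group A approved 80 of 100 applications, group B only 50 of 100.
log = [("A", True)] * 80 + [("A", False)] * 20 + \
      [("B", True)] * 50 + [("B", False)] * 50

result = disparate_impact(log)
# ratio = 0.50 / 0.80 = 0.625, below 0.8, so the system is flagged.
```

The point of such a check is that regulators need only the decision outcomes and a protected attribute, not access to the model itself, which matches the outcome-based regulation argued for above.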
While AI will create many jobs, it will also upend existing ones, which has prompted a pushback against certain technologies. For instance, taxi drivers have lobbied against self-driving vehicles. While various studies estimate that job losses to automation could run into the millions, others, such as the World Economic Forum's, point out that AI will create more jobs than it displaces.
Thus policymakers need to address the concerns of those who might lose jobs and create alternatives such as reskilling, job transition support, and employment guarantees. They must also strengthen social safety nets to cushion the impact of job losses.
In the long run, they must overhaul the education system to focus on life-long learning and helping workers transition, rather than preparing them for a single career. Moreover, with the demand for AI workers exceeding the supply, governments will have to develop and retain talent to capitalize on the AI revolution.