The Pak Banker

Building AI with democratic values

- Matt O'Shaughnessy

POLICYMAKERS describe their visions for artificial intelligence with statements of values. Secretary of State Antony Blinken has argued that liberal democratic countries should develop and govern AI in a way that "upholds our democratic values" and combats "the horrors of techno-authoritarianism." Congressional Republicans have urged the development of AI in a manner "consistent with democratic values."

Initial attempts to realize these visions have defined guiding principles for AI systems that support democratic values. These principles, such as accountability, robustness, fairness and beneficence, have enjoyed broad consensus despite the very different constituencies and values of their creators. Yet although they are sold as supporting "democratic values," these exact same principles are centered in the AI policy documents of non-democratic states such as China.

This discrepancy between the rhetoric of conflict used to describe "democratic" and "authoritarian" visions for AI and the broad agreement on high-level statements of principles points to three steps policymakers must take to develop and govern AI in a way that truly supports democratic values.

First, calls for developing AI with democratic values must engage with the many different conceptions of what "democracy" entails. If policymakers mean that AI should strengthen electoral democracy, they could start at home by investing in, for instance, the use of algorithmic tools to combat gerrymandering. If policymakers mean that AI should respect fundamental rights, they should enshrine protections in law - and not turn a blind eye to questionable applications (such as surveillance technology) developed by domestic businesses. If policymakers mean that AI should help build a more just society, they should ensure that citizens do not need to become AI experts to have a say in how technology is used.

Without more precise definitions, lofty political statements about democratic values in AI too often take a back seat to narrower considerations of economic, political and security competition. AI is often seen as being at the core of economic growth and national security, creating incentives to overlook holistic values in favor of strengthening domestic industries. The use of AI to mediate access to information, such as on social media, positions AI as a central facet of political competition.

Unfortunately, as rhetoric and the perceived importance of winning these economic, security and political competitions escalate, uses of AI that sit uneasily with democratic values become increasingly easy to justify. In the process, imprecisely defined democratic values for AI can be coopted and corrupted, or become little more than cover for hollow geopolitical interests.

Second, consensus AI principles are so flexible that they can accommodate broadly opposed visions for AI, making them unhelpful in communicating or enforcing democratic values. Take the principle that AI systems should be able to explain their decision-making processes in human-understandable ways. This principle is commonly said to uphold a "democratic" vision of AI. But these explanations can be conceptualized and created in many ways, each of which confers benefits and power to very different groups. An explanation provided to an end user within a legal context that allows them to hold developers accountable for harm, for example, can empower people impacted by AI systems. However, most explanations are in fact produced and consumed internally by AI companies, positioning developers as judge and jury in deciding how (and whether) to remedy the problems that explanations identify. To uphold democratic values - promoting, for instance, equal access and public participation in technology governance - policymakers must define a much more prescriptive vision for how principles like explainability should be implemented.

