San Francisco Chronicle

Digital security relies on fallible AI

- By Douglas Yeung
Douglas Yeung is a senior behavioral scientist at the Rand Corp. and a Pardee Rand Graduate School faculty member.

Imagine trying to convince someone that you are, in fact, you. Do you offer to compare your face to your driver’s license photo? Or, tell them something only you would know, like the name of your first crush?

Handing over identifying information to get something of value is a trade most of us have made throughout our lives. Even before the internet, this trade might have included placing our inked index finger onto a notary’s notepad or reciting our mother’s maiden name to a government official or a bank teller.

But powerful technologies like artificial intelligence could make such trades too lopsided, forcing us to give up too much of who we are to get the things we need.

In the digital age, tradeoffs — like displaying our faces and fingerprints — are all but required simply to function in society. Whether it’s unlocking our smartphone, paying for coffee or boarding an airplane, these AI-powered trades grant us access to the things we want. But the technology charged with securing our information, protecting what we have given up in these trades — proof of our very selfhood — against theft, fraud and other potential harms, doesn’t always work.

A September 2023 report from the Center for Democracy and Technology, for example, found that 19% of students whose schools use AI software reported that they or someone they know had been inadvertently outed as LGBTQ+ by the technology, a 6-percentage-point increase over the previous school year. Similarly, in March 2023, OpenAI revealed that a bug in its technology allowed some users to see titles of another active user’s chat history, and in some cases, even the first message in a newly created conversation if both users were active at the same time.

In a world rapidly integrating AI into everything security-related, what if we reach a point where a chatbot interview is a required verification step and its underlying large language model infers something sensitive about you — like your sexual orientation or risk for depression — and then asks you to confirm this trait as proof of self? Or, what if government programs that use risk-prediction algorithms and facial recognition to safeguard travel employ AI that forces travelers or migrants to disclose something deeply personal or risk being turned away from somewhere they want, or even need, to go?

These are not far-fetched future scenarios.

And just as worrisome as the technology’s failure to secure our private information is its repeated inability to correctly identify people. One face recognition tool, for example, recently matched the faces of several members of Congress incorrectly to faces in a mugshot database. The technology has also exhibited problematic behaviors: Uber’s facial recognition verification system received multiple complaints, and in the United Kingdom it was deemed discriminatory after it repeatedly failed to recognize a Black delivery driver and even locked him out of the platform.

Even more disturbing, newer technologies like generative AI — which has attracted billions of dollars in investments — continue to mischaracterize us. AI image generators like Midjourney, when asked to depict people and places from around the world, reduce them to caricatures.

When people behave like this, we call it stereotyping or even discrimination.

But AI, of course, is not human. Its skewed understanding of who we are can be traced back to its reliance on the data we provide — our posts on social media or our conversations with a chatbot — all of which occur online, where we’re not always who we appear to be. It’s on developers and researchers, then, to ensure people’s data remains private and to keep working to improve AI’s accuracy, minimizing bias and technical errors such as when AI makes up answers, known as hallucinations.

AI developers could also do a better job acknowledging that their products shape not just our attitudes and behaviors, but our very sense of self. Developers could partner with social scientists to marshal research about identity development, for example, and research why it’s important for youth and others to be able to define — and redefine — themselves.

The insights gleaned from such research might then help clarify how AI could more fully embody the picture of who we are, not just the factual knowledge we have traditionally traded off for security access, but our emotions and personalities, our culture and creativity, our capacity for cruelty and compassion. Future workers — AI and human — need better training to relate to and communicate with people.

Policymakers have a role to play, too: They can encourage these actions by updating existing frameworks for the responsible use of AI or by developing new guidance for integrating AI in digital security and identity verification practices.

Establishing who we are in society is fundamental to being human. And digitally securing our identities is crucial to safeguarding the selves we have built — and are continuously building. By putting AI in charge of deciding who counts, or what traits define a human, we risk becoming the people the machines say we are and not who we might want to be.

Olemedia/Getty Images: Unlocking a smartphone, paying for coffee or boarding an airplane, AI-powered tools grant us access to things we want.
