San Francisco Chronicle

How Homeland Security can mitigate public fear of its use of AI

By Douglas Yeung and Benjamin Boudreaux. Douglas Yeung is a senior behavioral scientist at Rand Corp. Benjamin Boudreaux is a policy researcher at Rand.

Americans have been concerned about tech-enabled government surveillance for as long as they have known about it. Now in the age of artificial intelligence, and with the announcement by the Department of Homeland Security this week that it is embracing the technology, that concern isn’t going away anytime soon.

But federal agencies could mitigate some of that fear. How? By engaging the public.

Since at least the 1928 Supreme Court decision allowing law enforcement use of wiretapping, government use of technology has provoked public debate. Two years ago, public outcry forced the IRS to shelve newly announced plans to use facial recognition to identify taxpayers. More recently, the Department of Homeland Security’s CBP One app, which uses facial recognition to identify asylum applicants, was found to be less accurate for asylum seekers with darker skin, like many other such systems. This, too, has understandably led to public frustration.

Homeland Security has a huge mission set — including protecting borders, election infrastructure and cyberspace. But unlike other federal agencies, it has many public-facing missions — such as Transportation Security Administration agents at airports. This also gives the department a unique opportunity to work with the public to ensure that tech is used responsibly.

The department understands this, which is why it asked us — researchers who study how technology intersects with public life — to survey Americans for insights on using technology in ways the public would be more likely to support. When we surveyed a representative sample of 2,800 adults in 2021, the biggest takeaway was that Americans cared less about what technology was being used than how it was being used.

For instance, we asked people whether they would support the government using facial recognition for such purposes as investigating crimes, tracking immigrants or identifying people in public places like stadiums or polling stations. Respondents supported using the technology in some ways — identifying victims and potential suspects of a crime, for example — far more than others. People were much more suspicious of the most sweeping uses of facial recognition, like surveilling protests or monitoring polling stations. And this pattern held across different AI technologies.

Another important factor was the safeguards surrounding a given technology’s use. In our survey, these safeguards included providing alternatives to engaging with the technology, administering regular audits to ensure that the technology was accurate and did not have a disparate impact across demographic groups, and providing notification and transparency about how it is used. Rather than a one-size-fits-all approach, we found Americans want safeguards sensitive to the context in which the technology is applied, such as whether it will be used at the open border or in a dense city.

To its credit, the department has implemented some safeguards along these lines, but they are not always uniformly administered. For example, although facial recognition technology is optional for travelers going through airport security, some individuals — including a U.S. senator — report not being made aware that it is optional. Such inconsistency breeds confusion and likely mistrust.

Nevertheless, there is an opportunity for constructive engagement. Many of the respondents to our survey said that they were either neutral or ambivalent about government use of technology, meaning that they hadn’t yet decided whether the benefits of using a given technology outweighed the risks. Far from having fully formed polarized views on the subject, many Americans are open to being persuaded one way or another.

This might allow government agencies to work with this large group of “swing” Americans to build more trust in how the government uses new tech on all of us. And, counterintuitively, the government’s reputation for moving slowly and deliberately is, in this case, perhaps an asset.

Slowness is a trait often ascribed to the government. For instance, to field our survey we had to undergo a 15-month approval process. And that slowness had consequences: By the time we got our approval, large language models had burst onto the scene, but because they weren’t factored into our survey, we couldn’t ask people about them.

But new technologies should be deployed carefully, with a clear understanding of their benefits and risks — especially from the perspective of the communities most deeply affected. This means that a deliberately paced process can be a feature, not a bug; slowness can be an asset, not a hindrance.

If agencies like the Department of Homeland Security take the time to understand what makes the public more comfortable with how technology is used, the public might gain confidence. Even better: Agencies using technology to surveil Americans could pull back the curtain to explain how and why they do it, as part of a careful and considered deployment. As our research showed, people might not be very interested in understanding how the tech works, but they want to know how it will be used — on them and on society.

Photo: Julia Nikhinson/Associated Press 2023. TSA’s Jason Lim discusses the agency’s facial recognition technology at Thurgood Marshall Airport in Glen Burnie, Md.
