San Francisco Chronicle

Agencies must show AI tools not endangering safety, rights

By Matt O’Brien

U.S. federal agencies must show that their artificial intelligence tools aren't harming the public, or stop using them, under new rules unveiled by the White House on Thursday.

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris told reporters ahead of the announcement.

Each agency by December must have a set of concrete safeguards that guide everything from facial recognition screenings at airports to AI tools that help control the electric grid or determine mortgages and home insurance.

The new policy directive being issued to agency heads Thursday by the White House's Office of Management and Budget is part of the more sweeping AI executive order signed by President Joe Biden in October.

While Biden’s broader order also attempts to safeguard the more advanced commercial AI systems made by leading technology companies, such as those powering generative AI chatbots, Thursday's directive will also affect AI tools that government agencies have been using for years to help with decisions about immigration, housing, child welfare and a range of other services.

As an example, Harris said, “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”

Agencies that can't apply the safeguards “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to a White House announcement.

The new policy also calls for two other “binding requirements,” Harris said. One is that federal agencies must hire a chief AI officer with the “experience, expertise and authority” to oversee all of the AI technologies used by that agency, she said. The other is that each year, agencies must make public an inventory of their AI systems that includes an assessment of the risks they might pose.

Some rules exempt intelligence agencies and the Department of Defense, which is having a separate debate about the use of autonomous weapons.

Shalanda Young, the director of the Office of Management and Budget, said the new requiremen­ts are also meant to strengthen positive uses of AI by the U.S. government.

“When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services,” Young said.

The new oversight was applauded Thursday by civil rights groups, some of which have spent years pushing federal and local law enforcement agencies to curb the use of face recognition technology tied to wrongful arrests of Black men.

A September report by the U.S. Government Accountability Office reviewing seven federal law enforcement agencies, including the FBI, found that they cumulatively conducted more than 60,000 searches using face-scanning technology without first requiring sufficient staff training on how it works and how to interpret results.

Vice President Kamala Harris announced new requirements Thursday for how federal agencies use AI technology. (Matt Kelley/Associated Press)
