The Desert Sun

Administration issues new rules on use of AI

- Joey Garrison

WASHINGTON – The Biden administration on Thursday announced three new policies to guide the federal government’s use of artificial intelligence, billing the standards as a model for global action on a rapidly evolving technology.

The policies, which build off an executive order President Joe Biden signed in October, come amid growing concerns about the risks AI poses to the U.S. workforce, privacy and national security, and about potential discrimination in decision-making.

The White House’s Office of Management and Budget will require that federal agencies ensure the use of AI does not endanger the “rights and safety” of Americans.

To improve transparency, federal agencies will have to publish online a list of AI systems they are using, as well as an assessment of the risks those systems might pose and how the risks are being managed.

The White House is also directing all federal agencies to designate a chief AI officer with a background in the technology to oversee the use of AI technologies within the agency.

Vice President Kamala Harris announced the rules in a call with reporters, saying the policies were shaped by input from the public and private sectors, computer scientists, civil rights leaders, legal scholars and business leaders.

“President Biden and I intend that these domestic policies will serve as a model for global action,” said Harris, who has helped lead the administration’s efforts on AI and outlined U.S. initiatives on AI during a global summit in London last November.

“All leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit,” Harris said.

The federal government has disclosed more than 700 examples of current and planned AI use across agencies. The Defense Department alone has more than 685 unclassified AI projects, according to the nonpartisan Congressional Research Service.

Disclosures from other agencies show AI is being used to document suspected war crimes in Ukraine, test whether coughing into a smartphone can detect COVID-19 in asymptomatic people, stop fentanyl smugglers from crossing the southern border, rescue children being sexually abused and find illegal rhinoceros horns in airplane luggage – among many other uses.

To assess the safety risks of AI, federal agencies by December will be required to implement safeguards to “reliably assess, test and monitor” AI’s impacts on the public, mitigate risks of algorithmic discrimination and publicize how the government is using AI.

Harris said that, for example, if the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, it would need to show the AI system does not produce “racially biased diagnoses.”

Biden’s AI executive order, by invoking the Defense Production Act, requires companies developing the most advanced AI platforms to notify the government and share the results of safety tests.

BRENDAN SMIALOWSKI/AFP VIA GETTY IMAGES FILE – Vice President Kamala Harris looks on as President Joe Biden signs an executive order on the use of artificial intelligence on Oct. 30.
