San Francisco Chronicle

Guidelines set for state agencies in using AI

- By Chase DiFeliciantonio. Reach Chase DiFeliciantonio: chase.difeliciantonio@sfchronicle.com; Twitter: @ChaseDiFelice

Gov. Gavin Newsom has encouraged the dozens of departments that make up California’s state government to use artificial intelligence. Some have jumped at the chance to use the emerging technology in everything from traffic prediction to tax filing.

Now Newsom’s administration has released a road map to guide state agencies that want to buy or use the technology.

The plan places much of the onus on departments and agencies to evaluate whether and how to use generative AI, which can create text, video or images, in consultation with the state’s Department of Technology and other sections, including operations and human resources.

They’ll also have to make the business case for using AI. The state’s technology and human resources departments would provide training and support to other agencies on how they acquire and use the technology.

But “the ultimate ownership and accountability and decision making ability is on the individual” agency, Amy Tong, state secretary of government operations, told the Chronicle.

The guidelines should be officially in place by the end of the month, said Jonathan Porat, the state’s chief technology officer.

To start with, every department is required to prepare for so-called “incidental” generative AI purchases. That is, AI tech brought on board as part of something else it buys.

That means, among other things, assigning a senior person to monitor how an agency buys and uses the technology at all levels. “In most cases, this responsibility should fall to the state entity’s chief information officer,” the guidelines said.

In instances where department heads are looking to buy AI technology, they first have to:

• Identify a need for the technology and make their case

• Communicate with employees who would use the technology about it

• Write up an assessment of the potential risks and benefits

• Test whatever AI model they’ve selected for bias and the potential to return inaccurate information

• Establish a team to continuously monitor how AI is being used, and report back to the Department of Technology

The goal is to avoid “throwing a bunch of money at a contract and then it maybe doesn’t deliver,” or finding out another product might be better, Porat said.

Newsom’s executive order from last year explicitly encourages using AI technology. So the plan is to make sure the right technology is used instead of creating a mechanism to say no, Porat and Tong said.

Testing in particular is no easy task.

A top White House tech official told the Chronicle earlier this year that testing models for safety was still an emerging field. A bill from State Sen. Scott Wiener, D-San Francisco, is aiming to create more resources for safety testing of AI programs before they are released to the public.

Porat said the plan is not to saddle every agency with that kind of technical work.

Instead he said the purchasing process will include getting testing and safety data from companies. He said with the technology evolving rapidly, stiff safety rules could become quickly obsolete.

The technology department is also working with the U.S. Department of Homeland Security, among others, on AI policy, he said.

Without safeguards in place, AI programs can produce inaccurate information, called hallucinations, or convey bias and hate speech depending on the data used to train them.

California, and San Francisco in particular, may be the epicenter of the AI boom. But having marquee AI companies such as OpenAI, Anthropic, Google and others headquartered hours from the capital has not yet translated into the technology being bolted onto state agencies.

That is something Newsom began trying to change last year, when he signed the order. It directed the technology department to begin sketching out how state agencies might use AI in everything from chatbots to content generation and data analysis.

That was followed by a report from the Government Operations Department outlining the upside of using the technology, including increased accessibility to information for people from different backgrounds and better customer service.

But that report warned of the potential dangers of using outputs from generative AI verbatim. It also cautioned against the potential privacy disaster of plugging private information into publicly accessible programs such as ChatGPT or Google’s Gemini.

Porat and Tong said the guidelines are just part of the state’s AI plan outlined in the executive order.

Future reports will look at the technology’s potential to affect critical infrastructure security, vulnerable communities, and the state’s workforce.

“That will help shore up some of the details,” Porat said.

Justin Sullivan/Getty Images: After encouraging state departments to use AI, Gov. Gavin Newsom’s administration is rolling out a road map for how to bring the technology onboard.
