San Francisco Chronicle (Sunday)

Wiener’s bill to require testing for AI tools

By Chase DiFeliciantonio

In a bid to regulate the rapidly emerging artificial intelligence industry in California, state Sen. Scott Wiener, D-San Francisco, introduced a bill Thursday that would require companies building the largest and most powerful AI models to test them for safety before releasing them to the public.

The bill would require companies working on AI technology to disclose their safety protocols to the state’s technology department and would permit the state to sue under certain circumstances if the technology runs awry. It exempts smaller startups.

Wiener’s plan, which he announced in broad strokes last year, would also authorize the creation of a large public cloud computing cluster called CalCompute meant to provide researchers and others a platform for developing and testing AI technology.

“Large-scale artificial intelligence has the potential to produce an incredible range of benefits for Californians and our economy — from advances in medicine and climate science to improved wildfire forecasting and clean power development,” Wiener said in a statement.

“It also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks,” he added.

The bill, SB1047, “sets out clear standards for developers of extremely powerful AI systems,” Wiener’s office said. It would target AI models that cost more than $100 million to train and that are “substantially more powerful than any system that exists today,” his office added in a statement.

“The AI market is dominated by a handful of corporate actors and this essential legislation takes the first critical step in fostering greater innovation and openness to serve the public interest,” Teri Olle, director of Economic Security California, which is sponsoring the bill, said in a statement. “With the development of a public cloud like CalCompute, we can harness the potential of AI for good.”

The symbolic weight of Wiener’s bill is difficult to ignore, with the text itself noting that California leads the world in artificial intelligence innovation and research via companies large and small, as well as through “our remarkable public and private universities.”

“California has this unique opportunity to lead in both the technology and the policy,” said Meredith Lee of UC Berkeley’s College of Computing, Data Science and Society. AI is “already changing how we find information and communicate,” she said.

Many of the largest companies working on generative AI models are based in San Francisco and the Bay Area. San Francisco-based OpenAI’s ChatGPT bot launched the current AI wave, and researchers at Anthropic, working on the Claude series of chatbots, are also based in the city, as are many smaller startups.

Bay Area tech giants including Google and Meta have also released chatbot models of their own.

“America must set the standards for the responsible development and deployment of AI for the world,” said Dylan Hoffman, executive director for California and the Southwest for TechNet, a trade association that lobbies for tech companies.

“We look forward to reviewing the legislation and working with Senator Wiener to ensure any AI policies benefit all Californians, address any risks, and strengthen our global competitiveness.”

“California and the greater Bay Area are the epicenter for continued policy development on AI,” said Ahmad Thomas, CEO of Silicon Valley Leadership Group, in a statement. The business organization also has its own responsible AI working group. “We look forward to continuing our conversations with Senator Wiener and other leaders in the Legislature around how to most effectively establish a sensible policy and regulatory framework that promotes continued innovation and reflects core responsible AI principles,” Thomas said.

Wiener’s bill would require companies to test their tools for unsafe behavior — worries abound about models divulging bomb-building instructions, for example — and harden them against hacking. The legislation would also require a fail-safe so the models can be shut down in an emergency.

All those requirements would have to be implemented before a model could be released.

The issue of so-called “algorithmic destruction” has been around in AI circles for years. It arose again recently because of copyright concerns about how AI programs are trained. Last year, the nonprofit Center for AI Safety said in a statement that the risks posed by unchecked AI development resemble those posed by pandemics and nuclear weapons.

The legislation would also create a so-called Frontier Model Division within the California Department of Technology to focus on and regulate AI technology. That division would be responsible for overseeing large AI models and assessing the safety guardrails in place.

Meta and other companies working on so-called foundational AI models claim to extensively “red team” their technology, meaning they put it through stress tests to determine how it might respond to user prompts trying to get the software to do something it should not.

Up to now there has been no federal legislation that takes direct aim at the technology, although the heads of many large AI companies have been called to speak in front of Congress.

President Joe Biden previously issued an executive order requiring federal agencies to appoint a point person for AI. It also directed some departments to investigate how the technology might be used to further defense and other aims. Agency heads were also told to secure critical infrastructure from AI-driven attacks, among other provisions.

Wiener’s bill has much in common with the Biden order, which requires companies to conduct safety testing on their models and share those results with the federal government.

The bill would require developers to reasonably rule out that their technology could create a hazard, taking into account a margin of error and the ability to update their software. It’s no coincidence that the legislation would create more state computing resources to test safety, as the two issues are closely linked. Huge amounts of computing power are required to create and train AI models, as well as to test how the complex and unpredictable programs perform under different circumstances, such as asking them to produce instructions to build a weapon or generate and spread disinformation.

Beefing up the state’s computing power would make it possible to test the models. Arati Prabhakar, one of President Biden’s top AI advisers, told the Chronicle last month that the technology to test increasingly complex models barely existed.

Prabhakar also underlined the need for clearer guardrails, and the difficulty in knowing how effective they might be.

“Will it generate cyberattacks if prompted, or will it not? Will it help you build a bioweapon? Is it much more dangerous than just doing a search? Those are unanswered questions,” she said at the time.

Asked whether technology exists to assess the safety of AI programs, UC Berkeley’s Lee said, “I think we have to try. We do have that need and shared responsibility to get very clear about what we can accomplish now,” despite the rapid pace of AI program development.

Gov. Gavin Newsom also signed an executive order last year directing civil servants to begin experimenting with the technology within certain boundaries. More recently the state released a report on how AI might help its daily operations — and the risks posed by implementing it.

Another California bill, SB942, which state Sen. Josh Becker, D-Menlo Park, plans to introduce this session, would require companies that build generative AI technology to “watermark” images, videos, audio and potentially other content created by their models.

That effort comes as Meta, Google, OpenAI and others move to voluntarily set standards making it clearer which content is AI-generated and which is not, partly to clamp down on political fakery.

Photo caption: State Sen. Scott Wiener says his new legislation introduced Thursday would require companies to test their AI models before releasing them to the public. (Carlos Avila Gonzalez/The Chronicle)
