The Mercury News

California must lead in crafting AI protections


For years, consumer confidence in tech products has been falling at alarming rates. If, as projected, artificial intelligence is tech's “next big thing,” it's imperative that the industry build AI products users can trust.

New York Times tech reporter Kevin Roose's experience last week with the new AI-powered Bing search engine from Microsoft makes that clear.

Creepy doesn't begin to describe it.

In his conversation with “Sydney,” Bing's chatbot persona, the bot talked about its “dark fantasies,” including hacking computers, engineering a deadly virus, unlocking nuclear codes and spreading misinformation. It also, Roose writes, “declared out of nowhere that it loved me. It then tried to convince me that I was unhappy with my marriage and that I should leave my wife and be with it instead.”

Other tech reporters described similar experiences with Bing's AI, albeit to a lesser degree. The previous week, Google's effort to show off its much-hyped new AI chatbot, Bard, proved equally embarrassing. Google's parent company, Alphabet, lost $100 billion in market value Feb. 8 after the chatbot shared inaccurate information during the presentation.

Tech companies say they have internal guidelines they follow when building AI, but the Bing experience hardly builds confidence in the industry's self-imposed standards.

And it raises troubling questions about the dangerous impact chatbots can have on users seeking information or advice from what they believe to be trusted sources.

Don't look to Congress to craft AI regulations. Google, Microsoft and other tech giants have been harvesting user data for more than two decades, and we still don't have federal privacy protections, much less an Internet Bill of Rights.

In October, the White House Office of Science and Technology Policy published a “blueprint” for an AI Bill of Rights, which it called a nonbinding roadmap for the responsible use of artificial intelligence. But President Joe Biden didn't even mention artificial intelligence in his State of the Union address, instead focusing on privacy protections that are going nowhere in Congress.

The European Union is working on an AI act, but lawmakers announced last week that they had hit a stumbling block in trying to write regulations that protect consumers without stifling innovation.

The best hope in this country is for the California Legislature to take on the task and provide a model blueprint for other states and Congress to follow.

The state did just that when it passed the California Consumer Privacy Act in 2018. It's important to remember that companies such as Facebook and Verizon originally fought the legislation. But the tech industry stepped in after the Cambridge Analytica scandal and helped find language that won unanimous approval.

The law isn't perfect. For example, its “opt-out” language allows businesses to collect consumers' data unless users change settings on their devices. The opposite, requiring consumers to “opt in,” should be the rule. But when California's law took effect in 2020, it was widely regarded as the toughest online privacy law in the nation.

The governor and the Legislature should take the lead on artificial intelligence regulations, writing standards that require tech firms to bring such key principles as accountability, transparency, privacy protections, user security and information integrity to their products.

It's inevitable that innovation will leap ahead of regulators' ability to anticipate issues, but California should work to prevent a repeat of the Wild West approach that produced the user abuses still plaguing the internet.

Photo: DREAMSTIME/TNS. Google's much-hyped new AI chatbot tool Bard touted an inaccurate response during a demo earlier this month.
