The Mendocino Beacon

Big Tech should stop talking, act on danger of AI

— San Jose Mercury News

The tech industry has known for the past decade that artificial intelligence carries significant risks. Three years before his 2018 death, Stephen Hawking went so far as to warn that “The development of full artificial intelligence could spell the end of the human race.”

So why hasn’t Big Tech acted to quell those fears? An industry that treated public trust as a primary concern would have taken steps to develop safety protocols to offset the potential dangers.

But today’s tech leaders instinctively recoil from establishing any regulations or industry standards that hinder their ability to maximize profits, which explains why the United States still doesn’t have an internet Bill of Rights to protect consumers. And, sadly, Congress has proven itself incapable of regulating technology to protect the public.

Yet, last week, hundreds of technology leaders and researchers, including Steve Wozniak and Elon Musk, signed on to a letter calling for a six-month pause on advanced research in AI to come up with safety protocols and governance systems to rein in potential risks.

To which we say, better late than never.

But let’s get real. A pause of any kind is unenforceable. Trying to ensure that U.S. firms were complying would be hard enough. But the United States and China are engaged in an AI arms race with global leadership at stake. Both nations are betting that AI will drive their economic and military growth. It’s hard to imagine AI firms in either country pausing their research for six months on faith alone.

Tech leaders don’t need a pause to act. They should move swiftly to organize a group of knowledgeable, independent experts and government officials to develop socially responsible protocols.

The signees seeking the six-month pause say that powerful AI systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” That’s a laudable goal, but it’s naive. Who will determine what is “positive,” given that even some of the most positive technologies have negative side effects?

The signees have valid concerns, such as whether we should let machines “flood our information channels with propaganda and untruth,” given social media’s ongoing inability to rein in misinformation. It’s hard to overlook the irony of that statement coming from, among others, Musk, who, since acquiring Twitter, has undone many of the safeguards against exactly that kind of misinformation.

That said, we welcome his and the other signees’ serious focus on the dangers of AI and their push to develop meaningful guidelines. The guidelines should center on making systems accurate, secure, transparent, trustworthy and protective of privacy. The National Institute of Standards and Technology in the U.S. Department of Commerce has developed an “Artificial Intelligence Risk Management Framework” that could serve as a starting point for the effort. Mandated by Congress, it is designed for “voluntary use to address risks in the design, development, use and evaluation of AI products, services and systems.”

The technology industry and the nation have a lot riding on the success of artificial intelligence. The global AI market is expected to generate nearly $200 billion this year and a projected $1.8 trillion by 2030. That creates the potential for unprecedented changes in the way people live and work — for better or for worse.

The technology leaders’ call for action is overdue. It’s time for them to walk the talk. Now.
