Big Tech should stop talking, act on danger of AI
The tech industry has known for the past decade that artificial intelligence carries significant risks. Three years before his 2018 death, Stephen Hawking went so far as to warn that “the development of full artificial intelligence could spell the end of the human race.”
So why hasn’t Big Tech acted to quell those fears? An industry that had public trust as a primary concern would have taken steps to develop safety protocols to offset the potential dangers.
But today’s tech leaders instinctively recoil from establishing any regulations or industry standards that hinder their ability to maximize profits, which explains why the United States still doesn’t have an internet Bill of Rights to protect consumers. And, sadly, Congress has proven itself incapable of regulating technology to protect the public.
Yet, last week, hundreds of technology leaders and researchers, including Steve Wozniak and Elon Musk, signed on to a letter calling for a six-month pause on advanced research in AI to come up with safety protocols and governance systems to rein in potential risks.
To which we say, better late than never.
But let’s get real. A pause of any kind is unenforceable. Trying to ensure that U.S. firms were complying would be hard enough. But the United States and China are engaged in an AI arms race with global leadership at stake. Both nations are betting that AI will drive their economic and military growth. It’s hard to imagine AI firms in either country pausing their research for six months on faith alone.
Tech leaders don’t need a pause to act. They should move swiftly to organize a group of knowledgeable, independent experts and government officials to develop socially responsible protocols.
The signees seeking the six-month pause say that powerful AI systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” That’s a laudable goal, but it’s naive. Who will determine what is “positive,” given that even some of the most positive technologies have negative side effects?
The signees have valid concerns, such as whether we should let machines “flood our information channels with propaganda and untruth,” given social media’s ongoing inability to rein in misinformation. It’s hard to overlook the irony of that statement coming from, among others, Musk, who, since acquiring Twitter, has undone many of the safeguards designed to guard against such misinformation.
That said, we welcome the serious attention he and the others are bringing to the dangers of AI and to developing meaningful guidelines. The guidelines should center on making systems accurate, secure, transparent, trustworthy and protective of privacy. The National Institute of Standards and Technology in the U.S. Department of Commerce has developed an “Artificial Intelligence Risk Management Framework” that could serve as a starting point for the effort. Mandated by Congress, it is designed for “voluntary use to address risks in the design, development, use and evaluation of AI products, services and systems.”
The technology industry and the nation have a lot riding on the success of artificial intelligence. The AI global market is expected to generate nearly $200 billion this year and is projected to generate $1.8 trillion by 2030. That creates the potential for unprecedented changes in the way people live and work — for better or for worse.
The technology leaders’ call for action is overdue. It’s time for them to walk the talk. Now.