OH THE INHUMANITY
Experts see scary future with open-source AI
Artificial-intelligence experts are raising alarms after Mark Zuckerberg said Meta plans to make advanced AI tools “widely available” to the public — despite warnings they could eventually pose a threat to humanity.
In a video posted to his Facebook account, Zuckerberg said the development and widespread release of artificial “general intelligence” — typically meaning AI systems with human-level cognitive abilities — is necessary to build “the next generation of services.”
“This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can so that everyone can benefit,” Zuckerberg said.
But Wendy Hall, a renowned UK-based computer scientist who serves on the United Nations’ AI advisory panel, told The Guardian Zuckerberg’s plans were “really very scary” given the potential risks of misuse.
“The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary,” Hall said. “In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it.”
“Open source” refers to the concept of making the underlying source code of a product available to all to see and use as they see fit.
Some experts are wary of open-sourcing AI, arguing that open-sourced AI tools would exacerbate risks such as the spread of misinformation, election meddling, job losses or even the loss of humanity’s control over society.
Elon Musk and ex-Google boss Eric Schmidt are among those who have warned that advanced AI could pose a world-ending risk without proper safeguards in place.
Meta CEO Mark Zuckerberg says he sees benefits and opportunities in making AI tools available to the public at large, but critics are warning that such a move could wind up ushering in a world filled with artificial beings as frightening as the killer robot from the “Terminator” films.
Up to governments
Hall noted that the achievement of true artificial general intelligence was “still many years away,” giving governments time to craft proper regulations for the burgeoning technology.
Another AI expert, Andrew Rogoyski of the UK’s University of Surrey, argued that regulators, not Meta, should decide whether open-sourcing was safe.
“There are deep and complex arguments about the merits of open-sourcing current AI models; pushing that into the realm of AGI could be world-saving or catastrophic,” Rogoyski told the outlet. “These decisions need to be taken by international consensus, not in the boardroom of a tech giant.”
Meta didn’t respond to a request for comment.
Zuckerberg appeared to hedge his bets during a separate interview with The Verge on Meta’s AI ambitions — telling the tech site that he had yet to make a final decision on whether to open-source advanced AI.
“For as long as it makes sense and is the safe and responsible thing to do, then I think we will generally want to lean towards open source,” Zuckerberg said. “Obviously, you don’t want to be locked into doing something because you said you would.”
Last year, Meta released an open-source version of Llama 2, its large-language AI model. As part of his plans to boost innovation at Meta, Zuckerberg said he has ordered the company’s two main AI units, FAIR and GenAI, to work more closely.
Meta is locked in intense competition with rivals such as Google and Microsoft-backed OpenAI to develop advanced AI tools.