Mint Mumbai

Behind fears of open-source AI’s perils

- Belle Lin, feedback@livemint.com

Two of venture capital’s most prominent figures, Marc Andreessen and Vinod Khosla, spent the past several days throwing jabs on X over one of Silicon Valley’s most divisive topics: Should artificial intelligence be developed in the open or behind closed doors?

Proponents of open-source AI technologies, such as Andreessen, say they stand for things like open sharing of science, greater transparency, and a means to prevent Big Tech interests from monopolizing a powerful technology. Closed AI supporters, such as Khosla, say companies or other private entities offer a way to guard against its potential dangers and abuse.

Open-source AI is freely distributed for the public to build upon and share, whereas closed, or proprietary, AI is privately controlled and shared by its creators. But the two approaches aren’t mutually exclusive; they can exist together, as when companies build private systems on top of open-source code.

The debate on X was first spurred by Elon Musk’s lawsuit against OpenAI and its chief executive Sam Altman, and underscores the difficulty of finding clear answers to questions over the distribution and safety of AI—especially as regulators, Big Tech firms, scientists and governments still don’t know how far, and how quickly, the technology will develop.

Among the tech giants with a stake in the debate, Meta has championed open-source AI and released its Llama 2 model for the public to download and modify. Paris-based Mistral AI has released models with open “weights”, which are the numerical parameters that make up a model’s inner workings. Meanwhile, the industry’s biggest AI startups, OpenAI and Anthropic, both sell closed-source AI models.
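As a concrete illustration of what “download and modify” means in practice, the following is a minimal Python sketch, assuming the Hugging Face transformers library and Mistral’s openly published 7B checkpoint; both are illustrative assumptions, not tools named in this article:

```python
# Minimal sketch (illustrative, not from the article): running an
# open-weights model locally with the Hugging Face transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: Mistral's openly released checkpoint as the example model.
model_id = "mistralai/Mistral-7B-v0.1"

# The weights are downloaded to the local machine, which is what
# distinguishes open-weights distribution from API-only, closed access.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the model runs locally.
inputs = tokenizer("Open-source AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on the user’s own hardware, anyone can fine-tune or modify them—which is exactly the freedom, and, critics argue, the risk, at the heart of this debate.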

Andreessen posted on Saturday that Khosla was “lobbying to ban open-source”. The comment from Andreessen Horowitz’s co-founder came after Khosla voiced support for Altman and OpenAI in the wake of Musk’s lawsuit, which alleges both breached the company’s founding agreement to commit to public, open-source AI by prioritizing profit.

The Khosla Ventures founder, who is also a backer of OpenAI’s for-profit arm, responded that AI is akin to nuclear weapons, so open-sourcing it risks national security. Khosla’s recent post refers to a comment made by Ilya Sutskever, OpenAI’s technical visionary, that “it’s totally OK to not share the science”.

A Khosla Ventures spokesperson pointed to a prior Khosla post supporting open-source technologies, yet arguing that large AI models are a “national security and technology” advantage to be closely guarded.

Andreessen Horowitz didn’t respond to a request for comment.

Between both camps, what is generally agreed upon is that large language models—the algorithms that power ChatGPT and are trained on massive amounts of data—aren’t a fully developed technology. ChatGPT and other AI tools can spit out hallucinations, biased results, and toxic or offensive output. Plus, they are incredibly costly to use and train, and consume huge amounts of energy.

To some open-source supporters, such technical gaps in these AI models mean they must be developed in the open, among a community of scientists and academics, before they are closed up by commercial interests, and before the technology perhaps reaches artificial general intelligence, a hypothetical form of AI in which a machine can learn and think like a human.

“We believe that for the first time, we are deploying a technology at scale that we don’t truly understand,” said Ali Farhadi, CEO of the Allen Institute for AI, a non-profit research organization founded in 2014 by late Microsoft co-founder Paul Allen. “We don’t know how to control these systems.”

Farhadi and other open-source advocates are quick to point out that AI has been developed for decades by scientists sharing their research—well before the “transformer” model that underpins large language models was shared by Google in 2017.

Some US lawmakers agree with Khosla that freely distributed AI could aid its development by foreign adversaries, and should be protected accordingly. Core to this belief, shared among many closed-source AI companies, is that the technology presents an existential risk to humanity and could lead to catastrophe.

Altman has said OpenAI takes its safety obligations seriously and that AI should be developed with great caution, but also says it offers immense commercial possibilities.

The open-source movement in software, which began decades ago with the popularity of projects like Linux, offers some clues on where this iteration of the open versus closed debate could go: Open-source software underpins nearly every form of technology, including cloud computing, whose commercialization has helped make companies like Amazon into behemoths.

But it has also led to cybersecurity risks for businesses and governments, because open-source programs are easy to download and modify.

Closed and open-sourced technologies, experts say, have always coexisted. Ahmad Al-Dahle, Meta’s vice president of generative AI, considers it a “false dichotomy” that either side will win. “I think there’s room for both,” he said.

“Fundamentally, open-source will have a very important role,” said Ori Goshen, co-founder and co-CEO of AI21, an AI startup that builds proprietary models. “There is a world where even proprietary providers like ourselves today, the base models will become open source, but everything else will be your most treasured intellectual property.”

©2024 DOW JONES & COMPANY, INC.

Photo: Vinod Khosla and Marc Andreessen have had a ‘war’ on X over how generative AI should be developed and distributed.
Photo (Getty): Closed and open-sourced technologies, experts say, have always coexisted.
