Mint Hyderabad

OpenAI could win back lost trust by being less secretive

A company making powerful AI tools should be more transparent

- PARMY OLSON is a Bloomberg Opinion columnist covering technology.

The board scandal that OpenAI employees now call “The Blip” threatens to live up to its nickname. The company recently announced the findings of an independent legal review of CEO Sam Altman’s firing last November and framed the result as largely exonerating its actions. OpenAI’s new board “expressed its full confidence” in Altman’s leadership based on the law firm’s analysis. Much of the US tech industry has moved on, but the scandal was no blip, and the probe’s findings are worth examining.

OpenAI said the lawyers confirmed its board hadn’t fired Altman because of concerns about product safety or business finances. Instead, there had been a “breakdown in trust” between the board and Altman, and his conduct “did not mandate removal.” This is still a damning reflection of the broader cloak of secrecy that OpenAI has wrapped itself in as it releases powerful AI models. Trust is becoming ever more critical as a handful of opaque technology firms, including Google and Microsoft, control some of the most transformative innovations seen in years.

OpenAI’s own communications have undermined its trustworthiness. In response to a lawsuit from Elon Musk, it released early emails from executives that included an admission by its chief scientist that OpenAI was sharing details about its technology “for recruitment purposes” rather than out of a desire to serve humanity. Perhaps OpenAI knows its public framing is largely seen as a mirage.

But Altman could change that, repairing trust not just with his board but with the public, and putting the ‘open’ back in OpenAI. First, he could make the company as transparent as its name suggests, particularly around the data it uses to train its models. We know that OpenAI used 45 terabytes of plaintext data to build its GPT-3 model four years ago because it said so in a research paper. But it didn’t say which websites were used, and it was even more tight-lipped about the newer model underpinning ChatGPT, citing “the competitive landscape [and] safety implications.”

Releasing training data details wouldn’t be unsafe. It would just make it easier for researchers to scrutinize a tool that has shown racial and gender biases in recruitment decisions, according to a recent Bloomberg News investigation. OpenAI could reveal which websites are used for training and, better yet, make that information easy to access, so people who aren’t programmers can explore it for harmful side effects. That would open the door for “social scientists, regulators, journalists and rights activists,” says Margaret Mitchell, a former co-lead of Google’s ethical AI team and current chief ethics scientist at AI firm Hugging Face. OpenAI has safety mechanisms and filters in place to make sure ChatGPT doesn’t say offensive things, but it doesn’t say how those systems work. That should change, according to Sasha Luccioni, another AI scientist at Hugging Face.

Finally, the company already has some licensing deals with organizations like Axel Springer and the Associated Press, which it pays for special access to training data. But there are thousands of artists and writers whose data has been scraped without consent and for free, and who don’t have the resources to strike deals. It’s probably unrealistic to ask OpenAI to compensate some of those creators, since it would be expensive and time-consuming to set up. But at a minimum, it could offer a system for them to opt out of having their content used to train AI, Mitchell says.

A spokeswoman for OpenAI declined to comment.

Sam Altman has said he’s “pleased this whole thing is over,” according to Axios, in reference to the investigation by law firm WilmerHale. Now that the two board members who forced him to leave are gone, he’s working with a more corporate-friendly board that includes executives from companies like Sony Group, Instacart and Salesforce, which could see OpenAI’s AI development accelerate faster than ever. That, plus the broader shift away from OpenAI’s non-profit roots, makes its introduction of a new whistleblower hotline seem hollow.

Altman was fired because the original board followed its fiduciary duty to “humanity” to the letter, believing he had compromised OpenAI’s mission by not being sufficiently candid with its members. His exoneration strengthens his position at the company, but it also makes it harder for staffers who see potential harms to speak up. It is the longest of long shots, but if Altman were to take at least one step toward becoming more transparent, it would go some way to restoring confidence in the enterprise. Trust is what society relies on when just a few large companies control the development of AI; currently, it’s in short supply.

Photo: AFP | OpenAI CEO Sam Altman has become a controversial figure
