OpenAI could win back lost trust by being less secretive
A company making powerful AI tools should be more transparent
The board scandal that OpenAI employees call “The Blip” now threatens to live up to its nickname. The company recently announced the findings of an independent legal review of CEO Sam Altman’s firing last November and largely framed the result as an exoneration of its actions. OpenAI’s new board “expressed its full confidence” in Altman’s leadership based on the law firm’s analysis. Much of the US tech industry has moved on, but the scandal was no blip, and the probe’s findings are worth examining.
OpenAI said the lawyers confirmed its board hadn’t fired Altman over concerns about product safety or business finances. Instead, there had been a “breakdown in trust” between the board and Altman, and his conduct “did not mandate removal.” That vague explanation is a damning reflection of the broader cloak of secrecy OpenAI has wrapped itself in as it releases powerful AI models. Trust is becoming ever more critical as a handful of opaque technology firms, including Google and Microsoft, control some of the most transformative innovations seen in years.
OpenAI’s own communications have undermined its trustworthiness. In response to a lawsuit from Elon Musk, it released early emails from executives that included an admission by its chief scientist that OpenAI was sharing details about its technology “for recruitment purposes” rather than out of a desire to serve humanity. Perhaps OpenAI knows its public framing is widely seen as a mirage.
But Altman could change that, repairing trust not just with his board but with the public, and putting the ‘open’ back in OpenAI. First, he could make the company as transparent as its name suggests, particularly around the data it uses to train its models. We know OpenAI used 45 terabytes of plaintext data to build its GPT-3 model four years ago because it said so in a research paper. But it didn’t say which websites that data came from, and it was even more tight-lipped about the newer model underpinning ChatGPT, citing “the competitive landscape [and] safety implications.”
Releasing details about training data wouldn’t be unsafe. It would simply make it easier for researchers to scrutinize a tool that has shown racial and gender biases in recruitment decisions, according to a recent Bloomberg News investigation. OpenAI could reveal which websites it uses for training and, better yet, make that information easy to access, so that people who aren’t programmers can explore it for harmful side effects. That would open the door for “social scientists, regulators, journalists and rights activists,” says Margaret Mitchell, a former co-lead of Google’s ethical AI team and current chief ethics scientist at AI firm Hugging Face. OpenAI has safety mechanisms and filters in place to stop ChatGPT from saying offensive things, but it doesn’t say how those systems work. That should change, according to Sasha Luccioni, another AI scientist at Hugging Face.
Finally, the company already has licensing deals with organizations such as Axel Springer and the Associated Press, which it pays for special access to training data. But thousands of artists and writers have had their work scraped without consent or compensation, and they don’t have the resources to strike such deals. It’s probably unrealistic to expect OpenAI to compensate all of those creators, since doing so would be expensive and time-consuming to set up. But at a minimum, it could offer a system for them to opt out of having their content used to train AI, Mitchell says.
A spokeswoman for OpenAI declined to comment.
Sam Altman has said he’s “pleased this whole thing is over,” according to Axios, referring to the investigation by law firm WilmerHale. Now that the two board members who forced him out are gone, he’s working with a more corporate-friendly board that includes executives from companies like Sony Group, Instacart and Salesforce, which could see OpenAI’s development of AI accelerate faster than ever. That, plus the broader shift away from OpenAI’s non-profit roots, makes its introduction of a new whistleblower hotline seem hollow.
Altman was fired because the original board followed its fiduciary duty to “humanity” to the letter, believing he had compromised OpenAI’s mission by not being sufficiently candid with its members. His exoneration strengthens his position at the company, but it also makes it harder for staffers who see potential harms to speak up. It is the longest of long shots, but if Altman took even one step toward greater transparency, it would go some way to restoring confidence in the enterprise. Trust is what society relies on when just a few large companies control the development of AI; right now, it’s in short supply.