The Hindu (Kolkata)

How the cracks in OpenAI’s foundation reignited mistrust in Sam Altman

A string of researchers working on AI policy and governance at the tech company have quit in succession. For a company that started as a non-profit, OpenAI's lack of transparency has emerged as a more serious issue than its lackadaisical approach to future AI safety

- Poulomi Chatterjee

In November last year, over the two-day snafu when OpenAI chief Sam Altman was fired and then reinstated, perceptions of him were dramatically different. Mr. Altman, who had led the company into spearheading an artificial intelligence changeover with the release of ChatGPT, couldn't have seemed more adored. OpenAI employees collectively flooded X with posts saying, "I love OpenAI" in what was seen as an uprising against the decision of the OpenAI board. In the week gone by, however, much of the goodwill towards Mr. Altman seems to have evaporated. And the board's statement that called Mr. Altman "not consistently candid", made while announcing his firing, has returned with a boomerang effect.

Concerns over AI safety

OpenAI's rough week started with the departure of Ilya Sutskever, the co-founder and former chief scientist at the company. Mr. Sutskever, a key member of the team that built ChatGPT, had surprisingly backed the three board members who voted to fire Mr. Altman. The speculation was that Mr. Altman's views on AI safety were very different from the board's, which was worrying given the momentum of AI development. Since Mr. Altman's reinstatement, Mr. Sutskever has practically vanished from public view.

AI safety was seemingly of importance to Mr. Sutskever, who formed the 'superalignment' team at the company in July last year. Mr. Sutskever co-led the team with Jan Leike, with the goal of shepherding superintelligence so it stayed on track, with its reins firmly in human hands, by 2027. Aside from alignment, the team would also be "improving the safety of current models like ChatGPT, as well as understanding and mitigating other risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others," the announcement read.

And for an ambition this lofty, the company said it would commit “20% of the compute we’ve secured to date over the next four years” for the initiative.

Last week, Mr. Sutskever waved goodbye to the company he co-founded.

Two days later, Mr. Leike, a longtime researcher at OpenAI, announced his resignation as well, saying he had reached a dead end after continuous disagreements with "OpenAI leadership about the company's core priorities." Signalling that the promised share of compute wasn't granted to the team, Mr. Leike expressed concern that in the recent past "safety culture and processes have taken a backseat to shiny products." Shortly after, the team, which still had more than 25 people, was disbanded.

A Fortune report noted that there was no specification of when and how the 20% compute would be distributed: equally over the four-year period, 20% every year, or an arbitrary amount each year totalling 20%? Regardless, the shortfall was reason enough for Mr. Sutskever and Mr. Leike to quit.

String of resignations

Even as rumblings of discord had just started, a few more researchers working on AI policy and governance quit soon after. Cullen O'Keefe quit his role as research lead on policy in April. Daniel Kokotajlo, who had been working on the risks around AI models, quit and wrote on a forum that he "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI."

Gretchen Krueger, another policy researcher, shared that she had resigned from the company on May 14. "One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this," her post on X read.

Severe non-disparagement policies

For a company that started as a non-profit, OpenAI's lack of transparency has emerged as a more serious issue than its lackadaisical approach to future AI safety.

On May 17, Vox reported that former employees had been pressured to sign lengthy exit documents that restricted them from ever speaking negatively about the company if they wanted to retain their vested equity. Leaked emails showed that employees asking for more time to review the documents or seek legal counsel weren't given any leeway. "The General Release and Separation Agreement requires your signature within 7 days," read one reply to an employee who had requested another week. Mr. Altman professed on X that he had been unaware of this clause and apologised for it. The backlash to these heavy-handed tactics forced the company to backtrack and withdraw the non-disparagement clause.

But this might not be the end of the saga.

Jacob Hilton, a researcher at the Alignment Research Center who quit OpenAI a year ago, tweeted on X that tech companies are responsible for protecting researchers who speak about the technology in the public interest, given how powerful it is. Mr. Hilton, who had also signed the NDA lest he lose his equity, said that while he had received a call from OpenAI management about the change in policy, he would feel more secure if "non-retaliation" against ex-employees were legally enforceable, since the company could still retaliate by "preventing them from selling their equity, rendering it effectively worthless for an unknown period of time."

“I invite OpenAI to reach out directly to former employees to clarify that they will always be provided equal access to liquidity, in a legally enforceabl­e way. Until they do this, the public should not expect candour from former employees,” he tweeted.

ScarJo vs OpenAI

Hollywood actor Scarlett Johansson's accusations against Mr. Altman deepened public mistrust further. After OpenAI's latest demo of ChatGPT last week, murmurs started that the voice of the AI assistant Sky was eerily close to Ms. Johansson's voice in the sci-fi film Her. Ms. Johansson released a statement saying Mr. Altman had reached out to her twice requesting to use her voice for Sky, and that she had refused both times. Even more damning was Mr. Altman's own post on X when the demo released, simply saying "her."

While Mr. Altman later provided proof that the actress who voiced Sky wasn't directed to imitate Ms. Johansson, the collective evidence suggests that the cracks in OpenAI's foundation run deep, and that Mr. Altman and his company are not, in fact, "consistently candid."

In troubled waters: OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21.
