WOMEN OF AI

a difference.” The next day, Gebru found out she’d been terminated.

Google maintained in a public response that Gebru resigned. Google AI head Jeff Dean acknowledged that the paper “surveyed valid concerns about LLMs,” but claimed it “ignored too much relevant research.” When asked for comment by Rolling Stone, a representative pointed to an article from 2020 referencing an internal memo in which the company pledged to investigate Gebru’s exit. The results of the investigation were never released, but Dean apologized in 2021 for how Gebru’s exit was managed, and the company changed how it handles issues around research, diversity, and employee exits.

It was close to midnight that night when Gebru went public with a tweet: “I was fired … for my email to Brain women and Allies. My corp account has been cutoff. So I’ve been immediately fired :-)”

Safiya Noble happened to be online. She’d heard about Gebru and the paper. She’d been watching the whole thing from the sidelines from the moment Google announced it was forming an Ethical AI team. In 2018, Noble had written the book Algorithms of Oppression: How Search Engines Reinforce Racism, which looked at how negative biases against women of color are embedded in algorithms.

“I thought, ‘This is rich,’ ” she says. Google suddenly worrying about ethics? Its subsidiary YouTube was the slowest of the major platforms to take action against extremist content. “I was suspicious.”

Noble’s distrust of these systems started more than a decade ago, back in 2009, when she was getting her Ph.D. in library and information science at the University of Illinois. She watched as Google — which she’d always seen as an advertising tool from her time in the ad industry before pursuing her doctorate — began coming into libraries with giant machines to scan books, making them searchable online for the Google Books digitization project. Noble thought to herself: “They’re up to something.”

“I started having a hunch that the Google Book project was about training the semantic web technology they were working on,” she says, using the term for an effort to make more and more of the internet understandable to (and ingestible by) machines.

Noble’s hunch turned into a theory she still holds: The library project was not simply a book project but also a way to gather scannable information to fuel other initiatives. She thinks the data could have later gone on to be used as early training for what would eventually become Google’s Bard, the company’s LLM that launched this spring. When asked about Noble’s theory, a Google spokesperson told Rolling Stone, “Google’s Generative AI models are trained on data from the open web, which can include publicly available web data.” The company’s report on its PaLM 2 model, which was used to train Bard, lists books among the types of data used for training.

Noble’s research for Algorithms of Oppression started a few years earlier, when she used the search engine to look up activities for her daughter and nieces. When she typed in “Black girls,” the results were filled with racist pornography.

“That was like pulling one thread that’s poking out of a sweater,” she says. “You’re like, ‘If I could fix this, then I can move on to something else.’ But I started pulling it and the whole sweater unraveled; and here I am a decade later, and it’s kind of still the same.”

Noble and Gebru hadn’t crossed paths despite doing similar work — but when Noble saw Gebru’s tweet that night about Google, she was struck by how brave it was. She DM’d Gebru, “Are you OK?” From there, a friendship started.

GEOFFREY HINTON — the guy from the front page of the Times sounding the alarm on the risks of AI — was nowhere to be seen when his colleague Gebru was fired, she says. (Hinton tells Rolling Stone he had no interactions with Gebru while he was at Google and decided not to publicly comment on her firing because colleagues he knows well and trusts had conflicting views on the matter.) And when he was asked about that in a recent interview with CNN’s Jake Tapper, he said Gebru’s ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” Of course, nobody wants these things to take over. But the impact on real people, the exacerbation of racism and sexism? That is an existential concern.

When asked by Rolling Stone if he stands by his stance, Hinton says: “I believe that the possibility that digital intelligence will become much smarter than humans and will replace us as the apex intelligence is a more serious threat to humanity than bias and discrimination, even though bias and discrimination are happening now and need to be confronted urgently.”

In other words, Hinton maintains that he’s more concerned about his hypothetical than the present reality. Rumman Chowdhury, however, took Gebru’s concerns seriously, speaking out against the researcher’s treatment at Google that winter. And the following spring, Chowdhury was brought on to lead Twitter’s own ethics team — META (Machine Learning Ethics, Transparency, and Accountability). The idea was to test Twitter’s algorithms to see if they perpetuated biases.

And they did. Twitter’s image-cropping algorithm, it turned out, focused more on the faces of white women than the faces of people of color. Then Chowdhury and her team ran a massive-scale, randomized experiment from April 1 to Aug. 15, 2020, looking at a group of nearly 2 million active accounts — and found that the political right was more often amplified in Twitter’s algorithm. The effect was strongest in Canada (Liberals 43 percent versus Conservatives 167 percent amplified) and the United Kingdom (Labour 112 percent versus Conservatives 176 percent).

“Who gets to be the arbiter of truth? Who gets to decide what can and cannot be seen?” Chowdhury asks about that experiment. “So at the end of the day, the power of owning and running a social media platform is exactly that. You decide what’s important, and that is so dangerous in the wrong hands.”

Perhaps not surprisingly, when Elon Musk took over Twitter in 2022, Chowdhury’s team was eliminated.

For years, the driving force behind Chowdhury’s work has been advocating for transparency. Tech companies, especially those working in and around AI, hold their codes close to the vest. Many leaders at these firms even claim that elements of their AI systems are unknowable — like the inner workings of the human mind, only more novel, more dense. Chowdhury firmly believes this is bullshit. When codes can be picked apart and analyzed by outsiders, the mystery disappears. AIs no longer seem like omniscient beings primed to take over the world; they look more like computers being fed information by humans. And they can be stress-tested and analyzed for biases. LLMs? Once you look closer, it’s obvious they’re not some machine version of the human brain — they’re a sophisticated application of predictive text. “Spicy autocorrect,” Chowdhury and her colleagues call it.

In February, Chowdhury founded Humane Intelligence, a nonprofit that uses crowdsourcing to hunt for issues in AI systems. In August, with support from the White House, Humane Intelligence co-led a hackathon in which thousands of members of the public tested the guardrails of models from eight major large-language-model companies, including Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability AI. They looked to figure out the ways the chatbots can be manipulated to cause harm, whether they can inadvertently release people’s private information, and why they reflect back biased information scraped from the internet. Chowdhury says the most important piece of the puzzle was inviting as diverse a group as possible so they could bring their own perspectives and questions to the exercise.

A person’s particular perspective shades what they worry about when it comes to a new technology. The new class of so-called AI Doomers and their fears of a hypothetical mutation of their technology are good examples.

“It is unsurprising that if you look at the race and, generally, gender demographics of Doomer or existentialist people, they look a particular way, they are of a particular income level. Because they don’t often suffer structural inequality — they’re either wealthy enough to get out of it, or white enough to get out of it, or male enough to get out of it,” says Chowdhury. “So for these individuals, they think that the biggest problems in the world are: Can AI set off a nuclear weapon?”

GARBAGE IN, GARBAGE out. If you feed a machine-learning system bad or biased data — or if you’ve got a monolithic team building the software — it’s bound to churn out skewed results. That’s what researchers like Chowdhury, Buolamwini, Noble, and Gebru have been warning about for so long.

Seeta Peña Gangadharan, a London School of Economics professor, has been raising a different set of concerns. She’s worried that AI and its derivatives could push marginalized communities even further to the edge — to the point of locking them out.

We all know how annoying it is when you get stuck talking to some automated system when you’re returning a pair of jeans or changing a plane ticket. You need a human’s help; there’s no menu option to get it. Now imagine getting trapped in that same unhelpful loop when you’re trying to get welfare benefits, seek housing, apply for a job, or secure a loan. It’s clear that the impacts of these systems aren’t evenly felt, even if all that garbage is cleaned up.

Gangadharan co-founded Our Data Bodies, a nonprofit that examines the impact of data collection on vulnerable populations. In 2018, a member of her team interviewed an older Black woman with the pseudonym Mellow who struggled to find housing through the Coordinated Entry System, which Gangadharan explains functions like a Match.com for the unhoused population of Los Angeles. Caseworkers would add her information to the system and tell her that she was ineligible because of a “vulnerability index” score. After appealing several times to no avail, Mellow cornered a city official at a public event; the official greenlighted a review to get her placed.

“I’ve been really concerned about the inability of humans generally, but members of marginalized communities specifically, to lose the capacity to refuse or resist or decline the technologies that are handed to them,” Gangadharan says.

“So with LLM and generative AI, we have a new, more complex, and more seemingly inevitable technology being thrust in our faces.… Agencies are going to turn to a tool that promises efficiencies and cost savings like AI. Right? They are also sold as tools that will eliminate human bias or human error. These institutions, whether government or private institutions, they’re going to rely on these tools more and more. What can end up happening is that certain populations become the guinea pigs of these technologies, or conversely, they become the cheap labor to power these technologies.”

NOBLE, GEBRU, BUOLAMWINI, Chowdhury, and Gangadharan have been calling for regulation for years, ever since they saw the harm automated systems do to marginalized communities and people of color. But now that those harms could extend to the broader population, governments are finally demanding results. And the AI Doomers are stepping in to tackle the problem — even though they stand to make a fortune from it. At least, that’s what they want you to think.

President Biden met with some of the AI Doomers in July, and came up with a series of voluntary, nonbinding measures that “seem more symbolic than substantive,” The New York Times noted. “There is no enforcement mechanism to make sure companies follow these commitments, and many of them reflect precautions that AI companies are already taking.” Meanwhile, the Doomers are quietly pushing back against regulations, as Time reported OpenAI did by lobbying to water down the EU’s landmark AI legislation.

“There is such a significant disempowerment narrative in Doomer-ism,” Chowdhury says. “The general premise of all of this language is, ‘We have not yet built but will build a technology that is so horrible that it can kill us. But clearly, the only people skilled to address this work are us, the very people who have built it, or who will build it.’ That is insane.”

Gebru spent the months following her Google fiasco dealing with the resulting media storm, hiring lawyers and fending off stalkers. She lost weight from the stress. Handling the fallout became a full-time job.

When it was time to decide what to do next, she knew she didn’t want to return to Silicon Valley. Gebru opened the Distributed AI Research Institute (DAIR), which focuses on independent, community-driven research into technologies — away from Big Tech’s influence. She prioritized recruiting not just researchers but labor organizers and refugee advocates — people she’d “never be able to hire in academia or industry because of all … the gatekeeping that makes sure these kinds of people don’t get to influence the future of technology.”

Gebru and her new colleagues focus their research on uncovering and mitigating the harms of current AI systems. One of her research fellows, Meron Estefanos, is an expert in refugee advocacy who looks at how AI is applied to marginalized groups, such as the AI-based lie-detection systems the European border agency Frontex is using with refugees. (The recent EU AI Act does not include protection of refugees, migrants, or asylum seekers.) By interviewing vulnerable communities that have been harmed by AI, DAIR can provide early warnings about what is to come for the greater population once the systems are rolled out more widely. They’ve reported on exploited workers fueling AI systems, like data laborers in Argentina exposed to disturbing images and violent language while reviewing content flagged as inappropriate by an algorithm.

Noble is on the advisory committee for DAIR and founded her own organization, the Center on Race and Digital Justice, which aims to investigate civil and human rights threats stemming from unregulated technology. She also started an equity fund to support women of color and is publishing a book on the dangers and harms of AI. Chowdhury’s hackathon showed the power of transparency and letting diverse voices into the conversation. Buolamwini’s Algorithmic Justice League looks at the harms caused by the TSA’s expansion of facial-recognition technology to 25 airports across the U.S. Gangadharan is studying surveillance, including AI-enabled, automated tools at Amazon fulfillment centers and their health effects on workers.

There are a few things they all want us to know: AI is not magic. LLMs are not sentient beings, and they won’t become sentient. And the problems with these technologies aren’t abstractions — they’re here now and we need to take them seriously today.

“People’s lives are at stake, but not because of some super intelligent system,” Buolamwini says, “but because of an overreliance on technical systems. I want people to understand that the harms are real, and that they’re present.”

This time, let’s listen.
