The Guardian Australia

For truly ethical AI, its research must be independent from big tech

- Timnit Gebru

A year ago I found out, from one of my direct reports, that I had apparently resigned. I had just been fired from Google in one of the most disrespectful ways I could imagine.

Thanks to organizing done by former and current Google employees and many others, Google did not succeed in smearing my work or reputation, although they tried. My firing made headlines because of the worker organizing that has been building up in the tech world, often due to the labor of people who are already marginalized, many of whose names we do not know. Since I was fired last December, there have been many developments in tech worker organizing and whistleblowing. The most publicized of these was Frances Haugen’s testimony in Congress; echoing what Sophie Zhang, a data scientist fired from Facebook, had previously said, Haugen argued that the company prioritizes growth over all else, even when it knows the deadly consequences of doing so.

I’ve seen this happen firsthand. On 3 November 2020, a war broke out in Ethiopia, the country I was born and raised in. The immediate effects of unchecked misinformation, hate speech and “alternative facts” on social media have been devastating. On 30 October of this year, I and many others reported a clear genocidal call in Amharic to Facebook. The company responded by saying that the post did not violate its policies. Only after many reporters asked the company why this clear call to genocide didn’t violate Facebook’s policies – and only after the post had already been shared, liked and commented on by many – did the company remove it.

Other platforms like YouTube have not received the scrutiny they warrant, despite studies and articles showing examples of how they are used by various groups, including regimes, to harass citizens. Twitter and especially TikTok, Telegram and Clubhouse have the same issues but are discussed much less. When I wrote a paper outlining the harms posed by models trained using data from these platforms, I was fired by Google.

When people ask what regulations need to be in place to safeguard us from the unsafe uses of AI we’ve been seeing, I always start with labor protections and antitrust measures. I can tell that some people find that answer disappointing – perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the number one thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices. Thanks to the hard work of Ifeoma Ozoma and her collaborators, California recently passed the Silenced No More Act, making it illegal to silence workers who speak out about racism, harassment and other forms of abuse in the workplace. This needs to be universal. In addition, we need much stronger punishment of companies that break already existing laws, as with Amazon’s aggressive union busting. When workers have power, it creates a layer of checks and balances on the tech billionaires whose whim-driven decisions increasingly affect the entire world.

I see this monopolization of power outside big tech as well. I recently launched an AI research institute that hopes to operate under incentives that are different from those of big tech companies and the elite academic institutions that feed them. During this endeavor, I noticed that the same big tech leaders who push out people like me also control big philanthropy and the government’s agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute. And while there are some – albeit inadequate – laws that attempt to protect worker organizing, no such protections exist in the fundraising world.

So what is the way forward? In order to truly have checks and balances, we should not have the same people setting the agendas of big tech, research, government and the non-profit sector. We need alternatives. We need governments around the world to invest in communities building technology that genuinely benefits them, rather than pursuing an agenda set by big tech or the military. Contrary to big tech executives’ cold war-style rhetoric about an arms race, what truly stifles innovation is the current arrangement, in which a few people build harmful technology while others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future.

We need an independent source of government funding to nourish independent AI research institutes that can be alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them. Only when we change the incentive structure will we see technology that prioritizes the wellbeing of citizens – rather than a continued race to figure out how to kill more people more efficiently, or to make the most money for a handful of corporations around the world.

Timnit Gebru is the founder and executive director of the Distributed AI Research Institute (DAIR). She was formerly co-lead of Google’s Ethical AI team

Photograph: Marcio José Sánchez/AP – ‘California recently passed the Silenced No More Act, making it illegal to silence workers who speak out about racism, harassment and other forms of abuse in the workplace.’
