Baltimore Sun

Facebook’s algorithms are too big to fix

By Cathy O’Neil

Cathy O’Neil (Twitter: @mathbabedotorg) is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

Congressional testimony by whistleblower Frances Haugen drove home an important message: Facebook is actively harming millions, perhaps billions, of users around the world with a host of algorithms designed to boost engagement and advertising revenue.

So, what should be done? The sheer size and complexity of the task preclude a simple answer. But as someone who makes a living auditing algorithms and seeking to limit the damage they can do, I have some ideas.

When I take on a job, I first consider whom the algorithm affects. The stakeholders of an exam-grading algorithm, for example, might include students, teachers and schools, as well as subgroups defined by race, gender and income. Usually there’s a tractable number of categories, like 10 or 12. Then I develop statistical tests to see if the algorithm is treating, or is likely to treat, any groups unfairly — is it biased against Black or poor students or against schools in certain neighborhoods? Finally, I suggest ways to mitigate or eliminate those harms.
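To make that concrete, here is a rough sketch of the kind of disparity check such an audit might include. It is purely illustrative: the group labels, the pass/fail counts and the choice of a chi-squared test are my assumptions, not details of any actual audit.

```python
# Illustrative only: a simple disparity check across stakeholder groups
# for a hypothetical exam-grading algorithm. The counts are made up;
# a real audit would use the algorithm's actual outputs.
from scipy.stats import chi2_contingency

# Rows: student groups; columns: [passed, failed] under the algorithm's grades.
outcomes = {
    "group_a": [420, 80],
    "group_b": [310, 190],
    "group_c": [390, 110],
}

table = list(outcomes.values())
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"chi2={chi2:.1f}, p={p_value:.4f}")
if p_value < 0.01:
    # Pass rates differ more than chance alone would explain; a human
    # auditor would then dig into why, and suggest mitigations.
    print("Pass rates differ significantly across groups; investigate for bias.")
```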

Unfortunately, this approach is difficult to apply to the Facebook newsfeed algorithm — or, for that matter, to the algorithms underlying just about any large social network or search engine, such as Google. They’re just too big. The list of potential stakeholders is endless. The audit would never be complete, and it would invariably miss something important.

I can’t imagine, for example, that an auditor could have reasonably anticipated in 2016 how Facebook would become a tool for genocide in Myanmar, and developed a way to head off the spread of misinformation about the country’s Muslim minority. This is why I’ve long said that fixing the Facebook algorithm is a job I would never take on.

That said, with the right kind of data, authorities can seek to address specific harms. Suppose the Federal Trade Commission made a list of outcomes it wants to prevent. These might include self-harm among teen girls, radicalization of the type that led to the Capitol riots, and undermining trust in electoral processes. It could then order Facebook to provide the data needed to test whether its algorithms are contributing to those outcomes — for example, by seeking causal connections between certain types of posts and young female users’ reported concerns about body image. To provide a robust picture, there should be multiple measures of each phenomenon, with daily or weekly updates.
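A minimal sketch of what one such monitoring test could look like appears below. The data fields, cohort definitions and thresholds are hypothetical, and a simple two-sample comparison like this only shows association; establishing the causal connection I describe would require a proper experimental or panel design on the real, regularly updated data.

```python
# Illustrative only: compare a reported harm measure between users with high
# and low exposure to a given category of posts. All data here is simulated
# stand-in data; a regulator would request real measurements instead.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Weekly self-reported body-image concern scores (0-10) for two exposure cohorts.
low_exposure = rng.normal(loc=4.0, scale=1.5, size=500)
high_exposure = rng.normal(loc=4.6, scale=1.5, size=500)

t_stat, p_value = ttest_ind(high_exposure, low_exposure)
gap = high_exposure.mean() - low_exposure.mean()
print(f"mean difference={gap:.2f}, p={p_value:.4f}")

# A gap that is significant and persistent across weekly updates would be
# the kind of evidence worth escalating to a causal investigation.
```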

Under such a system, Facebook would be free to go about its business as it sees fit. There would be no need to amend Section 230 of the Communications Decency Act to make the company edit or censor what’s published on its network. But the FTC would have the evidence it needed to hold the company liable for the effects of its algorithms — for the consequences of its efforts to keep people engaged and consuming. This, in turn, could help compel Facebook itself to act more responsibly.

Facebook has had ample opportunity to get its act together. It won’t do it on its own. The limited monitoring I propose is far from perfect, but it’s a way to get a foot in the door and at least start to hold big tech accountable.
