The Register Citizen (Torrington, CT)

Algorithms can be pretty crude toward women

- By Cathy O’Neil, Bloomberg

My friend Michael Harris, a fellow mathematician and author, recently sent me his psychological profile as compiled by Apply Magic Sauce, an algorithm operated by the Cambridge University Psychometrics Centre. This is a cousin of an algorithm that profiles people via access to their Facebook accounts, some version of which was used last year by the software company Cambridge Analytica to help sway U.S. voters.

Michael opted to input some of his writing, and was immediately profiled as “highly masculine,” among other personality traits. This intrigued me, not least because I don’t think of Michael as the most macho of men -- he’s got the heart of a poet -- so I immediately tried it on myself. A recent Bloomberg View column about fake news and algorithms was rated 99 percent masculine, while one about Snap’s business model was 94 percent masculine. Even my New Year’s resolutions were determined to be 99 percent masculine. Maybe because I discussed my favorite planar geometry app?

This gave me an idea. What would it say about another woman writing about math? I found a recent blog post by mathematician Evelyn Lamb discussing the Kakeya needle problem. Magic Sauce says: 99 percent masculine. What about a man writing about fashion? Just 1 percent masculine.

That’s not an enormous amount of testing, but I’m willing to wager that this model represents a stereotype, assigning the gender of the writer based on the subject they’ve chosen. Math and algorithms, from this point of view, are “male” topics, so I have been assigned a male “psychological gender.” Pretty crude.

Even so, I know enough about algorithms to know that an effect along these lines can arise naturally, depending on the training set for the model. Imagine that the creators collected Facebook updates and interactions, with associated labels indicating gender and age. The resulting data set, and the model trained on it, would reflect whatever bias is embedded in the population that created it.
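
To see how that can happen, consider a minimal sketch in Python, built on made-up toy data rather than anything Apply Magic Sauce actually uses: a classifier trained on gender-labeled posts ends up keying on topic words, so a woman writing about math gets scored as a man.

```python
# A minimal sketch, using made-up toy data, of how a gender model trained on
# labeled text absorbs the topic-gender correlations of its training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training sample: if most of the math/tech posts happen to be
# written by men, topic words become proxies for gender.
texts = [
    "proved a theorem about planar geometry and algorithms",
    "wrote a gradient descent optimizer and benchmarked the algorithms",
    "reviewed the spring fashion collections and the new fabrics",
    "posted a recipe and some tips for decorating the nursery",
]
labels = ["male", "male", "female", "female"]  # labels mirror the skewed sample

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A woman writing about math shares vocabulary with the "male" examples,
# so the model will tend to score her post as male.
print(model.predict(["a post about a theorem in planar geometry and algorithms"]))
```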

This is not a new phenomenon, nor is it constrained to social media data. A July 2016 paper written by researchers from Boston University and Microsoft Research uncovered similar sexist bias embedded in Google News articles. Specifically, they trained an algorithm with that data set to perform old-fashioned SAT analogy questions such as “man is to computer programmer as woman is to what?” To this question, the algorithm spit back “homemaker.” Oops.
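
The analogy test is easy to reproduce with off-the-shelf tools. The sketch below assumes you have downloaded the pretrained Google News word2vec vectors (GoogleNews-vectors-negative300.bin), the kind of embedding the researchers examined; the analogy then becomes simple vector arithmetic.

```python
# A sketch of the analogy test, assuming a local copy of the pretrained
# Google News word2vec file (GoogleNews-vectors-negative300.bin).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ?" translates into vector
# arithmetic: find the words closest to (computer_programmer - man + woman).
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))
```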

What’s interesting is that the researchers managed to pick apart this sexism and even adjust for it, so that the resulting algorithm would be able to answer SAT questions in a gender-neutral way. This is an important advance because machine learning doesn’t simply reflect back our current reality; it’s often used to create reality. If a machine-learning algorithm at a job-search website knows I’m a woman and decides to send me job listings that are considered “interesting to women,” then it not only propagates stereotypes but actually magnifies them, by preventing me from getting certain jobs. Better for me, and for our future society, that the algorithms are deliberately made gender neutral.
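
What the researchers’ adjustment amounts to, roughly, is identifying a “gender direction” in the embedding and removing it from words, such as occupations, that shouldn’t carry gender at all. Here is a bare-bones NumPy sketch of that neutralizing step, simplified from the paper’s method:

```python
# A rough sketch of the "neutralize" step: strip out the component of a word
# vector that lies along a learned gender direction, so the word no longer
# leans male or female along that axis.
import numpy as np

def neutralize(word_vec: np.ndarray, gender_direction: np.ndarray) -> np.ndarray:
    g = gender_direction / np.linalg.norm(gender_direction)
    projection = np.dot(word_vec, g) * g  # component along the gender axis
    return word_vec - projection          # the remainder carries no gender signal

# Illustration with the vectors loaded above. A single she/he difference is a
# crude stand-in for the gender direction; the paper averages over many pairs.
# gender_dir = vectors["she"] - vectors["he"]
# neutral_programmer = neutralize(vectors["computer_programmer"], gender_dir)
```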
