Neural networks can do a lot – let’s not let them do this
It’s not often researchers scare themselves, but Michal Kosinski and Yilun Wang created a neural network that “disturbed” them so much they nearly didn’t release it.
Many think they shouldn’t have. The Stanford University researchers trained a neural network to analyse pairs of photos and guess which face belonged to a gay person and which to a straight one. The paper (pcpro.link/278gay), published in the Journal of Personality and Social Psychology, claims the neural network was accurate significantly more often than humans.
The neural network achieved accuracy of 91% for men and 83% for women, well above the 61% and 54% rates managed by human judges.
The researchers admit in the paper’s introduction that the “findings expose a threat to the privacy and safety of gay men and women”. Indeed, imagine the abuses of such a technology, given that eight countries treat homosexual sex as a crime punishable by death.
Kosinski and Wang’s aim isn’t to brag about the success of their neural network, but to act as a call to action for policy-makers and a warning to LGBTQ communities “of the risks that they are facing”.
It’s also a warning that we’re training our own biases into AI. Neural networks have to be taught before they’re useful, and they absorb our assumptions along with the training data. For this project, the researchers used photos from a dating website – itself raising a host of ethical issues – and relied on how the singletons on the site self-identify. There are plenty of gay people who, for good reason, don’t describe themselves as such, as well as a host of other sexual orientations the researchers decided to ignore, likely skewing the data.
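To see how that kind of labelling bias plays out, here’s a minimal sketch – not the researchers’ actual pipeline, just an illustration on made-up data, assuming Python with numpy and scikit-learn – of a classifier trained on self-reported labels where part of one group keeps its identity hidden. The model comes out looking better against the labels it was trained on than against the truth, and misclassifies most of the undisclosed subgroup.

```python
# Illustrative sketch only: how self-reported labels bias a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical groups whose features differ only modestly.
n = 1000
features = rng.normal(size=(2 * n, 5))
true_group = np.array([0] * n + [1] * n)
features[true_group == 1] += 0.5

# Self-reported labels: 30% of group 1 doesn't disclose and is recorded
# as group 0 - mirroring daters who don't self-identify as gay.
reported = true_group.copy()
undisclosed = (true_group == 1) & (rng.random(2 * n) < 0.3)
reported[undisclosed] = 0

model = LogisticRegression().fit(features, reported)

# The model scores better against the labels it was trained on...
print("accuracy vs reported labels:", model.score(features, reported))
# ...than against reality, and misclassifies most of the hidden subgroup.
print("accuracy vs true groups:", model.score(features, true_group))
print("share of undisclosed classified as group 1:",
      model.predict(features[undisclosed]).mean())
```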
Let’s learn these lessons before it’s too late. This research makes it clear that facial recognition combined with smart systems can be used to discriminate against people – so it’s safe to say that, one way or another, it will be. Preventing that should be the researchers’ next focus.