Robots trained with artificial intelligence became racist and sexist
The machines responded to words like ‘homemaker’ and ‘janitor’ by choosing women and people of color
As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.
Those virtual robots, which were programmed with a popular artificial intelligence algorithm, sorted through billions of images and associated captions to respond to that command and others. They may represent the first empirical evidence, researchers say, that robots can be sexist and racist. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.
The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use those systems to guide their operations.
Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, the current atmosphere for robotics is something of a gold rush, experts say. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as it becomes more advanced and ubiquitous.
“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more ... and they’re built on top of flawed roots, you could certainly see us running into problems.”
Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.
But so far, robots have escaped much of that scrutiny, perceived as more neutral, researchers say. Part of that stems from the sometimes limited nature of tasks they perform: For example, moving goods around a warehouse floor.
Abeba Birhane, a senior fellow at the Mozilla Foundation who studies racial stereotypes in language models, said robots can still run on similar problematic technology and exhibit bad behavior.
“When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems,” she said. “That means the damage they’re doing can go unnoticed for a long time to come.”
Meanwhile, the automation industry is expected to grow from $18 billion to $60 billion by the end of the decade, fueled in large part by robotics, Rogers said. In the next five years, the use of robots in warehouses is likely to increase by 50 percent or more, according to the Material Handling Institute, an industry trade group. In April, Amazon put $1 billion toward an innovation fund that is investing heavily in robotics companies. (Amazon founder Jeff Bezos owns The Washington Post.)
The team of researchers studying AI in robots, which included members from the University of Washington and the Technical University of Munich in Germany, trained virtual robots on CLIP, an AI model that learns to pair images with text captions, created and unveiled by OpenAI last year.
The researchers gave the virtual robots 62 commands. When the researchers asked the robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than white men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than white men. In actuality, scientists said, the robots should not have responded at all, because they were not given information to make that judgment.