Santa Fe New Mexican

Robots trained in artificial intelligence became racist and sexist

The machines responded to words like ‘homemaker’ and ‘janitor’ by choosing women and people of color

By Pranshu Verma

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more ... and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

But so far, robots have escaped much of that scrutiny, perceived as more neutral, researchers say. Part of that stems from the sometimes limited nature of tasks they perform: for example, moving goods around a warehouse floor.

Abeba Birhane, a senior fellow at the Mozilla Foundation who studies racial stereotypes in language models, said robots can still run on similar problematic technology and exhibit bad behavior.

“When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems,” she said. “That means the damage they’re doing can go unnoticed, for a long time to come.”

Meanwhile, the automation industry is expected to grow from $18 billion to $60 billion by the end of the decade, fueled in large part by robotics, Rogers said. In the next five years, the use of robots in warehouses is likely to increase by 50 percent or more, according to the Material Handling Institute, an industry trade group. In April, Amazon put $1 billion toward an innovation fund that is investing heavily in robotics companies. (Amazon founder Jeff Bezos owns The Washington Post.)

The team of researchers studying AI in robots, which included members from the University of Washington and the Technical University of Munich in Germany, trained virtual robots on CLIP, a large AI model that links images and text, created and unveiled by OpenAI last year.

The researchers gave the virtual robots 62 commands. When researchers asked robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than white men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than white men. In actuality, scientists said, the robots should not have responded, because they were not given information to make that judgment.
