Psychologists have documented that human perception is highly unreliable, and perceptions about people are especially prone to a variety of biases, errors, and logical shortcuts. There are many perceptual cues that people (and I mean all people) use to judge other people, often unconsciously. Unfortunately, at the top of the list of perceptual traps is skin color: people everywhere are highly susceptible to making inferences and generalizations based on a person’s skin color.
This spring, researchers from the HIT Lab NZ (famous for groundbreaking augmented reality work) and colleagues elsewhere reported that a similar effect appears in the perception of robots.
“Determining whether people perceive robots to have race, and if so, whether the same race-related prejudices extend to robots, is thus an important matter.” (Bartneck et al. 2018, p. 196)
In one part of the study, participants were overwhelmingly willing to ascribe a race to a robot, with only 11% choosing “does not apply”! (Sigh.)
The study also found a bias very similar to those seen in studies using images of humans: dark-skinned robots were treated much as dark-skinned humans are, and differently from light-skinned ones.
“Participants were able to easily and confidently identify the race of robots according to their racialization and their performance in the shooter bias task was informed by such social categorization processes.” (Bartneck et al. 2018, p. 201)
Racial categories are highly problematic, and certainly deeply shaped by culture. But however you define race for people, robots obviously cannot have a “race”. Yet people ascribe the label anyway.
“For us, the main question was if the participants choose anything but the “Does not apply” option.” (Bartneck et al. 2018, p. 203)
These findings are certainly significant given that humanoid and household robots are almost all white-skinned, in strong contrast to the real demographics of butlers and nannies.
One problem with this lack of diversity is the effect of social stereotypes. “If robots are supposed to function as teachers, friends, or carers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as White.” (Bartneck et al. 2018, p. 202). They raise a further point: sometimes a social robot should have a “race”. In those cases, the “race” must be reliably conveyed in order for the bot to function correctly in its social setting.
It is a bit surprising that so many people were willing and able to ascribe a “race” to a picture of a robot. (What is wrong with people????) In part, this must be due to the anthropomorphic design of the robots. I doubt that the same effect would be seen for, say, autonomous vehicles, no matter what color their skin. (But maybe not: people manage to ascribe personalities to speech interfaces, so who knows what human traits might be unconsciously assigned to different robots.)
Clearly, the coming technological utopia will be just as morally complex as the bad old days. As some have pointed out, exploiting enslaved sentient machines is no more moral than human slavery. One wonders how racial and other unconscious social cues might play into these interactions. (E.g., giving darker skins to “menial” robots: ick.)
And for all the faux anxiety about The Robot Uprising, I have a bad feeling that people will have much, much more fear of, and more violent reactions to, robots with different “racial” features. Many people will be much more subservient to White Male robots, whether they should be or not.
I even wonder whether these prejudices are a factor in the implicit competition between household robots and human servants. Are white-skinned robots an attractive alternative to dark-skinned humans? Double ick.
At the very least, designers of social robots must remain aware that they cannot avoid ancient social cues, definitely including the awful mess of gender and racial stereotypes.
On that point, it is rather worrying that the research was not well received by the conference reviewers, and that proposals for discussions were prohibited. I sympathise with the discomfort (look at how many “icks” appear above), but I don’t think that head-in-the-sand rejection is going to work.
This is important, dammit.
- Evan Ackerman, “Humans Show Racial Bias Towards Robots of Different Colors: Study,” IEEE Spectrum – Robotics, 2018. https://spectrum.ieee.org/automaton/robotics/humanoids/robots-and-racism
- Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel, “Robots and Racism,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), Chicago, IL, USA, 2018, pp. 196–204. https://dl.acm.org/citation.cfm?id=3171260