A “Moral Uncanny Valley”?

As I have commented before, these days there is a minor industry in what I call the Social Psychology of Robots:  exploring and recapitulating the last century of experimental social psychology adding in robots.  This has provided quite a bit of grist for our mills of speculation about how people think about and act toward machines.  And it provokes all kinds of thoughts about what is “human” about a robot, and what that means about our concept of “human”.

This winter, researchers from Helsinki reported a study of what they call a “moral uncanny valley” [2].  The study asks subjects to rate the “morality” of decisions attributed to an image of a human or to one of an array of human-resembling robots.  I.e., a situation and a decision are described, accompanied by an image of the entity that allegedly made the decision.  The participants rated the morality of the decision.

The main finding is that the appearance of the robot strongly influenced the ratings.  The intriguing thing is that the decisions of the most human-mimetic robots were evaluated as less moral than the same decisions attributed to robots with more abstract appearances.  In fact, the most abstract robot face was rated about the same as the image of a human.

This finding evokes the famous “uncanny valley”, the supposed discomfort in interactions with robots that are extremely close to human appearance.   Thus, these respondents are not necessarily averse to robots making decisions, but the appearance of the robots may influence this attitude.

This finding is notable in that the most human-appearing, the most ‘uncanny’, robots are the least accepted.  This is ironic, if not paradoxical.  Making robots faithfully mimic humans is generally done in order to improve human-robot interaction.

It isn’t clear what might cause this effect.  The researchers note that some theorists suggest that the almost-but-not-quite human appearance causes uncertainty and cognitive load, which is unpleasant.

If this hypothesis is generally true, then the ‘uncanny valley’ should decline with experience.  I.e., the more familiar the robot in question is, the less uncanny it should become.

I would note that, along this same line, it is possible that the specific stimuli may be ‘uncanny’, or simply more or less attractive, because of familiarity or previous context.  To the degree that a robot is associated with, say, a familiar fictional character, it might feel less threatening.  For example, specific robots from the Star Wars movies are well known, and have well-developed, highly moral personalities.

If some of the stimuli happen to resemble fictional or real-world experiences of the participants, they may carry positive or negative associations that have nothing to do with “uncanniness”.  I’ll note that the “uncanny” stimuli look, to me, like fictional robots from popular movies; in fact, some look to me like terrifying monsters from certain movies.

In other words, just as you would try to make the human image generic so that it does not resemble any popularly known person, the robot faces should be screened for public associations.

Finally, I’ll note in passing that all of the robots, and also the human stimuli, are recognizably white skinned.  From other studies, we suspect that dark-skinned versions of these stimuli might very well be evaluated much more negatively overall.  I don’t know if the uncanny effect would be seen, or would disappear, with, say, all brown-skinned stimuli.


  1. Michael Laakasuo, Tuire Korvuo, and Niina Niskanen, The appearance of robots affects our perception of the morality of their decisions, in University of Helsinki – News, February 19, 2021. https://www2.helsinki.fi/en/news/language-culture/the-appearance-of-robots-affects-our-perception-of-the-morality-of-their-decisions
  2. Michael Laakasuo, Jussi Palomäki, and Nils Köbis, Moral Uncanny Valley: A Robot’s Appearance Moderates How its Decisions are Judged. International Journal of Social Robotics, February 16, 2021. https://doi.org/10.1007/s12369-020-00738-6

 

Robot Wednesday
