
Humans Are Never Colorblind

Psychologists have documented that human perception is highly unreliable, and perceptions about people are especially prone to a variety of biases, errors, and logical shortcuts.  There are many perceptual cues that people (and I mean all people) use to judge other people, often unconsciously. Unfortunately, at the top of the list of perceptual traps is skin color:  people everywhere are highly susceptible to making inferences and generalizations based on a person’s skin color.

This spring researchers from the HIT lab in NZ (famous for groundbreaking AR) and elsewhere report that a similar effect is seen in perception of robots.

“Determining whether people perceive robots to have race, and if so, whether the same race-related prejudices extend to robots, is thus an important matter.” ([2], p. 196)

In one part of the study the participants were willing to ascribe a race to a robot, with only 11% choosing “does not apply”!  (Sigh.)

The study also found a bias very similar to those seen in studies with images of humans: dark skinned robots were treated similarly to dark skinned humans, and differently from light skinned entities.

 “Participants were able to easily and confidently identify the race of robots according to their racialization and their performance in the shooter bias task was informed by such social categorization processes.” ([2], p. 201)
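For readers unfamiliar with the paradigm, the “shooter bias” task asks participants to rapidly “shoot” armed targets and hold fire on unarmed ones; bias shows up as systematically faster or slower responses depending on the target’s apparent race. Here is a minimal sketch of how such an effect is commonly scored. The trial records and numbers are invented for illustration, and this is not the authors’ analysis code.

```python
# Sketch of scoring a shooter-bias effect: compare reaction times for
# correct responses across the racialization x armed/unarmed conditions.
# All data below are invented; NOT the authors' analysis code.
from statistics import mean

# (robot_racialization, target_armed, reaction_time_ms, response_correct)
trials = [
    ("dark", True, 512, True), ("dark", False, 655, True),
    ("light", True, 580, True), ("light", False, 601, True),
    # ... many more trials per participant ...
]

def mean_rt(racialization, armed):
    rts = [rt for race, arm, rt, ok in trials
           if race == racialization and arm == armed and ok]
    return mean(rts)

# The classic bias pattern: faster to "shoot" armed dark-skinned targets,
# faster to hold fire for unarmed light-skinned targets.
bias = ((mean_rt("light", True) - mean_rt("dark", True)) +
        (mean_rt("dark", False) - mean_rt("light", False)))
print(f"combined bias score: {bias:.1f} ms (larger = stronger bias)")
```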

Racial categories are highly problematic, and certainly deeply affected by culture.  But however you define race for people, robots obviously cannot have a “race”.  Yet people ascribe the label.

“For us, the main question was if the participants choose anything but the “Does not apply” option.” ([2], p. 203)

These findings are certainly significant given that humanoid and household robots are almost all white skinned.  This is in strong contrast to real demographics of butlers and nannies.

One problem with this lack of diversity is the effect of social stereotypes.  “If robots are supposed to function as teachers, friends, or carers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as White.”  ([2], p. 202).  They raise another point: sometimes a social robot should have a “race”.  In these cases, the “race” must be reliably conveyed so that the bot can function correctly in its social setting.

It is a bit surprising that so many people were willing and able to ascribe a “race” to a picture of a robot.  (What is wrong with people????)  In part, this must be due to the anthropomorphism of the robot.  I doubt that the same effect would be seen for, say, autonomous vehicles, no matter what their skin color.  (But maybe not—people seem able to ascribe personalities to speaking interfaces, so who knows what human personalities might be unconsciously assigned to different robots.)

Clearly, the coming technological utopia will be just as morally complex as the bad old days. As some have pointed out, exploiting enslaved sentient machines isn’t any more moral than human slavery.  One wonders how racial and other unconscious social cues might play into these interactions.  (E.g., adding darker skins to “menial” robots–ick.)

And for all the faux anxiety about The Robot Uprising, I have a bad feeling that people will have much, much more fear of and violent reactions to robots with different “racial” features.  Many people will be much more subservient to White Male robots, whether they should be or not.

I even wonder whether these prejudices are a factor in the implicit competition between household robots and human servants.  Are white skinned robots an attractive alternative to dark skinned humans?   Double ick.

At the very least, designers of social robots must remain aware that they cannot avoid ancient social cues, definitely including the awful mess of gender and racial stereotypes.

On that point, it is rather worrying that the research was not well received by the conference reviewers, and proposals for discussions were rejected [1].  I sympathise with the discomfort (look at how many “icks” appear above).  But I don’t think that head-in-the-sand rejection is going to work.

This is important, dammit.


  1. Evan Ackerman, Humans Show Racial Bias Towards Robots of Different Colors: Study, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/humanoids/robots-and-racism
  2. Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel, Robots And Racism, in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago, IL, USA. p. 196-204. https://dl.acm.org/citation.cfm?id=3171260

 

Robot Wednesday

More Morphin Copters

Apparently, reconfiguring drones is an idea whose time has come.

Earlier I noted an admirably simple folding quad copter from a French team.  This week I read of a group in Tokyo who see your quad copter and raise you four—a snaky octocopter that can reconfigure in a zillion ways—the flying DRAGON [2].  So there!

This flying snake thing has modules connected by gimbals, each with two rotors, also on gimbals.  Altogether, the assembly can reach arbitrary 6DOF poses, just like a robot arm.  A flying robot arm.

“The researchers conceptualize this robot as a sort of overactuated flying arm that can both form new shapes and use those shapes to interact with the world around it by manipulating objects.” (from [1])

Reconfiguring in flight is, well, complicated.

A key feature of this design is that the rotors aren’t all in the same plane, as they would be in a rigid quadcopter. This is actually key to stability:  because the rotors point in multiple directions, the articulated body can fly and hover stably in many different configurations.

“To achieve an arbitrary 6DoF pose in the air, rotor disks cannot be aligned in the same plane, which is the case for traditional multirotors.” ([2], p. 1177)
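To see why the rotor geometry matters, it helps to write the mapping from individual rotor thrusts to the net force and torque on the body as a 6×N “allocation” matrix: if every rotor points the same way, that matrix cannot span all six degrees of freedom. A toy sketch (the geometry is invented for illustration, not DRAGON’s actual layout):

```python
# Toy illustration: map each rotor's thrust to the net body force/torque
# (a 6 x N "allocation" matrix) and check how many wrench directions it spans.
import numpy as np

def allocation_matrix(rotors):
    """rotors: list of (position, thrust_axis); returns a 6 x N matrix."""
    cols = []
    for p, a in rotors:
        p, a = np.asarray(p, float), np.asarray(a, float)
        a = a / np.linalg.norm(a)
        cols.append(np.hstack([a, np.cross(p, a)]))  # force, then torque
    return np.array(cols).T

# Conventional flat quadrotor: all thrust axes parallel to z.
flat = allocation_matrix([
    ((1, 1, 0), (0, 0, 1)), ((1, -1, 0), (0, 0, 1)),
    ((-1, 1, 0), (0, 0, 1)), ((-1, -1, 0), (0, 0, 1))])

# Deliberately artificial multi-directional arrangement: axes and lever
# arms chosen so forces and torques span all six directions.
multi = allocation_matrix([
    ((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0)), ((0, 0, 0), (0, 0, 1)),
    ((0, 1, 0), (1, 0, 0)), ((0, 0, 1), (0, 1, 0)), ((1, 0, 0), (0, 0, 1))])

print(np.linalg.matrix_rank(flat))   # 3: total thrust + roll/pitch only
                                     # (yaw would need rotor drag torque, not modeled here)
print(np.linalg.matrix_rank(multi))  # 6: any force/torque combination is reachable
```

The vectoring rotor pairs on each DRAGON link presumably serve exactly this purpose: keeping the allocation full rank no matter what shape the body takes.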

The control system is modular, featuring “spinal” and “link” controllers, as well as a high level processor.  Indeed, the device looks like nothing so much as a hovering spine.

The demo video shows an impressive maneuver, slinking through a small horizontal hole, unfurling while hovering and slipping link by link up through the floor.  Pretty cool.

What’s more, the software autonomously determines the transformation needed. Very impressive.
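To make the idea concrete, here is a toy version of “find a shape that fits through the opening” for a planar chain of equal links. It ignores link thickness, dynamics, and aerodynamics, and it is in no way DRAGON’s actual planner; the link length and gap width are made up.

```python
# Toy "shape that fits the gap" search for a planar chain of links.
import itertools, math

LINK = 0.4    # link length in meters (made up)
GAP = 0.45    # width of the opening to slip through (made up)

def footprint(joint_angles, link=LINK):
    """Bounding box of the chain: first link along +x, then each bend."""
    x = y = heading = 0.0
    xs, ys = [0.0], [0.0]
    for bend in (0.0,) + tuple(joint_angles):
        heading += bend
        x += link * math.cos(heading)
        y += link * math.sin(heading)
        xs.append(x); ys.append(y)
    return max(xs) - min(xs), max(ys) - min(ys)

# Brute-force search over a few candidate bend angles per joint.
choices = (-math.pi / 2, 0.0, math.pi / 2)
for angles in itertools.product(choices, repeat=3):   # 4 links, 3 joints
    w, h = footprint(angles)
    if min(w, h) <= GAP:                  # smallest dimension fits the slot
        print(f"bends {[round(a, 2) for a in angles]} give a "
              f"{w:.2f} x {h:.2f} m footprint; fits the {GAP} m gap")
        break
```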

This flying robot arm has the potential to be used as, well, a flying robot arm:  it can poke and grasp and carry cargo.

It will be interesting to see how this approach compares to swarms of rigid copters.  What are the advantages and disadvantages of a handful of really complicated snaky fliers versus a constellation of many simpler fliers?   (A swarm is probably harder to shoot down.)

I predict that this will soon be a moot question, because there will be swarms that can lock together into spines, and disperse again into drones, as needed.


  1. Evan Ackerman, Flying Dragon Robot Transforms Itself to Squeeze Through Gaps, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/drones/flying-dragon-robot-transforms-itself-to-squeeze-through-gaps
  2. M. Zhao, T. Anzai, F. Shi, X. Chen, K. Okada, and M. Inaba, Design, Modeling, and Control of an Aerial Robot DRAGON: A Dual-Rotor-Embedded Multilink Robot With the Ability of Multi-Degree-of-Freedom Aerial Transformation. IEEE Robotics and Automation Letters, 3 (2):1176-1183, 2018. https://ieeexplore.ieee.org/document/8258850/

 

Robot Wednesday

 

“Divine” Robots?

The twenty-first century is the era of robots entering every aspect of human life.  Among the most challenging, both technically and theoretically, are robots that seek to interact directly with humans in everyday settings.  Just how “human” can and should a non-human agent appear?  This question is being explored on a hundred fronts.

Robots have begun to enter into extremely intimate parts of human life, and, indeed, into intimate relationships.  But these have generally been secular settings: work, transportation, entertainment, and home.  Religious situations, broadly defined, have mostly been reserved for humans only.

Indeed, for some people, religious and sacred activities are, by definition, expressions of humanity and human relations.  For all the handwringing about robot uprisings, there has been little anxiety about robots taking over churches, temples, or mosques.

Maybe we should worry about that more than we do.

This summer researchers from Peru discuss robots that are not purely functional, not anthropomorphic, nor even zoomorphic, but “theomorphic” [3].  Their idea is that robots may be designed to represent religious concepts, in the same way that other “sacred objects” do.

“[A] theomorphic robot can be: – accepted favourably, because of its familiar appearance associated to the user’s background culture and religion; – recognised as a protector, supposedly having superior cognitive and perceptual capabilities; – held in high regard, in the same way a sacred object is treated with higher regard than a common object.” ([3], p.29)

The researchers note that the psychology that impels humans to create robots, and to endow them with imagined humanity, is similar to the drive to imagine supernatural divinities with human characteristics. The act of creating robots is a pseudo-divine enterprise, and interacting with intelligent robots is definitely akin to interacting with manifestations of supernatural forces.

“[R]obots always raised questions on divine creation and whether it can be replicated by humans,” (p. 31)

In many religious traditions, concepts of the divine have been represented by the most technically advanced art of the time, including stories, visual imagery, music, and architecture [2]. It seems inevitable that robots will be deployed in this role. Trovato et al. want to explore “design principles” for how this might be done.

Much of the paper is backward looking, unearthing precedents from the history of religious art and religious analysis of art.

One obvious design principle must be “a specific purpose that depends on the context and on the user” (p. 33).  This principle is critical for the ethical rule that the robot should not be intended to deceive.  It is one thing to create a sublime experience; it is entirely another to pretend that a mechanical object has supernatural powers.

They give a useful list of general use cases: religious education, preaching (persuasion), and company for religious practice (formal or informal ritual).  In addition, there may be a related goal, such as augmenting health care.  This is certainly something that will ultimately be incorporated as an option for, say, assistive devices for the elderly.

A paper about design principles must inevitably consider affordances.  In this case, the question is intimately related to the identification and use of metaphors and references to earlier practices. For example, a robotically animated statue may resemble traditional carvings, while its behavior and gestures evoke traditional rituals.  These features make the robot identifiably part of the tradition, and therefore evoke appropriate psychological responses.

Other dos and don’ts are phrased in pseudo-theological language.  “A theomorphic robot shall not mean impersonating a deity with the purpose of deceiving or manipulating the user.” (p.33)

The list of key principles is:

  • Identity
  • Naming
  • Symbology
  • Context
  • User Interaction
  • Use of The Light (I)

The role of symbolism is, of course, critical. A sacred object almost always has a symbolic association. In some cases, this is represented by imagery or other features of the object itself. It may also be conferred by context, such as a ritual of blessing to confer a sacred status to an otherwise mundane object.  Getting the symbolism right is pretty much the alpha and omega of creating any sacred object, including a robot.

The researchers are rather less concerned about human interaction than I expected.  After all, a robot can interact with humans in many ways, some of which mimic humans, and some of which are non-human and even super-human (e.g., great strength or the ability to fly).

A sacred robot must display its powers and communicate in ways that are consistent with the underlying values it is representing.  Indeed, there needs to be an implicit or explicit narrative that explains exactly what the relationship is between the robot’s actions and messages and the divine powers at play.  Getting this narrative wrong will be the undoing of these robots.  Imagine a supposedly sacred robot that misquotes scripture, or clumsily reveals the purely mundane source of what is supposed to be a “divine” capability.


It seems clear that digital technology will be incorporated into religious practices far more than has happened to date, in many ways.  Robots will likely be recruited for such uses, as this paper suggests.  So will virtual worlds and, unfortunately, Internet of Things technology (the Internet of Holy Things?  Yoiks!)

This paper made me think a bit (which is a good thing), and I think there are some important omissions.

Of course, the paper suffers a bit from a pretty restricted view of “religion”.  The research team exhibits personal knowledge of Buddhism and Roman Catholicism [1], with only sketchy knowledge of Islam, Judaism, other flavors of Christianity, and, of course, the many other variants (Wicca [4]? Scientology?)

There are general engineering principles that need to be taken seriously. The issues of privacy are bad enough for “smart toasters”; they become extremely touchy for “holy toasters”.  If we are unhappy having our online shopping tracked, we will be really, really unhappy if our prayers are tracked by software.

There are also problems of hacking, and of authentication in general.  However a holy robot is designed to work, it must be protected from malicious interference.  The ramifications of a robot that is secretly polluted with heresy are catastrophic.  Wars have been started over less.

At the same time, there are interesting opportunities for authentication protocols.  If a robot is certified and then ritually blessed by a religious authority, can we represent this with a cryptographic signature?  (Yes.)  In fact, technology being developed for provenance and supply chain authentication is just the thing for documenting a chain of sacred authority.  Cool!
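As a minimal sketch of the idea (using the third-party Python `cryptography` package; the robot, rite, and authority named here are entirely hypothetical): the religious authority signs a provenance record for the robot, and anyone holding the authority’s public key can verify it later.

```python
# Hypothetical "certified and blessed" record, signed by the authority.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The authority's long-term signing key (kept secret by the authority).
authority_key = ed25519.Ed25519PrivateKey.generate()

blessing = json.dumps({
    "robot_serial": "SANTO-0042",                 # made-up identifier
    "rite": "blessing of devotional objects",     # made-up rite
    "authority": "Diocese of Example",
    "date": "2018-06-01",
}, sort_keys=True).encode()

signature = authority_key.sign(blessing)

# Later, anyone with the authority's public key can check the record.
public_key = authority_key.public_key()
try:
    public_key.verify(signature, blessing)
    print("blessing record is authentic and unmodified")
except InvalidSignature:
    print("record has been altered or was never blessed")
```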

As for context and human interaction, it has to be recognized that there is a very serious “Eliza” situation here. There is surely a strong possibility of placebo effects, possibly driven by totally unintended events.  I predict that there will be cases of people coming to worship robots, not because they are designed to be “theomorphic”, but because the robot was part of a “miraculous” event or situation.

Finally, it is interesting to think about the implications of robots with superhuman capabilities in cognition, strength, or movement.  Even within more or less human abilities, robot bodies (and minds) are different and alien.  Why should a robot not be designed to demand the deference ordinarily given to divine entities?

This proposition violates Trovato et al.’s first rule, as well as their general ethics.  But who says robots or designers are bound by this norm?

A sufficiently powerful robot is indistinguishable from a god

…and has a right to be treated as one.


  1. Evan Ackerman, Can a Robot Be Divine?, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/can-a-robot-be-divine
  2. Norman M. Klein, The Vatican to Vegas: A History of Special Effects, New York, The New Press, 2004.
  3. Gabriele Trovato, Cesar Lucho, Alexander Huerta-Mercado, and Francisco Cuellar, Design Strategies for Representing the Divine in Robots, in 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago, IL, USA. p. 29-35. https://dl.acm.org/citation.cfm?id=3173386.317
  4. Kirsten C. Uszkalo, Bewitched and Bedeviled: A Cognitive Approach to Embodiment in Early English Possession. First ed, New York, Palgrave Macmillan, 2015.

 

Robot Wednesday

Interplanetary Copters!

The last decade has seen an incredible bloom in small autonomous and remote controlled helicopters, AKA drones. It isn’t far wrong to call them ubiquitous, and probably the characteristic technology of the 2010s. (Sorry Siri.)

It isn’t surprising, then, that NASA (the National Aeronautics and Space Administration) has some ideas about what to do with robot helicopters.

This month it is confirmed that the next planned Mars rover will have a copter aboard [3].  (To date, this appears to be known as “The Mars Helicopter”, but surely it will need to be christened with some catchy moniker. “The Red Planet Baron”?  “The Martian Air Patrol”? “The Red Planet Express”?)

This won’t be a garden variety quad copter.  Mars is not Earth, and, in particular, Mars “air” is not Earth air. The atmosphere is thin, real thin, which means less lift.  On the other hand, gravity is less than on Earth. The design will feature larger rotors spinning much faster than Terra copters.
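A back-of-envelope calculation shows the problem. For a fixed blade design, rotor thrust scales roughly with air density times disc area times tip speed squared, while the thrust required scales with local gravity. Plugging in rough round-number figures (approximations for illustration, not NASA’s design values):

```python
# Rough hover comparison, Earth vs. Mars (illustrative numbers only).
RHO_EARTH = 1.225   # kg/m^3, sea-level air density
RHO_MARS  = 0.020   # kg/m^3, typical Martian surface density
G_EARTH   = 9.81    # m/s^2
G_MARS    = 3.71    # m/s^2

# Thrust ~ rho * (disc area) * (tip speed)^2, required thrust ~ m * g,
# so the needed (area * tip_speed^2) scales as g / rho:
factor = (G_MARS / RHO_MARS) / (G_EARTH / RHO_EARTH)
print(f"Mars needs ~{factor:.0f}x more disc area x tip-speed^2 than Earth")

# If the rotors are made twice as big (4x the disc area), the tip speed
# still has to make up the rest:
tip_speed_ratio = (factor / 4) ** 0.5
print(f"with 4x the disc area, tip speed must be ~{tip_speed_ratio:.1f}x higher")
```

Hence the extra-large rotors spinning at unusually high speed.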

Operating on Mars will have to be autonomous, and the flying conditions could be really hairy. Martian air is not only thin, it is cold and dusty.  And the terrain is unknown.  The odds of operating without mishap are small. The first unexpected sand storm, and it may be curtains for the flyer.  Mean time to failure may be hours or less.

Limits of power and radios mean that the first mission will be short range. Unfortunately, a 2 kilo UAV will probably only do visual inspections of the surface, albeit with an option for tight close ups.  Still, it will extend the footprint of the rover by quite a bit, and potentially enable atmospheric sampling.


This isn’t the only extraterrestrial copter in the works.  Where Mars has a cold, thin atmosphere, Saturn’s moon Titan may have methane lakes and weather, and possibly an ocean under the icy surface.   Titan also has a cold, thick atmosphere and really low gravity—favorable for helicopters!

Planning for a landing on this intriguing world centers on a copter called “Dragonfly” [1, 2]. The Dragonfly design is a bit larger, and is an octocopter.  (It is noted that it should be able to continue to operate even if one or more rotors break.)  Dragonfly is also contemplated to have a nuclear power source—Titan is too far away for solar power to be a useful option.

Titan is a lot farther away than Mars, and communications will be difficult due to radiation and other interference.  The Dragonfly will have to be really, really autonomous.

Flying conditions on Titan are unknown, but theoretically could include clouds, rain, snow, storms, who knows.  The air is mostly nitrogen, laced with methane and hydrocarbon haze that could gum up the flyer. Honestly, mean time to failure could be zero—it may not be able to even take off.


Both these copters are significantly different from what you might buy at the hobby store or build in your local makerspace.  But prototypes can be flown on Earth, and the autonomous control algorithms are actually not that different from Earth bound UAVs. This is a good thing, because we have to program them here, before we actually send them off.

In fact, I think this is one of the advantages of small helicopters for this use. Flying is flying, once you adjust for pressure, density, etc. It’s probably not as tricky as driving on unknown terrain.  We should be able to design autonomous software that works OK on Mars and Titan.  (Says Bob, who doesn’t have to actually make it work.)


Finally, I’ll note that a mission to Titan should ideally include an autonomous submarine or, better, a tunneling submarine, to explore the lakes and cracks. I’m sure this is under study, but I don’t know that it will be possible on the first landing.


  1. Evan Ackerman, How to Conquer Titan With a Nuclear Quad Octocopter, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/space-robots/how-to-conquer-titan-with-a-quad-octocopter
  2. Dragonfly. Dragonfly Titan Rotorcraft Lander. 2017, http://dragonfly.jhuapl.edu/.
  3. Karen Northon, Mars Helicopter to Fly on NASA’s Next Red Planet Rover Mission, in NASA News Releases. 2018. https://www.nasa.gov/press-release/mars-helicopter-to-fly-on-nasa-s-next-red-planet-rover-mission

 

We must go to Titan! We must go to Europa!

Ice Worlds, Ho!

Robot Wednesday

Fribo: culturally specific social robotics?

This spring a research group from Korea report on a home robot that seeks to address social isolation of young adults [2].  Fribo is similar to many other home assistants such as Alexa, but is specifically networked to other Fribos that reside with people in the same social network.  (The network of Fribos overlays the human social network.)

The special feature is that Fribo listens to the activity in the home and certain sounds are transmitted to all the other Fribos.  For example, the sound of the refrigerator door is played to other Fribos, offering a low key cue about the activity of the person.

Actually, it’s a little more elaborate: the Fribo narrates the cue.  The sound of the refrigerator is accompanied by a message such as, “Oh, someone just opened the refrigerator door. I wonder which food your friend is going to have”.  ([2], p. 116)

The idea is that the network of friends, who all live alone, gains an awareness of the presence and activity of each other.  It may also encourage more social contact with others.
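As a toy sketch of the mechanism as I understand it from the paper: the home unit recognizes a living-noise event, and friends’ units replay a cue sound plus a spoken narration. The event names, canned messages, and “robot” interface below are invented for illustration; this is not Fribo’s actual software.

```python
# Invented sketch of sharing living-noise events across a friend group.
EVENTS = {
    "fridge_door": ("fridge_door.wav",
                    "Oh, someone just opened the refrigerator door. "
                    "I wonder which food your friend is going to have."),
    "front_door": ("door_chime.wav", "Your friend just came home."),
    "washing_machine": ("washer.wav", "Sounds like laundry day at your friend's place."),
}

class FriendFribo:
    """Stand-in for a friend's unit reachable over the Fribo network."""
    def __init__(self, owner):
        self.owner = owner

    def notify(self, cue_sound, narration):
        # A real unit would play the cue sound and speak the narration aloud.
        print(f"[{self.owner}'s Fribo] *{cue_sound}* {narration}")

def share_event(event, friends):
    """Broadcast the event label; receiving units play their own cue sound."""
    cue_sound, narration = EVENTS[event]
    for robot in friends:
        robot.notify(cue_sound, narration)

share_event("fridge_door", [FriendFribo("Jiyeon"), FriendFribo("Minho")])
```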

The “creepy” factor with this product seems obvious to me.  Yoiks. But I know that there is a very dramatic difference in attitudes about creepiness among younger people, so who knows?

There are also significant issues with privacy (how much do you trust the filtering?) and security (if one Fribo is hacked, the whole network is probably exposed).   I wouldn’t touch it with a barge pole, myself.

But the field study reported is very interesting for another reason.  First, the fact that people were even willing to try this device indicates an interest in this kind of social awareness.  In particular, there seems to be an implicit sense of belonging and trust in a group of peers.  Not only that, but the participants seem to share similar concerns about the isolation of living alone, and the idea that this kind of cue is a way of feeling connected.  The study also suggests that being aware of others stimulates more contact, such as phone calls.

I have to say that the reports of the users’ experiences don’t resonate with my own experience.  Aside from the obvious digital-nativism of the young users, there seems to be a definite cultural factor, i.e., young adults in Korea.  There is a level of mutual trust and solidarity among the users that I’m not sure is universal.  If so, then Fribo might be a hit in Korea, but a flop in the US, for instance.

By the way, the users refer to how quiet their one-person apartments are.  My own experience is that even living alone there is plenty of noise from neighbors, for better or worse.  If anything, there is probably way too much awareness of strangers in most living spaces.  Deliberately adding in awareness of your friends might or might not be an attractive feature, depending on just how much other “awareness” there is.

If my speculation is correct, then this is an interesting example of using ubiquitous digital technology in a culturally specific manner.   As the researchers suggest, it would be very interesting to test this hypothesis by replicating the study in other places in the world.

Finally, I have to point out that if what you want to do is achieve a sense of joint living, it is always possible to live together.

A group house or dormitory could provide awareness of others, as well as even easier opportunities to socialize.  Why not explore alternative living arrangements, rather than install intrusive digital systems in isolated units?  This would make another interesting comparison condition for future studies.


  1. Evan Ackerman, Fribo: A Robot for People Who Live Alone, in IEEE Spectrum – Home Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/home-robots/fribo-a-robot-for-people-who-live-alone
  2. Kwangmin Jeong, Jihyun Sung, Hae-Sung Lee, Aram Kim, Hyemi Kim, Chanmi Park, Yuin Jeong, JeeHang Lee, and Jinwoo Kim, Fribo: A Social Networking Robot for Increasing Social Connectedness through Sharing Daily Home Activities from Living Noise Data, in The 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago. p. 114-122. https://dl.acm.org/citation.cfm?id=3171254

Robot Concepts: Legs Plus Lift

Lunacity seems to be lunacy, or at least fantasy. “Personal jetpacks” are at the edge of possibility, requiring impractically huge amounts of power to lift a person (and, once aloft, nearly impossible to control).  But that doesn’t mean that moderate-sized personal jetpacks have no possible use.

Two recent projects illustrate how copter tech can be combined with articulated bodies to create interesting hybrid robots.

One interesting concept is to add ducted fans to the feet of a bipedal (or however-many-pedal) robot.  The lift is used to aid the robot when it needs to stretch for a long step over a gap.  The video makes the idea pretty clear:  one foot is anchored, and the other uses its thrust to stay balanced while stepping over the void.

This is the “Lunacity” idea applied to each foot independently, and it is plausible (if noisy and annoying).  There isn’t much hope of lifting the whole robot, but the thrusters probably can add useful “weightlessness” to parts of the robot.  In this case that means the feet, but the same idea might add lifting power to arms or sensor stalks.
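To get a feel for why modest thrusters help even though they cannot lift the whole robot, consider the static torque the stance-side hip must hold while the other leg is stretched out over a gap. With made-up numbers:

```python
# Illustrative torque balance for a thruster-assisted long step.
G = 9.81              # m/s^2
leg_mass   = 3.0      # kg, the outstretched leg (made up)
com_offset = 0.4      # m, leg center of mass from the hip (made up)
leg_reach  = 0.8      # m, hip-to-foot horizontal distance mid-step (made up)

gravity_torque = leg_mass * G * com_offset   # Nm the hip must resist

for foot_thrust in (0.0, 5.0, 10.0, 14.7):   # N of upward thrust at the foot
    net = gravity_torque - foot_thrust * leg_reach
    print(f"thrust {foot_thrust:4.1f} N -> hip supplies {net:5.1f} Nm")
# With ~15 N at the foot, the outstretched leg is effectively weightless
# about the hip, without lifting the robot at all.
```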


A second project sort of goes the other way: adding a lightweight, foldable “origami” arm to a flying UAV [2].   The idea is to have a compact arm that extends the capabilities of the flyer, within the weight and space limits of a small aircraft.  The design unfolds and folds with only a single motor.  Origami is so cool!

Instead of adding lifters to the robot, the robot arm is added to the flyer, to make a hybrid flying grasper.  I think there is no reason why there couldn’t be two arms, or the arms can’t be legs, or some other combination.


I look forward to even more creative hybridization, combining controllable rigid structures with lifting bodies in transformer-like multimode robots.


  1. Evan Ackerman, Bipedal Robot Uses Jet-Powered Feet to Step Over Large Gaps, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/humanoids/bipedal-robot-uses-jetpowered-feet-to-step-over-large-gaps
  2. Suk-Jun Kim, Dae-Young Lee, Gwang-Pil Jung, and Kyu-Jin Cho, An origami-inspired, self-locking robotic arm that can be folded flat. Science Robotics, 3 (16) 2018. http://robotics.sciencemag.org/content/3/16/eaar2915.abstract

 

Robot Wednesday

 

Self parking slippers(!)

Now this is what I call a neat demo!

I’ve never tried one of these new self-parking cars, so I don’t really get it.  Sure, parking is tricky, but that’s life, no?  Is this something I need, or even want?  I suspect that this is something that, once you experience it, you can’t live without.

Nissan has put out an interesting demo that illustrates the idea of the technology, but in a different context.

This application is arguably even more useless than parking your car, but it is so cool to watch, it is compelling.  It also puts you outside and above the action, with a ‘god’s eye view’, which makes the magic all the more visible.  And I’ve seen cars park many times, but never seen a slipper park, autonomously or otherwise!

I like it!

Now, I can’t really tell exactly how this is done (and neither can Sensei Evan [1]).  The press materials imply that this is based on the same technology that the self-parking automobile uses.  But that can’t be literally true, since the slippers clearly don’t have multiple cameras and sonar sensors, and I’d be surprised if they have microchips “autonomously” running anything like Nissan Leaf firmware.  Presumably, the slippers are guided by a system using cameras in the room, or something.  That would be reasonably cool in itself, and nothing to be ashamed of.
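If that overhead-camera guess is anywhere near right, the control problem is almost trivial. Something like this toy “turn, then drive” controller would do; the poses, target, gains, and drive interface are all hypothetical.

```python
# Toy guidance loop: an overhead camera reports each slipper's pose,
# and the controller steers it toward its parking spot.
import math

def steer(pose, target, k_turn=1.5, k_drive=0.8):
    """pose = (x, y, heading in radians); returns (forward_speed, turn_rate)."""
    x, y, heading = pose
    dx, dy = target[0] - x, target[1] - y
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # Heading error wrapped to [-pi, pi].
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    if abs(error) > 0.2:            # face the parking spot before driving
        return 0.0, k_turn * error
    return k_drive * distance, k_turn * error

# A slipper at (2.0, 1.0) facing +x, with its spot by the door at (0.3, 0.2):
print(steer((2.0, 1.0, 0.0), (0.3, 0.2)))
```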

Anyway, I love the demo, regardless of how it was done.

“I never knew how badly I needed a self-parking slipper until now”  (Evan Ackerman [1])


  1. Evan Ackerman, Nissan Embeds Self-Parking Tech in Pillows and Slippers, in IEEE Spectrum – Cars That Think. 2018. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/nissan-embeds-selfparking-tech-in-pillows-and-slippers

 

Robot Wednesday