Tag Archives: Evan Ackerman

“Divine” Robots?

The twenty-first century is the era of robots entering every aspect of human life.  Among the most challenging, both technically and theoretically, are robots that seek to interact directly with humans in everyday settings.  Just how “human” can and should a non-human agent appear?  This question is being explored on a hundred fronts.

Robots have begun to enter extremely intimate parts of human life, and, indeed, intimate relationships.  But these have generally been secular settings: work, transportation, entertainment, and the home.  Religious situations, broadly defined, have mostly been reserved for humans only.

Indeed, for some people, religious and sacred activities are, by definition, expressions of humanity and human relations.  For all the handwringing about robot uprisings, there has been little anxiety about robots taking over churches, temples, or mosques.

Maybe we should worry about that more than we do.

This summer, researchers from Peru discussed robots that are not purely functional, not anthropomorphic, nor even zoomorphic, but “theomorphic” [3].  Their idea is that robots may be designed to represent religious concepts, in the same way that other “sacred objects” do.

“[A] theomorphic robot can be: – accepted favourably, because of its familiar appearance associated to the user’s background culture and religion; – recognised as a protector, supposedly having superior cognitive and perceptual capabilities; – held in high regard, in the same way a sacred object is treated with higher regard than a common object.” ([3], p.29)

The researchers note that the psychology that impels humans to create robots, and to endow them with imagined humanity, is similar to the drive to imagine supernatural divinities with human characteristics. The act of creating robots is a pseudo-divine enterprise, and interacting with intelligent robots is definitely akin to interacting with manifestations of supernatural forces.

“[R]obots always raised questions on divine creation and whether it can be replicated by humans,” (p. 31)

In many religious traditions, concepts of the divine have been represented by the most technically advanced art of the time, including stories, visual imagery, music, and architecture [2]. It seems inevitable that robots will be deployed in this role. Trovato et al. want to explore “design principles” for how this might be done.

Much of the paper is backward looking, unearthing precedents from the history of religious art and religious analysis of art.

One obvious design principle must be “a specific purpose that depends on the context and on the user” (p. 33).  This principle is critical for the ethical rule that the robot should not be intended to deceive.  It is one thing to create a sublime experience; it is entirely another to pretend that a mechanical object has supernatural powers.

They give a useful list of general use cases: religious education, preaching (persuasion), and company for religious practice (formal or informal ritual).  In addition, there may be a related goal, such as augmenting health care.  This is certainly something that will ultimately be incorporated as an option for, say, assistive devices for the elderly.

A paper about design principles must inevitably consider affordances.  In this case, the question is intimately related to the identification and use of metaphors and references to earlier practices.  For example, a robotically animated statue may resemble traditional carvings, while its behavior and gestures evoke traditional rituals.  These features make the robot identifiably part of the tradition, and therefore evoke appropriate psychological responses.

Other dos and don’ts are phrased in pseudo-theological language.  “A theomorphic robot shall not mean impersonating a deity with the purpose of deceiving or manipulating the user.” (p.33)

The list of key principles is:

  • Identity
  • Naming
  • Symbology
  • Context
  • User Interaction
  • Use of The Light (I)

The role of symbolism is, of course, critical. A sacred object almost always has a symbolic association. In some cases, this is represented by imagery or other features of the object itself. It may also be conferred by context, such as a ritual of blessing to confer a sacred status to an otherwise mundane object.  Getting the symbolism right is pretty much the alpha and omega of creating any sacred object, including a robot.

The researchers are rather less concerned about human interaction than I expected.  After all, a robot can interact with humans in many ways, some of which mimic humans, and some of which are non-human and even super-human (e.g., great strength or the ability to fly).

A sacred robot must display its powers and communicate in ways that are consistent with the underlying values it is representing.  Indeed, there needs to be an implicit or explicit narrative that explains exactly what the relationship is between the robot’s actions and messages and the divine powers at play.  Getting this narrative wrong will be the comeuppance of these robots.  Imagine a supposedly sacred robot that misquotes scripture, or clumsily reveals the purely mundane source of what is supposed to be a “divine” capability.


It seems clear that digital technology will be incorporated into religious practices far more than has happened to date, in many ways.  Robots will likely be recruited for such uses, as this paper suggests.  So will virtual worlds and, unfortunately, Internet of Things technology (the Internet of Holy Things?  Yoiks!)

This paper made me think a bit (which is a good thing), and I think there are some important omissions.

Of course, the paper suffers a bit from a pretty restricted view of “religion”.  The research team exhibits personal knowledge of Buddhism and Roman Catholicism [1], with only sketchy knowledge of Islam, Judaism, other flavors of Christianity, and, of course, the many other variants (Wicca [4]? Scientology?)

There are general engineering principles that need to be taken seriously. The issues of privacy are bad enough for “smart toasters”; they become extremely touchy for “holy toasters”.  If we are unhappy having our online shopping tracked, we will be really, really unhappy if our prayers are tracked by software.

There are also problems of hacking, and authentication in general.  However a holy robot is designed to work, it must be protected from malicious interference.  The ramifications of a robot that is secretly polluted with heresy are catastrophic.  Wars have been started over less.

At the same time, there are interesting opportunities for authentication protocols.  If a robot is certified and then ritually blessed by a religious authority, can we represent this with a cryptographic signature?  (Yes.)  In fact, technology being developed for provenance and supply chain authentication is just the thing for documenting a chain of sacred authority.  Cool!
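To make the idea concrete, here is a toy sketch of such a “chain of authority” (entirely my own illustration: the record fields, keys, and authorities are invented).  Each record commits to the hash of the previous record and carries a signature, with a shared-key HMAC standing in for a real public-key signature:

```python
import hashlib
import hmac
import json

def sign_record(record: dict, prev_hash: str, key: bytes) -> dict:
    """Append prev_hash, then sign the whole record with the authority's key."""
    record = dict(record, prev_hash=prev_hash)
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_chain(chain: list, keys: dict) -> bool:
    """Check every signature and that each record points at its predecessor."""
    prev_hash = ""
    for record in chain:
        body = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(keys[record["authority"]], payload,
                            hashlib.sha256).hexdigest()
        if body["prev_hash"] != prev_hash or \
           not hmac.compare_digest(record["signature"], expected):
            return False
        prev_hash = hashlib.sha256(payload).hexdigest()
    return True

# Hypothetical provenance: the factory certifies, then a religious authority blesses.
keys = {"factory": b"factory-secret", "diocese": b"diocese-secret"}
r1 = sign_record({"authority": "factory", "event": "certified"}, "", keys["factory"])
h1 = hashlib.sha256(json.dumps({k: v for k, v in r1.items() if k != "signature"},
                               sort_keys=True).encode()).hexdigest()
r2 = sign_record({"authority": "diocese", "event": "blessed"}, h1, keys["diocese"])
chain = [r1, r2]

print(verify_chain(chain, keys))  # a valid chain verifies
```

Any tampering with an earlier record (say, forging the blessing) breaks both the signature and the hash link, which is exactly the property a documented “chain of sacred authority” would need.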

As far as context and human interaction go, it has to be recognized that there is a very serious “Eliza” situation here. There is surely a strong possibility of placebo effects, possibly driven by totally unintended events.  I predict that there will be cases of people coming to worship robots, not because they are designed to be “theomorphic”, but because the robot was part of a “miraculous” event or situation.

Finally, it is interesting to think about the implications of robots with superhuman capabilities, whether cognitive or physical.  Even within more or less human abilities, robot bodies (and minds) are different and alien.  Why should a robot not be designed to demand the deference ordinarily given to divine entities?

This proposition violates Trovato et al.’s first rule, as well as their general ethics.  But who says robots or designers are bound by this norm?

A sufficiently powerful robot is indistinguishable from a god

…and has a right to be treated as one.


  1. Evan Ackerman, Can a Robot Be Divine?, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/can-a-robot-be-divine
  2. Norman M. Klein, The Vatican to Vegas: A History of Special Effects, New York, The New Press, 2004.
  3. Gabriele Trovato, Cesar Lucho, Alexander Huerta-Mercado, and Francisco Cuellar, Design Strategies for Representing the Divine in Robots, in 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago, IL, USA. p. 29-35. https://dl.acm.org/citation.cfm?id=3173386.317
  4. Kirsten C. Uszkalo, Bewitched and Bedeviled: A Cognitive Approach to Embodiment in Early English Possession. First ed, New York, Palgrave Macmillan, 2015.


Robot Wednesday

Interplanetary Copters!

The last decade has seen an incredible bloom in small autonomous and remote-controlled helicopters, AKA drones. It isn’t far wrong to call them ubiquitous, and probably the characteristic technology of the 2010s. (Sorry, Siri.)

It isn’t surprising, then, that NASA (the National Aeronautics and Space Administration) has some ideas about what to do with robot helicopters.

This month it was confirmed that the next planned Mars rover will have a copter aboard [3].  (To date, this appears to be known simply as “The Mars Helicopter”, but surely it will need to be christened with some catchy moniker. “The Red Planet Baron”?  “The Martian Air Patrol”? “The Red Planet Express”?)

This won’t be a garden-variety quadcopter.  Mars is not Earth, and, in particular, Mars “air” is not Earth air. The atmosphere is thin, real thin, which means less lift.  On the other hand, gravity is less than on Earth. The design will feature larger rotors spinning much faster than Terran copters.
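To see why, here is a back-of-envelope sketch (my own, using rough textbook figures, not mission specifications): rotor thrust scales roughly with air density times the square of tip speed, so the thin Martian air demands much faster rotors, partly offset by the weaker gravity.

```python
# Back-of-envelope rotor scaling (illustrative values, not mission specs):
# Mars surface air density is roughly 0.020 kg/m^3 vs Earth's 1.225,
# and Mars surface gravity is 3.71 m/s^2 vs Earth's 9.81.
RHO_EARTH, RHO_MARS = 1.225, 0.020   # kg/m^3
G_EARTH, G_MARS = 9.81, 3.71         # m/s^2

# Thrust scales as T ~ rho * A * (tip speed)^2.  To hover, thrust must equal
# weight (m * g), so for the same craft and rotor area, the required
# Mars-to-Earth tip-speed ratio is:
tip_speed_ratio = ((G_MARS / G_EARTH) * (RHO_EARTH / RHO_MARS)) ** 0.5
print(f"Mars rotors must spin ~{tip_speed_ratio:.1f}x faster")
```

With these rough numbers it comes out to nearly a factor of five in tip speed, which is exactly why the design pairs larger rotors with much higher RPM.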

Operating on Mars will have to be autonomous, and the flying conditions could be really hairy. Martian air is not only thin, it is cold and dusty.  And the terrain is unknown.  The odds of operating without mishap are small. The first unexpected sand storm, and it may be curtains for the flyer.  Mean time to failure may be hours or less.

Limits of power and radios mean that the first mission will be short range. Unfortunately, a 2-kilo UAV will probably only do visual inspections of the surface, albeit with an option for tight close-ups.  Still, it will extend the footprint of the rover by quite a bit, and potentially enable atmospheric sampling.


This isn’t the only extraterrestrial copter in the works.  Where Mars has a cold, thin atmosphere, Saturn’s moon Titan may have methane lakes and weather, and possibly an ocean under the icy surface.   Titan also has a cold, thick atmosphere and really low gravity: favorable for helicopters!

Planning for a landing on this intriguing world centers on a copter called “Dragonfly” [1, 2]. The Dragonfly design is a bit larger, and is an octocopter.  (It is noted that it should be able to continue to operate even if one or more rotors break.)  Dragonfly is also contemplated to have a nuclear power source; Titan is too far away for solar power to be a useful option.

Titan is a lot farther away than Mars, and communications will be difficult, with long delays and interference.  The Dragonfly will have to be really, really autonomous.

Flying conditions on Titan are unknown, but theoretically could include clouds, rain, snow, storms, who knows what.  The air carries methane and hydrocarbons, which could gum up the flyer. Honestly, mean time to failure could be zero—it may not be able to even take off.


Both these copters are significantly different from what you might buy at the hobby store or build in your local makerspace.  But prototypes can be flown on Earth, and the autonomous control algorithms are actually not that different from those of Earth-bound UAVs. This is a good thing, because we have to program them here, before we actually send them off.

In fact, I think this is one of the advantages of small helicopters for this use. Flying is flying, once you adjust for pressure, density, etc. It’s probably not as tricky as driving on unknown terrain.  We should be able to design autonomous software that works OK on Mars and Titan.  (Says Bob, who doesn’t have to actually make it work.)


Finally, I’ll note that a mission to Titan should ideally include an autonomous submarine or better, a tunneling submarine, to explore the lakes and cracks. I’m sure this is under study, but I don’t know that it will be possible on the first landing.


  1. Evan Ackerman, How to Conquer Titan With a Nuclear Quad Octocopter, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/space-robots/how-to-conquer-titan-with-a-quad-octocopter
  2. Dragonfly. Dragonfly Titan Rotorcraft Lander. 2017, http://dragonfly.jhuapl.edu/.
  3. Karen Northon, Mars Helicopter to Fly on NASA’s Next Red Planet Rover Mission, in NASA News Releases. 2018. https://www.nasa.gov/press-release/mars-helicopter-to-fly-on-nasa-s-next-red-planet-rover-mission


We must go to Titan! We must go to Europa!

Ice Worlds, Ho!

Robot Wednesday

Fribo: culturally specific social robotics?

This spring a research group from Korea reported on a home robot that seeks to address the social isolation of young adults [2].  Fribo is similar to many other home assistants, such as Alexa, but is specifically networked to other Fribos that reside with people in the same social network.  (The network of Fribos overlays the human social network.)

The special feature is that Fribo listens to the activity in the home and certain sounds are transmitted to all the other Fribos.  For example, the sound of the refrigerator door is played to other Fribos, offering a low key cue about the activity of the person.

Actually, it’s a little more elaborate: Fribo narrates the cue.  The sound of the refrigerator is accompanied by a message such as, “Oh, someone just opened the refrigerator door. I wonder which food your friend is going to have”.  ([2], p. 116)
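The mechanism, as described, is easy to sketch (this is my own illustration of the idea, not Fribo’s actual code: the class, sound labels, and messages are invented).  Each robot classifies a sound locally and broadcasts only a canned narration to its peers, never the raw audio:

```python
# Canned narrations for recognized household sounds (invented labels/messages).
NARRATIONS = {
    "fridge_door": "Oh, someone just opened the refrigerator door. "
                   "I wonder which food your friend is going to have.",
    "front_door": "Your friend just came home.",
    "vacuum": "Sounds like someone is cleaning up.",
}

class Fribo:
    def __init__(self, owner):
        self.owner = owner
        self.peers = []   # other Fribos in the friend network
        self.inbox = []   # narrated cues received from peers

    def hear(self, sound_label):
        """On detecting a known sound, broadcast the narration, not the audio."""
        cue = NARRATIONS.get(sound_label)
        if cue is None:
            return        # unrecognized sounds are dropped, for privacy
        for peer in self.peers:
            peer.inbox.append((self.owner, cue))

a, b, c = Fribo("Alice"), Fribo("Bob"), Fribo("Carol")
a.peers = [b, c]
a.hear("fridge_door")
print(b.inbox[0][1])
```

Note that in this scheme the privacy question reduces to how much you trust the local classifier, since only the abstract label’s narration ever leaves the apartment.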

The idea is that the network of friends, who live alone, gains an awareness of the presence and activity of each other.  It may also encourage more social contact with others.

The “creepy” factor with this product seems obvious to me.  Yoiks. But I know that there is a very dramatic difference in attitudes about creepiness among younger people, so who knows?

There are also significant issues with privacy (how much do you trust the filtering?) and security (if one Fribo is hacked, the whole network is probably exposed).   I wouldn’t touch it with a barge pole, myself.

But the field study reported is very interesting for another reason.  First, the fact that people were even willing to try this device indicates an interest in this kind of social awareness.  In particular, there seems to be an implicit sense of belonging and trust in a group of peers.  Not only that, but the participants seem to share similar concerns about the isolation of living alone, and the idea that these kinds of cues are a way of feeling connected.  The study also suggests that being aware of others stimulates more contact, such as phone calls.

I have to say that the reports of the users’ experiences don’t resonate with my own experience.  Aside from the obvious digital-nativism of the young users, there seems to be a definite cultural factor, i.e., young adults in Korea.  There is a level of mutual trust and solidarity among the users that I’m not sure is universal.  If so, then Fribo might be a hit in Korea, but a flop in the US, for instance.

By the way, the users refer to how quiet their one-person apartments are.  My own experience is that even living alone there is plenty of noise from neighbors, for better or worse.  If anything, there is probably way too much awareness of strangers in most living spaces.  Deliberately adding in awareness of your friends might or might not be an attractive feature, depending on just how much other “awareness” there is.

If my speculation is correct, then this is an interesting example of using ubiquitous digital technology in a culturally specific manner.   As the researchers suggest, it would be very interesting to test this hypothesis by replicating the study in other places in the world.

Finally, I have to point out that if what you want to do is achieve a sense of joint living, it is always possible to live together.

A group house or dormitory could provide awareness of others, as well as even easier opportunities to socialize.  Why not explore alternative living arrangements, rather than install intrusive digital systems in isolated units?  This would make another interesting comparison condition for future studies.


  1. Evan Ackerman, Fribo: A Robot for People Who Live Alone, in IEEE Spectrum – Home Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/home-robots/fribo-a-robot-for-people-who-live-alone
  2. Kwangmin Jeong, Jihyun Sung, Hae-Sung Lee, Aram Kim, Hyemi Kim, Chanmi Park, Yuin Jeong, JeeHang Lee, and Jinwoo Kim, Fribo: A Social Networking Robot for Increasing Social Connectedness through Sharing Daily Home Activities from Living Noise Data, in The 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago. p. 114-122. https://dl.acm.org/citation.cfm?id=3171254

Robot Concepts: Legs Plus Lift

Lunacity seems to be lunacy, or at least fantasy. “Personal jetpacks” are at the edge of possibility, requiring impractically huge amounts of power to lift a person (and, once aloft, being nearly impossible to control).  But that doesn’t mean that moderate-sized jetpacks have no possible use.

Two recent projects illustrate how copter tech can be combined with articulated bodies to create interesting hybrid robots.

One interesting concept is to add ducted fans to the feet of a bipedal (or any number of pedal) robot.  The lift is used to aid the robot when it needs to stretch for a long step over a gap.  The video makes this idea pretty clear:  one foot is anchored, and the other uses the thrust to keep balanced while stepping over the void.

This is the “Lunacity” idea applied to each foot independently, and it is plausible (if noisy and annoying).  There isn’t much hope of lifting the whole robot, but the thrusters probably can add useful “weightlessness” to parts of the robot.  In this case, the feet, but the same idea might add lifting power to arms or sensor stalks.


A second project sort of goes the other way: adding a lightweight, foldable “origami” arm to a flying UAV [2].   The idea is to have a compact arm that extends the capabilities of the flyer, within the weight and space limits of a small aircraft.  The design unfolds and folds with only a single motor.  Origami is so cool!

Instead of adding lifters to the robot, the robot arm is added to the flyer, to make a hybrid flying grasper.  I think there is no reason why there couldn’t be two arms, or the arms can’t be legs, or some other combination.


I look forward to even more creative hybridization, combining controllable rigid structures with lifting bodies in transformer-like multimode robots.


  1. Evan Ackerman, Bipedal Robot Uses Jet-Powered Feet to Step Over Large Gaps, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/humanoids/bipedal-robot-uses-jetpowered-feet-to-step-over-large-gaps
  2. Suk-Jun Kim, Dae-Young Lee, Gwang-Pil Jung, and Kyu-Jin Cho, An origami-inspired, self-locking robotic arm that can be folded flat. Science Robotics, 3 (16) 2018. http://robotics.sciencemag.org/content/3/16/eaar2915.abstract


Robot Wednesday


Self parking slippers(!)

Now this is what I call a neat demo!

I’ve never tried one of these new self-parking cars, so I don’t really get it.  Sure, parking is tricky, but that’s life, no?  Is this something I need, or even want?  I suspect that this is something that once you experience it, you can’t live without.

Nissan has put out an interesting demo that illustrates the idea of the technology, but in a different context.

This application is arguably even more useless than parking your car, but it is so cool to watch, it is compelling.  It also puts you outside and above the action, with a ‘god’s eye view’, which makes the magic all that more visible.  And I’ve seen cars park many times, but never seen a slipper park, autonomously or otherwise!

I like it!

Now, I can’t really tell exactly how this is done (and neither can Sensei Evan [1]).  The press materials imply that this is based on the same technology that the self-parking automobile uses.  But that can’t be literally true, since the slippers clearly don’t have multiple cameras and sonar sensors, and I’d be surprised if they have microchips “autonomously” running anything like Nissan Leaf firmware.  Presumably, the slippers are guided externally, using cameras in the room, or something.  That would be reasonably cool in itself, and nothing to be ashamed of.

Anyway, I love the demo, regardless of how it was done.

“I never knew how badly I needed a self-parking slipper until now”  (Evan Ackerman [1])


  1. Evan Ackerman, Nissan Embeds Self-Parking Tech in Pillows and Slippers, in IEEE Spectrum – Cars That Think. 2018. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/nissan-embeds-selfparking-tech-in-pillows-and-slippers


Robot Wednesday

Singaporean Robot Swans

Evan Ackerman calls attention to a project at the National University of Singapore that is deploying robotic water quality sensors designed to look like swans [1].

The robots cruise surface reservoirs, monitoring the water chemistry and uploading data to the cloud via wifi as it is collected.  (Singapore has wifi everywhere!)  The robots are encased in imitation swan bodies, which is intended ‘to be “aesthetically pleasing” in order to “promote urban livability.”’ I.e., to look nice.

This is obviously a nice bit of work, and a good start.  The fleet of autonomous robots can maneuver to cover a large area, and concentrate on hot spots when needed, all at a reasonable cost. I expect that the datasets will be amenable to data analysis and machine learning, which can mean a continuous improvement in knowledge about the water quality.
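For instance, a first pass at hot-spot detection might be a simple running-baseline outlier test (my own sketch, not NUSwan’s actual pipeline; the sensor values, window, and threshold are all invented for illustration):

```python
from statistics import mean, stdev

def flag_hotspots(readings, window=5, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# e.g. a turbidity time series with one sudden spike at index 6
turbidity = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 9.5, 2.2, 2.1, 2.0]
print(flag_hotspots(turbidity))  # -> [6]
```

A fleet could use something like this to decide which readings warrant steering a swan back for a closer look.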

As far as the plastic swan bodies…I’m not really sold.

For starters, they don’t actually look like real swans.  They are obviously artificial swans.

Whether plastic swans are actually more aesthetically pleasing than other possible configurations seems like an open question to me.  I tend to think that a nicely designed robot might be just as pleasing as a fake swan, or even better.  And it would look like a water quality monitor, which is a good thing.

Perhaps this is an opportunity to collaborate with artists and architects to develop some attractive robots that say “I’m keeping your water safe.”


  1. Evan Ackerman, Bevy of Robot Swans Explore Singaporean Reservoirs, in IEEE Spectrum – Automation. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/bevy-of-robot-swans-explore-singaporean-reservoirs
  2. NUS Environmental Research Institute, New Smart Water Assessment Network (NUSwan), in NUS Environmental Research Institute – Research Tracks -Environmental Surveillance and Treatment 2018. http://www.nus.edu.sg/neri/Research/nuswan.html


Robot Wednesday

Robot Blimp For Exploring Hidden Spaces

I noted earlier the discovery of what seems to be a chamber in the Great Pyramid at Giza. The discovery opens the question of how to further explore the hidden space without damaging the ancient structure.  One idea is to drill a small shaft, and push through a tiny robot explorer.

A research group at INRIA and Cairo University is developing a robotic blimp for such a mission.  The deflated blimp can be pushed through a 3cm shaft, then inflate and reconnoiter the hidden space.  The task requires a very compact and light system, which will likely operate autonomously.

Evan Ackerman interviewed senior investigator Jean-Baptiste Mouret for IEEE Spectrum [1].  He notes that a blimp is a good choice, because it is “pillowy”, and less likely to damage the structure.

Mouret describes the challenges imposed by the size and weight limits.  Conventional sensors, including GPS, would be too heavy and power hungry.  They are developing “bioinspired” sensors, based on bees and flies.  These include a miniature optic-flow sensor that can operate in low-light conditions.
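The core of an optic-flow sensor can be illustrated with a toy version (my own simplification, not the team’s actual sensor design): estimate apparent motion by finding the shift that best aligns two successive brightness snapshots.

```python
def estimate_shift(frame_a, frame_b, max_shift=3):
    """Return the integer pixel shift between two 1-D brightness snapshots,
    chosen to minimize the mean squared difference."""
    best_shift, best_err = 0, float("inf")
    n = len(frame_a)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(frame_a[i], frame_b[i + s])
                 for i in range(n) if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

scene = [0, 0, 5, 9, 5, 0, 0, 0]
moved = [0, 0, 0, 5, 9, 5, 0, 0]   # same pattern, shifted right by one pixel
print(estimate_shift(scene, moved))  # -> 1
```

Real insect-inspired sensors do something analogous with tiny photoreceptor arrays, trading resolution for very low weight and power, which is what matters inside a 3cm shaft.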

Getting the robot into the space is one thing; making sure it is retrieved is more difficult.  It is important not to litter the structure with a lost robot, so the robot will need to return to the tiny access hole, dock with the base, and fold up so it can be pulled out.  It will be designed with backup behaviors to search for the dock, even if damaged.

It will be years before any expedition to the Great Pyramid happens. The robot is still being developed, and the measurements of the Pyramid are being refined.   The Pyramid is over 4,000 years old, so there is no need for haste.


  1. Evan Ackerman, Robotic Blimp Could Explore Hidden Chambers of Great Pyramid of Giza, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/drones/robotic-blimp-could-explore-hidden-chambers-of-great-pyramid


Robot Wednesday