Category Archives: Haptic Interfaces

Tail Therapy?

Social robots are the flavor of the year.  If robots are to live with humans (which is not a foregone conclusion, IMO), they need to mesh with human psychology.  This means they need to appear harmless and attractive, understand and emit unconscious signals, and generally play nicely.  It doesn’t matter what they do so much as how they do it.

This has led to a variety of interesting research.  Some pursue the goal of mimicking human behavior; other approaches use non-human forms with intelligible behavior.  There is a great range of possibilities, more and less human-like in appearance.

There are actually some really interesting questions here about the psychology of humans interacting with non-human machines, intelligent or not.  It seems pretty clear that trying to faithfully imitate human forms and nuances isn’t necessary, nor is speech.  (See, for example, the thoughtful work of Sensei Thecla Schiphorst [2, 3].)

This principle is clear in a new product, “Qoobo: A Tailed Cushion That Heals Your Heart”.  While this has been described as a “robot”, it certainly lies at the edge of that term.  It has only one behavior: waving the tail.  No face.  No dialog.  Certainly no “useful” functions.

The Tokyo-based inventor, Prof. Nobuhiro Sakata (who apparently also created necomimi in 2011), believes that this is comforting.  In fact, it is supposed to “heal your heart”, whatever that means exactly.

This is harmless, I guess, though vacuous.

But there are so many dubious aspects of this product, I can’t let it pass.


First, they have tried to carefully reproduce the motion of a cat’s tail.  It’s clear from the video that they haven’t succeeded in that effort, but in any case they seem to have no understanding of cats at all.  Swishing the tail means the cat is agitated, not happy or friendly.  A contented cat rubs and purrs, and does not swish the tail.  If you pet a cat and its tail starts moving, it is unhappy and probably going to fight and/or run.

Second, setting aside the complete misunderstanding of natural cat behavior, the project claims that the responsive behavior of the tail enhances the human’s feelings.  The crux of the case is that you “would project your emotions onto how the tail moves, and you could get a sense of healing from that”.   Well, maybe so, though there is no evidence that this is actually true.

Third, the claimed benefits are nebulous and new agey.  What exactly does “heals your heart” or a “sense of healing” mean?  However these benefits may be defined, has Qoobo been shown to actually work as advertised?  Furthermore, is it better than a placebo, such as a cushion without a tail, or a plush animal without animation?  And how does it compare to alternatives such as a real cat, or even to a virtual conversation via social media?

You might hope that the product would be proved safe and effective before it is sold, but that’s not how we do things these days. In fact, they are running a Kickstarter, and part of the work will be the unspecified pledge, “We will be conducting a proof of concept to ensure Qoobo is providing a sense of comfort to its users as intended.”

Sigh.


Qoobo is charming and cute and nice and all that. I really hate to criticize it.  But I really think you should not make claims about supposed psychological or other benefits without legitimate evidence.

  1. Qoobo, Qoobo: A pillow with a wagging tail. 2017. https://www.kickstarter.com/projects/1477302345/qoobo?ref=484yf0
  2. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA. https://dl.acm.org/citation.cfm?doid=1520340.1520345
  3. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, in Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth. https://www.academia.edu/207432/The_Varieties_of_User_Experience_Bridging_Embodied_Methodologies_from_Somatics_and_Performance_to_Human_Computer_Interaction


CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI [1]. Whoa! There’s a lot of interesting stuff here.

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?

This is a great piece of work. They developed theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple: “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill required to quickly “iterate” physical designs.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.
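To make that concrete, here is a minimal sketch (mine, not the authors’ code) of what a 1-DOF “breathing” driver could look like: a sine wave whose rate, depth, and regularity are the emotional knobs. The set_servo() stub and the preset parameter values are purely hypothetical.

```python
import math
import time

def breathing_position(t, rate_hz, depth, jitter=0.0):
    """Position in [0, 1] for a 1-DOF actuator at time t.

    rate_hz -- breaths per second (faster tends to read as more aroused)
    depth   -- amplitude of the motion (shallower tends to read as calmer)
    jitter  -- irregularity, which tends to read as distress
    """
    phase = 2 * math.pi * rate_hz * t
    pos = 0.5 + 0.5 * depth * math.sin(phase) + jitter * math.sin(7.3 * phase)
    return min(1.0, max(0.0, pos))

def set_servo(position):
    """Stand-in for whatever actuator interface the robot exposes."""
    print(f"servo -> {position:.2f}")

def breathe(preset_name, duration_s=5.0, update_hz=50):
    presets = {  # invented values, for illustration only
        "calm":     dict(rate_hz=0.25, depth=0.4),
        "excited":  dict(rate_hz=1.0,  depth=0.9),
        "agitated": dict(rate_hz=1.2,  depth=0.6, jitter=0.15),
    }
    start = time.time()
    while time.time() - start < duration_s:
        set_servo(breathing_position(time.time() - start, **presets[preset_name]))
        time.sleep(1.0 / update_hz)

breathe("calm")
```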

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that the limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps to “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations, purring and squealing. This is not an original idea; it is what was done in “The Trouble With Tribbles” (1967) [2].


  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.


Robot Wednesday

CES 2017: AxonVR Haptics

I’ve been exposed to cutting edge Virtual Reality for more than two decades now, from the labs and far-seeing mind of Sensei Alan Craig [1-3] and other local gurus. After all these years, we are finally seeing these technologies coming together, not just 3D goggles, but whole body experiences. “Tomorrow” is almost here. After all, what are “haptics” other than “virtual reality” for body senses?

The big news lately isn’t so much any novel ideas as that developers are finally able to bring together many senses in the same virtual world interface. And it is coming fast!

Case in point: AxonVR demonstrated their technology at CES 2017.  They are shooting for the whole body experience, including locomotion, touch, and temperature.

Evan Ackerman reported that “With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary.”

It says a lot that the well-informed and experienced Ackerman found it so cool that “I spent most of the demo giggling like a little kid.”

Beyond the CES demo, AxonVR is working on a full body exosuit with force feedback, which will simulate standing, walking, and so on.

Phew! The works!

If the full suit is as impressive as the touch and heat demonstration seems to be, this will be very impressive. (Not least because they seem to have their heads screwed on right about the software development environment.)

With wonders coming so fast, how are we mere mortals supposed to keep up?

Cool as this sounds, readers of this blog are well aware that there are yet more senses that could be folded in. Within a few years we will see a full body interface like AxonVR that also has taste and chewiness, kissing, and, of course, naughty knickers.

One of the mountains that still has to be climbed is to do creative things with multiple people in the VR. This might mean “riding along” inside someone else’s experience, or side by side, touching the same world (and each other). This multi-person version opens the way for new forms of dance and other performance art….


  1. Alan B. Craig, Understanding Augmented Reality: Concepts and Applications, San Francisco, Morgan Kaufmann, 2013.
  2. Alan B. Craig, William R. Sherman, and Jeffrey D. Will, Developing Virtual Reality, Burlington, MA, Morgan Kaufmann, 2009.
  3. William R. Sherman and Alan B. Craig, Understanding Virtual Reality: Interface, Application, and Design, San Francisco, Morgan Kaufmann, 2003.


Kissenger – Add a Kiss to Your Message

The haptic internet is coming fast, and it can get seriously creepy. I have talked about remote haptics for a couple of years, and looked at puppets, taste, and, of course, gropey underwear.

Let’s add to the array of networked haptics with “Kissenger”, the Kissing Messenger app. Technically, this haptic kissing interface is supposed to give and receive the feeling of kissing lips, “a realistic kissing sensation”. The main intended use is to augment a personal conversation on your mobile device with a kiss. The web page also suggests that it is “for families” and “for fans”.

Photo: Emma Yann Zhang

Looking at the images, I can’t help but wonder just how “realistic” this might be. I’m confident that no one would ever be fooled into thinking this was a real, face-to-face kiss—there is no breath, or slobber, or warm skin. It’s a pretty chaste kiss, if I may say so.

Naturally, I immediately think about the “wrong” ways to use this device.   I mean, you can press the device against any part of your body, no? Or against any thing at all. Things can get pretty nasty, pretty fast.

Their use case “for fans” is pretty troubling when you think about it: “To connect idols and their fans from all around the world.” Ick!

That makes me think of using this as a token of submission. The dictatorial CEO expects you to bow and (remotely) kiss his whatever at the end of the meeting. Ugh!

What you can’t do, though, is really kiss each other with passion and intensity. There is kissing your auntie, and then there is really kissing your lover. The latter is part of a complete embrace, and can be edgy, unpredictable, and messy. (And scarcely restricted to the other person’s lips.)

This work is part of PhD research by Emma Zhang at Professor Adrian Cheok’s lab. We know Cheok from earlier research, so we are not surprised to see imaginative and daring ideas. We also can be confident that there will be some careful experiments to assess just how people perceive the experience, and how well they like it.

Chewing Food in Virtual Reality

Continuing in the theme of “Virtual Reality is not just 3D vision”, Arinobu Niijima and Takefumi Ogawa of The University of Tokyo propose to use electrical muscle stimulation to simulate the texture of food! [1]

Like the sense of taste, the chewiness and hardness of food are intimate experiences, relying on sensations from the mouth and face muscles. Consequently, a virtual reality experience of “eating” would be completely meaningless without feeling the food as you chew it.

As the researchers comment, direct manipulation (e.g., with a mouth piece) is intrusive, inflexible and potentially uncomfortable.

This month Niijima and Ogawa describe a system that uses electrical stimulation of the face muscles to create sensations of resistance that simulate chewing various foods. Their insight is that varying the stimulation in two ways feels like chewing: the “strength” (frequency of pulses) resembles hardness, and the duration resembles “chewiness”.
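Just to illustrate that two-parameter mapping (this is my sketch, not the authors’ control code; the food presets, parameter values, and the ems_pulse() stub are all invented):

```python
# Map food textures to EMS parameters: pulse frequency ~ hardness,
# burst duration ~ chewiness. All values are invented for illustration.
TEXTURES = {
    "tofu":   {"freq_hz": 20, "duration_s": 0.2},  # soft, barely chewy
    "cookie": {"freq_hz": 60, "duration_s": 0.3},  # hard, short bite
    "gummy":  {"freq_hz": 30, "duration_s": 0.8},  # soft but very chewy
}

def ems_pulse(freq_hz, duration_s):
    """Stand-in for the electrical muscle stimulation driver."""
    print(f"stimulate jaw muscle: {freq_hz} Hz burst for {duration_s} s")

def chew(food, bites=3):
    params = TEXTURES[food]
    for _ in range(bites):
        ems_pulse(params["freq_hz"], params["duration_s"])

chew("gummy")
```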

Combined with the ideas from the Singapore group, this basic idea opens the possibility of programming a “meal” in virtual reality, delivering simulated taste and texture!

“Put together, all of these technologies could one day be incorporated into a virtual reality headset to create a multisensory dining experience.” (From Victoria Turk [2].)

Furthermore, I would add haptics such as ultrahaptics, to give the sensation of picking up and moving virtual food to the mouth.

Amazing!


  1. Arinobu Niijima and Takefumi Ogawa, Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation, in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016, ACM: Tokyo, Japan. p. 199-200. http://dl.acm.org/citation.cfm?id=2984768
  2. Victoria Turk (2016) Face electrodes let you taste and chew in virtual reality. New Scientist. https://www.newscientist.com/article/2111371-face-electrodes-let-you-taste-and-chew-in-virtual-reality/

Tasting Food in Virtual Reality

Virtual Reality is not just about 3D vision and sound, it is about simulating all human senses. This is one reason why haptics are so important and interesting, as well as wind systems, treadmills and other whole body interfaces. (See Sensei Alan Craig [2] for extensive discussion of the theory and practice.)

Among the most difficult senses to “virtualize” are smell and taste.  These chemical senses are intimate and “slow moving”—you have to stimulate receptors on the tongue, and the “signal” operates at the speed of molecular transport (rather than the speed of light).  Also, classically, the signals are triggered by a vast array of specific molecules, which is a messy and unintuitive “alphabet”.

My ignorance of this topic was revealed by an amazing paper from Nimesha Ranasinghe and Ellen Yi-Luen Do of the National University of Singapore, describing their “Virtual Sweet” system. This device uses thermal stimulation of the tongue to induce the sensation of “sweetness” [1].

The system builds on research showing that thermal stimulation (heating and cooling) of areas of the tongue can trigger the electrochemical response of taste buds, producing various taste sensations.

The researchers created a precisely controlled array of heating / cooling elements that can be programmed to generate the required patterns. In this case, they generate sensations of sweetness.
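As a rough illustration of how such an array might be driven (my sketch, not the actual device; set_element_temp() and the specific temperatures are hypothetical, loosely following the idea that cooling and then re-warming the tongue tip can read as “sweet”):

```python
import time

def set_element_temp(idx, celsius):
    """Stand-in for a Peltier-style heating/cooling element driver."""
    print(f"element {idx}: {celsius:.1f} C")

def sweet_ramp(elements=(0, 1), low=20.0, high=35.0, step=0.5, dwell_s=0.1):
    """Cool the tongue-tip elements, then ramp them back up to warm."""
    t = high
    while t > low:                 # cooling phase
        for i in elements:
            set_element_temp(i, t)
        t -= step
        time.sleep(dwell_s)
    while t < high:                # re-warming phase: the "sweet" part
        for i in elements:
            set_element_temp(i, t)
        t += step
        time.sleep(dwell_s)

sweet_ramp()
```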

Cool.

As the researchers point out, this might be used to satisfy desires for sweet tasting food, without consuming as much sugar.

It could also be used, as I have implied, to add “flavor” to a VR experience. Now you can taste that imaginary soda drink!

This technology is a bit iffy at this point. Accessing the tongue is awkward (though soft bots and nanotech would be a lot less obtrusive), and there is considerable “slop” in the location of the areas to stimulate.

The researchers note individual differences, and I would expect considerable variability in broader populations, depending also on experience and environment (just how much sweet food the subjects regularly eat).

Still, it is pretty cool to make this work even partly.


  1. Nimesha Ranasinghe and Ellen Yi-Luen Do, Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue, in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016, ACM: Tokyo, Japan. p. 127-128. http://dl.acm.org/citation.cfm?id=2985729
  2. William R. Sherman and Alan B. Craig, Understanding Virtual Reality: Interface Application and Design, San Francisco, Morgan Kaufmann, 2003. http://store.elsevier.com/Understanding-Virtual-Reality/William-R_-Sherman/isbn-9780080520094/