Category Archives: Haptic Interfaces

Funcushion: Active pillow or soft interface

I’ve long been a fan of “soft interfaces”, including cushion-based interfaces.

Last fall a group of researchers from Tokyo reported another cool take on the idea, Funcushion [1].

The basic idea is soft cushions that respond to touches with glowing visual patterns.  In this design, the patterns are printed on the fabric with transparent fluorescent ink, which is activated by ultraviolet illumination from inside the cushion.

The process itself is simple: the patterns can be designed with any program and printed on the cloth with an inkjet printer.  The user’s touch is detected with a module that senses changes in reflected IR.  The same module turns a UV beam on and off to excite the printed pattern.  The cushion is filled with a soft material that reflects IR and transmits UV.  Cotton is apparently a good material for the surface, but synthetic fibers are better for the stuffing.
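
Just to make the sensing logic concrete, here is a toy Arduino-style sketch of how such a module might behave.  The pin assignments, threshold, and polling rate are my own guesses for illustration, not details from the paper.

```cpp
// Toy sketch of Funcushion-style sensing: watch for a jump in
// reflected IR (a press), and excite the fluorescent print with UV.
// Pins and threshold are hypothetical and would need calibration.
const int IR_SENSOR_PIN   = A0;   // photodiode reading reflected IR
const int UV_LED_PIN      = 9;    // UV source behind the fabric
const int TOUCH_THRESHOLD = 600;  // raw ADC value; calibrate in situ

void setup() {
  pinMode(UV_LED_PIN, OUTPUT);
}

void loop() {
  int reflected = analogRead(IR_SENSOR_PIN);
  // Pressing the cushion compresses the IR-reflective stuffing,
  // raising the reflected signal above the threshold.
  if (reflected > TOUCH_THRESHOLD) {
    digitalWrite(UV_LED_PIN, HIGH);  // light up the printed pattern
  } else {
    digitalWrite(UV_LED_PIN, LOW);
  }
  delay(20);  // poll at roughly 50 Hz
}
```

In a real cushion the threshold would have to be calibrated against ambient IR and the reflectivity of the particular stuffing.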

This is kind of neat, though the sensors, projector, and Arduino controller suck power and generate heat.  Inevitably, an “active pillow” is going to be less pillowy.  But this idea looks like it is at least competitive with equipping cloth with LEDs (as Google and IBM have tried).

I’ll note that using an Arduino opens the way for other features.  For one thing, the cushion can respond sonically, too. (See, for example, Schiphorst’s soft(n) [2, 3, 4].)  It can also have much more complex logic and other sensors, including accelerometers (again, see Schiphorst).


  1. Kohei Ikeda, Naoya Koizumi, and Takeshi Naemura, FunCushion: Fabricating Functional Cushion Interfaces with Fluorescent-Pattern Displays, in Advances in Computer Entertainment Technology, A.D. Cheok, M. Inami, and T. Romão, Editors. 2018, Springer International Publishing: Cham. p. 470-487. https://link.springer.com/chapter/10.1007/978-3-319-76270-8_33
  2. Thecla Schiphorst, Soft, softer, softly: whispering between the lines, in aRt+D: Research and Development in Art. V2_Publishing, NAi Publishers, Rotterdam, 2005, 166-176. http://www.sfu.ca/~tschipho/publications/aRt&D-Schiphorst-Chapter-pp166-176.pdf
  3. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA. https://dl.acm.org/citation.cfm?id=1520345
  4. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, in Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth. https://www.academia.edu/207432/The_Varieties_of_User_Experience_Bridging_Embodied_Methodologies_from_Somatics_and_Performance_to_Human_Computer_Interaction

Disney “Force Jacket”


It’s CHI-season!  Or the Season Of The CHI!  Or something.

The annual Conference on Human Factors in Computing Systems (CHI) always generates a flood of interesting, wonderful, and not so wonderful projects.

These projects are a perennial source of blog-fodder (blodder?).


Starting this year’s batch:  a “force jacket” from Disney Research in Pittsburgh [1].

This project is a haptic interface for the torso, using pneumatic bags to simulate touch. The inflatable bladders are capable of exerting significant force, and can be inflated and deflated very rapidly to vibrate as well.  The control system manipulates changes in pressure and vibration to simulate sensations, “such as punch, hug, and snake moving across the body”.

This technology is intended to be used as part of a VR system, to add haptic sensations in coordination with the visual and auditory program.  E.g., when you see a snake crawl onto you, you should feel the snake.  (Obviously, correlated stimuli to multiple senses are important for both realism and user sanity.)

These kinds of techniques have been used before, of course.  One contribution is that this project implements vibratory stimuli by vibrating the air sacs, rather than via motors.  Thus the pneumatic system provides two types of signals.


Another interesting part of the work is the software tool for “editing” haptic signals.  The jacket has more than two dozen bladders, with a vast range of possible actuation.  This is a huge logical “space” for what the user might feel.
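
To get a feel for just how big that “space” is, here is one hypothetical way an editing tool might represent a feel effect as per-bladder keyframes.  The data structures and numbers are my own sketch, not the authors’ actual software.

```cpp
// Hypothetical data model for a pneumatic "feel effect":
// an ordered list of per-bladder keyframes. Not the authors' tool.
#include <string>
#include <vector>

struct BladderFrame {
  int   bladderId;    // which of the ~26 bladders to actuate
  float pressureKPa;  // target inflation pressure
  float vibrationHz;  // 0 = steady squeeze, >0 = pulsed vibration
  float startSec;     // when this frame begins
  float durationSec;  // how long it is held
};

struct FeelEffect {
  std::string name;   // "hug", "rain", "heartbeat", ...
  std::vector<BladderFrame> frames;
};

int main() {
  // A crude "hug": chest bladders squeeze slowly, sides follow.
  FeelEffect hug{"hug", {
      {0, 40.0f, 0.0f, 0.0f, 1.5f},
      {1, 40.0f, 0.0f, 0.0f, 1.5f},
      {5, 25.0f, 0.0f, 0.2f, 1.3f},
  }};
  return 0;
}
```

Even this toy version makes the combinatorics obvious: two dozen bladders crossed with pressure, vibration, and timing is far too large a space to explore without a good editor.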

Within this “space” they examined 16 different concepts, including “hug”, “heartbeat”, “rain”, impacts, and creepy crawlies.  The tactile representations of these mental categories were developed from human ratings of candidate settings.  (We may wonder just how “expert” these subjects were, e.g., about snakes crawling on them.)

The paper reports several user studies that validate these mappings of concepts to sensations.

The resulting haptic signals were deployed in a VR system, e.g., a virtual snowball fight in which the user could feel the impact of the missile if it hit him.

The researchers note that the current system is far too awkward.  The user is connected to a compressor via a nest of tubes, which kind of ruins the “immersive” experience.  This is a common issue with pneumatic actuators:  they are powerful and amazingly flexible, but the power system is scarcely wearable.

Nevertheless, this is an interesting study, not least because of the sophisticated approach to designing and “debugging” complicated haptic experiences with their design tool and human studies.


I’ll add a cautionary note, though. The researchers implicitly assume that the sensory signals are pretty much universal, i.e., everybody will find that the “heavy rain” feels like real rain.  They are seeking “canonical” mappings between the words/concepts and the stimuli, and therefore are assuming that these mappings are unaffected by culture, language, or personal experience.  (That’s what the term canonical means.)

This may or may not be a valid assumption, at least for some of the concepts.  For example, experience with actual snakes might matter, as might a phobic reaction to snakes.  There are also many kinds of rain, so a person might have considerably different experiences with what rain feels like.  Desert inhabitants have dramatically different experiences from people who live on stormy coasts.  Urban residents anywhere may have very little experience of rain on their body—but plenty of experience of the artificial rain of a bathroom shower.

In this context, I note that the highest “good” rating, i.e., ‘perceived realism’, was for “Motorcycle”. The paper does not report how familiar the raters were with real motorcycles.  It is likely that they had a range of experience with motorcycles, and these differences are certainly relevant to their ratings of realism.

In short, the notion that there should be a universal, canonical mapping is debatable, but would be interesting to investigate.

Aside from the interesting research that might be done on “cultural haptics” and “experiential haptics” (i.e., the effects of, say, training and vocabulary), this issue could be quite important for commercial systems.  If a VR system is designed based on one population, it may or may not have the same effects for other populations of users.  I would suggest that some broader and also cross-cultural research would be in order.

I found it interesting to see the design tools for manipulating this high dimensional design space.  It made me think.

It might be interesting to see if machine learning can be applied to (a) learn the mappings from examples, (b) discover patterns such as sub-populations and hidden dimensions, (c) predict mappings for new concepts, and (d) predict novel stimuli that should be pleasant.  I can imagine a version of “the Amazon trick”:  “people like you liked this sensation”.
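
For what it’s worth, here is a toy sketch of that “Amazon trick”: recommend the favorite sensation of the most similar user, using cosine similarity over ratings.  All names and data are invented for illustration.

```cpp
// Toy "people like you liked this sensation" recommender:
// user-user cosine similarity over haptic-effect ratings.
// All data are invented for illustration.
#include <cmath>
#include <iostream>
#include <vector>

double cosineSim(const std::vector<double>& a, const std::vector<double>& b) {
  double dot = 0, na = 0, nb = 0;
  for (size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (std::sqrt(na) * std::sqrt(nb));
}

int main() {
  // Columns: ratings of "hug", "rain", "heartbeat", "motorcycle".
  std::vector<std::vector<double>> ratings = {
      {5, 1, 4, 2},  // user 0: the user we recommend for
      {4, 2, 5, 1},  // user 1: similar tastes to user 0
      {1, 5, 2, 4},  // user 2: very different tastes
  };
  // Find the existing user most similar to user 0.
  int best = 1;
  double bestSim = cosineSim(ratings[0], ratings[1]);
  for (int u = 2; u < (int)ratings.size(); ++u) {
    double s = cosineSim(ratings[0], ratings[u]);
    if (s > bestSim) { bestSim = s; best = u; }
  }
  // Recommend that neighbor's top-rated effect.
  int fav = 0;
  for (int e = 1; e < (int)ratings[best].size(); ++e) {
    if (ratings[best][e] > ratings[best][fav]) fav = e;
  }
  std::cout << "nearest user: " << best << ", suggest effect #" << fav << "\n";
  return 0;
}
```

A real system would of course need many users, many effects, and some care about the sub-populations mentioned above.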

If nothing else, this project seems to open some very intriguing lines of psychological research.


  1. Alexandra Delazio, Ken Nakagaki, Roberta L. Klatzky, Scott E. Hudson, Jill Fain Lehman, and Alanson P. Sample, Force Jacket: Pneumatically-Actuated Jacket for Embodied Haptic Experiences, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 2018: Montreal. p. 1-12. https://dl.acm.org/citation.cfm?id=3173894

 

PS. Wouldn’t “Force Jacket” (or maybe FORCE JACKET) be a great name for a band?

Tail Therapy?

Social robots are the flavor of the year these days.  If robots are to live with humans (which is not a foregone conclusion, IMO), they need to mesh with human psychology.  This means they need to appear harmless and attractive, they need to understand and emit unconscious signals, and generally play nicely.  It doesn’t matter what they do, so much as how they do it.

This has led to a variety of interesting research.  Some pursue the goal of mimicking human behavior.  Other approaches use non-human forms with intelligible behavior.  There is a great range of possibilities, with more and less human-like appearance.

There are actually some really interesting questions here about the psychology of humans interacting with non-human machines, intelligent or not.  It seems pretty clear that trying to faithfully imitate human forms and nuances isn’t necessary, nor is speech.  (See perhaps the thoughtful work of Sensei Thecla Schiphorst [2, 3].)

This principle is clear in a new product, “Qoobo: A Tailed Cushion That Heals Your Heart”.  While this has been described as a “robot”, it certainly lies at the edge of that term.  It has only one behavior: waving its tail.  No face.  No dialog.  Certainly no “useful” functions.

The Tokyo-based inventor, Prof. Nobuhiro Sakata (who apparently also created necomimi in 2011), believes that this is comforting.  In fact, it is supposed to “heal your heart”, whatever that means exactly.

This is harmless, I guess, though vacuous.

But there are so many dubious aspects of this product, I can’t let it pass.


First, they have tried to carefully reproduce the motion of a cat’s tail.  It’s clear from the video that they haven’t succeeded in that effort, but in any case they seem to have no understanding of cats at all.  Swishing the tail means the cat is agitated, not happy or friendly.  A contented cat rubs and purrs, and does not swish the tail.  If you pet a cat and its tail starts moving, it is unhappy and probably going to fight and/or run.

Second, setting aside the complete misunderstanding of natural cat behavior, the project claims that the responsive behavior of the tail enhances the human’s feelings.  The crux of the case is that you “would project your emotions onto how the tail moves, and you could get a sense of healing from that”.   Well, maybe so, though there is no evidence that this is actually true.

Third, the claimed benefits are nebulous and new agey.  What exactly does “heals your heart” or a “sense of healing” mean?  However these benefits may be defined, has Qoobo been shown to actually work as advertised?  Furthermore, is it better than a placebo, such as a cushion without a tail, or a plush animal without animation?  And how does it compare to alternatives such as a real cat or even to a virtual conversation via social media?

You might hope that the product would be proved safe and effective before it is sold, but that’s not how we do things these days. In fact, they are doing a Kickstarter, and part of the work will be the unspecified pledge, “We will be conducting a proof of concept to ensure Qoobo is providing a sense of comfort to its users as intended.”

Sigh.


Qoobo is charming and cute and nice and all that. I really hate to criticize it.  But I really think you should not make claims about supposed psychological or other benefits without legitimate evidence.

  1. Qoobo, Qoobo: A pillow with a wagging tail. 2017. https://www.kickstarter.com/projects/1477302345/qoobo?ref=484yf0
  2. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA. https://dl.acm.org/citation.cfm?doid=1520340.1520345
  3. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, in Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth. https://www.academia.edu/207432/The_Varieties_of_User_Experience_Bridging_Embodied_Methodologies_from_Somatics_and_Performance_to_Human_Computer_Interaction

 

CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI. Whoa! There’s a lot of interesting stuff here [1].

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?

This is a great piece of work. They developed theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple, “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill of the designers to be able to quickly “iterate” these physical designs.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.
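
As a rough illustration of how little machinery 1-DOF “breathing” needs, here is a minimal Arduino-style sketch driving a hobby servo with a sine wave.  The servo setup and the mapping of “calm” to a slow, deep breath are my own illustrative assumptions, not the authors’ tool.

```cpp
// Minimal 1-DOF "breathing" as a sine wave on a hobby servo.
// Parameters are hypothetical: calm = slow and deep; agitated
// might be fast and shallow.
#include <Servo.h>

Servo breathServo;
const int SERVO_PIN = 9;

float breathsPerMinute = 12.0;  // rate: "calm" breathing
float depthDegrees     = 30.0;  // amplitude of the chest motion

void setup() {
  breathServo.attach(SERVO_PIN);
}

void loop() {
  float t = millis() / 1000.0;
  float phase = 2.0 * PI * (breathsPerMinute / 60.0) * t;
  // Sine wave centered on 90 degrees as the rest position.
  float angle = 90.0 + depthDegrees * sin(phase);
  breathServo.write((int)angle);
  delay(20);
}
```

In this framing, “designing an emotion” reduces to picking a rate and a depth (and perhaps shaping the waveform a little), which is exactly the kind of parameter space a sketching tool lets designers iterate over.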

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations: purring and squealing. This is not original; it is what was done in “The Trouble With Tribbles” (1967) [2].


  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.

 

Robot Wednesday

CES 2017: AxonVR Haptics

I’ve been exposed to cutting edge Virtual Reality for more than two decades now, from the labs and far-seeing mind of Sensei Alan Craig [1-3] and other local gurus. After all these years, we are finally seeing these technologies coming together, not just 3D goggles, but whole body experiences. “Tomorrow” is almost here. After all, what are “haptics” other than “virtual reality” for body senses?

The big news lately isn’t so much any novel ideas; it is that developers are finally able to bring together many senses in the same virtual world interface. And it is coming fast!

Case in point: AxonVR demonstrated their technology at CES 2017.  They are shooting for the whole body experience, including locomotion, touch, and temperature.

Evan Ackerman reported that “With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary.”

It says a lot that the well-informed and experienced Ackerman found it so cool that “I spent most of the demo giggling like a little kid.”

Beyond the CES demo, AxonVR is working on a full body exosuit with force feedback, which will simulate standing, walking, and so on.

Phew! The works!

If the full suit is as impressive as the touch and heat demonstration seems to be, this will be very impressive. (Not least because they seem to have their heads screwed on right about the software development environment.)

With wonders coming so fast, how are we mere mortals supposed to keep up?

Cool as this sounds, readers of this blog are well aware that there are yet more senses that could be folded in. Within a few years we will see a full body interface like AxonVR which also has taste and chewiness, kissing, and, of course, naughty knickers.

One of the mountains that still has to be climbed is to do creative things with multiple people in the VR. This might mean “riding along” inside someone else’s experience, or side by side, touching the same world (and each other). This multi-person version opens the way for new forms of dance and other performance art….


  1. Alan B. Craig, Understanding Augmented Reality: Concepts and Applications, San Francisco, Morgan Kaufmann, 2013.
  2. Alan B. Craig, William R. Sherman, and Jeffrey D. Will, Developing Virtual Reality, Burlington, MA, Morgan Kaufmann, 2009.
  3. William R. Sherman and Alan B. Craig, Understanding Virtual Reality: Interface, Application, and Design, San Francisco, Morgan Kaufmann, 2003.

 

Kissenger: Add A Kiss to Your Message

The haptic internet is coming fast, and it can get seriously creepy. I have talked about remote haptics for a couple of years, and looked at puppets, taste, and, of course, gropey underwear.

Let’s add to the array of networked haptics with “Kissenger”, the Kissing Messenger app. Technically, this haptic kissing interface is supposed to receive and give the feeling of kissing lips, “a realistic kissing sensation”. The main intended use is to augment a personal conversation on your mobile device with a kiss. The web page also suggests that it is “for families” and “for fans”.


Looking at the images, I can’t help but wonder just how “realistic” this might be. I’m confident that no one would ever be fooled into thinking this was a real, face-to-face kiss—there is no breath, or slobber, or warm skin. It’s a pretty chaste kiss, if I may say so.

Naturally, I immediately think about the “wrong” ways to use this device.   I mean, you can press the device against any part of your body, no? Or against any thing at all. Things can get pretty nasty, pretty fast.

Their use case “for fans” is pretty troubling when you think about it: “To connect idols and their fans from all around the world.” Ick!

That makes me think of using this as a token of submission. The dictatorial CEO expects you to bow and (remotely) kiss his whatever at the end of the meeting. Ugh!

What you can’t do, though, is really kiss each other with passion and intensity. There is kissing your auntie, and then there is really kissing your lover. The latter is part of a complete embrace, and can be edgy, unpredictable, and messy. (And scarcely restricted to the other person’s lips.)

This work is part of PhD research by Emma Zhang at Professor Adrian Cheok’s lab. We know Cheok from earlier research, so we are not surprised to see imaginative and daring ideas. We also can be confident that there will be some careful experiments to assess just how people perceive the experience, and how well they like it.