Category Archives: Haptic Interfaces

CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI. Whoa! There’s a lot of interesting stuff here [1].

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?

This is a great piece of work. They developed a theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple: “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill the designers need to “iterate” these physical designs so quickly.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.
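Just to make this concrete, here is a minimal sketch (mine, not theirs) of the kind of one-channel “breathing” signal such a tool might emit: a sine wave whose rate, depth, and sharpness are adjustable knobs, streamed to the robot’s single actuator. The parameter names and ranges are invented for illustration.

```python
import math

def breathing_waveform(t, rate_hz=0.25, depth=0.8, tension=0.0):
    """Return a 0..1 position for a single 'breathing' degree of freedom.

    rate_hz -- breaths per second (faster tends to read as agitated, slower as calm)
    depth   -- amplitude of each breath (shallow vs. deep)
    tension -- 0..1 blend toward a sharper, more 'tense' waveform
    These knobs are illustrative, not the parameters from the paper.
    """
    smooth = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))
    sharp = smooth ** 3                      # sharpen the peaks for a jumpier feel
    wave = (1.0 - tension) * smooth + tension * sharp
    return depth * wave

# Stream positions to the single servo at, say, 50 Hz.
if __name__ == "__main__":
    for i in range(200):
        t = i / 50.0
        pos = breathing_waveform(t, rate_hz=0.2, depth=0.6, tension=0.1)
        print(f"{t:5.2f}s -> servo position {pos:.2f}")
```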

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations, purring and squealing. This is not an original idea; it is what was done in “The Trouble With Tribbles” (1967) [2].


  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.

 

Robot Wednesday

CES 2017: AxonVR Haptics

I’ve been exposed to cutting edge Virtual Reality for more than two decades now, from the labs and far-seeing mind of Sensei Alan Craig [1-3] and other local gurus. After all these years, we are finally seeing these technologies coming together, not just 3D goggles, but whole body experiences. “Tomorrow” is almost here. After all, what are “haptics” other than “virtual reality” for body senses?

The big news lately isn’t so much any novel ideas; it is that developers are finally able to bring together many senses in the same virtual world interface. And it is coming fast!

Case in point: AxonVR demonstrated their technology at CES 2017. They are shooting for the whole body experience, including locomotion, touch, and temperature.

Evan Ackerman reported that “With AxonVR, the touch sensation is presented with such precision and specificity that I’m tempted to say that imagination isn’t even necessary.”

It says a lot that the well-informed and experienced Ackerman found it so cool that “I spent most of the demo giggling like a little kid.”

Beyond the CES demo, AxonVR is working on a full body exosuit with force feedback, which will simulate standing, walking, and so on.

Phew! The works!

If the full suit is as impressive as the touch and heat demonstration seems to be, this will be very impressive. (Not least because they seem to have their heads screwed on right about the software development environment.)

With wonders coming so fast, how are we mere mortals supposed to keep up?

Cool as this sounds, readers of this blog are well aware that there are yet more senses that could be folded in. Within a few years we will see a full body interface like AxonVR’s that also has taste and chewiness, kissing, and, of course, naughty knickers.

One of the mountains that still has to be climbed is to do creative things with multiple people in the VR. This might mean “riding along” inside someone else’s experience, or side by side, touching the same world (and each other). This multi-person version opens the way for new forms of dance and other performance art….


  1. Alan B. Craig, Understanding Augmented Reality: Concepts and Applications, San Francisco, Morgan Kaufmann, 2013.
  2. Alan B. Craig, William R. Sherman, and Jeffrey D. Will, Developing Virtual Reality, Burlington, MA, Morgan Kaufmann, 2009.
  3. William R. Sherman and Alan B. Craig, Understanding Virtual Reality: Interface Application and Design, San Francisco, Morgan Kaufmann, 2003.

 

Kissenger – Add A Kiss to Your Message

The haptic internet is coming fast, and it can get seriously creepy. I have talked about remote haptics for a couple of years, and looked at puppets, taste, and, of course, gropey underwear.

Let’s add to the array of networked haptics with “Kissenger”, the Kissing Messenger app. Technically, this haptic kissing interface is supposed to receive and give the feeling of kissing lips, “a realistic kissing sensation”. The main intended use is to augment a personal conversation on your mobile device with a kiss. The web page also suggests that it is “for families” and “for fans”.

Photo: Emma Yann Zhang

Looking at the images, I can’t help but wonder just how “realistic” this might be. I’m confident that no one would ever be fooled into thinking this was a real, face-to-face kiss—there is no breath, or slobber, or warm skin. It’s a pretty chaste kiss, if I may say so.

Naturally, I immediately think about the “wrong” ways to use this device.   I mean, you can press the device against any part of your body, no? Or against any thing at all. Things can get pretty nasty, pretty fast.

Their use case “for fans” is pretty troubling when you think about it: “To connect idols and their fans from all around the world.” Ick!

That makes me think of using this as a token of submission. The dictatorial CEO expects you to bow and (remotely) kiss his whatever at the end of the meeting. Ugh!

What you can’t do, though, is really kiss each other with passion and intensity. There is kissing your auntie, and then there is really kissing your lover. The latter is part of a complete embrace, and can be edgy, unpredictable, and messy. (And scarcely restricted to the other person’s lips.)

This work is part of PhD research by Emma Zhang at Professor Adrian Cheok’s lab. We know Cheok from earlier research, so we are not surprised to see imaginative and daring ideas. We also can be confident that there will be some careful experiments to assess just how people perceive the experience, and how well they like it.

Chewing Food in Virtual Reality

Continuing in the theme of “Virtual Reality is not just 3D vision”, Arinobu Niijima and Takefumi Ogawa of The University of Tokyo propose to use electrical muscle stimulation to simulate the texture of food! [1]

Like the sense of taste, the chewiness and hardness of food are an intimate experience, relying on sensations from the mouth and face muscles. Yet a virtual reality experience of “eating” would be completely meaningless without feeling the food as you chew it.

As the researchers comment, direct manipulation (e.g., with a mouth piece) is intrusive, inflexible and potentially uncomfortable.

This month Niijima and Ogawa describe a system that uses electrical stimulation of the face muscles to create sensations of resistance that simulate the sensations of chewing various foods. Their insight is that varying the stimulation in two ways feels like chewing: the “strength” (frequency of pulses) conveys hardness, and the duration conveys “chewiness”.
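As a toy illustration of that mapping (my numbers, not the authors’ calibration), hardness could set the pulse frequency and chewiness could set how long each burst of stimulation lasts:

```python
from dataclasses import dataclass

@dataclass
class FoodTexture:
    hardness: float   # 0 (soft) .. 1 (hard)
    chewiness: float  # 0 (crisp) .. 1 (chewy)

def ems_parameters(texture: FoodTexture):
    """Map a food texture to EMS pulse settings: pulse frequency tracks
    hardness, burst duration tracks chewiness. The numeric ranges are
    placeholders for illustration, not values from the paper."""
    freq_hz = 20 + 80 * texture.hardness      # stiffer resistance per pulse
    burst_ms = 100 + 400 * texture.chewiness  # longer "give" per bite
    return freq_hz, burst_ms

for name, tex in [("cracker", FoodTexture(0.7, 0.1)),
                  ("gummy", FoodTexture(0.3, 0.9))]:
    f, d = ems_parameters(tex)
    print(f"{name}: {f:.0f} Hz pulses for {d:.0f} ms per chew")
```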

Combined with the ideas from the Singapore group, this basic idea opens the possibility of programming a “meal” in virtual reality, delivering simulated taste and texture!

“Put together, all of these technologies could one day be incorporated into a virtual reality headset to create a multisensory dining experience.” (From Victoria Turk [2].)

Furthermore, I would add haptics such as ultrahaptics, to give the sensation of picking up and moving virtual food to the mouth.

Amazing!


  1. Arinobu Niijima and Takefumi Ogawa, Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation, in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016, ACM: Tokyo, Japan. p. 199-200. http://dl.acm.org/citation.cfm?id=2984768
  2. Victoria Turk (2016) Face electrodes let you taste and chew in virtual reality. New Scientist. https://www.newscientist.com/article/2111371-face-electrodes-let-you-taste-and-chew-in-virtual-reality/

Tasting Food in Virtual Reality

Virtual Reality is not just about 3D vision and sound, it is about simulating all human senses. This is one reason why haptics are so important and interesting, as well as wind systems, treadmills and other whole body interfaces. (See Sensei Alan Craig [2] for extensive discussion of the theory and practice.)

Some of the most difficult senses to “virtualize” have been smell and taste. These chemical senses are intimate and “slow moving”—you have to stimulate receptors on the tongue, and the “signal” operates at the speed of molecular transport (rather than the speed of light). Also, classically, the signals are triggered by a vast array of specific molecules, which is a messy and unintuitive “alphabet”.

My ignorance of this topic was revealed by an amazing paper from Nimesha Ranasinghe and Ellen Yi-Luen Do of National University of Singapore, describing their “Virtual Sweet” system. This device uses thermal stimulation to the tongue to induce the sensation of “sweetness” [1].

The system builds on research showing that thermal stimulation (heating and cooling) of areas of the tongue can trigger the electrochemical response of taste buds, producing various taste sensations.

The researchers created a precisely controlled array of heating / cooling elements that can be programmed to generate the required patterns. In this case, they generate sensations of sweetness.
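Roughly speaking, the controller has to walk a tongue-contact element through a warm-up, hold, and cool-down cycle. Here is a hedged sketch of such a cycle; the temperatures, timings, and the set_temp driver hook are placeholders of mine, not values or interfaces from the paper.

```python
import time

def thermal_sweet_cycle(set_temp, low_c=25.0, high_c=35.0,
                        ramp_s=2.0, hold_s=3.0, steps=20):
    """Drive a tongue-contact thermal element through a warm-up / hold /
    cool-down cycle of the general sort used to evoke a 'sweet' sensation.
    `set_temp` is whatever function talks to the Peltier driver; all the
    numbers here are illustrative placeholders."""
    for i in range(steps + 1):                      # ramp up
        set_temp(low_c + (high_c - low_c) * i / steps)
        time.sleep(ramp_s / steps)
    time.sleep(hold_s)                              # hold at the warm target
    for i in range(steps + 1):                      # ramp back down
        set_temp(high_c - (high_c - low_c) * i / steps)
        time.sleep(ramp_s / steps)

# Example with a stand-in driver that just logs the commands.
thermal_sweet_cycle(lambda c: print(f"element -> {c:.1f} C"))
```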

Cool.

As the researchers point out, this might be used to satisfy desires for sweet tasting food, without consuming as much sugar.

It could also be used, as I have implied, to add “flavor” to a VR experience. Now you can taste that imaginary soda drink!

This technology is a bit iffy at this point. Accessing the tongue is awkward (though soft bots and nanotech would be a lot less obtrusive), and there is considerable “slop” in the location of the areas to stimulate.

The researchers note individual differences, and I would expect that there will be considerable variability in broader populations, and also depending on experience and environment (just how much sweet food the subjects regularly eat).

Still, it is pretty cool that they made it work even partly.


  1. Nimesha Ranasinghe and Ellen Yi-Luen Do, Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue, in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016, ACM: Tokyo, Japan. p. 127-128. http://dl.acm.org/citation.cfm?id=2985729
  2. William R. Sherman and Alan B. Craig, Understanding Virtual Reality: Interface Application and Design, San Francisco, Morgan Kaufmann, 2003. http://store.elsevier.com/Understanding-Virtual-Reality/William-R_-Sherman/isbn-9780080520094/

 

Tangible Swarm Interface

Many of us have been dreaming of tangible interfaces and smart matter for many years now (Ivan Sutherland dreamed of this over 50 years ago [2]). We are finally beginning to see inklings of real systems.

This fall Mathieu Le Goc and colleagues demonstrated their “Zooids: Building Blocks for Swarm User Interfaces”. The technology is a swarm (dozens) of 2.6 cm disks, each of which is a little mobile robot. They are deployed on a table top, where they act as a bunch of tangible pixels, for both input and output. The positions of the swarm are tracked via structured light, and the robots are guided by wireless instructions.
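To make the architecture concrete, here is a toy control tick (my sketch, not the project’s actual API): read the tracked positions, compare them to a target layout, and radio each Zooid a small velocity command toward its goal.

```python
import math

def step_swarm(positions, targets, send_velocity, speed=0.05):
    """One tick of a toy swarm controller.

    positions     -- {zooid_id: (x, y)} from the tracking system
    targets       -- {zooid_id: (x, y)} goal layout (e.g. a shape to display)
    send_velocity -- callback that radios a (vx, vy) command to one robot
    This is an illustrative loop, not the Zooids project's real interface."""
    for zid, (x, y) in positions.items():
        tx, ty = targets[zid]
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist < 1e-3:
            send_velocity(zid, 0.0, 0.0)   # close enough: stop
        else:
            send_velocity(zid, speed * dx / dist, speed * dy / dist)

# Stand-in radio that just prints the commands.
step_swarm({0: (0.0, 0.0), 1: (0.2, 0.1)},
           {0: (0.1, 0.1), 1: (0.2, 0.3)},
           lambda zid, vx, vy: print(f"zooid {zid}: v=({vx:+.3f}, {vy:+.3f})"))
```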

It’s pretty cool!

The hardware and software are open source (Prof. McGrath awards extra credit), and include an interesting editor that is “[i]nspired by traditional stop motion animation tools” ([1], p. 101). Cool!

The researchers are interested in design principles for “swarmUIs”. Their paper notes some initial ideas ([1]):

  • Number and granularity of particles (Zooids are middling)
  • Fixed versus movable elements (and possibly some of each)
  • Fixed or variable number of elements
  • Interchangeable or identifiable elements
  • Direct, indirect, and hybrid manipulation (see video)
  • Distinguished roles
  • Other visual elements (e.g. projections, etc.)

These are only the beginning; there are lots of interesting design questions to explore.

I’d like to build truly interactive tangible interactions, if that makes sense. Try to envision this scenario:

the swarm should dance around, attracting attention and showing me where to “grab” or “push”. When I stick my hand in, elements would swarm around the hand, touching and caressing me, and leading me to one or more possible gestures. (This might well be coordinated with other visual and audio information, of course.) When I make a gesture, the elements can guide the movement, preventing gross error, and tangibly indicating that the system is getting the message.

Another interesting area to explore is multiuser interfaces. Lots of interesting things to try here:

  • Transmitting messages via “touch” – I push part of the swarm, another part of the swarm pushes on you.
  • Cooperative (or competitive) teamwork—complex actions that require the right combination of gestures from two or more people.
  • Displays that fade from your attention while directing attention to the other people

We could also do a lot of interesting things with the swarm integrated into other digital systems, such as Augmented Reality. This would extend the effects of the gestures in many ways.

Finally, I would note that this would make a great game for cats.

Enough for now.

This is a very inspiring project, and I’m so glad that they open sourced it so others can try some of these other crazy ideas.


  1. Mathieu Le Goc, Lawrence H. Kim, Ali Parsaei, Jean-Daniel Fekete, Pierre Dragicevic, and Sean Follmer, Zooids: Building Blocks for Swarm User Interfaces, in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016, ACM: Tokyo, Japan. p. 97-109. http://dl.acm.org/citation.cfm?doid=2984511.2984547
  2. Ivan E. Sutherland, The Ultimate Display, in Proceedings of the IFIP Congress. 1965: New York. p. 506–508. http://worrydream.com/refs/Sutherland%20-%20The%20Ultimate%20Display.pdf

 

Robot Wednesday

Ultrahaptic Magic

Yet another project that is just plain jaw dropping. (My jaw is getting sore from all this dropping.)

For a decade and more, haptic interfaces have been coming along, pushing (get it?) the envelope of digital interfaces that enable you to feel the shape and texture of virtual objects. Everyone with a smartphone has options for “haptic” notification, but this is just a buzzer—baby stuff!

There are gloves and other wearables which are programmed to push on human skin, which can be programmed to push just like an object would—so you can feel the thing you see on the screen or 3D goggles.

Even more amazing are contactless systems that project the touch into the environment. (E.g., this or this.) One technology to achieve this is ultrasound: focused ultrasound creates pressure in the air, which the human skin can feel. Wow! Very spooky.

This research has examined what you need to do to “fool” the skin. What pattern of touch makes something feel smooth, rough, soft, furry? It turns out that this is quite doable, though sophisticated textures require not only fine spatial resolution, but also very fast changes in time.

Research out of the University of Bristol is coming to market next year. Rapid technical advances have pushed the refresh rate to 10,000 frames per second, and the range to 10 centimeters or more. This is good enough to project buttons and other objects to be handled.
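The basic focusing trick is easy to sketch: phase each transducer so its wave arrives at the focal point in step with the others, so the pressure adds up there. The little array below is invented for illustration and is not the company’s actual hardware or API.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # a typical ultrasonic transducer frequency

def phase_delays(transducers, focus):
    """Phase offset (radians) for each transducer so their waves arrive
    in phase at `focus`, creating a pressure point in mid-air.
    The geometry is a made-up 4x4 grid, purely for illustration."""
    wavelength = SPEED_OF_SOUND / FREQ
    phases = []
    for element in transducers:
        d = math.dist(element, focus)
        phases.append((2.0 * math.pi * d / wavelength) % (2.0 * math.pi))
    return phases

# A toy 4x4 array with 1 cm pitch, focusing 10 cm above the plane.
array = [(0.01 * i, 0.01 * j, 0.0) for i in range(4) for j in range(4)]
print(phase_delays(array, (0.015, 0.015, 0.10)))
```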

No gloves, just bare hands! Cool!

Combined with motion tracking, this technology can be used in VR to let you feel the objects you are touching. The company also has ideas about “touchless” buttons, e.g., in hospitals, or for controlling a kitchen range without touching any surfaces.

The development kit (coming Real Soon Now) includes a “Sensation Editor”, which sounds intriguing!


This technology is a bit bulky for wearables yet, but let’s think about smart clothing. The inner layer will be smart fabric which is both functional (wicking away moisture, keeping a comfortable temperature) and also programmed for luxurious textures—the feel of silk lining and mink collar, digitally simulated.

But the outer side of the garment is programmable for appearance and also projects textures several hand widths out from the garment with ultrasound. It feels like silk to me on the inside, but the outside feels like sealskin, or ivory, or granite, or bare skin.


And if I don’t want you to touch me, I switch to jagged broken glass. Hands off, nerd boy!


A less desirable application would be a form of digital torture. If the VR can project buttons and “bubbles”, then it can also project ticklish or pricklish or just plain weird sensations. Combined with confusing and alarming visual and auditory sensations, mismatched with the haptics, you could make a pretty unpleasant situation. Add in physiological sensors, and the system could carefully tune the torture to maximize discomfort and stress.

What about the elephant in the room: digital sex play?

Well, there are certainly some uses for this kind of stimulation, e.g., some people might be excited by the artificial feeling of being slathered with chocolate syrup or whatever, without the mess to clean up. And obviously, being caressed by invisible or virtual fingers, tongues, etc. could be interesting in some situations.

But I’m thinking that the most interesting kinds of intimate touch are mutual, skin against skin. So, I look forward to the first demo of two-way ultrahaptics.

This can surely be done: track the movements of two people, model how their skin is touching, and project the relevant textures. With the option to augment the sensations, and to track the excitement of both “players”. Imagine autotune for holding hands…. I note that besides spatial and temporal granularity, latency will be deeply important—slight delays in the signal will be maddening!
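A hedged sketch of what one tick of that loop might look like, with entirely made-up function names: take the points where each person’s tracked hand meets the shared body model, render a texture at those points for the other person, and keep an eye on how long the round trip takes.

```python
import time

def mutual_touch_step(contacts_a, contacts_b, render_a, render_b,
                      texture="skin"):
    """One tick of a hypothetical two-way haptic link.

    contacts_a / contacts_b -- lists of (x, y, z) points where each person's
        tracked hand intersects the shared virtual body model
    render_a / render_b     -- callbacks that project a texture at a point
        for person A or person B
    Every name here is illustrative; this is not a real device API."""
    t0 = time.monotonic()
    for p in contacts_a:       # A's touches are felt by B, and vice versa
        render_b(p, texture)
    for p in contacts_b:
        render_a(p, texture)
    return (time.monotonic() - t0) * 1000.0   # local loop time in ms

# Stand-in renderers that just log what would be projected.
ms = mutual_touch_step([(0.10, 0.20, 0.05)], [],
                       lambda p, t: print("A feels", t, "at", p),
                       lambda p, t: print("B feels", t, "at", p))
print(f"local loop took {ms:.3f} ms (network latency would add to this)")
```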

This will let you touch each other across the internet—though latency will probably limit the effects.

Er. Um. I think I’m wandering a bit here.

Cool stuff, and there are way, way more interesting uses than virtual knobs on your car radio!