Category Archives: Interface Design

CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI. Whoa! There’s a lot of interesting stuff here [1].

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?

This is a great piece of work. They developed theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple: “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill of the designers, who are able to quickly “iterate” these physical designs.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.
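In spirit, the control signal is just a parameterized sine wave. Here is a toy sketch of what such a “breathing” generator might look like (my own illustration, not the team’s actual tools; the parameter names are invented):

```python
import math

def breathing_waveform(duration_s, rate_hz, depth, sample_rate=50):
    """Servo positions (0..1) for a 'breathing' motion.

    rate_hz -- breaths per second (an arousal-like knob)
    depth   -- amplitude of each breath, 0..1 (an intensity-like knob)
    """
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Center the sine at mid-travel so the servo never leaves 0..1.
        samples.append(0.5 + 0.5 * depth * math.sin(2 * math.pi * rate_hz * t))
    return samples

# A calm, slow breath versus a rapid, agitated one.
calm = breathing_waveform(duration_s=4, rate_hz=0.25, depth=0.6)
agitated = breathing_waveform(duration_s=4, rate_hz=1.5, depth=1.0)
```

The interesting design question is exactly the mapping from emotion words to knobs like `rate_hz` and `depth`, which is what the studies probe.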

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations: purring and squealing. This is not original; it is what was done in “The Trouble With Tribbles” (1967) [2].

  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.


Robot Wednesday

MistForm Display

Reported this week at CHI, MistForm is “a shape changing fog display that can support one or two users interacting with either 2D or 3D content.” ([1], p. 4383)  Cool!

The basic idea of this kind of display is to generate a “fog” of water droplets in front of the person, and project information from the back. With clever geometry, the projection is seen by the eye as 3D objects hanging in mid air. The cool thing is that the user can reach into the fog to touch the objects hanging there.

This version from Yutaka Tokuda and colleagues at the University of Sussex adds the wrinkle that the shape of the fog can be manipulated, to create a curved “screen” [1]. This calls for clever geometric computations, to account not only for the fog and the eye, but also for the curvature of the fog. The latter is computed from the position of the pipes that generate the mist.

The projection is, in principle, “mere geometry”. Working from the eye position (via head tracking), the color and brightness of each pixel is computed. Working backwards, the pixel is mapped to a region of the fog, and then back to the projector. Voila.
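With a flat fog plane, the core of that mapping is just similar triangles. A toy sketch (mine, not the paper’s; MistForm’s real computation additionally accounts for the curved fog surface):

```python
def fog_crossing(eye, obj, fog_z):
    """Point where the eye->object sight line crosses a flat fog 'screen'.

    eye, obj -- (x, y, z) positions; fog_z -- depth of the fog plane.
    The projector must light this fog point for the object to appear
    to float at obj from the tracked eye position.
    """
    ex, ey, ez = eye
    ox, oy, oz = obj
    s = (fog_z - ez) / (oz - ez)          # fraction of the way along the ray
    return (ex + s * (ox - ex), ey + s * (oy - ey), fog_z)

# An object meant to float 2 m away, seen from the origin, fog at 1 m:
spot = fog_crossing(eye=(0.0, 0.0, 0.0), obj=(0.2, 0.4, 2.0), fog_z=1.0)
```

Repeating this per pixel, and per eye position as the head moves, gives the “mere geometry” of the display.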

Interacting with the display uses hand tracking with a Kinect. The fog is segmented into regions that can be touched (“actuators”). This is coordinated with the projected objects, so the user can reach into the fog and “touch” an object in a natural motion.


This is a very nice piece of work indeed. The paper [1] gives lots of details.

This is a great example of the potential of projective interfaces, which will replace the ubiquitous screen in the coming decade or two. (If you have any doubts, take a gander at this wizardry from some Illinois alums.)

Of course, the mountain we have to climb is to make one big enough and clever enough that we can walk into it. This will also combine with haptics, so the objects ‘push back’ when you touch them. Now that will be cool.

  1. Yutaka Tokuda, Mohd Adili Norasikin, Sriram Subramanian, and Diego Martinez Plasencia, MistForm: Adaptive Shape Changing Fog Screens, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 4383-4395.
  2. University of Sussex. MistForm: adaptive shape changing fog screens. 2017.

PS. Wouldn’t “Shape Changing Fog Screen” be a great name for a band?
Or how about, “The Fog and the Eye”.

RoboThespian: Uncanny or Just Plain Unpleasant?

RoboThespian  is disturbing.

I think this particular humanoid robot has climbed out of the uncanny valley of discomfort, and ambled out onto the plain of the extremely annoying coworker. Disney animatronics gone walkabout.

“RoboThespian is a life sized humanoid robot designed for human interaction in a public environment. It is fully interactive, multilingual, and user-friendly, making it a perfect device with which to communicate and entertain.”

Clearly, these guys have done a ton of clever work, integrating human like locomotion, speech synthesis, projection, face tracking, and serious chat bot software.

“The standard RoboThespian design offers over 30 degrees of freedom, a plethora of sensors, and embedded software incorporating text-to-speech synthesis in 20 languages, facial tracking and expression recognition. The newly developed RoboThespian 4.0 will offer a substantial upgrade, adding additional motion range in the upper body and the option of highly adept manipulative hands.”

What can you do with all this? I think the key clue is that the programming is done via a GUI environment resembling Blender, which means that you basically create a computer generated scene, which is then “rendered” in physical robots.

Much of the spectacular effect is due to well coordinated facial expressions, head movement, and speech. The robot also has sensors to detect people and especially faces, and to orient to them. It also has facial expression recognition, which lets it “reproduce” facial expressions. All these effects are “uncanny”, and make the beast appear to be talking to you (or singing at you). Ick!

All this is in the pursuit of…I’m not sure what.

I grant you that this is a great effect, at least on video. But what is it for?

The title and demos suggests that it replaces human thespians (live onstage), which seems far fetched. If you want mechanized theater, you always have computer generated movies. As far as I can tell, the main use case is for advertising, e.g., trade show demos. It either replaces human presenters (demo babes) or it replaces video billboards.

They also suggest that this is a good device for telepresence. It “can inhabit an environment in a more human manner; it’s the next best thing to being there.” I’m not at all sure about that. Humanoid appearance is not really important for effective telepresence in most cases, and there is no reason to think this humanoid is well suited for any given telepresence situation.

Let me be clear: this product is really nicely done.  I do appreciate a well crafted system, integrating lots of good ideas.

But I really don’t see that RoboThespian is anything other than a flashy gimmick. (Human actors are way, way cheaper, and probably better.)

On the other hand, when I saw the first computer mouse on campus, I declared that it was a useless (and stupid) interface, and no one would ever use it.   I was wrong about mice (Boy was I wrong!), so my intuitions about humanoid chatter bots may be wildly off.

Update May 4 2017: Corrected to indicate that Engineered Arts does not use Blender, as the original post said. I must have seen some out of date information. Engineered Arts have their own environment which, if not built from Blender, is built to look just like it. Thanks to Joe Wollaston for the correction.


Robot Wednesday

“When machines control us”

Patrick Baudisch, Pedro Lopes, Alexandra Ion, Robert Kovacs, and David Lindlbauer have made a bit of a splash with their interactive art installation, ad infinitum: a parasite that lives off human energy [1].

This must be “art” because it is transgressive, and has no other reason to exist, except to transgress.

The device is essentially a torture machine, designed to trap and enslave a person, and make him pull a lever to generate electricity to power the trap.

“When visitors to the gallery put their hand inside the installation’s clear rectangular case, the machine’s two cuffs clamp down on their arm while their hand rests on an energy-generating lever. Each cuff is equipped with an electrode and once the machine senses the arm inside it, it sends a small electrical jolt to visitors’ arm muscles, causing them to automatically contract and start cranking the lever. Once the visitors start cranking of their own accord, the electrical current stops. But if they get too lazy, the machine gives them another little buzz, forcing them to keep cranking.” (from [2])

The second part of the joke is that this device will never let you go until another person takes your place in the other sleeve. This social contingency pushes the human to work to get the next victim entrapped, in order to save his own skin. To be clear, this is a form of psychological torture.
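The behavior described in the quote, jolting whenever the visitor stops cranking, amounts to a tiny state machine. A toy sketch of that control loop (my own reconstruction and naming, not the artists’ code):

```python
class ParasiteController:
    """Toy sketch of the installation's control loop, as described in [2]."""

    def __init__(self, patience=3):
        self.patience = patience   # sensing steps of inactivity tolerated
        self.idle = 0

    def step(self, cranking):
        """One sensing step; returns True when a jolt should be delivered."""
        if cranking:
            self.idle = 0          # visitor is working: no stimulation
            return False
        self.idle += 1
        return self.idle >= self.patience   # too lazy for too long: buzz
```

The point the body of the post makes is that this trivial loop, not any human operator, is what administers the coercion.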

The third part of the piece is the artists’ “message”, which is something about how we don’t realize just how much computer interfaces “use” us. (News flash: many of us have been pointing this out for many years.) The designers have been observing how people react to this vicious treatment, and they make some not especially deep remarks about agency.

“In terms of social networks and the whole fake news thing, those are the domains where we feel like we’re totally in power, but it turns out we aren’t,” Lopes says. “Once you can see an interface it becomes a different matter. When you can experience an interface where it lives with your body, versus an invisible piece of code that runs on a machine you’ll never see on the cloud? That’s so abstract. We gave up agency without even understanding we’re giving up agency.” (Quoted in [2])

The device was on show in Dublin, so I haven’t actually tried it. But I’m pretty sure I would not stick my hand in it myself.

My own view is that this “art installation” is highly unethical, poorly designed, and should never have been accepted for public interaction.

First of all, if this was a “psychology experiment” instead of “art”, it would never pass the human subjects reviews. This is an implement of torture, both physical and mental. The potential harm to the subject is difficult to justify, because there is no benefit to them or to science, or to anyone.

Worse, there is no informed consent, and participants are not allowed to withdraw consent when they want to. (There was no “opt in” nor an “opt out”: no operative consent at all.) This simply is not a properly designed psychology experiment.

Second, this is a terrible example of computer interface design. The device appears to be a harmless toy, but isn’t. There are no warning stickers, nor any safety overrides. A good interface protects the user from harm; it doesn’t trick him into harming himself.

It’s awful, awful design.

Third, the “message” depends on a completely spurious point about “agency”.

If this device were operated by a human, executing the same “algorithm”, we would rightly recognize it as cruel and obnoxious. We would not yickety yack about how “we think we are in power, but sometimes other people have power over us”. Duh. People have power over us all the time, and sometimes we don’t know it, and sometimes we think we are in charge more than we really are.

So? We proved this fifty years ago in legitimate research. (See Zimbardo and all the variations on “social control”.)

But in this case, the (lazy) bully has programmed a computer to torture the victims. So, there isn’t a human pushing the button, just a computer set up by a human.

While the victim isn’t the “user”, so much as the “used”, he is actually being “used” by the people who created the algorithm.  Instead of some minimum wage guard “just following orders”, there is a computer just following instructions.

If the artists want to make an interesting point, they might talk about how they have succeeded in concealing their role in this torture by creating a computer system with a simple, autonomous algorithm. “The computer did it, not me”, seems to be their defense.

The computer algorithm as insulation from moral responsibility.

Now that would be an interesting thing to point out, though they would have to take responsibility for their own misdeeds.

This is indeed a trenchant commentary on today’s computer based business models. Any misbehavior is OK so long as it is done by an algorithm which isn’t human and isn’t morally responsible for the crime.  See Uber.  See AirBnB.  See Facebook.  Etc.

Really, this is a terrible, terrible exhibit. If this were done in a war zone, it would be considered a war crime.

  1. Patrick Baudisch, Pedro Lopes, Alexandra Ion, Robert Kovacs, and David Lindlbauer, ad infinitum: a parasite that lives off human energy. 2017.
  2. Katharine Schwab, It’s Alarmingly Easy For Machines To Control Us: An art piece turns the user into the used, in Co.Design. 2017.

Health Apps Are Potentially Dangerous

The “Inappropriate Touch Screen Files” has documented many cases of poor design of mobile and wearable apps, and I have pointed out more than once the bogosity of unvalidated cargo cult environment sensing.

This month Eliza Strickland writes in IEEE Spectrum about an even more troubling ramification of these bad designs and pseudoscientific claims: “How Mobile Health Apps and Wearables Could Actually Make People Sicker” [2].

 Strickland comments that the “quantified self” craze has produced hundreds of thousands of mobile apps to track exercise, sleep, and personal health. These apps collect and report data, with the goal of detecting problems early and optimizing exercise, diet, and other behaviors. Other apps monitor the environment, providing data on pollution and micro climate. (And yet others track data such as hair brushing techniques.)

These products are supposed to “provide useful streams of health data that will empower consumers to make better decisions and live healthier lives”.

But, Strickland says, “the flood of information can have the opposite effect by overwhelming consumers with information that may not be accurate or useful.”

She quotes David Jamison of the ECRI Institute, who comments that many of these apps are not regulated as medical devices, so they have not been tested to show that they are safe and effective.

Jamison is one of the authors of an opinion piece in JAMA, “The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors” [1]. In this article, the authors strongly criticize the sale of monitoring systems aimed at infants, on two grounds.

First, the devices have not been proven accurate, safe, or effective for any purpose, let alone the advertised aid to parents. Second, even if the devices do work, there is considerable danger of overdiagnosis. If a transient and harmless event is detected, it may trigger serious actions such as an emergency room visit. If nothing else, this will cause needless anxiety for parents.

I have pointed out the same kind of danger from DIY environmental sensing: if misinterpreted, a flood of data may produce either misplaced anxiety about harmless background level events or misplaced confidence that there is no danger if the particular sensor does not detect any threat.

An important design question in these cases is, “Is this product good for the patient (or user)?” More data is not better if you don’t know how to interpret it.

This is becoming even more important than the “inappropriateness” of touchscreen interfaces:  the flood of cargo cult sensing in the guise of “quantified self” is not only junk, it is potentially dangerous.

  1. Christopher P. Bonafide, David T. Jamison, and Elizabeth E. Foglia, The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors. JAMA: Journal of the American Medical Association, 317 (4):353-354, 2017.
  2. Eliza Strickland, How Mobile Health Apps and Wearables Could Actually Make People Sicker, in The Human OS. 2017, IEEE Spectrum.


“Hair Coach”–with App

In recent years, CES has become an undisputed epicenter of gadgets, so I can’t let the occasion pass without at least one addition to the Inappropriate Touch Screen Files.

I’ll skip the boneheaded “Catspad”, which isn’t particularly new, and certainly makes you wonder who would want this.

I think the winner for today is the “Hair Coach”, which uses a “Smart Hair Brush” to offer you “coaching” on your hair care.

The brush itself has a microphone to listen to the hair as it is brushed (which I think is slightly cool—some kind of machine learning using the crackle of your hair), accelerometers in the brush to detect your technique (and, for the mathematically challenged, count your strokes). It also has a vibrator to provide haptic feedback (to train you to brush your hair more optimally?).
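Stroke counting from the accelerometer is presumably little more than peak detection. A toy sketch (entirely my guess at the approach, with invented units and threshold):

```python
def count_strokes(accel, threshold=1.5):
    """Count brush strokes as upward crossings of an acceleration threshold.

    accel -- acceleration magnitudes sampled from the brush handle
    (the units and the 1.5 threshold are illustrative, not the product's)
    """
    strokes = 0
    above = False
    for a in accel:
        if a >= threshold and not above:
            strokes += 1       # a new burst of motion: one stroke
        above = a >= threshold
    return strokes

# Three bursts of motion should register as three strokes.
n = count_strokes([0.2, 1.8, 2.0, 0.3, 1.7, 0.2, 0.4, 1.9, 0.1])
```

Which is to say: the “data collection process” is not exactly rocket science.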

Of course, no product would be complete without a mobile app: “the simple act of brushing begins the data collection process.” The app is supposed to give you “personalized tips and real-time product recommendations”. The latter are basically advertisements.

I will note that the materials on the web offer absolutely no indication that any of this “optimization” actually does anything at all, other than increase profits (they hope).

This product caught my eye as particularly egregious “inappropriate touch screen”, because this is clearly a case of a non-solution chasing a non-problem. (Of course, most of the “hair care” industry is non-solutions to non-problems.)

My own view is that the simple and millennia old technology of a hairbrush was not actually broken, or in need of digital augmentation. Worse, this technology actually threatens one of the small pleasures of life. The soothing, sensual brushing of your own hair can be a simple and comforting personal ritual, a respite from the cares of the day.

Adding a digital app (and advertising) breaks the calm of brushing, digitally snooping and “optimizing”, and pulling your attention away from the experience and toward the screen—with all its distractions. How is this good for you?

Add this to the Inappropriate Touch Screen Files.


Inappropriate Touch Screen

3D Viz Display of Mummy

Museums offer many opportunities* for digital augmentation, to visualize the unseen, provide context, and allow more human interaction with fragile and rare objects.

There are many technologies that could be interesting, including visualization and animation (2D and 3D), Virtual Reality, Augmented Reality, and imaginative combinations of techniques.

The digital technology is particularly valuable when it can give new views of objects to make the invisible visible, and to help tell their story.

Anders Ynnerman and colleagues discuss a clever interactive visualization of a mummy on display in the British Museum [1]. The exhibit is “simple”: a digital display lets visitors explore the insides of the mummy. The digital display presents detailed 3D visualizations computed from CT scans of the mummy.

“[The] mummy is shown on the surface of the table, but, as a visitor moves a virtual slider on the table, the muscles, organs, and skeleton reveal themselves as the skin is gradually peeled away.” ([1], p. 73)

The article sketches key elements of a “work flow of scanning, curating, and integrating the data into the overall creation of stories for the public can lead to engaging installations at museums” (p. 74)

  1. Scanning
  2. Visualization
  3. Interaction
  4. Story telling

The scanning employs now-ubiquitous CT scanning, though scanning dehydrated mummies requires adjustments. On the other hand, radiation dosage is less of an issue.

“Scanning protocols for mummies require custom settings, as the body is completely dehydrated and regular CT protocols assume naturally hydrated tissues” (p. 75)

The data is visualized by volume rendering, which can be done via ray casting. They remark that this is a highly parallel process, and therefore quite suited to contemporary GPU systems. (Graphics Processing Units (GPUs) are vector coprocessors (a la Illiac IV) designed to rapidly generate 3D scenes, i.e., for video games.)

The algorithm calculates a representation of the tissue depending on settings which reflect the physics of the materials and the X-ray data. Different settings reveal different types of tissue, and the rendering works to make the view understandable through color, texture, and other features.
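Those “settings” correspond to what graphics people call a transfer function: a mapping from CT density to brightness and opacity, composited front to back along each ray. A toy one-ray sketch (my illustration of the standard technique, not the authors’ renderer):

```python
def composite_ray(densities, transfer):
    """Front-to-back compositing of one ray through a scanned volume.

    densities -- scalar CT samples along the ray, nearest first
    transfer  -- maps a density to (brightness, opacity); changing this
                 function is what reveals different tissue types
    """
    color, alpha = 0.0, 0.0
    for d in densities:
        c, a = transfer(d)
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:   # early ray termination: already nearly opaque
            break
    return color

# A transfer function that shows only dense tissue (e.g. bone):
bone_only = lambda d: (1.0, 0.8) if d > 0.5 else (0.0, 0.0)
brightness = composite_ray([0.1, 0.6, 0.7], bone_only)
```

Running one such loop independently for every screen pixel is why the method parallelizes so well, and hence why it suits GPUs.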

These techniques generate huge amounts of data for a single study. Continuing developments in storage and data management have made it much easier to handle CT scans. High end, custom systems are no longer necessary to store and manipulate these volumes of data.

The display system is interactive, projected onto a large touch screen. This type of interface is used by experts (as seen on TV), but a public display needs to be more fool-proof and self-explanatory. They also comment that the system needs to be robust (to run unattended for hours without failing) and have consistent performance, with no lags or noticeable artifacts.

Finally, the exhibit is designed around a story. In the case of the Gebelein Man mummy, the exhibit tells about the evidence of an apparently fatal wound (a stab in the back), and the suggestion that the individual was “murdered”. This narrative ties the archaeological exhibit to familiar contemporary police fiction, and helps visitors imagine the remains as a fellow human.

To date, developing such a visualization is labor intensive and requires considerable expertise in visualization and data handling. This process can be improved in the future, to make it easier for domain experts to create interactive visualizations to present science and stories to the public.

  1. Anders Ynnerman, Thomas Rydell, Daniel Antoine, David Hughes, Anders Persson, and Patric Ljung, Interactive visualization of 3d scanned mummies at public venues. Commun. ACM, 59 (12):72-81, 2016.

* Opportunities to teach and learn and enjoy, but not  necessarily opportunities to make piles of money. Museums are generally underfunded and over-committed.