Twenty First Century Interfaces: Graphical Robot Instruction?

The recent issue of IEEE Computer magazine on “Twenty First Century User Interface Design” has several articles on interface design [1]. I already commented on the “Cuddly” interfaces (yes, please!) from the Inami lab.

A second paper from this group discusses “Graphical Instruction for Home Robots”. Programming robots to be useful in everyday settings such as homes is difficult. Low-level programming is hard and user-hostile. Only geeks will have the patience to write code for their robot, and even geeks will do a poor job of it.

The alternative is to make the robot “intelligent” enough to do useful stuff, no matter what gnarly details the environment throws at it. This can be done for, say, vacuum cleaners. A suckbot can figure out the edges of the area, detect obstacles, and deal with pets and other unexpected objects.

But how do we make robots that can interact with people, so that people can tell them what to do, or show them how to do what the human wants done? This is a problem from many angles: natural language interfaces are hard for both people and robots, it is difficult to anticipate what the robot needs to know, and, I would say, people don’t necessarily want to talk to their robots this way. (“Do what I mean, not what I say!”)

Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, and Takeo Igarashi describe graphical interfaces that split the difference. These techniques are familiar from telepresence and industrial robot systems; the authors apply them to everyday tasks.

Their examples include “Cooky”, which follows recipes programmed by the human. “Foldy” is more interesting: the human trains the robot to fold clothes. This is a difficult problem for robots, not least because humans do not necessarily know how to do it, either.

The authors acknowledge that these techniques are not new, but argue that “this has not really been explored in robotics with careful design and evaluation.” (p. 24)

I have my doubts.

First of all, the example projects suffer from the generic problem with “home robots”, which is “who cares?”

For example, to use the cooking robot, you cut ingredients and place them on an array of plates. The robot then choreographs adding the ingredients and spices, and cooking for a preset time. This automates the fun part, while leaving the chores (cleaning, chopping, etc.) to the human. Furthermore, you probably need to monitor progress anyway, because the preset timings cannot be precise: you need to check if it is done, and whether it needs “more salt” or whatever. In other words, it saves very little work, and not the work that I would pick to avoid. It probably takes some of the fun out of cooking, too.
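To make this concrete, here is a minimal sketch (my own invention, not the authors’ code) of what an open-loop, preset-time recipe program might look like; the `Step` class and `run_recipe` function are hypothetical:

```python
from dataclasses import dataclass
import time

@dataclass
class Step:
    """One pre-programmed action: dump a plate into the pot, then wait."""
    plate: int          # index of the plate holding a pre-cut ingredient
    cook_seconds: int   # fixed cooking time before the next step

# A recipe is a fixed sequence of timed steps. Note that there is no
# "taste it and adjust" branch anywhere -- the program is open-loop.
stir_fry = [
    Step(plate=0, cook_seconds=120),   # e.g., onions
    Step(plate=1, cook_seconds=300),   # e.g., chicken
    Step(plate=2, cook_seconds=60),    # e.g., spices
]

def run_recipe(recipe: list[Step]) -> None:
    for step in recipe:
        print(f"Adding plate {step.plate}; cooking for {step.cook_seconds}s")
        time.sleep(step.cook_seconds)  # no sensing, no "is it done yet?"

if __name__ == "__main__":
    run_recipe(stir_fry)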

No matter how effective the GUI might be for “teaching” the cooking process to the robot, the overall result is questionable, because cooking is an activity that probably doesn’t benefit from this kind of automation.

But how desirable are these GUI interfaces for this style of programming anyway? Over several decades I’ve learned to deal with GUI drawing and drag-and-drop interfaces. But I don’t like them, and I’m not terribly good at them. My observations suggest that people vary greatly in their aptitude for, and attraction to, GUI interfaces. So this isn’t a great solution for everyone.

(And, by the way, this is definitely a twentieth-century interface anyway.)

A GUI also throws in a level of cognitive abstraction that is probably undesirable. Programming a robot with a screen-based GUI involves making 2D drawing gestures that relate to 3D world actions. These mental gyrations may be familiar to GUI designers, but they are really alien to human psychology and intuitive physics.
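The geometric side of that mismatch is easy to state: a 2D gesture point only pins down a ray into the scene, not a 3D target, so the system has to resolve the depth ambiguity with some hidden assumption. Here is a rough sketch of the standard unprojection math (my illustration, not anything from the paper; all the names are made up):

```python
import numpy as np

def screen_point_to_ray(u, v, width, height, fov_y, cam_pos, cam_rot):
    """Unproject a 2D screen point (u, v) into a 3D ray (origin, direction).

    A single screen point does not determine a 3D target -- it only pins
    down a ray, so the depth must come from somewhere else.
    """
    aspect = width / height
    x = (2.0 * u / width - 1.0) * np.tan(fov_y / 2.0) * aspect
    y = (1.0 - 2.0 * v / height) * np.tan(fov_y / 2.0)
    direction = cam_rot @ np.array([x, y, -1.0])  # camera looks down -z
    return cam_pos, direction / np.linalg.norm(direction)

def intersect_table(origin, direction, table_z=0.0):
    """Resolve the depth ambiguity by *assuming* the gesture targets the
    horizontal table plane z = table_z."""
    t = (table_z - origin[2]) / direction[2]
    return origin + t * direction

# Camera 1 m above the table, looking straight down at it:
cam_pos = np.array([0.0, 0.0, 1.0])
origin, direction = screen_point_to_ray(400, 300, 640, 480, np.pi / 3,
                                        cam_pos, np.eye(3))
print(intersect_table(origin, direction))
```

The assumption baked into `intersect_table` (that the gesture targets the table plane) is exactly the kind of thing the user never sees, and the kind of thing that makes the mapping feel alien when it guesses wrong.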

I note that this same lab has been experimenting with Augmented Reality to create 3D gestural interfaces, which might be better.

I think using GUIs to program home robots is a decent idea, but we need better robot tasks.

For example, I think that Foldy is an interesting project that needs considerable work. While I personally don’t care much about folding my laundry, nor do I find it especially difficult, it is one of many everyday tasks that robots can’t do very well.

The teaching interface described is very crude, and probably impossible to really use. (While I’m thinking about it, shouldn’t there be something like a “grammar of laundry” or “grammar of folding”, which would make error detection a lot easier? A toy sketch of the idea follows.)
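To be clear, this is my speculation, not anything in the paper. But a fold grammar could be as simple as a finite-state machine over legal fold operations: any taught sequence that the machine rejects is flagged as a probable teaching error. A toy sketch, in which every state and operation name is invented:

```python
# A toy "grammar of folding" for a shirt, as a finite-state machine over
# legal fold operations. All states and operations here are invented.
FOLD_GRAMMAR = {
    "flat":           {"fold_left_sleeve": "left_sleeve_in"},
    "left_sleeve_in": {"fold_right_sleeve": "sleeves_in"},
    "sleeves_in":     {"fold_bottom_up": "folded"},
    "folded":         {},  # terminal state: nothing more to fold
}

def check_fold_sequence(steps: list[str]) -> bool:
    """Return True iff the taught sequence is grammatical (ends folded)."""
    state = "flat"
    for step in steps:
        if step not in FOLD_GRAMMAR[state]:
            print(f"error: {step!r} is not legal in state {state!r}")
            return False
        state = FOLD_GRAMMAR[state][step]
    return state == "folded"

# A user who teaches the bottom fold first gets an immediate error,
# instead of a robot that mangles the shirt:
check_fold_sequence(["fold_bottom_up", "fold_left_sleeve"])
```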


  1. Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, and Takeo Igarashi, Graphical Instruction for Home Robots. Computer, 49(7):20-25, 2016.

Robot Wednesday