Human Robot Interfaces

An interesting paper in IEEE Computer Graphics and Applications from Takeo Igarashi and Masahiko Inami: “Exploration of Alternative Interaction Techniques for Robotic Systems” [1].

I&I are interested in applying ideas from whole-body interfaces to interactions with robots (i.e., computers that act back at you!). In a classic design strategy, they seem to be trying out everything from column A (human interaction interfaces) with everything from column B (robot things). This leads to some interesting ideas.

For example, they have tried a graphical drag-and-drop cooking interface, backed by a bunch of little mobile robots. This system relies on the human to select and identify ingredients, and to tell the robot the procedure. The robot is charged with precise, safe, and repeatable execution. I think this combines the complementary strengths of the person and the robot.
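Just to make that division of labor concrete, here is a toy sketch (mine, not theirs) of what the drag-and-drop step might hand off to the robots: the person supplies an ordered recipe of identified ingredients, and the robot simply executes it step by step.

```python
# Hypothetical sketch of the human/robot division of labor in the
# drag-and-drop cooking interface (names and structure are my own,
# not from the paper).
from dataclasses import dataclass

@dataclass
class Step:
    ingredient: str     # identified and selected by the human
    action: str         # e.g., "pour", "stir", "heat"
    amount: str         # e.g., "200 ml", "3 min"

def execute(recipe):
    """The robot's job: precise, safe, repeatable execution of a plan
    the human has already worked out."""
    for i, step in enumerate(recipe, 1):
        print(f"Step {i}: {step.action} {step.amount} of {step.ingredient}")

recipe = [
    Step("water", "pour", "200 ml"),
    Step("noodles", "add", "1 bundle"),
    Step("broth", "stir", "3 min"),
]
execute(recipe)
```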

I&I also experiment with Augmented Reality linked to robots. One approach is to project a miniature of a room onto a work area, where the person can touch objects to direct the robot. This might be used to control lighting (touch a lamp to turn it on) or a robot cleaner (draw a path to be vacuumed). I think the key advantage of AR is the 3D projection, which is potentially much easier to understand and manipulate.
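To give a flavor of what has to happen under the hood, here is a hypothetical sketch (coordinates, scale, and device names are my own assumptions) of mapping a touch on the projected miniature back to a room location and a command:

```python
# Hypothetical sketch: convert a touch on the tabletop miniature into a
# real-world location and a command (assumed names; not the authors' code).
def touch_to_room(touch_xy, projection_origin, scale):
    """Map a touch point on the projected miniature to room coordinates,
    assuming a simple uniform scale and offset."""
    tx, ty = touch_xy
    ox, oy = projection_origin
    return ((tx - ox) / scale, (ty - oy) / scale)

# Objects registered in the miniature, keyed by their room position.
devices = {
    (2.0, 3.5): "lamp",
    (0.5, 1.0): "vacuum_dock",
}

def dispatch(touch_xy, projection_origin=(100, 100), scale=50.0, tolerance=0.3):
    """Find the nearest registered device and issue a toggle command;
    otherwise send the robot to the touched spot."""
    x, y = touch_to_room(touch_xy, projection_origin, scale)
    for (dx, dy), name in devices.items():
        if abs(dx - x) <= tolerance and abs(dy - y) <= tolerance:
            return f"toggle {name}"
    return f"move robot to ({x:.1f}, {y:.1f})"

print(dispatch((200, 275)))   # lands near the lamp -> "toggle lamp"
```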

They also use tangible interfaces, giving people objects to manipulate that are meaningful to the robots as well. For example, paper tags can be laid on the carpet to tell the vacuum cleaner where to clean. They have a robot that tours the room, collecting cards which contain instructions as well as location. These instructions are compiled into a plan for the cleaning robot(s). Robots can leave behind error messages in the form of printed tags, so the entire system can be screenless. (Yay! No inappropriate touch screen!)

This idea can be very general, enabling a person to lay down breadcrumbs to define a path and sequence of tasks for a robot. This is much simpler (and faster) than either detailed programming or intensive AI learning.
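A hypothetical sketch of the breadcrumb idea (my own simplification, not their code): the touring robot collects (location, instruction) cards, and the plan is just those cards compiled in the order they were picked up.

```python
# Hypothetical sketch of compiling collected paper tags into a plan:
# each card carries an instruction, and the position where it was found
# supplies the location.
def compile_plan(cards):
    """cards: list of (position, instruction) in the order the touring
    robot collected them. Returns an executable plan."""
    plan = []
    for pos, instruction in cards:
        plan.append({"goto": pos, "do": instruction})
    return plan

cards = [
    ((1.0, 2.0), "vacuum here"),
    ((3.5, 2.0), "avoid this area"),
    ((4.0, 0.5), "vacuum here"),
]
for step in compile_plan(cards):
    print(step)
```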

For me, the most interesting ideas, by far, are the “soft” interfaces. They have explored pressure sensors for cushions and soft toys, in the spirit of Schiphorst’s soft(n) (2009) [2]. Their contribution is an interesting optics-based sensor that detects pressure (hugs). A second project is a ring-shaped actuator that can be attached to a plush toy to animate a limb.
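I couldn’t resist sketching what hug detection might look like, assuming (my assumption, not their design) that squeezing the foam attenuates the light reaching a photodetector:

```python
# Toy sketch of hug detection from an optics-based pressure sensor:
# assume compressing the foam lowers the light level at a photodetector,
# so lower readings mean more pressure (assumption, not the paper's design).
def detect_hug(readings, baseline=1.0, drop=0.4, min_samples=5):
    """Report a hug when the light level stays well below baseline for
    at least min_samples consecutive readings."""
    run = 0
    for r in readings:
        run = run + 1 if r < baseline - drop else 0
        if run >= min_samples:
            return True
    return False

samples = [1.0, 0.95, 0.5, 0.45, 0.4, 0.42, 0.41, 0.9]
print(detect_hug(samples))  # True: five consecutive low readings
```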

Finally, the charming “Graffiti Fur” (which, by the way, isn’t a bad name for a band): this uses the natural property of carpets to show marks when scuffed. Who hasn’t drawn something in a nice plush carpet? Graffiti Fur automates this process with robots that precisely scrape the carpet, laying out rasters from a drawing. It’s cool, and I think that every Roomba-class robot should have this capability, so it can finish off the vacuuming by laying down a “welcome home, master” message.
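For fun, here is a hypothetical sketch of the rasterizing step: scan a 1-bit image row by row and emit a scrape stroke for each run of dark pixels (the real system’s path planning is surely more careful than this).

```python
# Hypothetical sketch of turning a 1-bit image into raster strokes for a
# carpet-drawing robot (my own simplification of the idea).
def image_to_strokes(image):
    """image: list of rows of 0/1 pixels. Returns (row, start_col, end_col)
    strokes where the robot should scrape the pile against the grain."""
    strokes = []
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                strokes.append((y, start, x - 1))
            else:
                x += 1
    return strokes

heart = [
    [0, 1, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
]
for stroke in image_to_strokes(heart):
    print(stroke)
```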

See their paper and the references for many more details.


 

  1. Igarashi, Takeo and Masahiko Inami, “Exploration of Alternative Interaction Techniques for Robotic Systems.” IEEE Computer Graphics and Applications, 35(3):33-41, 2015.
  2. Schiphorst, Thecla, “soft(n): toward a somaesthetics of touch.” In Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’09). ACM, Boston, MA, USA, 2009.
