Category Archives: Robotics

CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI. Whoa! There’s a lot of interesting stuff here.[1]

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?

This is a great piece of work. They developed theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple: “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill required of the designers to quickly “iterate” these physical designs.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.
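To make that concrete, here is a minimal sketch of the idea (my own illustration, not the authors’ actual tool): a single “breathing” waveform for a hypothetical 1-DOF servo, where the parameters, not the mechanism, carry the emotion.

    import math

    def breath_position(t, rate_hz=0.25, depth=0.5, bias=0.5, sharpness=1.0):
        """Map time t (seconds) to a 0..1 actuator position on a breathing cycle.

        rate_hz   - breaths per second (slow reads as calm, fast as excited)
        depth     - amplitude of the breath (shallow vs. deep)
        bias      - resting position of the mechanism
        sharpness - values > 1 make the motion more abrupt and agitated
        """
        phase = math.sin(2.0 * math.pi * rate_hz * t)
        shaped = math.copysign(abs(phase) ** sharpness, phase)  # asymmetric cycle
        return min(1.0, max(0.0, bias + 0.5 * depth * shaped))

    # Two "emotions" differ only in waveform parameters:
    calm = [breath_position(i * 0.1, rate_hz=0.2, depth=0.3) for i in range(50)]
    agitated = [breath_position(i * 0.1, rate_hz=0.9, depth=0.8, sharpness=2.0)
                for i in range(50)]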

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps to “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations: purring and squealing. This is not an original idea; it is what was done in “The Trouble With Tribbles” (1967) [2].


  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.

 

Robot Wednesday

Roomba’s Successful Multispecies Interface

In-home robots are still struggling to interface with humans, but it will surely be necessary for them to coexist with our co-species as well.

Roomba robot vacuum cleaners have pioneered many aspects of home automation, achieving the exalted status of a generic term for this class of device (like “Kleenex” or “Band-Aid”). Their cultural status is marked, too, by pioneering the “cats riding Roomba” video genre.

Now that we know that cat videos are as sneaky as the little feline varmints themselves, let’s just do a “cute dog” video, which should be safe to use, right?

There are several interesting things. First of all—aww, what a cute doggy, and so video friendly!

Second, it is interesting to see that Chester has figured out the stop button on the Roomba. It looks like the “intuitive” interface is, in fact, intuitive across species boundaries!   Well done, Roomba!

However he discovered the stop button, it’s no secret at all how he discovered that stopping the robot makes mom pay attention.

We can note the different understanding of the technology by the two species, and the different goals they pursue in their shared use of the device. Mom thinks she is vacuuming, Chester thinks he is playing with mom. They are both right.

This is an example of a deeper point for home robots: whenever there is more than one person (or entity) in the home, operating the robot is a multiplayer game. So much of the literature treats a personal robot as if it were a personal phone: controlled by and interacting with exactly one person, and customized to that person. The reality is otherwise. Robots need to understand, or act as if they understand, all the people and animals (and robots) in the situation.
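As a thought experiment (entirely my own sketch, not any shipping robot’s API), a control loop that took the multiplayer view seriously might arbitrate among everyone present rather than assuming a single owner:

    from dataclasses import dataclass

    @dataclass
    class Request:
        agent: str      # "mom", "Chester", another robot...
        action: str     # "vacuum", "stop", "play"
        priority: int   # set by household policy, not first-come-first-served

    def arbitrate(requests):
        """Pick a winning action; a stop from any agent always wins."""
        stops = [r for r in requests if r.action == "stop"]
        if stops:
            return stops[0]
        return max(requests, key=lambda r: r.priority, default=None)

    # Chester's paw on the button outranks the vacuuming job:
    print(arbitrate([Request("mom", "vacuum", 2), Request("Chester", "stop", 1)]))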

 

Robot Wednesday

Natural Selection of Glider Drone Concepts

Evan Ackerman reports on yet another “disposable drone” project [1], similar to the “cardboard drone” concept from Otherlab. Great minds move in similar ways, and the U.S. Marines are testing the same concept, only larger: plywood gliders. I’m sure there are other variations on this theme in the works; it is an idea whose time has come.

The Marine version, TACAD (TACtical Air Delivery), is plywood and bolts, plus GPS and guidance. The glider is intended to be launched from an aircraft and glide many kilometers to the recipient. After crash landing within fifty meters or so of the target, the airframe is simply discarded.

Photo: Evan Ackerman/IEEE Spectrum

One reason that the time has come for this idea is that civilian, hobbyist-grade GPS units and small aircraft controllers are widely available and cheap. In a sort of technological “circle of life”, these military technologies moved out into wide use, and developed to the point where they work as well as special-order equipment while being vastly cheaper. They are now being picked up again by the military, replacing custom-built systems.

Using inexpensive materials is particularly important for unpowered gliders because they cannot fly home. For that matter, they have limited maneuverability, and relatively high probability of mishap. Pushing the cost down makes it a “throw away” craft, worth risking in more situations.

Between TACAD and Otherlab, we can see a certain evolutionary selection process going on here. The same underlying technology (GPS, digital guidance, stand-off air launch) can be realized at a variety of scales. The USMC is planning one with a payload about the size of a microwave oven; Otherlab’s is smaller. We could imagine both larger and smaller versions, using appropriate materials.

There is a tradeoff here: the smaller the drone, the more of them can be deployed. Otherlab’s cardboard packages could be dropped by the hundreds; the same aircraft could drop far fewer TACAD-sized craft. Depending on the type of delivery, either mode might be better.

There are other tradeoffs related to size. The Otherlab design is delivered as a compact flatpack, and is also meant to biodegrade after landing. I imagine that TACAD might be flatpacked too, but the initial design has wings that fold up for compact transport. Flatpack design also enables a sort of just-in-time, on-site construction that may be advantageous for some uses. For example, the plans could be delivered electronically, and the craft constructed from local materials.

This evolutionary radiation of disposable drone gliders is an interesting reprise of military glider technology. At their peak, gliders were widely used to deliver troops (for example, the movie “A Bridge Too Far” has some excellent recreations of Allied glider operations). Dangerous, defenseless, and limited, gliders were surpassed by other aircraft, especially helicopters. Decades later, the concept of a cargo glider has returned, made possible by model aircraft technology.


  1. Evan Ackerman, U.S. Marines Testing Disposable Delivery Drones, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/drones/marines-testing-disposable-gliding-delivery-drones

 

Robot Wednesday Friday

RoboThespian: Uncanny or Just Plain Unpleasant?

RoboThespian is disturbing.

I think this particular humanoid robot has climbed out of the uncanny valley of discomfort, and ambled out onto the plain of the extremely annoying coworker. Disney animatronics gone walkabout.

RoboThespian is a life sized humanoid robot designed for human interaction in a public environment. It is fully interactive, multilingual, and user-friendly, making it a perfect device with which to communicate and entertain.

Clearly, these guys have done a ton of clever work, integrating human-like locomotion, speech synthesis, projection, face tracking, and serious chatbot software.

“The standard RoboThespian design offers over 30 degrees of freedom, a plethora of sensors, and embedded software incorporating text-to-speech synthesis in 20 languages, facial tracking and expression recognition. The newly developed RoboThespian 4.0 will offer a substantial upgrade, adding additional motion range in the upper body and the option of highly adept manipulative hands.”

What can you do with all this? I think the key clue is that the programming is done via a GUI environment that looks just like Blender, which means that you basically create a computer-generated scene, which is then “rendered” in the physical robot.
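In other words, the pipeline is keyframe animation: pose curves authored on a timeline are sampled and streamed to the motors. A minimal, hypothetical sketch of that idea (not Engineered Arts’ actual software):

    import bisect

    # Keyframes (time in seconds, joint angle in degrees), as authored on a timeline.
    HEAD_PAN = [(0.0, 0.0), (1.0, 25.0), (2.5, -10.0), (4.0, 0.0)]

    def sample(keyframes, t):
        """Linearly interpolate an animation curve at time t."""
        times = [k[0] for k in keyframes]
        i = bisect.bisect_right(times, t)
        if i == 0:
            return keyframes[0][1]
        if i == len(keyframes):
            return keyframes[-1][1]
        (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    # "Rendering" the scene means streaming sampled curve values to the motors.
    for step in range(40):
        angle = sample(HEAD_PAN, step * 0.1)
        # send angle to the head-pan servo here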

Much of the spectacular effect is due to well-coordinated facial expressions, head movement, and speech. The robot also has sensors to detect people, and especially faces, and to orient to them. It also has facial expression recognition, which lets it “reproduce” facial expressions. All these effects are “uncanny”, and make the beast appear to be talking to you (or singing at you). Ick!
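Orienting to a face is a fairly standard trick these days. A rough sketch of the technique using OpenCV’s stock face detector (my own illustration, certainly not the product’s code):

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Offset of the face from image center drives the head pan/tilt.
            err_x = (x + w / 2) - frame.shape[1] / 2
            err_y = (y + h / 2) - frame.shape[0] / 2
            # command the pan/tilt servos proportional to (err_x, err_y) here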

All this is in the pursuit of…I’m not sure what.


I grant you that this is a great effect, at least on video. But what is it for?

The title and demos (https://www.engineeredarts.co.uk/robothespian/theatre-of-robots/) suggest that it replaces human thespians (live onstage), which seems far-fetched. If you want mechanized theater, you already have computer-generated movies. As far as I can tell, the main use case is advertising, e.g., trade show demos. It either replaces human presenters (demo babes) or it replaces video billboards.

They also suggest that this is a good device for telepresence. It “can inhabit an environment in a more human manner; it’s the next best thing to being there.” I’m not at all sure about that. Humanoid appearance is not really important for effective telepresence in most cases, and there is no reason to think this humanoid is well suited for any given telepresence situation.

Let me be clear: this product is really nicely done. I do appreciate a well-crafted system, integrating lots of good ideas.

But I really don’t see that RoboThespian is anything other than a flashy gimmick. (Human actors are way, way cheaper, and probably better.)

On the other hand, when I saw the first computer mouse on campus, I declared that it was a useless (and stupid) interface, and that no one would ever use it. I was wrong about mice (boy, was I wrong!), so my intuitions about humanoid chatterbots may be wildly off.

Update May 4, 2017: Corrected to indicate that Engineered Arts does not use Blender, as the original post said. I must have seen some out-of-date information. Engineered Arts have their own environment which, if not built from Blender, is built to look just like it. Thanks to Joe Wollaston for the correction.

 

Robot Wednesday

Cool Drone Magic From Marco Tempest

We techies, we all want to build stuff that is magical. Most of us have little clue what that entails. This is why I have enjoyed working with musicians and other performing artists (e.g., [1]), who understand wonder and magic, not to mention human perception and movement.

These days, there is a vast and growing interest in human-robot interaction, self-driving cars, drones, and so on.  Much of this work is not magical in the least. Usually, this is because brilliant engineers are not really brilliant imagineers.

Fortunately, drones are now cheap and easy enough that they are getting into the hands of cops, artists, and teenagers, and thus are entering our culture. In my view, one glorious circus performance is more significant than a thousand delivery drone concepts.

Of course, if the goal is to create magic, then we really should collaborate with, well, magicians.

Along this line, illusionist Marco Tempest has released some cool videos, demonstrating amazing multi-UAV behavior, apparently under gesture and/or voice command. Actually, I’m not really sure how it all works—the very definition of magic, no?

The video is awesome, but just as interesting, he articulates the principles that make these little buzzing light bulbs seem alive, intelligent, and communicating with him.

The algorithms that enable the UAVs to fly in a close, coordinated swarm that reacts to him are:

“mathematics that can be mistaken for intelligence, and intelligence for personality.”

What a lovely phrase!
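And it really can be simple mathematics. A toy sketch of the flavor of it (my own guess, not Tempest’s system): each drone pursues a station around the performer’s hand with spring-damper dynamics, plus a little per-drone noise so the swarm reads as alive.

    import math
    import random

    class Drone:
        def __init__(self, offset):
            self.offset = offset            # station relative to the hand
            self.pos = (0.0, 0.0)
            self.vel = (0.0, 0.0)

        def step(self, hand, dt=0.02, k=8.0, damp=4.0):
            """Spring-damper pursuit of (hand + offset), with a little jitter."""
            tx = hand[0] + self.offset[0] + random.gauss(0, 0.01)
            ty = hand[1] + self.offset[1] + random.gauss(0, 0.01)
            ax = k * (tx - self.pos[0]) - damp * self.vel[0]
            ay = k * (ty - self.pos[1]) - damp * self.vel[1]
            self.vel = (self.vel[0] + ax * dt, self.vel[1] + ay * dt)
            self.pos = (self.pos[0] + self.vel[0] * dt,
                        self.pos[1] + self.vel[1] * dt)

    # Six drones holding a ring around a (tracked) hand position.
    swarm = [Drone((math.cos(a), math.sin(a)))
             for a in (2 * math.pi * i / 6 for i in range(6))]
    for frame in range(500):
        hand = (math.sin(frame * 0.01), 0.5)   # stand-in for the tracked hand
        for d in swarm:
            d.step(hand)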

If the whole idea of social robots is to make people perceive the artificial intelligence as a friendly agent, then the game is really about creating anthropomorphism, which is

an illusion created by technology and embroidered by our imagination to become an intelligent flying robot, a machine that appears to be alive.

From this point of view, all that rigamarole about big data and vast computational power is kind of off-target. The target is to create the illusion of intelligence—in the mind of the human observer.

This illusion works through the same principle that most magic tricks work:

Our imagination is more powerful than our reasoning and it’s easy to attribute personality to machines.

Another marvelous phrase!

Really cool! When can I buy a suitcase full of these intelligent drones??

By the way, this is one of the most compelling “gestural” interfaces I’ve seen. No phone. No goggles. No joystick. Just body and hands. So, so slick!

By the way, I would add one more little trick that would deepen the illusion: the drones should have individual names, and should respond to their names. I would predict that once we have applied a personal name to each flyer, we will soon perceive individual differences among them, even if they are actually programmed identically. (Though it would be cool to have each programmed differently; see the sketch below.)
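A cheap way to get that (a hypothetical sketch, nothing more): derive stable “personality” offsets from each drone’s name, so individuality costs nothing extra.

    import hashlib

    def personality(name):
        """Derive small, stable behavioral offsets from a drone's name."""
        digest = hashlib.sha256(name.encode()).digest()
        return {
            "speed_scale": 0.9 + 0.2 * digest[0] / 255,   # 0.9 .. 1.1
            "wobble": 0.02 * digest[1] / 255,             # idle-hover jitter
            "reaction_delay": 0.05 * digest[2] / 255,     # seconds
        }

    for name in ("Huey", "Dewey", "Louie"):
        print(name, personality(name))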


  1. Mary Pietrowicz, Robert E. McGrath, Guy Garnett, and John Toenjes, Multimodal Gestural Interaction in Performance, in Whole Body Interfaces Workshop at CHI 2010. 2010: Atlanta. http://lister.cms.livjm.ac.uk/homepage/staff/cmsdengl/WBI2010/documents2010/Pietrowicz.pdf
  2. Marco Tempest. Work. 2017, http://marcotempest.com/en/work/.

 

Tensegrity Robot Can Climb Better Than I Can

“Tensegrity”.

“Robot”.

Count me in!

NASA has been exploring tensegrity robots for a number of years.

Light and rugged, capable of being packed in a tiny space, these are perfect for transporting to another planet, and dropping from orbit.

The locomotion is more than a little magical! It’s amazing to see them move along, with a weird falling/thrashing combination of just-right moves.

This week, the UC Berkeley BEST lab released a video of one of their tensegrity robots climbing a ramp. Awesome!

This is in Earth gravity, so how does it manage to climb at all?

I can’t walk up a 24 degree ramp very easily, and I’m a biped with good shoes. So how does a bunch of sticks and rubber bands manage?

Wow!

Note to self: Just how does this locomotion work? Dig into the simulation code.
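My working guess at the principle (a sketch of the usual textbook explanation, not the BEST lab’s code): shortening a cable drags the structure’s center of mass toward the edge of the current support polygon, and when it crosses, the robot tumbles one step forward.

    def center_of_mass(nodes, masses):
        """Mass-weighted average of node positions; nodes are (x, y, z) tuples."""
        total = sum(masses)
        return tuple(sum(m * p[i] for p, m in zip(nodes, masses)) / total
                     for i in range(3))

    def will_tip(com_xy, support_edge):
        """True when the CoM's ground projection crosses the downhill support edge.

        support_edge is ((x1, y1), (x2, y2)), an edge of the polygon formed
        by whatever parts of the structure currently touch the ground.
        """
        (x1, y1), (x2, y2) = support_edge
        # Signed-area (cross product) test: positive means outside the edge.
        cross = (x2 - x1) * (com_xy[1] - y1) - (y2 - y1) * (com_xy[0] - x1)
        return cross > 0

    # Control loop, schematically: shorten a cable, recompute the CoM, and
    # the structure tumbles over the edge once will_tip() turns True.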

 

Robot Wednesday

Dual Drone Coordination

One of the challenges for robots in general, and UAVs in particular, is how to coordinate multiple bots. It’s hard enough to make one do what you want; how can we make more than one work together? Whether the goal is “autonomous” on-board direction or external remote control, it’s not easy to keep the craft together without bumping into each other.

Some of the wizards of Zurich have yet another idea, using visual tracking [1]. (I guess we should call them the usual suspects, because so much important robotics is coming from .ch.)

The stated use case is where GPS is not available, and you want to carry a payload with a team of quadcopters. As they point out, there are many reasons you might want to do this: more lifting power, better ability to maneuver the payload, etc.

Their solution is a simple leader/follower scheme, with one drone setting the course, and the second keeping station behind. The video makes this very clear.

The hard part, of course, is how to follow.

Their trick is to use visual tracking, and simple algorithms to keep the leader in view.

(In the demo, they use a black and white pattern as the target, which looks like the old Augmented Reality markers. That’s a blast from the past!)
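The control idea itself is simple enough to sketch (my own simplified version, not the paper’s implementation): estimate the leader’s relative position from the tracked pattern, then servo toward a fixed station behind it.

    # Station keeping behind a visually tracked leader, assuming a vision
    # module that returns the leader's position relative to the follower.

    OFFSET = (-1.5, 0.0, 0.0)   # desired station: 1.5 m behind the leader
    KP, KD = 1.2, 0.4           # hand-tuned PD gains (illustrative values)

    def follow_step(rel_leader, rel_leader_prev, dt):
        """PD control on the error between the leader-relative pose and OFFSET."""
        cmd = []
        for i in range(3):
            err = rel_leader[i] - OFFSET[i]
            derr = (rel_leader[i] - rel_leader_prev[i]) / dt
            cmd.append(KP * err + KD * derr)
        return cmd   # velocity setpoint for the follower's flight controller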

Neat.


  1. Michael Gassner, Titus Cieslewski, and Davide Scaramuzza, Dynamic Collaboration without Communication: Vision-Based Cable-Suspended Load Transport with Two Quadrotors (to appear), in IEEE International Conference on Robotics and Automation. 2017: Singapore. http://rpg.ifi.uzh.ch/docs/ICRA17_Gassner.pdf

 

Robot Wednesday