Category Archives: Robotics

Turtlebot Follow Me Demo

Turtlebots are low-cost, open-source robots. Glancing through the tutorials, there is a lot of state-of-the-art stuff here, including serious mapping, navigation, and autonomous driving!!

The latter features are shown off in the “follow me” demo.

Neat. And, theoretically, you can DIY!

I haven’t had the time and energy to get into turtlebots, but I really should.

The ‘follow me’ demo is nice, but what I really want is a flock of raptors to follow me like this. A posse of small, fast, toothy bipeds. Maybe in a vee instead of a line. Don’t mess with me!


PS. Wouldn’t “My Raptor Posse” be a great name for a band?

How about, “A Rip of Raptors”? Or “Personal Raptor”?

Or, “The Robot Raptor Revue“.


Robot Wednesday

Hoppy Robot!

This is a great age of robot locomotion, and human engineers are recapitulating natural evolution, trying out every biological system (butterflies, bats, snakes) and many things not seen in nature, at least above the micro scale (quadcopters, bucky bots).

Evan Ackerman reports on the amazing Salto jumping robot from U. C. Berkeley. Salto has one (count ‘em, one) leg, and springs around spending 90% of its travel in the air. It’s absolutely astonishing.

The article indicates that the control algorithm is pretty much the same as one developed in 1984, though we can now pack much faster computation into a smaller critter. The mechanical design is bio-inspired, learning from the galago, a small primate that is a crazy jumper.
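The 1984-era algorithm is presumably Marc Raibert’s classic one-legged hopping controller, which splits balance into hop height, body attitude, and foot placement. Here is a minimal sketch of the foot-placement piece, with illustrative names and gains (this is not Salto’s actual code):

```python
# Raibert-style foot placement for a one-legged hopper (illustrative).
# Land the foot near the "neutral point" (half the body's stance travel),
# plus a correction proportional to the forward-velocity error.

def foot_placement(x_dot, x_dot_desired, stance_time, k_xdot=0.05):
    """Forward foot touchdown offset (meters) for the next hop."""
    neutral = x_dot * stance_time / 2.0            # where a steady hop would land
    correction = k_xdot * (x_dot - x_dot_desired)  # nudge to speed up or slow down
    return neutral + correction

# Moving at 1.0 m/s but wanting 0.8 m/s, with 0.1 s of stance:
offset = foot_placement(1.0, 0.8, 0.1)  # 0.05 neutral + 0.01 correction = 0.06 m
```

Placing the foot ahead of the neutral point slows the hopper; behind it speeds it up. That is pretty much the whole trick.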

However, the actual magic is done with steerable “thrusters” (propellers), and the control depends on an external motion capture system that feeds instructions via wireless (an invisible tether).  This is not the way little bushbabies do it!

The new improved version will be officially presented in September at IROS 2017, probably with some even more awesome demo.

I’m not really sure if this design is especially good for anything, but it’s fun to watch and would make a great game. Imagine the fitness benefits of playing “chase the boingy bot”! Or “try to escape the boingy bot”!  (These apps would mash up some kind of planning algorithm to evade or catch the puny human.)

So Cool!

  1. Evan Ackerman, Salto-1P Is the Most Amazing Jumping Robot We’ve Ever Seen, in IEEE Spectrum – Automation. 2017.


Robot Wednesday

Kangaroos Baffle AI

For the last decade, dozens of companies have rushed to create self-driving cars, and we’ve received endless brave words about how great they work, or at least how great they’ll work once we work out just a few more details.

The fact is, operating a vehicle is difficult, and making a self-driving car that can go anywhere is really, really hard. There are just so many possible situations to deal with.

This week Australian TV reports yet another goofy case: kangaroos [1].

It seems that AI that is designed to recognize and be nice to animals (i.e., not run over them) doesn’t know what to make of these jumping marsupials.

At the heart of the problem is that kangaroo locomotion doesn’t resemble walking or running in any way. It sits weirdly upright. It hops (i.e., flies a short distance). It bounces.

The computer vision algorithms are baffled.

When I saw this headline, I was going to make some snarky remarks about how ‘those know it all engineers need to get out of their Silicon Valley cul de sacs, and test things in the real world’.

That particular snark has to be shelved in this case, because the report is talking about Volvo engineers. They had tested on the usual animals (humans, dogs, cows), and also moose. But apparently they didn’t encounter any kangaroos in the Swedish road tests. :-)

I’m sure that the engineers will work out these cases. But think about all the weird stuff that a self-driving car will have to face eventually. It’s going to take a while, and there will be frequent software updates and patches. Possibly even regional ‘game packs’ to load up special algorithms. (I mean, do I need kangaroo detector software? Do Aussies need 50 kinds of blowing-snow condition algorithms?)

I’ll also grant that even imperfect software, so long as it defaults safely, is probably safer for both passengers and wildlife than incompetent carbon-based units possibly impaired by ethanol or distracted by primate urges. An AI might not be sure what that thing is exactly, but it knows it should try not to run over it, no?

  1. Jake Evans, Driverless cars: Kangaroos throwing off animal detection software. ABC News. June 23, 2017.


Robot Wednesday

Telepresence Robot – At the zoo

These days we see a lot of exciting stories about telepresence—specifically, live, remote operation of robots. From the deadly factual reports from the battlefields of South Asia through science fiction novels to endless videos from drone racing gamers, we see people conquering the world from their living room.

One of the emerging technologies is telepresence via a remote robot that resembles ‘an iPad on a Segway’. These are intended for remote meetings and things like that. There is two-way video, but the screen is mobile and under the command of the person on the other end. So you can move around, talk to people, and look at things.

On the face of it, this technology is both amazing (how does it balance like that?) and ridiculous (who would want to interact with an iPad on wheels?). And, of course, many of the more expansive claims are dubious. It isn’t, and is never going to be, “just like being there”.

But we are learning that these systems can be fun and useful. They may be a reasonable augmentation for remote workers, not as good as being there, but better than just telecons. And, as Emily Dreyfus comments, a non-representational body is sometimes an advantage.

Last year Sensei Evan Ackerman reported on an extensive field test of one of these telepresence sticks, called the Double 2. This was an interesting test drive because he deliberately took it out of its intended environment, which stressed the technology in many ways. The experience is a reminder of the limitations of telepresence, but also gives insights into when it might work well.

First of all, he played with it across the continental US (from Maryland to Oregon), thousands of kilometers apart. Second, he took it outdoors, which it isn’t designed for at all. And he necessarily relied on whatever networks were available, which varied, and often had weak signals.

As part of the test, he went to the zoo and to the beach!

Walking the dog was impossible.

Overall, the system worked amazingly well, considering that it wasn’t designed for outdoor terrain and needs networking. He found it pretty good for standing still and chatting with people, but moving was difficult and stressful at times. Network latency and dropouts meant a loss of control, with possibly harmful results.

Initially skeptical, Sensei Evan recognized that the remote control has advantages.

I’m starting to see how a remote controlled robot can be totally different [than a laptop running Skype] . . . You don’t have to rely on others, or be the focus of attention. It’s not like a phone call or a meeting: you can just exist, remotely, and interact with people when you or they choose.

Whether or not it is “just like being there”, when it works well, there is a sense of agency and ease of use, at least compared to conventional video conferencing.

This is an interesting observation. Not only does everybody need to get past the novelty, but it works best when you are cohabitating for considerable periods of time. Walking the dog, visiting the zoo—not so good. Hanging out with distant family—not so bad.

I note that the most advertised use case, a remote meeting, may be the weakest experience. A meeting has constrained movement, a relatively short time period, and often is tightly orchestrated. This takes little advantage of the mobility and remote control capabilities. You may as well just do a video conference.

The better use is for extended collaboration and conversation. E.g., Dreyfus and others have used it for whole working days, with multiple meetings, conversations in the hall, and so on.  Once people get used to it, this might be the right use case.

I might note that this is also an interesting observation to apply to the growing interest in Virtual Reality, including shared and remote VR environments. If a key benefit of the telepresence robot is moving naturally through the environment, then what is the VR experience going to be like? It might be “natural” interaction, but it will be within a virtual environment. And if everyone is coming in virtually, then there is no “natural” interaction at all; rather, the digital is overlaid on the (to be ignored) physical environment. There will be lots of control, but will there be “ease”? We’ll have to see.

  1. Evan Ackerman, Double 2 Review: Trying Stuff You Maybe Shouldn’t With a Telepresence Robot, in IEEE Spectrum – Automation. 2016.


Robot Wednesday

The Omnicopter is Cool!

Yet another wonder robot from ETH in Zürich (e.g., see this and this):

The Omnicopter.


Not a quadcopter, it’s an octocopter!

The advantage of this design is that it is way, way more maneuverable than quadcopters, helicopters, or blimps. It has full 6DOF movement.
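A quick way to see why eight rotors in assorted fixed orientations buy you full 6DOF control: each rotor contributes a fixed force-and-torque direction per unit thrust, so any desired six-dimensional “wrench” becomes a linear system you can solve for eight thrusts. A toy sketch; the allocation matrix here is a random placeholder, not the Omnicopter’s actual geometry:

```python
import numpy as np

# Columns of A: the (force; torque) 6-vector produced by one unit of
# thrust from each of the eight rotors. Random placeholder geometry,
# which (almost surely) spans all six wrench dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))

def allocate(wrench, A):
    """Least-squares rotor thrusts realizing a desired 6-DOF wrench."""
    thrusts, *_ = np.linalg.lstsq(A, wrench, rcond=None)
    return thrusts

# Hover-like command: pure upward force, zero torque.
w = np.array([0.0, 0.0, 9.81, 0.0, 0.0, 0.0])
u = allocate(w, A)
assert np.allclose(A @ u, w)  # eight rotors reproduce the command exactly
```

With only four coplanar rotors the same solve leaves a residual: a quadcopter cannot produce sideways force without tilting its whole body, which is exactly the maneuverability gap being described here.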

The principle was described in a paper last year [2] and a neat little video:

This year they produced a cool demonstration, playing fetch with the omnicopter.

This is pretty amazing!

The description of the demo indicates that it works by evaluating large numbers of possible trajectories to select an optimal one from a given initial state to a final state. They say that the system can generate 500,000 trajectories per second, resulting in a smooth, magical effect.

(This is very much a “brute force” search through all possible trajectories—computers don’t have to be “smart” if they are fast!)
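That brute-force recipe, sample a pile of candidate motions, discard the infeasible ones, keep the cheapest, is easy to mimic in miniature. Everything below (the one-dimensional candidate model, the speed limit, the cost) is an illustrative assumption, not the ETH planner:

```python
import random

def sample_candidate(start, goal):
    """One candidate motion: a random duration and the implied average speed."""
    duration = random.uniform(0.5, 5.0)
    return {"duration": duration, "avg_speed": abs(goal - start) / duration}

def plan(start, goal, n_candidates=10_000, max_speed=3.0):
    """Brute force: sample many candidates, drop the infeasible, keep the quickest."""
    candidates = (sample_candidate(start, goal) for _ in range(n_candidates))
    feasible = (c for c in candidates if c["avg_speed"] <= max_speed)
    return min(feasible, key=lambda c: c["duration"], default=None)

best = plan(0.0, 6.0)  # cover 6 m without exceeding 3 m/s
```

A real planner evaluates dynamically feasible motion primitives and checks actuator limits, but the pick-the-best-of-many structure is the same, and at 500,000 evaluations per second the “dumb” search looks smart.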

As Evan Ackerman comments, this design has a lot of potential to be better than the conventional approach of trying to put a robot arm on a quadcopter. “[Y]ou could just stick a gripper onto an arbitrary face of it, and then have the entire robot serve as an actuator.”

Nice work, all!

  1. Evan Ackerman, ETH Zurich’s Omnicopter Plays Fetch, in IEEE Spectrum – Automation. 2017.
  2. Dario Brescianini and Raffaello D’Andrea. Design, modeling and control of an omni-directional aerial vehicle. In 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, 3261-3266.


Robot Wednesday

Cool Multirotor Transformer

If there is anything we like more than Robots, it’s a Swarm of Robots (a self-assembling swarm is even better!).

Let’s face it. Individual drones are passé. Millions were sold as toys last year. This is hardly cutting edge. (I’m talking to you, William Gibson and Cory Doctorow.)

Multiple UAVs working together are where it’s at now. While a lot of cool cooperating robots are coming out of Dot CH (Switzerland), we can be sure that Asia is in the game too.

This month at the International Conference on Robotics and Automation (ICRA) in Singapore, Moju Zhao and colleagues from U. Tokyo presented the latest developments in their “Transformable Multirotor with Multilinks” [2].

(They really need a catchy name for this device. A TMuMu? The Magic Mover?)

The basic idea is a reconfigurable group of simple rotors linked together. The group can transform into different shapes to do different things.

The canonical example is to form a compact circle for flight, unfold into a “U” shape to engulf a target, and then close around it to grasp the target and carry it away.

This is really cool!

It looks simple, but there is a lot of fiddly detail to get this to work: the tricky thing about a multipurpose aircraft is that there are so many possibilities that have to be simulated and controlled. Earlier papers explain some of the complicated details (e.g., [1]).

The recent paper and video show that the TMuMu not only flies (which is amazing) but can pick up and deliver cargo, too. They term this “whole body manipulation”, referring to the fact that the entire device works as the manipulator.

Very nice work!

  1. Moju Zhao, Koji Kawasaki, Kei Okada and Masayuki Inaba, Transformable multirotor with two-dimensional multilinks: modeling, control, and motion planning for aerial transformation. Advanced Robotics, 30(13):825-845, 2016.
  2. Moju Zhao, Koji Kawasaki, Xiangyu Chen, Shintaro Noda, Kei Okada, and Masayuki Inaba, Whole-body Aerial Manipulation by Transformable Multirotor with Two-dimensional Multilinks, in 2017 IEEE International Conference on Robotics and Automation (ICRA). 2017: Singapore.


Robot Wednesday