
A Self-repairing robot

In one sense, the idea of robots building and repairing robots is obvious and old hat.  And repairing yourself can be a pretty simple extension of repairing some other machine.  But it’s not done very often.

This fall researchers from the University of Tokyo reported on demonstrations of teaching a self-repair operation to commodity robots [2].  Specifically, the robots learned to use their own manipulators to tighten screws on their own bodies.  (For this demo, the robot didn’t figure out for itself when a screw needs adjustment.)


Now, tightening a screw isn’t a gigantic deal.  However, robot manipulators are not really designed to reach their own body, so some screws are going to be challenging.  And some of them require an Allen wrench, which is a different grip and generally calls for changing the grip as you go (“regrasping”).

“The actual tightening is either super easy or quite complicated, depending on the location and orientation of the screw.”  Evan Ackerman in [1].

They also demonstrate that once you can do screws, you can screw on additional pieces, such as carrying hooks.  Neat.

Part of the trick is that the robots use CAD data describing their own bodies, and use that data to work out how to operate on themselves.  Duh!  It’s so obvious, once you see it!
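For a sense of what “screw pose from CAD data plus graph search with regrasping” might look like in code, here is a minimal illustrative sketch in Python.  It is not the authors’ implementation: the screw table, the regrasp graph, and the reachability sets are all invented placeholders, standing in for what would really come from the CAD export and a motion planner.

```python
from collections import deque

# Illustrative sketch only: how screw poses recorded in a CAD model might be
# looked up, and how a graph search over "regrasp" states could find a way of
# holding the driver that reaches a given screw.  The screw table, regrasp
# graph, and reachability sets are invented placeholders, not the paper's data.

# Hypothetical CAD export: screw id -> (position in the torso frame, tool type)
CAD_SCREWS = {
    "shoulder_1": ((0.05, 0.20, 0.30), "phillips"),
    "torso_3":    ((0.00, 0.00, 0.15), "allen"),
}

# Hypothetical regrasp graph: nodes are ways of holding the driver,
# edges are single regrasp actions the arm can perform.
REGRASP_EDGES = {
    "tip_grip":    ["mid_grip"],
    "mid_grip":    ["tip_grip", "handle_grip"],
    "handle_grip": ["mid_grip"],
}

# Which grips can reach which screws (toy data, stands in for motion planning).
REACHABLE = {
    "tip_grip":    {"shoulder_1"},
    "mid_grip":    set(),
    "handle_grip": {"torso_3"},
}

def plan_regrasps(start_grip: str, screw_id: str):
    """Breadth-first search for the shortest regrasp sequence that reaches a screw."""
    frontier, visited = deque([[start_grip]]), {start_grip}
    while frontier:
        path = frontier.popleft()
        if screw_id in REACHABLE[path[-1]]:
            return path
        for nxt in REGRASP_EDGES[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

if __name__ == "__main__":
    position, tool = CAD_SCREWS["torso_3"]
    print(f"torso_3 at {position}, needs a(n) {tool} driver")
    print("regrasp sequence:", plan_regrasps("tip_grip", "torso_3"))
```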

It seems to me that part of the challenge here is that these generic robots were not designed to self-repair, or even to repair each other.  There is no reason it has to stay that way.  With a bit of care, robots can be assembled in ways that make self-repair easier.  One way to assure this is to use robots to assemble the same model of robot.  And CAD systems themselves can analyze designs to maintain self-repairability.

This concept will be especially interesting to combine with evolutionary design.  The robot should not only be able to assemble and repair a robot, it should learn to optimize the assembly/repair process, presumably in tandem with evolutionary design of the robot to be assembled.

(To prevent a runaway robot uprising, the system should have to submit detailed proposals and requests for funding in order to acquire the resources needed for the new versions.  That ought to keep them under the control of accountants!)


  1. Evan Ackerman, Japanese Researchers Teaching Robots to Repair Themselves, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/japanese-researchers-teaching-robots-to-repair-themselves
  2. Takayuki Murooka, Kei Okada, and Masayuki Inaba, Self-Repair and Self-Extension by Tightening Screws based on Precise Calculation of Screw Pose of Self-Body with CAD Data and Graph Search with Regrasping a Driver, in IEEE-RAS International Conference on Humanoid Robots (Humanoids 2019). 2019: Toronto, pp. 79-84.

 

Robot Wednesday

Launching UAVs – From a Cannon

Flying is hard; taking off and landing are harder.  This rule applies to UAVs as well.

In the case of takeoff, it is necessary to apply a lot of energy to clear the ground and gain altitude (without hitting anything), and then transition to cruising at a steady, energy-efficient pace.

For UAVs, this process is often helped along by a launch assist.  A small UAV may be tossed into the air by hand.  Larger craft might be shot from a catapult or sling.

This fall researchers from Caltech and JPL report on a multi-rotor UAV designed to be launched from a tube; that is, shot from a cannon [2].  The idea is to quickly and safely put the aircraft where you want it.  The demonstration shows that this works pretty well even from a moving vehicle, which certainly would be useful for some purposes.

The tricky part, of course, is transitioning from ballistic projectile into a working copter.  For that matter, the first tricky part is surviving the blastoff (lightweight UAVs are rather fragile).  The prototype uses an airgun, which presumably can be tuned for optimal pressure.

The UAV is designed as a transformer, shaped like a ballistic shell that unfolds four rotor arms in flight.  The diagrams show the rotors neatly snuggled into the smooth body for launch.  The arms are spring-loaded, and snap out simultaneously into the quadcopter configuration.

Squidbot (it looks a little like a squid) is launched from the airgun and initially flies unpowered in a parabolic trajectory. In practice, this phase enables the drone to be deployed ahead of a moving vehicle, or to the side, or whatever direction fits the mission.

At a predetermined time, or when directed by the operator, the rotors are released and snap into position in about a tenth of a second.  Whap!  After deployment, the onboard systems spin up the rotors and initiate controlled flight, just like any other quadcopter.

The demo video shows a launch from a moving truck, which is kind of cool.  I’m imagining a multiple launcher system that could deploy a swarm of drones overhead in a second.  That would be neat.

The researchers imagine this might be used by an explorer on Mars or Titan, where UAV copter missions are already under development.  One advantage over other launch methods is that launching from the vehicle has few constraints that depend on the surface conditions.  In low gravity, the ballistic path could be quite long, giving the rover broad coverage from a fleet of drones.
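As a back-of-the-envelope check on that last point, here is a drag-free ballistic range calculation for the same muzzle speed under Earth, Mars, and Titan gravity.  The surface gravity values are standard; the launch speed and angle are arbitrary assumptions, and ignoring drag is a big simplification (especially in Titan’s thick atmosphere).

```python
import math

def ballistic_range(speed_mps: float, angle_deg: float, gravity: float) -> float:
    """Drag-free range of a projectile launched and landing at the same height."""
    theta = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2.0 * theta) / gravity

# Standard surface gravities in m/s^2; the launch speed and angle are made up.
for body, g in [("Earth", 9.81), ("Mars", 3.71), ("Titan", 1.35)]:
    r = ballistic_range(speed_mps=15.0, angle_deg=45.0, gravity=g)
    print(f"{body:5s}: ~{r:6.1f} m")
```

Even in this idealized calculation, the same launch carries roughly seven times farther on Titan than on Earth.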

One limitation of this model is that the UAV seems to be a throwaway.  Landing is tricky, and it seems to require a complete refit to be reused.  I can imagine an automatic loading system, but folding the squid back up is pretty tricky.  For exploring Mars or Titan, I’m pretty sure you want to recapture and reuse your aerial probes, so this is something to work on.

I also wonder if the airgun is all that much better than a mechanical launch alternative.  Granted, you can probably get a much more powerful launch, and consequently greater range with the compressed air tube.  (Of course, there will be a cost to pump up the pressure, which will be even more significant in a thin atmosphere.)

But you could get pretty good range with a crossbow arrangement, which could launch a very similar projectile copter.  Or, just for fun, how about a trebuchet launcher?

Any and all of these systems will be challenging to operate on a remote planet.  Even if loading, aiming, and launching are fully automated, the moving parts will be vulnerable to dust and corrosion.  On Mars the launch tube is going to have grit in it, which will play havoc with both the air seals and the projectile’s exit from the tube.  On Titan, the atmosphere is likely to be corrosive, and it may also deliver sleet and ice to clog your tube.

But I have to say that this launch system would close the circle to ancient traditions of fireworks.  Our drone sky shows could be launched from mortars, analogous to pyrotechnics.


  1. Evan Ackerman, Caltech and JPL Firing Quadrotors Out of Cannons, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/drones/caltech-and-jpl-firing-quadrotors-out-of-cannons
  2. Daniel Pastor, Jacob Izraelevitz, Paul Nadan, Amanda Bouman, Joel Burdick, and Brett Kennedy, Design of a Ballistically-Launched Foldable Multirotor, in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2019: Macau.
  3. Daniel Pastor, Jacob Izraelevitz, Paul Nadan, Amanda Bouman, Joel Burdick, and Brett Kennedy, Design of a Ballistically-Launched Foldable Multirotor. arXiv arXiv:1911.05639, 2019. https://arxiv.org/abs/1911.05639

 

Robot Wednesday

Urchinbot!

In the never-ending exploration of biomimetic robots, it’s not all butterflies.  If you want innovation, you want to look at insects and you want to look in the ocean.  ’Cause…that’s where life is weird.

This winter researchers from Harvard’s Wyss Institute for Biologically Inspired Engineering report on yet another interesting biomimetic system, a sea urchin-inspired robot [2].  As Sensei Evan Ackerman says, it’s “One of the Weirdest Looking Robots We’ve Ever Seen.” [1]

To be fair, it looks weird partly because it is out of context, out of scale, and made of plastic.  But it is also interesting because it is modeled after a juvenile urchin, which has a substantially different body plan from mature adults.  Specifically, the Urchinbot has two kinds of appendages, rigid spines and extensible “tube feet”, arranged in a five-fold symmetric layout.  (Adults have a much more complicated structure.)

The urchinbot was designed to closely emulate the natural urchin’s locomotive mechanisms.  The spines are attached to the body by a rigid ball joint, and actuated by three soft domes that push the rigid spine in different directions.  The tube feet inflate to extend, deflate to retract, and use a magnet to emulate the stickiness of the urchin’s adhesive feet.

The research demonstrates some “gaits”.  The urchinbot drags itself (slowly) across a surface, and can also rotate.  This prototype is limited, but it works!
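The tube-foot gait as described is basically a repeating extend / attach / retract / release cycle.  Here is a toy state-machine sketch of that cycle; the actuator names and timings are invented for illustration, not taken from the paper.

```python
import time

# One tube-foot drag cycle, following the extend / attach / retract / release
# sequence described above.  Actuator names and durations are invented.
CYCLE = [
    ("inflate_tube_foot",  0.5),   # extend the foot toward the surface
    ("engage_magnet",      0.1),   # emulate the adhesive tip
    ("deflate_tube_foot",  0.5),   # shortening the foot drags the body forward
    ("release_magnet",     0.1),   # let go before the next reach
]

def run_gait(cycles: int, actuate=lambda name: print("  actuate:", name)):
    """Run the drag-cycle state machine; actuate() would talk to real hardware."""
    for i in range(cycles):
        print(f"cycle {i + 1}")
        for action, duration in CYCLE:
            actuate(action)
            time.sleep(duration)   # placeholder for waiting on real actuators

if __name__ == "__main__":
    run_gait(cycles=2)
```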

The researchers suggest that a fully working urchinbot might have useful applications in underwater maintenance, where it is necessary to traverse irregular surfaces and jam into difficult spaces.

Cool.


  1. Evan Ackerman, Harvard’s UrchinBot Is One of the Weirdest Looking Robots We’ve Ever Seen, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/harvard-amphibious-urchinbot
  2. T. Paschal, M. A. Bell, J. Sperry, S. Sieniewicz, R. J. Wood, and J. C. Weaver, Design, Fabrication, and Characterization of an Untethered Amphibious Sea Urchin-Inspired Robot. IEEE Robotics and Automation Letters, 4 (4):3348-3354, 2019. https://ieeexplore.ieee.org/document/8754783

 


Robot Wednesday


More on Kiki

In an earlier post, I critiqued the new social robot, Kiki.  I think Kiki is fodder for a whole volume of sociotechnical analysis.  (A “white”, “female”, robot who is extremely submissive and lives only to please you.  Wow!)

This month, Evan Ackerman interviewed Mita Yun, one of Kiki’s moms [1].  The conversation gives us additional ideas about what they think they are doing.

One sociotechnical point to note is Yun’s headline claim that this social home robot is “completely useless”.  I think this comment reflects her background in the robot industry, which often has to justify its projects by claiming to do things useful (to the sponsors), or at least things with potentially “serious” applications.  (E.g., the EPFL researchers’ haptic skin “could help rehabilitation and enhance virtual reality”, while avoiding any mention of the obvious killer app, dildonics.)

She is also making the point that the “success” of a social robot is in how much people enjoy interacting with it, not in any particular economic or practical tasks it might perform.  Yun also makes clear that this also means that, no matter how many users request it, “we’re never going to add Alexa integration to Kiki”.   So there.

What does Sensei Yun think she is doing here?

She says she wants the robot to be a “character”, as in a story.  In this, we can see the influence of digital game design (and, I assume and hope, the classics [2, 3]).  She also references puppeteering, another very appropriate inspiration.

In an earlier post, I commented on some alarming aspects of Kiki’s character, which is realized with an artificial “personality”.  Yun’s remarks give us additional insight into what they were striving for.

Notably, Kiki is very simple: it does not move around or converse.  “[W]e started with the eyes,” and the designers paid more attention to the artificial personality than to complicated gestures and visual features.

I remarked on how submissive Kiki is, aiming above all to “please” you, the master.  Yun actually characterizes Kiki as “vulnerable”, with “slightly lower status than the human”.  (This is sounding more and more like a slave, all the more so since Kiki is clearly female.)

However, Kiki is non-verbal, so it is psychologically in the range of a pet.  “Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.”  So: not a slave or a wife, but a guinea pig.

I’m not sure that these clarifications make me any less alarmed by the social psychology of this device.

On one hand, their naïve model of a personality is at least plausible for something like a “guinea pig”.  On the other hand, it’s not clear why this kind of pet should have white skin, or intentionally be so damn “vulnerable”.

If you take this as a sort of artificial guinea pig, then “vulnerable” and “submissive” are defensible, I suppose, if not necessarily ideal. Of course, a wired-in desire to please the master is not necessarily natural, and certainly raises moral questions.

Deliberately designing a vulnerable, submissive “female” who has no choice but to please seems to speak volumes about the desires and assumptions of the designers.

When you look at Kiki this way, it is basically a poor substitute for a pet.

If you could have a guinea pig, why would you want a Kiki?  The main advantages of Kiki are low maintenance (no food, litter, or vet bills) and a wired-in desire to please you.  Kiki won’t (indeed, can’t) escape or die, and must try hard to make you happy.  Is that a good thing?

It’s hard for me to see why people would want this, at least after the initial novelty wears off.

Yun is happy to tell you that Kiki does nothing useful, except try to keep your attention and make you happy.  At best, it’s as interesting as a not-very-smart pet, without the actual burdens of caring for another living thing.  And it’s not particularly snuggly.

I dunno about this.

But, as I said before, it will be a great source of student projects analyzing the psychology and anthropology of Kiki and her masters.


  1. Evan Ackerman, This “Useless” Social Robot Wants to Succeed Where Others Failed, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/home-robots/kiki-social-home-robot
  2. Brenda Laurel, Computers as Theatre, Reading, MA, Addison Wesley, 1991.
  3. Jane McGonigal, Reality is broken: why games make us better and how they can change the world, New York, Penguin Press, 2011.

 

Robot Wednesday


A Robot Puppeteer from ETH

String puppeteering is amazing to me, almost magical.  How do they do it?

I was very interested to see the report from ETH of a robot puppeteer, which works pretty amazingly well [2].  Wow!  How did they do it?

What they didn’t do was design a “theory of puppets”, or a programming language (the “Pinocchio” language?).

What they did do is apply motion planning and physics simulation to rapidly predict the effects of gravity, the strings, and the articulation of the puppet.  This planner computes optimal motions of the robot that move the puppet to the desired position.  If I understand correctly, the simulation runs continuously, computing roughly the next second of motion in order to choose the best moves.

“we devise a predictive control model that accounts for the dynamics of the marionette and kinematics of the robot puppeteer.” ([2], p. 1)

This works amazingly well!
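To make the predictive-control idea concrete, here is a drastically simplified sketch (not the authors’ formulation): the “marionette” is a single point mass hanging from a handle on a stretchy string, and at every step the controller simulates a handful of candidate handle motions a short horizon into the future, then keeps the one that leaves the puppet closest to the target.  The dynamics model, costs, and parameters are all toy assumptions.

```python
import numpy as np

# Toy "marionette": a unit point mass hanging from a handle on a stiff
# spring-damper "string".  The controller picks handle velocities by simulating
# several candidate motions a short horizon ahead (random shooting) and keeping
# the best one -- a bare-bones stand-in for the paper's predictive controller.
DT = 0.02                                  # simulation time step, s
G = np.array([0.0, -9.81])                 # gravity, m/s^2
K, C, REST = 200.0, 5.0, 0.5               # string stiffness, damping, rest length

def step(puppet_pos, puppet_vel, handle_pos):
    """One semi-implicit Euler step of the puppet hanging from the handle."""
    d = handle_pos - puppet_pos
    dist = np.linalg.norm(d) + 1e-9
    stretch = max(dist - REST, 0.0)        # a string only pulls, never pushes
    force = K * stretch * d / dist - C * puppet_vel + G
    new_vel = puppet_vel + force * DT      # unit mass
    return puppet_pos + new_vel * DT, new_vel

def plan_handle_velocity(puppet_pos, puppet_vel, handle_pos, target,
                         horizon=25, samples=64):
    """Random-shooting MPC: pick the handle velocity whose rollout ends nearest the target."""
    best_cost, best_v = np.inf, np.zeros(2)
    for _ in range(samples):
        v = np.random.uniform(-1.0, 1.0, size=2)       # candidate handle velocity
        p, pv, h = puppet_pos.copy(), puppet_vel.copy(), handle_pos.copy()
        for _ in range(horizon):                        # roll the model forward
            h = h + v * DT
            p, pv = step(p, pv, h)
        cost = np.linalg.norm(p - target)
        if cost < best_cost:
            best_cost, best_v = cost, v
    return best_v

if __name__ == "__main__":
    puppet, vel = np.array([0.0, -0.5]), np.zeros(2)
    handle, target = np.array([0.0, 0.0]), np.array([0.4, -0.3])
    for _ in range(200):                                # replan at every step
        handle = handle + plan_handle_velocity(puppet, vel, handle, target) * DT
        puppet, vel = step(puppet, vel, handle)
    print("puppet ended near", np.round(puppet, 2), "target was", target)
```

The real system has to handle many strings, a fully articulated puppet, and the kinematic limits of two robot arms, which is exactly what makes the paper’s model so much harder than this toy.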

The researchers can also design optimal “paddles” for particular puppets and motions.  The simulation system would also enable the development of new puppets in silico, and would make it possible to modify the designs of existing puppets.

The researchers suggest that this planning model could be useful for challenging manipulation tasks, such as cloth, soft parcels, or cables.  (I think this betrays their affiliation with an engineering institute.)

I think it would also be very interesting to see how this model matches the behavior of human experts.  The results look similar, but is this an accurate description of what a human puppeteer “knows”?  As Evan Ackerman points out, expert human puppeteers seem to be a lot better, so what is missing from the simulation?

It is very possible that humans have alternative ways to do it, and if so, it might be interesting to incorporate these “algorithms” into a simulation.  In addition, it would be interesting to see how puppeteers learn the craft, which might offer lessons for the development of the models.  (In other words, this is an opportunity to do some “biomimetic” design, mimicking a somewhat mysterious human skill.)

Finally, seeing this simulation made me think about making much more complicated “puppets” by creating puppet puppeteers.  In principle, a computational puppeteer could manipulate a puppet that is attached to another puppet, pulling strings that make other strings pull, etc.


  1. Evan Ackerman, ETH Zurich Demonstrates PuppetMaster Robot, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/eth-surich-puppetmaster-robot
  2. Simon Zimmermann, Roi Poranne, James M. Bern, and Stelian Coros, PuppetMaster: robotic animation of marionettes. ACM Transactions on Graphics, 38 (4) July 2019. https://dl.acm.org/citation.cfm?doid=3306346.3323003

 

Robot Wednesday

Soft Exo Suit – I want to dance in it

I vividly remember when I saw the first blurry, probably bootlegged, images from the first Star Wars movie.  Power armor!!!  Yes!!

In the decades since, this Science Fiction staple has approached reality, as exoskeletons appear in factories, workplaces, and military facilities.  Real power suits.  Yes!

This summer, researchers at Harvard report on another iteration, a “soft exosuit” that augments hip movement, with the effect of saving energy (or getting more out of the same energy) [2].

Apparently the device is programmed to do a complex set of moves that are tuned to walking and running gaits.

The thing that strikes me about this is that it is starting to be a realistic wearable device.  No external power cables, not too bulky.

(Image caption)  “The portable exosuit is made of textile components worn at the waist and thighs, and a mobile actuation system attached to the lower back which uses an algorithm that robustly predicts transitions between walking and running gaits.”  Image credit: Wyss Institute at Harvard University.

Cool!

The experiment uses two “profiles”, one for walking and one for running.  The tests were run on a small group of people (all men) with very similar anatomy.  Obviously, these “one size fits all” algorithms probably aren’t optimal for everyone, so it would be necessary to be able to tune the apparatus for different people.
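Here is a toy sketch of what switching between two assistance profiles could look like: estimate the dominant stride frequency from a window of vertical acceleration, then pick a profile by threshold.  The synthetic signals, the 2 Hz threshold, and the profile names are invented; the actual classifier in [2] is based on the suit’s own sensing and is not reproduced here.

```python
import numpy as np

# Toy gait-mode switch: estimate the dominant stride frequency from a window of
# vertical acceleration and pick an assistance profile.  The synthetic signals,
# the 2 Hz threshold, and the profile names are invented for illustration.
FS = 100.0                                   # sample rate, Hz (assumed)

def dominant_frequency(accel_window: np.ndarray) -> float:
    """Dominant frequency (Hz) of a vertical-acceleration window via an FFT."""
    detrended = accel_window - accel_window.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / FS)
    return float(freqs[spectrum[1:].argmax() + 1])   # skip the DC bin

def choose_profile(accel_window: np.ndarray) -> str:
    """Pick between two (hypothetical) assistance profiles."""
    return "running_profile" if dominant_frequency(accel_window) > 2.0 else "walking_profile"

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, 1.0 / FS)
    walking = 1.0 * np.sin(2 * np.pi * 1.5 * t)      # ~1.5 Hz synthetic "walking"
    running = 3.0 * np.sin(2 * np.pi * 3.0 * t)      # ~3.0 Hz synthetic "running"
    print("walking window ->", choose_profile(walking))
    print("running window ->", choose_profile(running))
```

Tuning for different wearers would then amount to fitting the threshold and the assistance profiles to each person, rather than hard-coding them.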

And if it is programmable, then it should be able to learn “gaits” other than walking and running.  In particular, it should be able to learn to augment specific behaviors, such as jumping, sprinting, balancing.

I myself won’t be happy until it can be programmed for dancing.  In particular, dancers and choreographers should be able to augment the dance, enabling new expression.  Faster, higher, longer.

Now that will be really cool!


  1. Evan Ackerman, Soft Exosuit Makes Walking and Running Easier Than Ever, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/medical-robots/soft-exosuit-makes-walking-and-running-easier-than-ever
  2. Jinsoo Kim, Giuk Lee, Roman Heimgartner, Dheepak Arumukhom Revi, Nikos Karavas, Danielle Nathanson, Ignacio Galiana, Asa Eckert-Erdheim, Patrick Murphy, David Perry, Nicolas Menard, Dabin Kim Choe, Philippe Malcolm, and Conor J. Walsh, Reducing the metabolic rate of walking and running with a versatile, portable exosuit. Science, 365 (6454):668, 2019. http://science.sciencemag.org/content/365/6454/668.abstract

 

PS.  Wouldn’t “Soft Exo Suits” be a great name for a band?

Robot Wednesday


Improvised robots

These days, we see robots built out of almost anything, and learning to move in all kinds of crazy ways.  Basically, developing “gaits” for a mobile machine is becoming a computation that can be done on demand.

Researchers from Tokyo report on a dramatic case of this: ad hoc robots built from tree branches that learn to ‘walk’ [2].  The robots are constructed from generic connectors that have motors and sensors.  These parts are attached to branches to create weird ‘robots’.

The gaits are developed by 3D-scanning the natural branches, and then using machine learning to learn how to walk in a 3D simulation.  The results are odd, but effective.  “It isn’t how well the sticks walk, it’s that they walk at all.”
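As a drastically simplified stand-in for “learn to walk in simulation”, here is a random-search loop that tunes per-joint sinusoidal oscillators against a toy surrogate fitness function.  The real project evaluates candidate gaits in a physics simulation using the scanned branch geometry and deep learning; everything in this sketch, the surrogate fitness especially, is an invented placeholder.

```python
import numpy as np

# Drastically simplified stand-in for "learn a gait in simulation": random
# search over the parameters of per-joint sinusoidal oscillators.  The
# toy_fitness() function is a placeholder for "distance walked in the physics
# simulator with the scanned branch geometry" -- it just rewards moderate
# amplitudes, out-of-phase joints, and smooth motion.
N_JOINTS = 3
rng = np.random.default_rng(0)

def controller(params: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Joint angles over time: amplitude * sin(2*pi*freq*t + phase) per joint."""
    amp, freq, phase = params.reshape(3, N_JOINTS)
    return amp[:, None] * np.sin(2 * np.pi * freq[:, None] * t[None, :] + phase[:, None])

def toy_fitness(params: np.ndarray) -> float:
    """Placeholder reward; the real evaluation would run a physics simulation."""
    t = np.linspace(0.0, 2.0, 200)
    angles = controller(params, t)
    amp, _, phase = params.reshape(3, N_JOINTS)
    smoothness = -np.abs(np.diff(angles, axis=1)).mean()   # penalize thrashing
    coordination = np.abs(np.sin(phase[0] - phase[1]))     # reward phase offsets
    return float(np.clip(amp, 0.0, 1.0).sum() + coordination + smoothness)

best = rng.uniform(-1.0, 1.0, size=3 * N_JOINTS)
best_score = toy_fitness(best)
for _ in range(500):                       # the "training" loop
    candidate = best + rng.normal(scale=0.1, size=best.shape)
    score = toy_fitness(candidate)
    if score > best_score:
        best, best_score = candidate, score
print("best surrogate fitness:", round(best_score, 3))
```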


This is basically the same technology that is used for developing conventional robots, though you don’t have to scan the structural parts if you make them to order.  The machine learning is very general, and this project demonstrates that it can work with parts that are very different from anything humans would deliberately design.

The upshot seems to be that it is possible to construct a functioning crawler out of “found objects”. Artists have long been excited by the possibilities of repurposing objects, and this technique allows an artist to make a robot from whatever they choose.

Evan Ackerman points out that this sort of robot might be practically useful in certain situations [1].  For one thing, it could allow creating special or one off robots out of generic hardware and local materials.

“Not having to worry about transporting structural materials would be nice, as would being able to create a variety of designs as necessary using one generalized hardware set.” [1]

I wonder if you could also make temporary field repairs, replacing a broken leg with an ad hoc stump from a tree branch, and then learning to limp.

I imagine that this concept could be extended to other aspects of robotic function.  I think that you could construct an ad hoc manipulator out of tree branches, as well as structures such as cargo baskets and sensor pods.

It would be interesting to see how well this concept could be scaled down.  Imagine a swarm of little bots, built out of twigs and grass!


  1. Evan Ackerman, Robots Made Out of Branches Use Deep Learning to Walk, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robots-tree-branches-deep-learning-walk
  2. Azumi Maekawa, Ayaka Kume, Hironori Yoshida, Jun Hatori, Jason Naradowsky, and Shunta Saitu, Improvised Robotic Design With Found Objects, in Workshop on Machine Learning for Creativity and Design at NeurIPS 2018. 2018: Montreal. https://nips2018creativity.github.io/doc/improvised_robotic_design.pdf

 

Robot Wednesday