
Improvised robots

These days, we see robots built out of almost anything, and learning to move in all kinds of crazy ways.  Basically, developing “gaits” for a mobile machine is becoming a computation that can be done on demand.

Researchers from Tokyo report on a dramatic case of this:  ad hoc robots built from tree branches that learn to ‘walk’ [2].  Each robot is constructed from generic connectors that contain motors and sensors.  These parts are attached to branches to create weird ‘robots’.

The gait is developed by 3D scanning the natural branches, and then using machine learning in a 3D simulation to discover a workable walk.  The results are odd, but effective.  “It isn’t how well the sticks walk, it’s that they walk at all.”
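This scan-then-learn loop is simple enough to sketch.  Below is a minimal, hedged illustration in Python: the simulator is a stub (the real work is rolling out the scanned 3D model in a physics engine), and the “learning” is plain random-mutation hill climbing over per-joint sine-wave parameters, not necessarily what the authors used.

    import numpy as np

    def simulate_gait(params):
        """Stub for a physics-engine rollout of the scanned branch robot.
        Returns a fitness such as distance walked; here it is a toy
        function so the sketch runs on its own."""
        amp, freq, phase = params.reshape(3, -1)
        return float(-np.sum((amp - 0.5) ** 2) - np.sum((freq - 1.0) ** 2))

    def find_gait(n_joints=3, iters=200, sigma=0.3, seed=0):
        """Hill-climb over per-joint sine amplitudes, frequencies, phases."""
        rng = np.random.default_rng(seed)
        best = rng.uniform(0.0, 1.0, 3 * n_joints)
        best_score = simulate_gait(best)
        for _ in range(iters):
            cand = best + sigma * rng.normal(size=best.size)
            score = simulate_gait(cand)
            if score > best_score:      # keep mutations that walk farther
                best, best_score = cand, score
        return best, best_score

    params, score = find_gait()
    print("best simulated fitness:", round(score, 3))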

 

This is basically the same technology that is used for developing conventional robots, though you don’t have to scan the structural parts if you make them to order.  The machine learning is very general, and this project demonstrates that it can work with parts that are very different from anything humans would deliberately design.

The upshot seems to be that it is possible to construct a functioning crawler out of “found objects”. Artists have long been excited by the possibilities of repurposing objects, and this technique allows an artist to make a robot from whatever they choose.

Evan Ackerman points out that this sort of robot might be practically useful in certain situations [1].  For one thing, it could allow creating special or one-off robots out of generic hardware and local materials.

“Not having to worry about transporting structural materials would be nice, as would being able to create a variety of designs as necessary using one generalized hardware set.” [1]

I wonder if you could also make temporary field repairs, replacing a broken leg with an ad hoc stump from a tree branch, and then learning to limp.

I imagine that this concept could be extended to other aspects of robotic function.  I think that you could construct an ad hoc manipulator out of tree branches, as well as structures such as cargo baskets and sensor pods.

It would be interesting to see how well this concept could be scaled down.  Imagine a swarm of little bots, built out of twigs and grass!


  1. Evan Ackerman, Robots Made Out of Branches Use Deep Learning to Walk, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robots-tree-branches-deep-learning-walk
  2. Azumi Maekawa, Ayaka Kume, Hironori Yoshida, Jun Hatori, Jason Naradowsky, and Shunta Saito, Improvised Robotic Design with Found Objects, in Workshop on Machine Learning for Creativity and Design at NeurIPS 2018. 2018: Montreal. https://nips2018creativity.github.io/doc/improvised_robotic_design.pdf

 

Robot Wednesday

Drones Over Tanzania!

I’m more than a little skeptical of the widely ballyhooed commercial use cases for UAVs:  basically deliveries and local reconnaissance.  And I’m especially skeptical of neo-colonialist assumptions that places like Africa are not so much poor as blank slates, which can be “fixed” by the magic of technology.

So I was interested to read Evan Ackerman and Michael Koziol’s article on the local “drone industry” in Tanzania [1].  There’s a lot of interesting stuff going on in Africa, not least because it’s Africans doing it, not “experts” from Silicon Valley.

For one thing, commercial products, even hobby-grade UAVs, are too expensive for Africa.  Fiberglass and carbon fiber are aerospace materials—sexy, but hardly the stuff for a small business in Africa.  And who needs them anyway?  The article reports on drones made with bamboo and zip ties.  There is sexy software inside, but the outside is cheap and easy to repair.

Even more interesting, Tanzanians are pioneering uses “that aren’t even on the radar for the United States and Europe.”

One use case is land surveying.  Where I live, everything was surveyed to death centuries ago—an important tool of appropriation. (When you steal a continent, you want to keep careful records of which thief owns the swag.)  But much of Tanzania has never been ground surveyed.  Satellite imagery isn’t very detailed, and small aircraft are expensive.

Commercial and hobby-grade drones are too expensive for most local users to own outright.  So why not operate a rent-a-drone service?

A second use case is the classic delivery service.  In my town, drone delivery competes over the last ten miles with motor vehicles, which are pretty efficient and benefit from enormous amounts of infrastructure.  In Tanzania, many places rely on motorbikes over limited roads.  And, of course, there are islands and other isolated spots with even less connectivity.

So delivery drones make a whole lot more sense in Tanzania, at least outside the main towns, which is a lot of the place.  Of course, it remains to be seen how much business there is for a kilo or two of “urgent” cargo.  Obviously, low-cost, locally repairable aircraft would be an asset, and maybe a swarm could lift larger loads, which might change the equation.

Now, to be sure, this article is mainly about enthusiastic drone-heads (is there a term in Swahili for this? :-) ), and they’re pretty much the same everywhere.  These use cases face similar economic challenges in Africa as elsewhere.  Just how much business is there for aerial surveying or delivering priority packages?  I dunno.

These projects are in very early stages, and there is a lot that might happen.  For starters, unlike in more developed markets, government policy in Tanzania has yet to be set.  And, this being Africa, there is a possibility of corruption distorting policy.  Depending on how policy and law come down, local entrepreneurs could win or lose big time.

Assuming these businesses continue and thrive, they may have major side-effects.  The land-surveying services are based on defining firm property boundaries, for the purposes of establishing formal ownership, obtaining loans, and transferring property.  In the past, these legal processes were predominantly used by wealthy and privileged people.  Spreading these legal protocols to wider populations could create wealth, but it can also create inequality (displacing poor people who can’t prove title) and conflict (disputed titles, foreclosures, etc.).

And, of course, there are other, darker uses of surveillance drones.  Police, gangs, and militias might make use of low cost, locally made drones. Putting an air force in the hands of any group that has a few thousand dollars might be dangerous and destabilizing anywhere, and Tanzania is no exception.

So, it will be interesting to see what happens as Tanzania boots up local drones and drone-based businesses, and maybe exports them to neighbors, too.


  1. Evan Ackerman and Michael Koziol, Tanzania Builds a Drone Industry From Local Know-How and Bamboo, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/robotics/drones/tanzanias-homegrown-drone-industry-takes-off-on-bamboo-wings

 

Robot Wednesday

Cathartic robots: bad design, worse psychology

“By defining the challenges and applying them to the case study of Cathartic Objects, we learn that designers might be able to rely on literature and on their own judgment to sensibly design for negative emotions. However, evaluating the design still carries risks, and perhaps remains limited to auto-ethnographical research for the time being.” (From [2])

Huh?

This applied psychology experiment has little in the way of background research, and no experimental or clinical evaluation of the imagined psychological benefits. In particular, the project is predicated on the supposed benefits of “catharsis”, with little consideration for competing hypotheses about, say, the negative effects of rehearsal and reward of aggressive behaviors.

So what is the project?  Essentially, it is a collection of objects that a person interacts with in destructive or abusive ways.  The interactions are actually quite disturbing.

    • A little animal-like object that the user stabs. The robot reacts as if in pain. I.e., you are encouraged to torture a helpless being.
    • An object that detects “swear words” (I imagine there are cultural and linguistic issues with this recognition), and lights up as more and more verbal abuse is delivered. In short, you are reinforced for verbally abusing the object.
    • A doll that (somehow) detects that the user is “upset”, and delivers a mocking, abusive laugh. The user is invited to react to the mocking by punching the doll. I.e., this object is abusive, and rewards you for reacting to verbal abuse with physical violence.
    • A personal message is written on a ceramic tile, which the user is invited to smash. In short, you are encouraged to vent anger with violent destruction.

Yoiks!!

Obviously, the robotic technology is not particularly necessary, though it does have the virtue that these are inanimate objects. The researcher has remarked that they are intended to be “non-anthropomorphic”, in the hope that the behaviors will not transfer. Unfortunately, I’m very sure that the behaviors you learn with these devices definitely will transfer to other, non-robotic objects, including people and animals.

By the way, I think this isn’t so much “catharsis” as “displacement”—attacking a helpless robot instead of the cause of the negative emotion.

What’s wrong with this picture?  It’s a really poor psychology experiment

You can tell that I don’t like this project very much.

I think it is a very poor approach to problem solving, and has a strong potential for increasing violent behavior.  I also don’t like that they make claims for alleged psychological benefits, without any evidence that this approach is safe and effective.  That is malpractice, plain and simple.

A big part of the problem is a lack of background research.  The authors comment that “we learn that designers might be able to rely on literature and on their own judgment to sensibly design for negative emotions” [2].  Well, they relied on their own intuitions a lot more than on the broad literature, and that’s a problem.

At the core of the issue is how they think about the problems they address. The problem is perceived as “the user has negative emotions” (especially “frustration” and “anger”) and the goal is to “make the user feel better”.

In the likely event that the negative emotions are a symptom rather than the disease, this approach is not likely to help very much.  Worse, I’m pretty sure that violence and verbal abuse will not make the underlying problem better.  Quite the contrary.

In an interview, the researcher indicates that “[i]t has been extremely challenging to get approval for formal human subject studies that center around negative emotions and destructive behaviors.” [1]   Ya think?  Personally, I think the CMU IRB is completely justified, and should make him follow the legal and ethical requirements for research on human subjects.  And, by the way, without control or comparison conditions, or even measurement, it’s not even a real study—which an IRB will generally reject, for very good reason.

The researcher also remarks that “[w]e also know, according to research in psychology, that people tend to feel aversion towards the idea of any negative emotions, which does not help the case” [1], which seems to imply that the IRB is somehow “afraid” of his research.  This is just plain dumb (and insulting).  It’s not aversion to negative emotions that is the problem; it is bad design and worse methodology:  designs that appear to promote violent behavior, seemingly without understanding what they are doing, and with a very shaky theoretical justification.

Further “Investigations”?  Please, don’t!

The researcher says he plans to test these psychological objects “in an interactive installation setting”.  (Note that an art installation doesn’t require IRB approval.)

“Future work will attempt to test out these concepts of destruction and catharsis in an interactive installation setting. By having people directly interact with cathartic objects, I hope to learn more about people’s emotions and experiences, as well as perceptions and values regarding interactive objects that are designed to support behaviors of destruction and catharsis.” [1]

Please don’t.  These prototypes are really, really bad for people.

How This Could Be Improved

Obviously, there should be some serious consideration of the potential hazards of these devices, and concern for the welfare of the victims, er, users.  In order to learn what effects, positive or negative, if any, these interactions have, there needs to be a much better research design.

Let’s start with measurement.  These devices are supposed to make the participants “feel” better, and perhaps reduce the level of negative emotions.  (I think the former would be called “displacement”, the latter would be “catharsis”, but the terminology doesn’t matter.)

Possible measures of these emotions could be:  self-reports, observer ratings, and even physiological measures.  Obviously, we need before-and-after measurements.  And I think that long-term effects would be important; among other things, there might be cumulative effects, or habituation that reduces the effects.  So that means follow-ups for weeks and months.
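To make that concrete, here is a minimal sketch of the pre/post analysis such a study would need.  Everything here is simulated and the effect sizes are invented; the point is only the shape of the design: measure before and after, and compare the change against a baseline condition.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 40
    # Self-reported negative affect (0-100) before and after each condition.
    pre_treat  = rng.normal(65, 10, n)
    post_treat = pre_treat - rng.normal(5, 8, n)   # assumed small improvement
    pre_ctrl   = rng.normal(65, 10, n)
    post_ctrl  = pre_ctrl - rng.normal(4, 8, n)    # time alone also helps

    change_treat = pre_treat - post_treat
    change_ctrl  = pre_ctrl - post_ctrl

    # Compare improvement against the no-treatment baseline, not against zero.
    t, p = stats.ttest_ind(change_treat, change_ctrl)
    pooled_sd = np.sqrt((change_treat.var(ddof=1) + change_ctrl.var(ddof=1)) / 2)
    d = (change_treat.mean() - change_ctrl.mean()) / pooled_sd
    print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")

A real design would add observer ratings and physiological measures, plus follow-ups at weeks and months to catch transfer and habituation.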

Second, it seems very important to investigate “transfer” and learning effects.  Do these interactions “train” the user to be violent and abusive in other interactions with robots, animals, and people?  It is important to realize that, if these objects work as he hopes, then they will probably be reinforcing for the users.  And that might very well increase their abusive behavior.  As long as they only stab the robot, that’s disturbing but maybe OK.  If they start stabbing pets or other people, too, that’s a serious problem.

Third, I might suggest considering the effects for different people.  I’m 100% positive you’ll find gender differences in the experience of this very male-oriented design.  Age, race, and culture will almost certainly make a difference in how these are received and used.  (If nothing else, the “swear words” device is going to need a multi-lingual, multicultural suite of dictionaries.)

Finally, let’s talk about control conditions.  Or at least comparison conditions.

If you are going to claim that these objects have beneficial effects, then we need to know “compared to what”?  And there are a lot of possible comparisons that might and should be made.

Here are some plausible comparison conditions, in no particular order:

    • Baseline: no treatment. (Bad feelings will fade with time.)
    • Similar “catharsis” with “dumb”, non-robotic objects (e.g., yell at an image on the screen, throw darts at a picture, punch a punching bag). By the way, there is extensive research on such “treatments”.  Look it up.
    • Meditation/mindfulness etc. – recognize and set aside negative feelings instead of acting them out. (Also has extensive literature.)
    • Drugs?
    • Human (instead of robot) interaction – positive. Have a soothing conversation instead of a temper tantrum.
    • Human interaction – negative. Have an argument with a person instead of a temper tantrum.
    • Animal interaction – positive. Pet a puppy, or something like that.
    • Animal interaction – negative. Torture a puppy. (Obviously unethical, so this condition can’t actually be run.)

Get the idea?  This needs to be compared to all the other things people can and do do to deal with negative emotions, to show when and where it might be effective, and what effects it might have.

What Really Should Be Done

What I would really like to see, most of all, is robot objects that can help people actually solve problems, not practice violent displacement behaviors.  Can a robot help de-escalate negative behaviors?  Help people reframe situations?  Encourage positive behaviors, such as seeking information and rational communication?

Note that I’m not talking about “technology that sets out to make people ‘happier’ or more efficient,” as these researchers sneer.  I’m talking about technology that sets out to help people solve problems non-violently and positively, even when they start with negative emotions.

The goal should be to make life better, not to vent.


  1. Evan Ackerman, These Robotic Objects Are Designed to Be Stabbed and Beaten to Help You Feel Better, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/home-robots/these-robotic-objects-are-designed-to-be-stabbed-and-beaten-to-help-you-feel-better
  2. Michal Luria, Amit Zoran, and Jodi Forlizzi, Challenges of Designing HCI for Negative Emotions, in Conference on Human Factors in Computing Systems. 2019: Glasgow, UK. https://www.researchgate.net/publication/332820712_Challenges_of_Designing_HCI_for_Negative_Emotions

 

Cool Robot Hummingbird

Hummingbirds are awesome.  In fact, if I hadn’t seen them, I’d say they were impossible.  I mean, the incredible hovering and darting, done by insanely fast flapping.  Oh, and absurdly beautiful plumage, too.  C’mon.  Obviously, an imaginary animal.

This summer, researchers at Purdue report on a bio-inspired robot flyer that imitates hummingbird flight [1].  Wow!  (Three (!) upcoming papers: [2, 3, 4])

The Purdue Hummingbird* is a flapping-wing robot with a 17 cm wingspan and a weight of 12 grams.  The wings flap (independently!) at a hummingbirdy 30+ Hz.  It can execute hummingbird-like hovering and maneuvering.  (The papers describe the abilities of the biological hummingbird as “extraordinary”, “extreme”, “aggressive”, and “unmatched by small scale man-made vehicles”, among other well-justified adjectives.)

Flapping fast isn’t the hard part; the hard part is controlling the flight.  Flapping flight is quite unstable, or at least on the edge of instability most of the time.  The researchers used machine learning to create control models.  In fact, they used a combination of more than one model, reflecting the competing constraints of stability and extreme maneuvering. The models also incorporate sensory feedback (from its wings!) to navigate and correct for anomalies like wind gusts.**
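To give a flavor of what combining models can look like (my sketch of the general idea, not the Purdue team’s architecture): a model-based stabilizer holds hover, a learned policy adds maneuvering, and wing-sensor feedback corrects for disturbances.  All dynamics, gains, and the policy below are placeholders.

    import numpy as np

    def stabilizing_controller(state, target):
        """PD-style command that holds a hover setpoint."""
        kp, kd = 4.0, 0.8
        pos, vel = state[:3], state[3:]
        return kp * (target - pos) - kd * vel

    def learned_policy(state, weights):
        """Stand-in for a trained maneuvering policy (here, a linear map)."""
        return weights @ state

    def wing_feedback(wing_strain):
        """Correction from wing load sensing, e.g., to reject a gust."""
        return -0.1 * wing_strain

    def control_step(state, target, weights, wing_strain):
        # Blend the stabilizer, the maneuvering policy, and wing sensing.
        u = stabilizing_controller(state, target)
        u += learned_policy(state, weights)
        u += wing_feedback(wing_strain)
        return np.clip(u, -1.0, 1.0)    # normalized wing-stroke commands

    state = np.zeros(6)                  # position (3) + velocity (3)
    weights = np.zeros((3, 6))           # untrained placeholder policy
    print(control_step(state, np.array([0.0, 0.0, 1.0]), weights, np.zeros(3)))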

Cool!

The resulting flight is reported to be close to that observed in biological hummers.  That’s cool, and furthermore, it constitutes a “theory of hummingbirds”.  This would seem to beg for investigation of biological hummingbirds, to see if neurological structures can be identified that work like these models.

Well done all.


*which really needs a better name!

**I call the overall work “multiple models”. Technically, they refer to a “model” and a “policy”, and the sensory feedback is fed directly to the motors.  See the papers for detailed explanations of all the stuff they did.


  1. Evan Ackerman, This Robot Hummingbird Is Almost as Agile as the Real Thing, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/drones/robot-hummingbird-is-almost-as-agile-as-the-real-thing
  2. Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, and Xinyan Deng, Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals. arXiv:1902.09628, 2019. https://ui.adsabs.harvard.edu/abs/2019arXiv190209628F
  3. Fan Fei, Zhan Tu, Jian Zhang, and Xinyan Deng, Learning Extreme Hummingbird Maneuvers on Flapping Wing Robots. arXiv:1902.09626, 2019. https://ui.adsabs.harvard.edu/abs/2019arXiv190209626F
  4. Zhan Tu, Fan Fei, Jian Zhang, and Xinyan Deng, Acting Is Seeing: Navigating Tight Space Using Flapping Wings. arXiv:1902.08688, 2019. https://ui.adsabs.harvard.edu/abs/2019arXiv190208688T

 

Robot Wednesday

Weightless robots for the ISS

In the near future (assuming the US administration doesn’t pull the plug), NASA will be deploying “Astrobee”, a free-flying robotic system, on the International Space Station.  This is the latest in a series of “personal assistant” robots:  small helper bots designed specifically for the ISS.  The robot is a 30 cm cube, and moves by expelling fan-driven air through nozzles in any of 12 directions.  Modular add-ons include little arms and who knows what else [2].
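Twelve one-way nozzles make an over-determined propulsion system, and picking thrusts for a commanded motion is a small allocation problem.  Here is a hedged sketch: the nozzle geometry below is invented (two nozzles per body-axis direction), not Astrobee’s actual layout.

    import numpy as np
    from scipy.optimize import nnls

    # Each column is the unit force direction of one nozzle (body frame).
    dirs = []
    for axis in range(3):
        for sign in (+1.0, -1.0):
            v = np.zeros(3)
            v[axis] = sign
            dirs.append(v)
            dirs.append(v.copy())       # pretend: two nozzles per direction
    A = np.array(dirs).T                # 3 x 12 allocation matrix

    def allocate(force_cmd):
        """Nonnegative thrusts (nozzles blow, they don't suck) whose net
        force matches the commanded body-frame force."""
        thrusts, _residual = nnls(A, force_cmd)
        return thrusts

    print(allocate(np.array([0.10, 0.00, -0.05])).round(3))   # newtons, say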

There’s no gravity on the ISS, so these little guys are extremely maneuverable!  The Astrobee can be remotely driven by crew members or ground operators, or it can operate autonomously.  The latter case uses a 3D map of the station, which is a nicely contained and slowly changing environment—well suited for robot navigation.  The designers are experimenting with simple forms of robot-human interaction, including a touch screen with graphics portraying expressive cartoon eyes, and possibly turn signals and other messages.

As Evan Ackerman says, “Aren’t they adorable?” [1]

The overall goal is for the Astrobees to operate almost entirely autonomously, requiring minimal attention from the crew.  The initial payloads make it a very capable recorder of video and sound.  But who knows what science teams might propose (see https://www.nasa.gov/content/guest-science-resources).

I’ll note several limitations of these little guys.  As far as I can tell, the Astrobee only operates in the station’s atmosphere.  It can’t maneuver outside the station or, heaven forbid, in a depressurized area of the station.  I’m also pretty sure that without gravity the mechanics of grabbing, pushing, twisting, etc. have to be handled carefully.  The robot will need to brace itself in order to apply force, and will be easily tossed about by other forces (e.g., collisions with crew or objects).  The spacefarers are already familiar with these limitations, because they apply to the human crew, too.


  1. Evan Ackerman, NASA Launching Astrobee Robots to Space Station, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/space-robots/nasa-launching-astrobee-robots-to-iss-tomorrow
  2. Maria G. Bualat, Trey Smith, Ernest E. Smith, Terrence Fong, and D. W. Wheeler, Astrobee: A New Tool for ISS Operations, in 2018 SpaceOps Conference. American Institute of Aeronautics and Astronautics, 2018. https://doi.org/10.2514/6.2018-2517

 

Robot Wednesday

Cool Inflatable Robot

The recent interest in “soft” robots has explored various hydraulic and pneumatic actuators, and a variety of non-humanoid body plans, often bio-inspired or origami-inspired.

This spring, researchers at Brigham Young University report on explorations of an inflatable robot with a humanoid body type [2].   It looks a lot like the inflatable punching-bag clowns we had when I was a kid, except it has actuators and is computer-commanded.

If this can be made to work, there are many advantages.  At the top of the list is the safety of a light, soft robot.  Even if it accidentally hits you or falls on you, it’s not going to hurt.  (The video shows kids playing with it.)  It’s also light, and packs into a small space, which are good reasons for NASA’s interest.

“NASA is funding this research because inflatable robots are ideal for space exploration, being low size, low mass, durable, and safe.” (From [1])

Of course, it’s not so easy to make this work.  The balloon man isn’t rigid, and movement isn’t predictable or repeatable.  This means that conventional robot control commands don’t work, because you don’t know where the arm is at any time.

The researchers are developing “visual servoing”, which is a fancy term for visually guiding the motion.  Technically, this involves visual recognition of the robot’s pose, plus an approximate kinematic model, used to estimate the configuration and then computationally plan and command motion.  The model has to be quite adaptive, because the shape of the robot changes all the time.
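Here is a toy sketch of the servoing loop (my illustration of the general technique, not the BYU implementation).  The camera measurement and the kinematic model are stand-ins; a real system would estimate the tip position from vision and keep re-fitting the approximate model as the body deforms.

    import numpy as np

    def observe_tip(q):
        """Stand-in for the vision system: a toy two-link 'arm' whose tip
        position we pretend the camera measures."""
        return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                         np.sin(q[0]) + np.sin(q[0] + q[1])])

    def jacobian(q, eps=1e-5):
        """Numerical Jacobian of the (approximate) kinematic model."""
        J = np.zeros((2, 2))
        for i in range(2):
            dq = np.zeros(2)
            dq[i] = eps
            J[:, i] = (observe_tip(q + dq) - observe_tip(q)) / eps
        return J

    q = np.array([0.3, 0.4])             # actuator commands (toy 'joints')
    target = np.array([1.2, 0.9])        # where we want the hand to go
    for _ in range(50):
        err = target - observe_tip(q)    # error as measured by the camera
        q = q + 0.5 * np.linalg.pinv(jacobian(q)) @ err   # servoing step
    print(observe_tip(q).round(3), "vs. target", target)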

Nice work.


  1. Evan Ackerman, Inflatable Robots Are Destined for Space, If We Can Control Them, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/inflatable-robots-for-space
  2. P. Hyatt, D. Kraus, V. Sherrod, L. Rupert, N. Day, and M. D. Killpack, Configuration Estimation for Accurate Position Control of Large-Scale Soft Robots. IEEE/ASME Transactions on Mechatronics, 24 (1):88-99, 2019. https://ieeexplore.ieee.org/document/851091

 

Robot Wednesday

 

Hacking Tesla’s Autopilot

The folks that brought you the Internet are rushing to get you into a self-driving, network connected car.

What could possibly go wrong?

Setting aside the “disruption” of this core economic and cultural system, there have been quite a few concerns raised about the safety of these contraptions.  Automobiles are one of the most dangerous technologies we use, not least because we use them a lot.  Pretty much everything that can go wrong does go wrong, eventually.

Well, buckle up, ’cause it’s as bad as anyone might have thought.

This spring, the Tencent Keen Security Lab reported on successful hacks of Tesla cars [2].  The focus was on the Autopilot self-driving system.  In fact, they were able to root the car, monkey with the steering and other controls, and reverse-engineer the computer-vision lane detection to build a simple hack that could cause the vehicle to suddenly change lanes.

“In our research, we believe that we made three creative contributions:
1. We proved that we can remotely gain the root privilege of APE and control the steering system.
2. We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
3. We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.” ([2], p.1)

Gulp.

Rooting the car is obviously bad for many reasons, and in this case they used their access to discover weaknesses in the other systems.  Taking over the steering is, well, just about as bad as it could get.  Tesla’s response is that this isn’t a “real” problem, because the driver can always override at any time.  But doesn’t that defeat the purpose of the autopilot?

The lane changing hack is interesting, if rather academic.  They found a case where just the right paint on the road could fool the algorithm.  As Evan Ackerman puts it, “Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane” [1]. But this is really a rare case, and would probably be overridden if oncoming traffic were present.  As Ackerman comments, though, this brittleness is worrying because “the real world is very big, and the long tail of very unlikely situations is always out there.”
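The underlying weakness is generic to learned perceptual models: tiny, carefully chosen input changes can swing the output.  A toy illustration (a made-up linear “detector”, nothing to do with Tesla’s actual network):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=100)             # stand-in for a trained detector
    x = rng.normal(size=100)             # stand-in for a patch of road image

    def score(img):
        """Toy linear 'lane marking' score; positive means 'marking here'."""
        return float(w @ img)

    eps = 0.05                           # tiny change to every pixel
    x_adv = x - eps * np.sign(w)         # nudge each pixel against the score

    print(f"clean: {score(x):+.2f}   perturbed: {score(x_adv):+.2f}")

The per-pixel change is tiny, but the score shifts sharply.  Finding such perturbations for a real network, and realizing them as paint on a road, is much harder, but that is essentially what the sticker hack did in the physical world.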

To be fair to Tesla, a big part of the problem is that the car is full of software that is connected to a public network.  The hackers got in through the automatic updating system over the network.  Tesla is hardly the only car designed this way, and more companies are moving to remote updates for on board software.  Sigh.

There are many other autopilot and self-driving systems under development (including at Tencent).  They will have similar vulnerabilities.  If the received wisdom from software engineering holds true for these systems, there will be a defect for every few hundred lines of code—which, over the millions of lines in a modern car, means thousands of bugs to exploit!

I’ll also note that a key part of the attack was that the root access allowed them to examine the system in detail and at leisure.  This raises a big question about proposals to release or open source software for cars and other systems.  Yes, this can lead to the rapid discovery of flaws. But it also means that hackers can have their way with the system.  And it certainly doesn’t mean that bugs will be fixed quickly out in the field.

My own view is that cars and other life-threatening technology should never, ever be connected to a public network, and should never do software updates over public networks.  That’s less convenient and costlier for the manufacturer, but that’s just tough.


  1. Evan Ackerman, Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane, in IEEE Spectrum – Cars That Think. 2019. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
  2. Tencent Keen Security Lab, Experimental Security Research of Tesla Autopilot. Tencent Keen Security Lab 2019-03, 2019. https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf

 

Robot Wednesday