Tag Archives: Evan Ackerman

A Tiny EHD Flyer

One of my all-time favorite Mythbusters episodes was a test of “anti-gravity” devices found on the Internet. One of them was a strange triangle with no moving parts, and they were all laughing at yet another goofy Internet scam—and then it actually lifted off the table. Grant, Tory, and Kari just about soiled their pantaloons!

The device tested on Mythbusters does actually lift, though it is not anti-gravity; it is an “electrohydrodynamic” (EHD) lifter.  A high electric field ionizes the surrounding air, and the resulting ions accelerate through the field and collide with neutral air molecules, driving the air downward and generating thrust.  As Evan Ackerman notes, the idea has been around for a while, and there is even an intrepid mouse named Orville who flew in one circa 2003.
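For the quantitatively inclined, the standard first-order model of EHD thrust relates thrust to the corona current and the electrode geometry. This is the textbook relation used in this literature, not a derivation from the paper:

```latex
% First-order EHD thrust model: corona current I flows across an
% electrode gap d through air with ion mobility \mu. More current and a
% larger gap give more thrust; lower ion mobility (more collisions per
% ion in transit) transfers more momentum to the neutral air.
\[ T \approx \frac{I\,d}{\mu} \]
```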

Last summer researchers at Berkeley reported on a scaled-down version of this concept that they call the Ionocraft [2].  At 2 cm square and 30 mg, it’s tiny (the power supply is external, of course).

In fact, scaling down makes things work better. The thrust-to-weight ratio improves, and the lack of moving parts makes fabrication easier and operation more reliable.
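A back-of-the-envelope square-cube argument (my own sketch, not the paper’s analysis) suggests why thrust-to-weight improves at small scales:

```latex
% Scale every linear dimension by a factor s. EHD thrust is roughly set
% by electrode area, so T ~ s^2, while mass (hence weight W) goes with
% volume, W ~ s^3. The thrust-to-weight ratio therefore improves as the
% craft shrinks:
\[ \frac{T}{W} \propto \frac{s^{2}}{s^{3}} = \frac{1}{s} \]
```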

As Evan Ackerman points out, this is absolutely not a biomimetic or bio-inspired design [1].  But it may have advantages over the heroic efforts to mimic bees and other small flyers.  (I did mention “no moving parts”, didn’t I?)

OK, lift is one thing.  Control is another thing entirely.  Much of the paper is about the sensors and autonomous guidance systems: tiny systems, apparently capable of working in the vicinity of significant plasma generation.

The paper indicates that the thrust is designed to be controlled by algorithms similar to those for conventional quadcopters (i.e., there are four controllable thrusters; see the sketch below).  I have to say that the demo video doesn’t look all that controlled.  But then again, the miracle is that the dog dances at all.  In any case, they have ideas about how to stabilize the flight.
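For a flavor of what “similar to conventional quadcopters” means, here is a minimal quadcopter-style control allocation (“mixer”) in Python. To be clear, this is not the authors’ controller: it’s a generic sketch, the moment arm and thrust numbers are invented values for a ~2 cm, 30 mg craft, and yaw is omitted because an ionocraft has no spinning rotors to supply reaction torque.

```python
import numpy as np

# Generic quadcopter-style "mixer": map a desired total thrust and
# roll/pitch torques onto four individual thrusters at the corners of
# a square frame. A hypothetical sketch, NOT the ionocraft authors'
# controller; sign conventions vary with the frame definition.

def mix(total_thrust, roll_torque, pitch_torque, arm=0.01):
    """Return thrust commands (front-left, front-right, rear-left,
    rear-right) in newtons. `arm` is the thruster moment arm in
    meters (an invented value for a ~2 cm craft)."""
    t = total_thrust / 4.0
    dr = roll_torque / (4.0 * arm)   # differential thrust for roll
    dp = pitch_torque / (4.0 * arm)  # differential thrust for pitch
    cmds = np.array([
        t - dr + dp,   # front-left
        t + dr + dp,   # front-right
        t - dr - dp,   # rear-left
        t + dr - dp,   # rear-right
    ])
    return np.clip(cmds, 0.0, None)  # thrusters can only push

# Example: hover thrust for a 30 mg craft (~3e-4 N of weight) plus a
# tiny corrective roll torque.
print(mix(total_thrust=3e-4, roll_torque=1e-7, pitch_torque=0.0))
```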

Cool.


  1. Evan Ackerman, Penny-Sized Ionocraft Flies With No Moving Parts, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/drones/pennysized-ionocraft-flies-with-no-moving-parts
  2. D. S. Drew, N. O. Lambert, C. B. Schindler, and K. S. J. Pister, Toward Controlled Flight of the Ionocraft: A Flying Microrobot Using Electrohydrodynamic Thrust With Onboard Sensing and No Moving Parts. IEEE Robotics and Automation Letters, 3 (4):2807-2813, 2018. https://ieeexplore.ieee.org/document/8373697

 

Robot Wednesday

Tiny Pronking Robot!

Relative to their size, tiny insects are notoriously stronger and faster than larger animals with similar body structures.  So robots should be proportionally faster and stronger when scaled down, no?

“Empirical biological scaling laws show an increase in relative velocity (body length per second) with decreasing body mass […]. However, as robots are scaled down, their relative speeds pale in comparison to biological counterparts.” ([2], p.1)

Researchers from Maryland and Buffalo reported this winter on microscale quadruped robots (“ant scale”) that locomote amazingly fast [2].  On little legs! (“As far as we know, this is the smallest legged robot in existence” [1])

Cool!

The tiny crawlers are 3D printed, and use small fixed magnets as actuators, driven by external coils.  This design has achieved a walking speed of about 15 body lengths per second, which works out to roughly 60 miles per hour at human scale!
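The back-of-the-envelope conversion, assuming a human “body length” of roughly 1.8 m:

```latex
% 15 body lengths per second, scaled to a ~1.8 m human:
\[ v = 15 \times 1.8\ \mathrm{m/s} = 27\ \mathrm{m/s} \approx 60\ \mathrm{mph} \]
```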

Actually, “walking” isn’t correct.  This little guy pronks!  (Because… Springbok!)  (Actually, the orientation of the magnets can generate either a trot or a pronk.)

[Image: A young springbok stotting/pronking, Etosha National Park, Namibia, February 2012. Photo: Yathin sk]

 

The researchers indicate that the concepts can scale even smaller, though that depends on fabrication processes.  However, the relative importance of friction, gravity, and other forces shifts with scale, so the design might need to adapt.

Neat.

(PS. This was a “best paper” at this workshop.)


  1. Evan Ackerman, Four-Legged Walking Robot Is Smaller Than an Ant’s Face, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/four-legged-walking-robot-is-smaller-than-an-ants-face
  2. Ryan St. Pierre, Walker Gosrich, and Sarah Bergbreiter, A 3D-Printed 1-mg Legged Microrobot Running at 15 Body Lengths per Second, in 2018 Hilton Head Solid-State Sensors, Actuators and Microsystems Workshop. 2018: Hilton Head. p. 59-62. https://transducer-research-foundation.org/technical_digests/HiltonHead_2018/hh2018_0059.pdf

PS.  Another Great Band Name:
Tiny Pronking Robots

 

Robot Wednesday

Sproing! Short Take Off for Small UAV

Yet another delivery drone concept.  But this one is not another quadcopter variant; it has an interesting approach to short takeoff and landing: bird-like hops.

The South African company Passerine is demonstrating a small fixed-wing aircraft with bio-inspired legs that let it “leap” into the air.  They say that it can do short landings as well [1].

It looks pretty neat, and I can certainly see the logic of the specialized boost for takeoff and landing, which lets the rest of the plane be optimized for fast flight.  Plus, like some birds, these springy legs can help a relatively heavy flier get up and down without a ridiculously long runway.

I have to wonder what this feature would be like at a larger scale.  It’s going to be hard to scale up to heavier craft: springy materials won’t scale up so easily, and mechanical assists will add weight and energy costs.  That suggests this is something that will only work for small drones.
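A rough materials argument (my own, not from the article) backs up this intuition: the hop height a spring can deliver is roughly scale-invariant, so bigger craft get proportionally less out of it.

```latex
% Elastic energy stored in a spring of volume V at working stress \sigma
% and modulus Y is E ~ (\sigma^2 / 2Y) V, while the mass it must lift is
% m = \rho V. The achievable hop height is then
\[ h \sim \frac{E}{m g} \sim \frac{\sigma^{2}}{2 Y \rho g} \]
% which is independent of scale: a 10x larger aircraft gets roughly the
% same absolute hop, which is 10x less useful relative to its size.
```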

I’m imagining what this might be like for a passenger aircraft.  I’m pretty sure that the hopping and alighting behaviors would make for an excitingly rough ride.  So even if it did scale up, you probably wouldn’t want to use it for civilian passenger travel! :-)

Pretty neat.


  1. Evan Ackerman, Delivery Drones Use Bird-Inspired Legs to Jump Into the Air, in IEEE Spectrum – Automation. 2019. https://spectrum.ieee.org/automaton/robotics/drones/delivery-drones-use-birdinspired-legs-to-jump-into-the-air

 

Robot Wednesday

The Greeting Machine: Yet More Robot Social Psychology

Robots definitely don’t have to be anthropomorphic to be useful, but do they need to be useful at all?

This summer researchers from Israel and Cornell reported a minimalist “robot” that pushes this question almost to the limit.

The “Greeting Machine” is a minimalist robot with extremely limited function and features. The device does only one thing: it “greets you” with an abstract gesture.  The idea is a device that responds to the arrival of a person with a gesture that communicates “approach” or “avoid”.  The researchers investigated a number of variations of the simple two-ball gesture.

“humans are very sensitive to movement, and very good at interpreting responsive movement as social cue.” (From [1])

It seems to work, though not surprisingly the meaning is ambiguous and different people read it differently.  However, they found that their subjects generally projected human-like emotions and intent onto the non-human, non-intentional robot.

“People attributed intent and emotions to the robot’s gestures, and the social interpretations were extremely consistent between participants.” (Hadas Erel, quoted in [1])

It’s important to note that the subjects were undergraduates, who are surely digitally literate and likely familiar with video characters, including fictional robots.  It is also not clear what the cultural background of the subjects was.  Gestures as well as social inferences are culturally specific, so it isn’t clear how the results would generalize.

The paper does not report any gender differences, but I’d be shocked if male and female subjects didn’t have different reactions.  In the remarks quoted in the report, it is almost possible to guess the gender of a subject from their language.  If this intuition is correct, then this is yet another unexamined social bias in these interactions.

While it is interesting to ask just how little a robot can do and still qualify as a robot, this project is most interesting as yet another demonstration of how people project humanity on even the most non-human robots.

Let’s be clear: there is nothing even remotely human about the device, and no objective reason to even attend to it, let alone interpret the movement as a socially meaningful gesture.  But humans are highly biased toward interacting with things as if they are other humans.

(I’ll note that, in defense of the subjects, the robot in this case is actually remotely operated by the experimenter, and thus arguably does represent the motives and emotions of a (hidden) human. An autonomous version could be argued to be indirectly operated by the designer, so, again, it is an agent for some hidden human.  It’s not so unreasonable to treat any robot as indirectly acting for some unknown humans.)

Furthermore, it is now becoming abundantly clear that this anthropomorphizing comes with the whole package of social biases, which aren’t strictly logical even when dealing with actual humans, and certainly don’t apply to robots.

Specifically, notice that this is yet another white-skinned robot. The researchers hoped and assumed that this device “has no association with human appearance”, but their study clearly showed that the participants were projecting human qualities and motives onto this supposedly not-human entity.  From what we know, it is very possible that a black- or brown-skinned robot would have produced different attributions of emotions and motives.  The details would surely depend on the social background of the participants.

So, I would say that this project set out to explore “minimalist” interactive robots, with the implicit assumption that a minimal, abstract robot that does not look human will be perceived as, well, a non-human machine.  What they clearly found was that people project human emotions and motives onto the machine. If anything, the abstractness makes it an ambiguous Rorschach test, with people having their own idiosyncratic and not carefully examined responses to the robot and its actions.

This conclusion is certainly consistent with many other studies, and calls into question the common assumption that, because robots are not really human, they offer a blank slate for the design of social interactions. This does not seem to be true, and the greeting machine is yet more evidence.


  1. Evan Ackerman, Greeting Machine Explores Extreme Minimalism in Social Robots, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/home-robots/greeting-machine-explores-extreme-minimalism-in-social-robots
  2. L. Anderson-Bashan, B. Megidish, H. Erel, I. Wald, G. Hoffman, O. Zuckerman, and A. Grishko. The Greeting Machine: An Abstract Robotic Object for Opening Encounters. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018, 595-602.

 

Robot Wednesday

 

Mobile 3D Printing Robots

Hey let’s combine two cool technologies to be even cooler together!

This summer researchers at Nanyang Technological University in Singapore demonstrated 3D printing with mobile robots [2].

3D printing is cool, but conventional printers are actually pretty simple beasts.  The program is a bunch of moves along the X, Y, and Z axes.  The printer itself is the geometric frame of reference and, barring malfunction, it’s all really straightforward geometrically.

This mobile system puts the printer on wheels, specifically on a mobile robot, so the printer can move to the place to print, and then print.  This makes it possible to build things that are much larger (larger even than the printer itself), and also to print in situ, which is handy for constructing a building. And, as Evan Ackerman notes, “once you’ve decided to go that route, there’s no reason not to use multiple robots to speed things along.” [1]

This is really cool!

One of the tricky things, of course, is that the geometry is w-a-y harder, because you need to know where the robot is, at least relative to where you want the printed object to be.  With sub-millimeter tolerances!

The demonstration accomplishes this with an extension of the slicing algorithm that also plans where one or more printers must be positioned during the print.  This requires (multi-)robot motion planning, potentially including nasty details about the topography of the site and the topology of the printed output.  I.e., the site might have obstacles, and the structure itself becomes an obstacle as it is built.  And remember that the robot wants to get back home after printing, so there is now a potential to ‘print yourself into a corner’.

(It also occurs to me that wind and vibrations and birds perching and so on might perturb the robot during the print.  This is going to take some serious error detection and correction of the ‘current position of the printer’.)
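To make the coordinate bookkeeping concrete, here is a toy sketch (mine, not the paper’s algorithm). Every world-frame toolpath waypoint has to be re-expressed in the frame of wherever the printer is actually parked, which is exactly where localization error, and any mid-print perturbation, bites:

```python
import math

# Toy sketch (not from the paper): a mobile printer parked at pose
# (x, y, theta) in the world frame must convert each world-frame
# toolpath waypoint into its own base frame before extruding it.

def world_to_robot(waypoint, robot_pose):
    """Transform a world-frame (x, y, z) waypoint into the frame of a
    robot at planar pose (rx, ry, theta). Heights pass through."""
    wx, wy, wz = waypoint
    rx, ry, theta = robot_pose
    dx, dy = wx - rx, wy - ry
    c, s = math.cos(-theta), math.sin(-theta)  # inverse rotation
    return (c * dx - s * dy, s * dx + c * dy, wz)

# A waypoint one meter "north" of a robot that is itself rotated 90
# degrees lands directly along the robot's own x-axis:
print(world_to_robot((2.0, 3.0, 0.1), (2.0, 2.0, math.pi / 2)))
# -> (1.0, ~0.0, 0.1)
```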

As the researchers say, this approach is very flexible, and really opens up what might be done, especially at the scale of a building.  And a coordinated swarm of robots should be very efficient.

I’ll note that the swarm might ultimately be augmented by resupply robots to shuttle materials to the printers, track-preparation robots to smooth and grade paths for the printers to travel and sit on, and maybe even helpers that act as support scaffolding during construction. On the last idea, imagine a robot that holds up a form to support the printing of an arched doorway, and then extracts the support once the print is self-supporting.

Cool!

I can envision this approach as a way to rapidly throw up a temporary structure, perhaps a fireproof windbreak or hearth.  (I’m imagining it extruding adobe-like material.)

Or, how about applying this to cake decoration – at scale!  For your next big party, how would you like a gazebo made of extruded sugar!  Edible architecture!

I also imagine a performance art work, in which a swarm of robots silently build a cage around a sleeping or immobile subject.   Or a swarm that builds a bower of love around a couple….

(And, of course, it won’t be long before these robots are hacked, to take them over for sabotage or just vandalism.  Construction robots hacked to write out messages in giant letters across a main highway….)


  1. Evan Ackerman, Mobile Robots Cooperate to 3D Print Large Structures, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/mobile-robots-cooperate-to-3d-print-structures
  2. Xu Zhang, Mingyang Li, Jian Hui Lim, Yiwei Weng, Yi Wei Daniel Tay, Hung Pham, and Quang-Cuong Pham, Large-scale 3D printing by a team of mobile robots. Automation in Construction, 95:98-106, 2018/11/01/ 2018. http://www.sciencedirect.com/science/article/pii/S0926580518304011

 

Robot Wednesday (on Monday)

Humans Are Never Colorblind

Psychologists have documented that human perception is highly unreliable, and perceptions about people are especially prone to a variety of biases, errors, and logical shortcuts.  There are many perceptual cues that people (and I mean all people) use to judge other people, often unconsciously. Unfortunately, at the top of the list of perceptual traps is skin color:  people everywhere are highly susceptible to making inferences and generalizations based on a person’s skin color.

This spring researchers from the HIT Lab in NZ (famous for groundbreaking AR) and elsewhere reported that a similar effect is seen in the perception of robots.

“Determining whether people perceive robots to have race, and if so, whether the same race-related prejudices extend to robots, is thus an important matter.” ([2], p. 196)

In one part of the study the participants were willing to ascribe a race to a robot, with only 11% choosing “does not apply”!  (Sigh.)

The study also found a bias very similar to ones seen in studies with images of humans.  I.e., dark-skinned robots were treated similarly to dark-skinned humans, and differently from light-skinned entities.

 “Participants were able to easily and confidently identify the race of robots according to their racialization and their performance in the shooter bias task was informed by such social categorization processes.” ([2], p. 201)

Racial categories are highly problematic, and certainly deeply affected by culture.  But however you define race for people, robots obviously cannot have a “race”.  Yet people ascribe the label.

“For us, the main question was if the participants choose anything but the “Does not apply” option.” ([2], p. 203)

These findings are certainly significant given that humanoid and household robots are almost all white-skinned.  This is in strong contrast to the real demographics of butlers and nannies.

One problem with this lack of diversity is the effect of social stereotypes.  “If robots are supposed to function as teachers, friends, or carers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as White.”  ([2], p. 202).  They raise another point: sometimes a social robot should have a “race”.  In these cases, the “race” must be reliably conveyed in order for the bot to function correctly in the social setting.

It is a bit surprising that so many people were willing and able to ascribe a “race” to a picture of a robot.  (What is wrong with people????)  In part, this must be due to the anthropomorphism of the robot.  I doubt that the same effect would be seen for, say, autonomous vehicles, no matter what their skin color.  (But maybe not: people seem able to project personalities onto speaking interfaces, so who knows what human personalities might be unconsciously assigned to different robots.)

Clearly, the coming technological utopia will be just as morally complex as the bad old days. As some have pointed out, exploiting enslaved sentient machines isn’t any more moral than human slavery.  One wonders how racial and other unconscious social cues might play into these interactions.  (E.g., adding darker skins to “menial” robots. Ick.)

And for all the faux anxiety about The Robot Uprising, I have a bad feeling that people will have much, much more fear of and violent reactions to robots with different “racial” features.  Many people will be much more subservient to White Male robots, whether they should be or not.

I even wonder whether these prejudices are a factor in the implicit competition between household robots and human servants.  Are white skinned robots an attractive alternative to dark skinned humans?   Double ick.

At the very least, designers of social robots must remain aware that they cannot avoid ancient social cues, definitely including the awful mess of gender and racial stereotypes.

On that point, it is rather worrying that the research was not well received by the conference reviewers, and proposals for discussions were prohibited [1].  I sympathise with the discomfort (look at how many “icks” appear above).  But  I don’t think that head-in-the-sand rejection is going to work.

This is important, dammit.


  1. Evan Ackerman, Humans Show Racial Bias Towards Robots of Different Colors: Study, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/humanoids/robots-and-racism
  2. Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel, Robots And Racism, in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago, IL, USA. p. 196-204. https://dl.acm.org/citation.cfm?id=3171260

 

Robot Wednesday

More Morphin Copters

Apparently, reconfiguring drones is an idea whose time has come.

Earlier I noted an admirably simple folding quadcopter from a French team.  This week I read of a group in Tokyo who see your quadcopter and raise you four—a snaky octocopter that can reconfigure in a zillion ways—the flying DRAGON [2].  So there!

This flying snake thing has modules connected by gimbals, each module carrying two rotors, also on gimbals.  Altogether, the assembly can achieve arbitrary 6DoF poses, just like a robot arm.   A flying robot arm.

“The researchers conceptualize this robot as a sort of overactuated flying arm that can both form new shapes and use those shapes to interact with the world around it by manipulating objects.” (from [1])

Reconfiguring in flight is, well, complicated.

A key feature of this design is that the rotors aren’t all in the same plane, as they are in a rigid quadcopter. This is actually the key to stability: because the rotors point in multiple directions, the assembly can maintain stable flight and hovering even when the body is not rigid.

“To achieve an arbitrary 6DoF pose in the air, rotor disks cannot be aligned in the same plane, which is the case for traditional multirotors.” ([2], p. 1177)
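The underlying linear algebra (my gloss, not the paper’s notation): the map from individual rotor thrusts to the net force and torque on the body must have rank 6 to command an arbitrary 6DoF pose, and it can’t if all the rotor axes are parallel.

```latex
% Rotor thrust magnitudes f = (f_1, ..., f_n), thrust directions u_i,
% rotor positions r_i: the net body wrench (force; torque) is linear in f.
\[ w = A f, \qquad
   A = \begin{bmatrix} u_1 & \cdots & u_n \\
                       r_1 \times u_1 & \cdots & r_n \times u_n \end{bmatrix}
   \in \mathbb{R}^{6 \times n} \]
% If every u_i is the same axis z, the force rows span only z (rank 1)
% and the torque rows r_i x z lie in the plane normal to z (rank <= 2);
% rotor drag torques about z raise that to rank(A) <= 4 at best. So a
% planar multirotor commands at most 4 of its 6 degrees of freedom,
% while tilting the rotor axes (DRAGON's gimbals) allows rank(A) = 6.
```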

The control system is modular, featuring “spinal” and “link” controllers, as well as a high-level processor.  Indeed, the device looks like nothing so much as a hovering spine.

The demo video shows an impressive maneuver: slinking through a small horizontal hole, unfurling while hovering and slipping link by link up through the floor.  Pretty cool.

What’s more, the software autonomously determines the transformation needed. Very impressive.

This flying robot arm has the potential to be used as a flying robot arm:  it can poke and grasp and carry cargo.

It will be interesting to see how this approach compares to swarms of rigid copters.  What are the advantages and disadvantages of a handful of really complicated snakey fliers versus a constellation of many simpler fliers?   (A swarm is probably harder to shoot down.)

I predict that this will soon be a moot question, because there will be swarms that can lock together into spines, and disperse again into drones, as needed.


  1. Evan Ackerman, Flying Dragon Robot Transforms Itself to Squeeze Through Gaps, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/drones/flying-dragon-robot-transforms-itself-to-squeeze-through-gaps
  2. M. Zhao, T. Anzai, F. Shi, X. Chen, K. Okada, and M. Inaba, Design, Modeling, and Control of an Aerial Robot DRAGON: A Dual-Rotor-Embedded Multilink Robot With the Ability of Multi-Degree-of-Freedom Aerial Transformation. IEEE Robotics and Automation Letters, 3 (2):1176-1183, 2018. https://ieeexplore.ieee.org/document/8258850/

 

Robot Wednesday