
A Drone Hops On A Bus…

I’m not a huge fan of the UAV delivery-to-your-door concept.  Aside from a personal dislike of small helicopters buzzing around me, I hate to see retail and delivery jobs eliminated.  The people I know who do this work really need these jobs as a step up (or a landing to avoid going down).

On the other hand, I would like to see public transportation survive and thrive.  A city with no public transit is a rotten place to grow up, and a difficult place to get started or be poor–or old.

So I was interested to read a report from Stanford researchers who explore the possibility that, as Evan Ackerman put it, “Delivery Drones Could Hitchhike on Public Transit” [1].

The concept is simple.

Contemporary UAV delivery copters have a pretty limited range, especially when carrying meaningful cargo loads.  This means that covering a city needs a lot of UAVs and a lot of recharging bases, spread across the whole area.  Some operators have been experimenting with mobile vehicles, essentially delivery trucks that can dispatch UAVs.  Such a solution would use a fleet of surface vehicles to extend the range of a smaller number of bases and UAVs.

The Stanford researchers explored the potential effectiveness of using existing mass transit systems instead of dedicated mobile bases.  There are already large fleets of vehicles covering the city, so why not piggyback on them?  And the roof of a city bus is pretty large and not used for very much.  So, yeah, that could work.

The study focuses on the question of “how would you route deliveries if you could do it?”  I.e., assuming that we can ride on the roof, how well would that work? [2]

This is actually a moderately complicated optimization problem, because there are a lot of variables to consider.  But, hey, optimization problems are what academic computer scientists are here to tackle!

The paper describes an efficient framework that can compute a schedule quickly (in seconds!), which means that you could try to keep up with the flow of a real city [2].  I.e., you can recompute a new solution as things change.
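The paper’s framework is far more sophisticated than this, but the core win is easy to see in a toy shortest-path sketch (everything below—the names, the distances, the little graph—is made up for illustration, and is not the paper’s algorithm): if transit edges cost the drone no battery, Dijkstra naturally routes the long hops over the bus network.

```python
import heapq

def min_flight_distance(graph, start, goal):
    """Dijkstra over a graph whose edge weights are the kilometers a
    drone must fly under its own power; bus legs cost 0 flight-km."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, flight_km in graph.get(node, []):
            nd = d + flight_km
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return None

# Toy network: fly 2 km to stop A, ride the bus A->B for free,
# fly 1 km to the customer; the direct flight would be 12 km.
graph = {
    "depot": [("stopA", 2.0), ("customer", 12.0)],
    "stopA": [("stopB", 0.0)],        # bus leg: no battery spent
    "stopB": [("customer", 1.0)],
}
print(min_flight_distance(graph, "depot", "customer"))  # 3.0
```

With the bus leg available, the drone spends only 3 km of battery instead of 12—the “tripled effective range” in miniature.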

The hitchhiking more than triples the effective range of the UAVs, which save power by riding the bus part of the way to their destination.

Cool!

Obviously, there is some work to be done to get UAVs to autonomously land on and take off from the roof of a bus.  You probably need to know the motion of the bus (which might or might not be moving), and take care about obstacles (underpasses, overhead wires, who knows?).  A city bus is a creature of the urban jungle, for sure.

I assume that we might have a charger station on the bus, too.  And maybe more than one dronepad per vehicle, which would add air traffic control to the requirements.

I’m not sure how the rendezvous would happen, but at least some of the time the UAV might have to wait for the bus, just like the passengers.  So, perhaps the bus stops would have dronepads with chargers, where the UAVs can safely nest.

Aside from extending the range of the aircraft, this concept has other potential advantages.  Riding on a bus is probably a relatively safe and secure location, which offers a potential haven in case of emergency, bad weather, or malfunction.  Worst case, the UAV can power down and ride to the end of the line for manual recovery.

But the best thing is that the UAVs would pay fares (maybe even refueling fees), sustaining the public transportation network with paying freight.  This would also push the transit system to cover the whole area, in order to garner more freight traffic, and in the process serve more passengers.  (And if the UAVs nest on bus shelters, there would be a demand to install and maintain shelters throughout the whole area, too.)

So, there are lots of wins, including plusses for the mass transit system and the public who rely on it.

An interesting idea.


  1. Evan Ackerman, Delivery Drones Could Hitchhike on Public Transit to Massively Expand Their Range, in IEEE Spectrum – Robotics, June 11, 2020. https://spectrum.ieee.org/automaton/robotics/drones/delivery-drones-could-hitchhike-on-public-transit-to-massively-expand-their-range
  2. Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer, and Marco Pavone, Efficient Large-Scale Multi-Drone Delivery Using Transit Networks. arXiv, 2020. https://arxiv.org/abs/1909.11840

 

Robot Wednesday

Dogs and Robots

My cat will tell you that dogs are stupid.  They are certainly a lot more likely to pay attention to what humans say to them than cats are.

But both cats and dogs are quite able to ignore voices from a speaker, e.g., a TV or computer.  They do pay attention to cat, dog, and other animal sounds, but generally don’t worry about the people talking.

So we know that dogs have some concept of a difference between speech from a person who is present, and the sound of a person who is not present.

But what about “human like” robots?

A “social” robot is designed to present various human attributes and behaviors, so as to appear sort of human—to a person.  What these attributes and behaviors should be is an open question, as is the question of how different people perceive them.

This spring researchers at Yale explored how dogs think about human-like robots [2].

The new study examines whether these human-oriented attributes are perceived and acted on by dogs.  Clearly dogs can tell the difference between a human and a machine.  They can also perceive a simulated human, though they may or may not perceive the “humanness” of the simulation.  For one thing, a robot does not smell like a person, does it?

The study compared a loudspeaker with a (very toylike) robot.  The device called the name of the dog, and gave a “sit” command.  In the first condition, the dog was observed to see if he or she gazed at the source of his or her name.  The second condition observed whether the dog sat in response to the command.

The results showed that the dog seemed to attend to the robot more than the disembodied loudspeaker.  I.e., the dog was more likely to gaze at the robot, and to sit.  The researchers conclude that “The dogs in our study reacted socially to a social robot, and the robot seemed to affect the dogs’ behaviors.” ([2], p. 22)

The researchers speculate that the dogs perceive the social robot as an “agent”, akin to a human.

Well, maybe.

It certainly seems as if the dogs could perceive the difference between the social robot and the bare speaker.  However, there were two people present with the dog in each trial, and the people surely knew the difference—not to mention the aim of the study.  The video does not give us a good view of the humans, so it’s hard to tell what they were doing.

Can you say, “Clever Hans”?

We also don’t really know how much prior experience the dogs may have with robots (probably not much) and speakers (possibly a lot).  So there may be issues of familiarity and novelty.

And above all, the robot itself isn’t particularly human-like, is it?  I would never mistake it for a human, so why would a dog?  So, whatever is going on, I have to really wonder if the imagined “social” cues even exist.

This is an interesting study, and it is certainly nice to consider a broader notion of the perception of “social” robots, as well as a proper curiosity about what non-humans might think of our silly toy robots.

To me, it is an open question whether robots designed to be “social” for humans should or should not be perceived as human-like by dogs.  I tend to think not.  However, I strongly suspect that dogs can pick up what the people around them think about the robot, and might well play along with the game.

In short, this may be a complicated and indirect case of Clever Hans in the twenty-first century.


  1. Evan Ackerman, Dogs Obey Commands Given by Social Robots, in IEEE Spectrum – Robotics, May 13, 2020. https://spectrum.ieee.org/automaton/robotics/robotics-software/dogs-obey-commands-given-by-social-robots
  2. Meiying Qin, Yiyun Huang, Ellen Stumph, Laurie Santos, and Brian Scassellati, Dog Sit! Domestic Dogs (Canis familiaris) Follow a Robot’s Sit Commands, in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 2020, Association for Computing Machinery: Cambridge, United Kingdom. p. 16–24. https://doi.org/10.1145/3371382.3380734

 


Roombots?

Researchers at École Polytechnique Fédérale de Lausanne (EPFL)’s Biorobotics Laboratory report this spring on Roombots, a swarm of small robots that configure into various pieces of human-usable furniture [1].

Part of the idea is to have a large inventory of possible furniture, only some of which is needed at a given time.  As Evan Ackerman comments, a small apartment might have only 4 chairs, but occasionally more are needed.  Roombots would reconfigure to provide extra chairs at the time they are needed [1].

These robots also enable features not generally available in conventional furniture.  For instance, a chair can be configured to follow you around, or conversely, to run away from you.  Or a table might pick up things that fall off.

 

It’s all very clever.

Of course, in its current incarnation, it is also pretty much useless.

First of all, the furniture is, well, awful.  Yes, that’s recognizably a chair.  But it’s a terrible chair.  So this is a pretty poor solution to the problem, not particularly better than other approaches such as sitting on a box or the edge of a bed.  Or borrowing some chairs from a neighbor.

Second, mostly, this is a solution to a non-problem.  Out here in the cornfields of Illinois, we have no problem moving a chair by hand.  And what would be the purpose of a chair that runs away?

And, as Ackerman notes, these swarm robots are a lot harder to program (configure) than single purpose robots.  I mean, if you want to make a chair that runs away, that’s a pretty simple little thing to add to a chair.  You don’t need a general purpose swarm.

To be fair, this research project isn’t really about marketing furniture.  It’s about how to make programmable swarms of robots.  I have to admit that it is fascinating to watch the swarm build itself into a chair.

I would note that, at some point, the Roombots should start to work at optimizing their behavior–creating not just a lousy generic chair, but adjusting the chair to fit the user.  (This would require finer grained bots.)

One thing I am looking forward to, is the demonstration of how to hack a Roombot swarm.  As far as I can tell, there are several points of attack, not least via the Bluetooth command channel.  So, hackers take over the swarm, modify the program, and configure them so the chair chases you, and tries to strangle you!  Or, perhaps, the table goes over, rifles through your purse, and steals your credit cards!

Cool!

I know that hacking the lab prototype means very little.  The point is, this kind of programmable living space will need to be very hardened against any kind of misbehavior, accidental or deliberate.


  1. Evan Ackerman, Roombot Swarm Creates On-Demand Mobile Furniture, in IEEE Spectrum – Robotics, April 20, 2020. https://spectrum.ieee.org/automaton/robotics/home-robots/roombot-swarm-on-demand-mobile-furniture
  2. S. Hauser, M. Mutlu, P. A. Léziart, H. Khodr, A. Bernardino, and A. J. Ijspeert, Roombots extended: Challenges in the next generation of self-reconfigurable modular robots and their application in adaptive and assistive furniture. Robotics and Autonomous Systems, 127:103467, 2020. http://www.sciencedirect.com/science/article/pii/S0921889019303379

 


Another Inflatable Robot

Soft robots of various types are interesting because they are relatively safe for humans to be around.  Regardless of clever sensors, planning, and safety protocols, a light, squishy robot simply cannot hurt you in a collision or fall.  That’s a good thing.

Most “soft robots” to date are pretty small, and generally tethered to power sources (which may be pretty darn “hard”).  To be useful in human environments, these systems need to scale up in size and in independent operation.


This spring, researchers at Stanford report on a human-sized inflatable robot that can operate tetherless [2].   The design is an inflatable “truss” of soft pneumatic tubes.  It kind of looks like a balloon animal.  It also kind of looks like a tensegrity robot, though it is held together by joints and air pressure, not tensegrity.

In fact, this robot is actually an assembly of robots.  The tubes (balloons) have rollers that move along and pinch the tubes, changing the shape of the overall structure.  This enables locomotion and gripping.  Importantly, these reconfigurations do not require addition or subtraction of gas—the tubes retain their size and pressure throughout the changes.  This means that it can be inflated with a bicycle pump, and operate continuously without a pump or gas supply.  Cool!
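To see why the constant-pressure trick matters, here is a toy sketch (entirely my own, not the Stanford design): a roller at a node shifts tube length from one edge of the truss to a neighboring edge, so the structure can reshape while the total tube length—and hence the amount of gas inside—stays constant.  That conservation is what “isoperimetric” refers to.

```python
def roll(edges, shrink, grow, amount):
    """Move `amount` meters of tube from edge `shrink` to edge `grow`,
    emulating a roller pinching its way along the tube."""
    assert edges[shrink] > amount, "an edge cannot vanish entirely"
    edges = dict(edges)            # don't mutate the caller's dict
    edges[shrink] -= amount
    edges[grow] += amount
    return edges

# A triangular truss: meters of tube allotted to each edge.
tube = {"ab": 1.0, "bc": 1.0, "ca": 1.0}
reshaped = roll(tube, "ab", "bc", 0.5)

print(reshaped)                                    # {'ab': 0.5, 'bc': 1.5, 'ca': 1.0}
print(sum(tube.values()), sum(reshaped.values()))  # 3.0 3.0
```

The total never changes, which is why the real robot needs only a bicycle pump at setup and no gas supply afterward.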

As in the case of tensegrity robots, the gait is a weird lurching stagger.

But it is also capable of configuring into many different robots.

“the modularity allows the same hardware to build a diverse family of robots” ([2], p. 10)

Evan Ackerman points out that to be truly human-safe, the rollers need to be made soft and safe [1].  But that looks doable to me.

The comparison to tensegrity robots makes me wonder if you could make a real tensegrity bot using pneumatic tubes for the struts.  I’m not sure if this would be a great advantage except for weight and packing space, but it does open the way for a hairy world of hybrid tensegrity-with-bendy-struts robots.  Perhaps this would be a way to help tensegrity bots grasp objects?


  1. Evan Ackerman, Stanford Makes Giant Soft Robot From Inflatable Tubes, in IEEE Spectrum – Robotics, March 18, 2020. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/stanford-giant-soft-robot-inflatable-tubes
  2. Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, and Sean Follmer, An untethered isoperimetric soft robot. Science Robotics, 5 (40):eaaz0492, 2020. http://robotics.sciencemag.org/content/5/40/eaaz0492.abstract

 


Now That’s What I Call A Robot!

As Evan Ackerman puts it “if anyone can do it, it’s Gundam Factory Yokohama. Because no one else will.” [1]

At 18 meters tall and 25 tons, this will be the largest humanoid robot ever.

Why?

Because we can!

[Simulation Video]

 

(And there is an open source simulator available to play with!  Cool!)


  1. Evan Ackerman, Japan Is Building a Giant Gundam Robot That Can Walk, in IEEE Spectrum – Robotics, January 28, 2020. https://spectrum.ieee.org/automaton/robotics/humanoids/japan-building-giant-gundam-robot

 


Pigeonbot!

People have built machines that fly for a couple of centuries now, but we don’t generally use feathers.

Feathers are complicated, finicky things, generally beyond puny human engineering.

But feathered flight does really sophisticated stuff, definitely beyond puny human engineering.

Some research has explored designs that use artificial feathers, or at least feather-like entities.  These don’t work that well.

This winter researchers at Stanford report on remarkable studies of a UAV that uses real feathers [2].

The research is motivated by the gap in performance between natural biological wing systems and human-engineered aircraft.  And, unlike some earlier investigations, this project used real feathers, not feather-like structures.  The bot also has a realistic number of feathers, and they are organized to mimic the original pigeon’s wings.  It’s an amazingly complicated mechanism.

“The outcome, PigeonBot, embodies 42 degrees of freedom that control the position of 40 elastically connected feathers via four servo-actuated wrist and finger joints.” ([2], p. 1)

And it works, looking just like the pigeon it emulates.

 

The natural feathers are important.  As Evan Ackerman reports, the researchers discovered that feathers have “micron-scale features that researchers describe as “directional Velcro””.  “Real feathers can slide to allow the wing to morph, but past a certain point, the directional Velcro engages to keep gaps from developing in the wing surface.”  [1]

Cool!

The study has some other interesting implications.

For one thing, these real, biological feathers are not only unique; the feathers of an individual bird are a unique interlocking set.  Unlike engineered wings, these wings grew up: all the parts grew and developed together through the life of the bird.  You might say, it’s an “organic” wing! :-)

“we observed qualitatively that biological variation between pigeon individuals is too large to exchange a particular flight feather between different individuals without compromising the wing planform. Accordingly, we found that manufacturing of the biohybrid morphing wing is accurate and repeatable, provided that feathers from a single individual are used” ([2], p. 5)

They also observe that this pigeon wing is only one of “10,000 extant bird species—offering unprecedented comparative research opportunities” ([2], p. 11).  If each pigeon wing is subtly different, what will we learn from all the other species of naturally evolved bird wings?

Nice work, all!


  1. Evan Ackerman, PigeonBot Uses Real Feathers to Explore How Birds Fly, in IEEE Spectrum – Robotics, January 16, 2020. https://spectrum.ieee.org/automaton/robotics/drones/pigeonbot-uses-real-feathers-to-explore-how-birds-fly
  2. Eric Chang, Laura Y. Matloff, Amanda K. Stowers, and David Lentink, Soft biohybrid morphing wings with feathers underactuated by wrist and finger motion. Science Robotics, 5 (38):eaay1246, 2020. http://robotics.sciencemag.org/content/5/38/eaay1246.abstract

Another Cool Exoskeleton

Sarcos Guardian XO “Gives Workers Super Strength”  (And it’s untethered!)

The video and company materials make clear who the target market is:  military workers.  Most of these exoskeletons target military uses, though they’ll be pretty useful for a lot of work (I’m not surprised to read that Caterpillar and other heavy industries are investing.)

Evan Ackerman’s report helped me understand the design of these systems a lot better [1].

“In a practical sense, the Guardian XO is a humanoid robot that uses a real human as its command and control system”

First of all, this can be thought of as a vehicle: a formfitting, single-occupant vehicle.  You drive it by moving your body, and it follows and amplifies your movements.  A key part of the design is feedback to the rider/driver.  It increases capabilities, but it is important for the human to “feel” the effort.

“It’s better to think of the exo as a tool that makes you stronger rather than a tool that makes objects weightless,”

Second, this is potentially a very dangerous vehicle. If the exoskeleton doesn’t stay in close correlation with the driver’s body, someone’s going to get hurt.  And it’s going to be the puny carbon-based unit that breaks first.  So there are dead man’s switches and regulators to make sure the robot doesn’t disarticulate the rider.

“All of the joints are speed limited, meaning that you can’t throw a punch with the exo”

Ackerman points out that these systems will be potentially very dangerous around other people (suited or naked).  It’s pretty clear that the operator is responsible for avoiding injury and damage to people and objects around him.  Speed limitation helps, but still.  This is as dangerous as any other powered machinery.


I’ve been wanting to get this technology in the hands (and feet) of dancers.  Think of it!  Jumping!  Climbing!  Inhuman acrobatics!  Superhuman stamina!  So cool!

This report is a bit deflating for this particular idea of mine.  Obviously, you’d need to adjust the speed limitations and other safety features to be able to dance in one of these.  And that’s not going to be easy or safe.

Dancing is risky, must be risky.  That’s the beauty of it.

Taking risks is not going to be possible in these safety-minded industrial units, at least as currently designed.

But on the other hand, dancers are experts at motion, including making extraordinary movement safely.  It might be quite interesting to do some collaborative exploration, letting dancers carefully loosen the safety envelope, to see what you can do.  I suspect that dancers might help create even better and smarter safety systems.

In fact, I wonder if Sarcos might do well to involve dancers in their design team.  One of my own rules of thumb is, if you want to study embodied computing, you don’t want to rely on engineers.  You want to collaborate with dancers, who are all about embodied motion.


  1. Evan Ackerman, Sarcos Demonstrates Powered Exosuit That Gives Workers Super Strength, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/industrial-robots/sarcos-guardian-xo-powered-exoskeleton

 

A Self-repairing robot

In one sense, the idea of robots building and repairing robots is obvious and old hat.  And repairing yourself can be a pretty simple extension of repairing some other machine.  But it’s not done very often.

This fall researchers from the University of Tokyo reported on demonstrations of teaching a self-repair operation to commodity robots [2].  Specifically, the robots learned to use their own manipulators to tighten screws on their own bodies.  (For this demo, the robot didn’t figure out for itself when a screw needs adjustment.)

 

Now, tightening a screw isn’t a gigantic deal.  However, robot manipulators are not really designed to reach their own body, so some screws are going to be challenging.  And some of them require an Allen wrench, which is a different grip and generally calls for changing the grip as you go, “regrasping”.

“The actual tightening is either super easy or quite complicated, depending on the location and orientation of the screw.”  Evan Ackerman in [1].
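That regrasping is a small planning problem in itself.  As a toy sketch (entirely my own, not the paper’s graph-search method, and with made-up screw and grip names): if each screw demands a particular grip of the driver, and changing grip is the expensive step, then simply visiting screws that share a grip consecutively minimizes the regrasps.

```python
from itertools import groupby

screws = [  # (screw_id, grip needed to reach it) -- hypothetical data
    ("s1", "overhand"), ("s2", "underhand"),
    ("s3", "overhand"), ("s4", "underhand"),
]

# Sort so screws sharing a grip are done together; the number of
# regrasps is then just (number of grip groups) - 1.
plan = sorted(screws, key=lambda s: s[1])
regrasps = len(list(groupby(plan, key=lambda s: s[1]))) - 1
print([sid for sid, _ in plan], "regrasps:", regrasps)
# ['s1', 's3', 's2', 's4'] regrasps: 1
```

A naive in-order plan would regrasp three times; grouping cuts it to one.  The real problem is harder because reachability depends on the robot’s pose, which is where the CAD data comes in.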

They also demonstrate that once you can do screws, you can screw on additional pieces, such as carrying hooks.  Neat.

Part of the trick is that the robots use CAD data describing their own bodies.  They use this data to learn how to operate on themselves.  Duh!  It’s so obvious, once you see it!

It seems to me that part of the challenge here is that these generic robots were not designed to self-repair or even repair each other.  There is no reason it has to be this way.  With a bit of care, robots can be assembled in ways that make self-repair easier.  One way to assure this is to use robots to assemble the same model of robot.  And CAD systems themselves can analyze designs to maintain self-repairability.

This concept will be especially interesting to combine with evolutionary design.  The robot should not only be able to assemble and repair a robot, it should learn to optimize the assembly/repair process, presumably in tandem with evolutionary design of the robot to be assembled.

(To prevent a runaway robot uprising, the system should have to submit detailed proposals and requests for funding, in order to acquire the resources needed for the new versions.  That ought to keep them under the control–of accountants!)


  1. Evan Ackerman, Japanese Researchers Teaching Robots to Repair Themselves, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/japanese-researchers-teaching-robots-to-repair-themselves
  2. Takayuki Murooka, Kei Okada, and Masayuki Inaba, Self-Repair and Self-Extension by Tightening Screws based on Precise Calculation of Screw Pose of Self-Body with CAD Data and Graph Search with Regrasping a Driver, in IEEE-RAS International Conference on Humanoid Robots (Humanoids 2019). 2019: Toronto. p. 79–84.

 


Launching UAVs – From a Cannon

Flying is hard, taking off and landing are harder.  This rule applies to UAVs as well.

In the case of take off, it is necessary to apply a lot of energy to clear the ground and gain altitude (without hitting anything), and then convert to cruising at a steady and energy efficient pace.

For UAVs, this process is often augmented by powered assist.  A small UAV may be tossed into the air by hand.  Larger craft might be shot from a catapult or sling.

This fall researchers from Caltech and JPL report on a multi-rotor UAV designed to be launched from a tube—shot from a cannon [2].  The idea is to quickly and safely put the aircraft where you want it.  The demonstration shows that this works pretty well even from a moving vehicle, which certainly would be useful for some purposes.

The tricky part, of course, is transitioning from ballistic projectile into a working copter.  For that matter, the first tricky part is surviving the blastoff (lightweight UAVs are rather fragile).  The prototype uses an airgun, which presumably can be tuned for optimal pressure.

The UAV is designed as a transformer, shaped as a ballistic shell, and unfolding four rotor arms in flight.  The diagrams show the rotors neatly snuggled into the smooth body for launch.  The arms are spring-loaded, and simultaneously snap out to the quad copter configuration.

Squidbot (it looks a little like a squid) is launched from the airgun and initially flies unpowered in a parabolic trajectory. In practice, this phase enables the drone to be deployed ahead of a moving vehicle, or to the side, or whatever direction fits the mission.

At a predetermined time, or when directed by the operator, the rotors are released and snap into position in about a tenth of a second.  Whap!  After deployment, the on-board systems spin up the rotors and initiate controlled flight similar to other quad copters.

The demo video shows a launch from a moving truck, which is kind of cool.  I’m imagining a multiple launcher system that can deploy a swarm of drones over head in a second.  That would be neat.

The researchers imagine this might be used for an explorer on Mars or Titan, where UAV copter missions are already under development.   One advantage over other launch methods is that launching from the vehicle has few constraints that depend on the surface conditions.  In low gravity, the ballistic path could be quite long, giving the rover large coverage with a fleet of drones.
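As a back-of-the-envelope illustration of that low-gravity point (my numbers, not the paper’s), the standard drag-free projectile-range formula shows how much a weaker gravity stretches the unpowered leg of the flight:

```python
import math

def ballistic_range(v0, angle_deg, g=9.81):
    """Horizontal distance covered before returning to launch height,
    ignoring drag: R = v0^2 * sin(2*theta) / g."""
    a = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * a) / g

v0, angle = 15.0, 45.0    # hypothetical muzzle speed (m/s) and angle
print(round(ballistic_range(v0, angle), 1))          # Earth: 22.9 m
print(round(ballistic_range(v0, angle, g=3.71), 1))  # Mars:  60.6 m
```

The same launch carries roughly 2.6 times farther on Mars, before the rotors ever spin up—though the thin atmosphere complicates both the airgun and the subsequent powered flight.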

One limitation of this model is that the UAV seems to be a throwaway.  Landing is tricky, and it seems to require complete refitting to be reused.  I can imagine an automatic loading system, but folding up a squid is pretty tricky.  For exploring Mars or Titan, I’m pretty sure you want to recapture and reuse your aerial probes, so this is something to work on.

I also wonder if the airgun is all that much better than a mechanical launch alternative.  Granted, you can probably get a much more powerful launch, and consequently greater range with the compressed air tube.  (Of course, there will be a cost to pump up the pressure, which will be even more significant in a thin atmosphere.)

But you could get pretty good range with a crossbow arrangement, which could launch a very similar projectile copter.  Or, just for fun, how about a trebuchet launcher?

Any and all of these systems will be challenging to operate on a remote planet. Even if loading, aiming and launching is fully automated, the moving parts will be vulnerable to dust and corrosion.  On Mars the launch tube is going to have grit in it, which will play havoc with both the air seals and exiting the tube.  On Titan, the atmosphere is likely to be corrosive, but also may have sleet and ice to clog your tube.

But I have to say that this launch system would close the circle to ancient traditions of fireworks.  Our drone sky shows could be launched from mortars, analogous to pyrotechnics.


  1. Evan Ackerman, Caltech and JPL Firing Quadrotors Out of Cannons, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/drones/caltech-and-jpl-firing-quadrotors-out-of-cannons
  2. Daniel Pastor, Jacob Izraelevitz, Paul Nadan, Amanda Bouman, Joel Burdick, and Brett Kennedy, Design of a Ballistically-Launched Foldable Multirotor, in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2019: Macau.
  3. Daniel Pastor, Jacob Izraelevitz, Paul Nadan, Amanda Bouman, Joel Burdick, and Brett Kennedy, Design of a Ballistically-Launched Foldable Multirotor. arXiv arXiv:1911.05639, 2019. https://arxiv.org/abs/1911.05639

 


Urchinbot!

In the never ending exploration of biomimetic robots, it’s not all butterflies.  If you want innovation, you want to look at insects and you want to look in the ocean.  Cause…that’s where life is weird.

This winter researchers from Harvard’s Wyss Institute for Biologically Inspired Engineering report on yet another interesting biomimetic system, a sea urchin-inspired robot [2].  As Sensei Evan Ackerman says, it’s “One of the Weirdest Looking Robots We’ve Ever Seen.” [1]

To be fair, it looks weird partly because it is out of context, out of scale, and made of plastic. But it is also interesting because it is modeled after a juvenile urchin, which has a substantially different body plan from mature adults.  Specifically, the Urchinbot has two kinds of feet, rigid spines and extensible “tube feet” in a five-fold symmetric layout.  (Adults have much more complicated structure.)

The urchinbot was designed to closely emulate the natural urchin’s locomotive mechanisms.  The spines are attached to the body by a rigid ball joint, and actuated by three soft domes that push the rigid spine in different directions.  The tube feet inflate to extend, deflate to retract, and use a magnet to emulate the stickiness of the urchin’s adhesive feet.

The research demonstrates some “gaits”.  The urchinbot drags itself (slowly) across a surface, and also can rotate.  This prototype is limited, but it works!

The researchers suggest that a fully working urchinbot might have useful applications in underwater maintenance, where it is necessary to traverse irregular surfaces and jam into difficult spaces.

Cool.


  1. Evan Ackerman, Harvard’s UrchinBot Is One of the Weirdest Looking Robots We’ve Ever Seen, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/harvard-amphibious-urchinbot
  2. T. Paschal, M. A. Bell, J. Sperry, S. Sieniewicz, R. J. Wood, and J. C. Weaver, Design, Fabrication, and Characterization of an Untethered Amphibious Sea Urchin-Inspired Robot. IEEE Robotics and Automation Letters, 4 (4):3348-3354, 2019. https://ieeexplore.ieee.org/document/8754783

 

 
