Category Archives: Robotics

Biomimetic Robotic Zebrafish

Bioinspired and biomimetic systems are the bee’s knees (sometimes literally! [1]).

In some cases, taking bio-inspiration leads to designs and design principles for human purposes (e.g., crawling robots inspired by earthworms [2], or nets inspired by spider webs [4]).

Other times, creating a biomimetic robot teaches us about nature.


A group of European researchers from the École Polytechnique Fédérale de Lausanne and the Sorbonne report this fall on a project that has created a robot zebrafish (Danio rerio) that joins a school of live zebrafish [3].

This is actually pretty difficult, because zebrafish are kind of loosey-goosey about schooling, coming together as needed in different situations. Today’s successful zebrafish must pay attention to the other fish, and play nicely with others.

The result is a robot that not only looks and swims like a zebrafish, but learns the social signals of the fish and behaves accordingly. I.e., it mimics the anatomy, the movement, the behavior, and the social signaling of the natural fish.

Cool!

This seemingly rather simple result required analysis of how zebrafish school. The researchers developed a two level model, a high level strategy (where the school is going) and a more detailed movement model (how to move in the school).
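Just to make the two levels concrete, here is a toy sketch in Python. This is my own illustration with made-up names and dynamics, not the researchers’ actual model: the high level decides where the school is going, and the low level decides how to swim there.

```python
import math
import random

def high_level_target(school_positions):
    """High-level strategy (hypothetical): head for the school's centroid."""
    n = len(school_positions)
    cx = sum(p[0] for p in school_positions) / n
    cy = sum(p[1] for p in school_positions) / n
    return (cx, cy)

def low_level_step(pos, target, speed=1.0, noise=0.1):
    """Detailed movement model (hypothetical): take one step toward the
    target, with a little random wobble in the heading to look fish-like."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return pos
    step = min(speed, dist)
    angle = math.atan2(dy, dx) + random.uniform(-noise, noise)
    return (pos[0] + step * math.cos(angle),
            pos[1] + step * math.sin(angle))

# One control cycle: pick where the school is going, then how to move.
school = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]
target = high_level_target(school)      # high level: destination
robot = low_level_step((5.0, 5.0), target)  # low level: one swim step
```

The real model is surely far richer, but the separation of concerns is the point: a slow strategic layer and a fast motor layer.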

They also had to quantify the “social integration” achieved by the robot and other fish, which is a measure of how zebrafish-like the robot is, compared to observations of the real zebrafish.
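A crude illustration of the kind of measure involved (entirely hypothetical, not their actual metric): compare the robot’s mean distance to the fish against a typical fish’s mean distance to its schoolmates. The closer the two numbers, the more zebrafish-like the robot’s positioning.

```python
import math
import statistics

def mean_distance_to_others(me, others):
    """Average Euclidean distance from one individual to everyone else."""
    return statistics.mean(
        math.hypot(me[0] - o[0], me[1] - o[1]) for o in others)

def integration_score(robot, fish):
    """Hypothetical 'social integration' proxy: 0 means the robot keeps
    the same average spacing as a real fish does; larger means less
    fish-like positioning."""
    fish_baseline = statistics.mean(
        mean_distance_to_others(f, [g for g in fish if g is not f])
        for f in fish)
    return abs(mean_distance_to_others(robot, fish) - fish_baseline)

# A square school of four fish; a robot near the middle scores better
# (lower) than a robot loitering far away.
fish = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```

The actual study compares much richer observables than spacing, but this is the general shape of the problem: quantify “acts like one of us” against observed fish behavior.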

And, of course, they used a fishbot that looks and swims like a zebrafish. For some reason, zebrafish aren’t fooled by a lure that is a very abstract fish shape.

The researchers emphasize that all three forms of mimicry are important for successful schooling.  She’s gotta look like a zebrafish, swim like a zebrafish, and follow along like a zebrafish.

These results suggest that it should be possible to create robots that not only join in, but persuade and lead a school via the natural signaling of the fish. Such a robot or group of robots presumably would be a low-stress method to herd fish. (I’m not completely sure why one would need to herd zebrafish, per se.)


This study is pretty awesome.

It does seem like kind of a one-off case, though. It took a lot of work to observe and model these small groups of zebrafish. It isn’t clear how well these techniques might apply to larger groups, longer time periods, other environments, or other species.

Obviously, it will be useful to automate the learning of the social signals and so on as they suggest. Eventually, this might lead to a theory of fish—metaknowledge of different cognitive models in fish. Now that would be cool.


  1. Guillermo J. Amador, Marguerite Matherne, D’Andre Waller, Megha Mathews, Stanislav N. Gorb, and David L. Hu, Honey bee hairs and pollenkitt are essential for pollen capture and removal. Bioinspiration & Biomimetics, 12 (2):026015, 2017. http://stacks.iop.org/1748-3190/12/i=2/a=026015
  2. Hongbin Fang, Yetong Zhang, and K. W. Wang, Origami-based earthworm-like locomotion robots. Bioinspiration & Biomimetics, 12 (6):065003, 2017. http://stacks.iop.org/1748-3190/12/i=6/a=065003
  3. Leo Cazenille, Bertrand Collignon, Yohann Chemtob, Frank Bonnet, Alexey Gribovskiy, Francesco Mondada, Nicolas Bredeche, and José Halloy, How mimetic should a robotic fish be to socially integrate into zebrafish groups? (accepted). Bioinspiration & Biomimetics, 2017. http://iopscience.iop.org/10.1088/1748-3190/aa8f6a
  4. L. Zheng, M. Behrooz, and F. Gordaninejad, A bioinspired adaptive spider web. Bioinspiration & Biomimetics, 12 (1):016012, 2017. http://stacks.iop.org/1748-3190/12/i=1/a=016012

 

 

Robot Wednesday

 

PS. Wouldn’t  “Biomimetic Robotic Zebrafish” be a good name for a band?

More on Gita, Personal Cargo Bot

Earlier this year, I noted the interesting personal cargo bot, Gita (coming Real Soon Now?).

Development seems to be progressing, and the company released video of Gita in some more real-world settings.

It seems to be working pretty well, at least in the “follow” mode. Evan Ackerman points out that it “looks like they may have ditched that SLAM belt thing”. I assume they are using computer vision, which is the basis for their navigation but can also follow a single target. (Their technology is not documented.)

Also, the video suggests that they have a nice, simple operation: stand “in front of the eyes” and press the “follow me” button. Then it follows (and presumably learns the route). I like that interface—it’s clear, and it’s real hard to hack.
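For fun, here is what a bare-bones “follow me” control loop might look like. This is purely speculative on my part, since Gita’s technology is not documented: track the person visually, use their apparent size as a distance proxy, and steer toward their horizontal offset.

```python
def follow_step(box_width_px, frame_center_x, box_center_x,
                target_width_px=120, k_speed=0.01, k_turn=0.005):
    """Toy follow-me controller (hypothetical, not Gita's actual logic).
    Takes the tracked person's bounding-box width and horizontal position
    in the camera frame; returns (speed, turn) commands.

    - Person looks small  -> they are far away -> speed up (speed > 0).
    - Person looks large  -> too close         -> back off (speed < 0).
    - Person off-center   -> turn toward them.
    """
    speed = k_speed * (target_width_px - box_width_px)
    turn = k_turn * (box_center_x - frame_center_x)
    return speed, turn
```

A pair of proportional gains like this is about the simplest thing that could work; a real product presumably layers obstacle avoidance and a learned route on top.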

In my earlier post, I commented that this plain, simple device is kind of cool, but very utilitarian. I still think there is a call for customization (everything is better with flames painted on it!) and unauthorized racing and acrobatic modifications.

Just how many Gitas can a (modified) Gita jump over? Show me a Gita that tips up and drives on one tire! And so on.

From my earlier post:

“First of all, they simply have to come in different colors (duh!). Second, I strongly recommend the company encourage customization, including hand painted decorations, decal kits (e.g., flames, team logos), and even plastic and foam 3D decorations (Fins! Shark’s teeth! Ray gun pods!).

“Third, there should be (unsanctioned) modifications to hot rod them. 35 KMH? Not good enough!

“For that matter, there should be rodeos and shows, with trick jumps (I’m seeing flaming hoops), motocross, ski races, etc. For these Gita-X Games, it would be cool to be able to stream out the video, a la drone racing, no?”

Finally, I still want to see similar behavior, but in a raptor-like bot. Cross Gita with, say, Michigan’s Cassie, and you’ll really have a personal cargo bot!

 

Robot Wednesday

The “Ethical Knob” Won’t Work

If the goal was to make a splash, they succeeded.

But if this is supposed to be a serious proposal, it’s positively idiotic.

This month Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor of the University of Bologna published a description of “the ethical knob”, which adjusts the behavior of an automated vehicle.

Specifically, the “knob” is supposed to set a one-dimensional preference whether to maximally protect the user (i.e., the person setting it) or others. In the event of a catastrophic situation where life is almost certain to be lost, which lives should the robot car sacrifice?

Their paper is published in Artificial Intelligence and Law, and they have a rather legalistic approach. In the case of a human driver, there are legal standards of liability that may apply to such a catastrophe. In general, in the law, choosing to harm someone incurs liability, while inadvertent harm is less culpable.

Extending the principles to AI cars raises the likelihood that whoever programs the vehicle bears responsibility for its behavior, and possibly liability for choices made by his or her software logic. Assuming that software can correctly implement a range of choices (which is a fact not in evidence), the question is what should designers do?

The Bologna team suggests that the solution is to push the burden of the decision onto the “user”, via a simple, one-dimensional preference for how the ethical dilemma should be solved. Someone (the driver? the owner? the boss?) can choose “altruist”, “impartial”, or “egoist” bias in the life and death decision.
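To see just how underspecified this is, here is one naive reading of the knob in Python. This is my own sketch, not the paper’s algorithm: map the three labels to a single weight, then pick the option minimizing weighted expected harm.

```python
# Hypothetical label-to-weight mapping; the paper specifies no such thing.
KNOB = {"altruist": 0.0, "impartial": 0.5, "egoist": 1.0}

def preferred_option(options, setting):
    """One naive implementation of the knob (illustrative only).
    Each option is a tuple (harm_to_occupants, harm_to_others).
    With w = 1 (egoist) only occupant harm counts; with w = 0
    (altruist) only harm to others counts."""
    w = KNOB[setting]
    return min(options, key=lambda o: w * o[0] + (1.0 - w) * o[1])

# Two stylized outcomes: sacrifice the occupants, or the bystanders.
options = [(0.9, 0.1), (0.1, 0.9)]
```

Even in this toy, everything that matters is an implementer’s choice: the mapping from three labels to a number, the harm estimates, the decision rule. That is exactly the haziness problem.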

This idea has been met with considerable criticism, with good reason. It’s pretty obvious that most people would select egoist, creating both moral and safety issues.

I will add to the chorus.


For one thing, the semantics of this “knob” are hazy. They envision a simple, one-dimensional preference that is applied to a complex array of situations and behavior. Aside from the extremely likely prospect of imperfect implementation, it isn’t even clear what the preference means or how a programmer should implement the feature.

Even more important, it is impossible for the “user” to have any idea what the knob actually does, and therefore to understand what the choice actually means. It isn’t possible to make an informed decision, which renders the user’s choice morally empty and quite possibly legally moot.

If this feature is supposed to shield the user and programmer from liability, I doubt it will succeed. The implementation of the feature will surely be at issue. Pushing a pseudo-choice to the user will not insulate the implementer from liability for how the knob works, or any flaws in the implementation. (“The car didn’t do what I told it to,” says the defendant.)

The intentions of the user will also be at issue. If he chooses ‘egoist’, did he mean to kill the bystanders? Did he know it might have that effect? Ignorance of the effects of a choice is not a strong defense.

I’m also not sure exactly who gets to set this knob. The authors use the term “user”, and clearly envision one person who is legally and morally responsible for operating the vehicle. This is analogous to the driver of a vehicle.

However, the “user” is actually more of a passenger, and may well be hiring a ride. So who says who gets to set the knob? The rider? The owner of the vehicle? Someone’s legal department? The terms and conditions from the manufacturer? The rules of the road (i.e., the T&C of the highway)? Some combination of all of the above?

I would imagine there would be all sorts of financial shenanigans arising from such a feature. Rental vehicles charging more for “egoist” settings, with the result that rich people are protected over poor people. Extra charges to enable the knob at all. Neighborhood safety rules that require the “altruist” setting (except for wealthy or powerful people). Insurance companies charging more for different settings (though I’m not sure how their actuaries will find the relative risks). And so on.

Finally, the entire notion that this choice can be expressed in a one-dimensional scale, set in advance, is questionable. Setting aside what the settings mean, and how they should be implemented, the notion that they can be set once, in the abstract, is problematic.

For one thing, I would want this to be context sensitive. If I have children in the car, that is a different case than if I am alone. If I am operating in a protected area near my home, that is a different case than riding through a wide open, “at your own risk” situation.

Second, game theory makes me want to have a setting to implement tit-for-tat strategy. If I am about to crash into someone set at ‘egoist’, then set me to ‘egoist’. If she is set to ‘altruist’, set me to ‘altruist’, too. And so on. (And, by the way, shouldn’t there be a visible indication on the outside so we know which vehicles are set to kill us and which ones aren’t?)
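The tit-for-tat idea could be as simple as the following sketch, which assumes, hypothetically, that vehicles broadcast their settings to each other (hence the visible-indication joke):

```python
def tit_for_tat(my_setting, oncoming_setting):
    """Sketch of the tit-for-tat strategy from the text (hypothetical):
    just before a potential collision, mirror the oncoming vehicle's
    broadcast setting; otherwise keep your own."""
    if oncoming_setting in ("egoist", "altruist"):
        return oncoming_setting
    return my_setting
```

Which, of course, only deepens the mess: now the knob’s effective value depends on everyone else’s knob, set by who-knows-whom.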

This whole thing is such a conceptual mess. It can’t possibly work.

I really hope no one tries to implement it.


  1. Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor, The Ethical Knob: ethically-customisable automated vehicles and the law. Artificial Intelligence and Law, 25 (3):365-378, 2017. https://doi.org/10.1007/s10506-017-9211-z

 

Robot Wednesday

“Artificial Creatures” from Spoon

There are so many devices wanting to live with us, as well as a crop of “personal” robots. Everything wants to interact with us, but do we want to interact with them?

Too many products and not enough design to go around.

Then there is Spoon.

We design artificial creatures.

A partner to face the big challenges rising in front of us.

A new species, between the real and digital domains for humans, among humans.

OK, these look really cool!

I want one!

But what are they for?

This isn’t very clear at all. The only concrete application mentioned is “a totally new and enhanced experience while welcoming people in shops, hotels, institutions and events.” (I guess this is competing with RoboThespian.)

Anyway, it is slick and sexy design.

The list of company personnel has, like, one programmer and a whole bunch of designers and artisans. Heck, they have an art director, and a philosopher, for crying out loud.

Did I forget to say that they are French?

I have no idea exactly what they are going to build, but I will be looking forward to finding out.

 

Robot Wednesday

Sun2ice: Solar Powered UAV

One of the important use cases for UAVs is surveillance in all its forms. Small, cheap aircraft can cover a lot of area, carry a lot of different sensors, and swoop in to obtain very close-up information. In some cases, a human can directly control the aircraft (as in selfie cams and drone racing), but in many cases the UAV needs to be substantially autonomous.

Furthermore, remote observation generally needs long, slow flights, rather than short, fast ones. Range and flight duration are critical.

Remote sensing by UAVs is ideal for many kinds of environmental research, especially in remote areas such as deserts, oceans, or polar regions. A fleet of (inexpensive) UAVs can multiply the view of a single (very expensive) scientist by orders of magnitude, measuring a broad area, and identifying points of interest for detailed investigation.

This summer a group of researchers from ETH and the AtlantikSolar company demonstrated a UAV that continuously monitored glaciers in Greenland. The Sun2ice is solar powered, so it charges its batteries as long as the sun is shining. In the polar summer there is essentially 24-hour sunlight, so the UAV has power to fly continuously for months, at least in principle. Like other solar-powered aircraft and boats, the AtlantikSolar needs no fuel and should be capable of extremely long missions.
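The “fly continuously” claim boils down to a simple energy budget. Here is a back-of-envelope check, with illustrative numbers only, not AtlantikSolar’s actual specs:

```python
def can_fly_forever(solar_in_w, consumption_w, battery_wh,
                    sun_hours_per_day=24.0):
    """Back-of-envelope perpetual-flight test (illustrative only).
    With polar-summer 24 h sunlight, the question is simply whether
    average solar input covers average consumption; at lower sun hours,
    the battery must also bridge the dark period."""
    dark_hours = 24.0 - sun_hours_per_day
    surplus_w = solar_in_w - consumption_w
    if surplus_w < 0:
        return False  # can't even keep up in daylight
    # Energy banked during daylight must cover overnight consumption.
    stored_wh = min(battery_wh, surplus_w * sun_hours_per_day)
    return stored_wh >= consumption_w * dark_hours
```

With 24-hour polar sun, any daylight surplus at all suffices; at mid-latitudes the battery becomes the binding constraint.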

Of course, flying over Greenland is difficult for any aircraft, and flying a small UAV continuously over remote and rugged glaciers is very challenging. The aircraft must deal with high winds and cold temperatures, even in good weather. With no pilot on board, the control systems must be highly automated.

The UAV must navigate over uninhabited territory, far from the humans back at base. It has to stay on station to collect data continuously, with little help from people. Magnetic compasses don’t work on Greenland, and continuous daylight means that celestial navigation is not possible either.
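Without a reliable compass or stars, heading can still be recovered from successive GPS fixes. The standard great-circle initial-bearing formula does the job; this is a common fallback for high-latitude navigation, not necessarily what AtlantikSolar’s actual flight code does:

```python
import math

def course_over_ground(lat1, lon1, lat2, lon2):
    """Heading estimate from two successive GPS fixes, in degrees
    clockwise from true north (the great-circle initial bearing)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0
```

The catch, of course, is that this gives the track the aircraft actually flew, not the direction its nose points, so wind must be estimated separately.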

The researchers also had to deal with takeoff and landing from a remote field station. The video shows the UAV being delivered to its launch point via dogsled—Pleistocene technology deploying twenty-first-century technology. The test flights were successful, though flying time was less than a full day.

Flying an experimental solar-powered UAV such as AtlantikSolar in Arctic conditions is very challenging due to the narrow sun angle, extreme climatic conditions, the weakness of the magnetic field used for the compass, and the absence of smooth grass-covered terrain to land a fragile airplane.

This technology is ideal for intense observation of glaciers and other natural phenomena. The UAV flies low enough to obtain high resolution images, and if it can stay on station, can provide updated data every hour or less. The UAV is cheaper than a satellite, and even cheaper than a piloted aircraft. It would be possible to deploy a fleet of UAVs to monitor a glacier or volcano in great detail for substantial periods.

Cool.


  1. Philipp Oettershagen, Amir Melzer, Thomas Mantel, Konrad Rudin, Thomas Stastny, Bartosz Wawrzacz, Timo Hinzmann, Stefan Leutenegger, Kostas Alexis, and Roland Siegwart, Design of small hand-launched solar-powered UAVs: From concept study to a multi-day world endurance record flight. Journal of Field Robotics, 34 (7):1352-1377, 2017. http://dx.doi.org/10.1002/rob.21717

 

Robot Wednesday

A Bio-Inspired Robot Rat?

It’s IROS time again, which means a slew of papers and videos of robot projects!

There is at least one whole session dedicated to bio-inspired robots, including a robotic rat from a team at Beijing Institute of Technology and Waseda U. [1]

In some cases, bio-inspired robots are intended to solve problems, taking advantage of designs from nature (e.g., bat-inspired flight).

In this case, the aim seems to be to actually simulate the body and movement of a rat, apparently “to improve the robot-rat interaction”. (At the time of this writing, the full paper has not been released yet, so I’m working from the abstract.)

I’m not sure exactly why this problem needs to be solved, though I guess it might be interesting for behavioral experiments with rats. Or, as the abstract hints, the robot simulation might be a way to characterize the behavior of a real rat.

In any case, they seem to have done a bang-up job of simulating the visual appearance of a rat. The paper promises to report details of how well the movements of the robot replicate a real rat.

I have to wonder, though, if this is actually going to accomplish what they say they want to do: to create robot-rat interactions that mimic rat-rat interactions. The key question is whether they are simulating the right things, or enough of the right things, from the point of view of the rat.

In such robot-rat interactions, the robot should be designed to fully replicate a real rat in terms of morphological and behavioral characteristics.

The “morphological and behavioral characteristics” are, in fact, the visual appearance of the rat. This is what humans see (and what YouTube shows), but it isn’t necessarily how rats perceive each other. In fact, smell is probably more important than how the rat moves, and feel and sound may be very important, too.

The robot has no flesh and fur, so it certainly doesn’t feel like a rat. From the abstract and video, I can’t tell what the robot sounds like (it looks like it will sound pretty mechanical to another rat), and I don’t see any mention of olfaction. I would be surprised if precisely mimicking the motions of a rat without the feel, smell and sound will “fool” a rat.

I note that the abstract does not mention any study of robot-rat interaction. I assume that future work will examine such interactions.  I predict that they will confirm my intuition that the robot isn’t even close to realistic enough in a rat’s perception.

This is a very clever design, and they have done a remarkable job of mimicking the visual appearance and motions of a rat. But I don’t think this will be sufficient for naturalistic robot-rat interaction. Their (implicit) model of what is important in rat social behavior simply leaves out too much important stuff.


  1. Chang Li, Qing Shi, Kang Li, Mingjie Zou, Hiroyuki Ishii, Atsuo Takanishi, Qiang Huang, and Toshio Fukuda, Motion Evaluation of a Modified Multi-link Robotic Rat (in press), in International Conference on Intelligent Robots and Systems. 2017: Vancouver.

 

Robot Wednesday

Drone Shows

I have noted the cool collaboration between roboticists from ETH and Cirque du Soleil.

This is now a for-hire business, with the tag line, “Drone shows: The magic is real”.

I note that the basic technology is pretty standard stuff, it’s “just quadcopters”. But developing a show or installation involves careful planning for safety, and they also do “costume design” (i.e., dressing up the flyers), choreography (flyers and human-flyer combos), as well as the control systems for the real time performances.

These theatrical spectacles are probably paving the way for robots in the home and cityscape better than all the engineering studies ever done.  First, the elegant storytelling is enchanting and attractive. I want to dance with these pretty robots.

Second, their choreography is developing a sense and a “grammar” of how humans and UAVs should interact.  Notably, the UAVs have a certain personality that seems  appropriately mechanical but still readable and approachable by humans.

I will add one criticism.

Esthetically, their shows are starting to all look the same, and the “gee whiz” factor is wearing off fast.

I’m hoping to see the next thing, something new and different. I didn’t really find that in the 2017 shows. In fact, the 2017 show reel is about 50% the exact same shows as the 2016 reel.

Perhaps it’s time to open up this technology to more artists.

 

Robot Wednesday