
Drones Counting Ducks Down Under

One of the oldest citizen science projects is bird watching.  For more than a century, enthusiastic birders have amassed vast datasets of avian sightings.  To date, technology has enhanced but not displaced this proud nerd army. Photography, GPS, and databases have vastly improved the data from birders, but nothing has replaced boots on the ground.


This month, a research project at the University of Adelaide reported a demonstration of a UAV-mounted imaging system that, for once, beats human birders [1].

Specifically, the study compared the accuracy of humans versus a small survey quadcopter on the task of counting birds in a nesting colony.  In order to have a known ground truth, the tests used artificial colonies populated by hundreds of simulated birds.  The repurposed decoys were laid out to mimic actual nesting sites.

They dubbed it “#EpicDuckChallenge”, though it doesn’t seem especially “epic” to me.

The paper compares the accuracy of human counters on the ground, human counts from the aerial imagery, and computer analysis of the aerial imagery.

First of all, the results show a pretty high error for the human observers, even for the experienced ecologists in the study. Worse, the error is pretty scattered, which suggests that estimates of population change over time will be unreliable.

The study found that using aerial photos from the UAV is much, much more accurate than humans on the ground. The UAV imagery has the advantage of being overhead (rather than human eye level), and also holds still for analysis.

However, counting birds in an image is still tedious and error prone.  The study shows that machine learning can tie or beat humans counting from the same images.

Together, the combination of low-cost aerial images and effective image processing algorithms gave very accurate results, with low variability. This means that this technique would be ideal for monitoring populations over time, because repeated flyovers would be reliably counted.
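
To give a flavor of what the automated counting step involves, here is a minimal sketch of one way to count bird-sized blobs in an overhead photo, using scikit-image: threshold the image, label the connected components, and keep the components of plausible size.  This is only an illustration of the general idea, not the pipeline used in the paper, and the file name and size limits are invented.

    # Toy blob counter for an aerial colony photo (illustration only, not the paper's method).
    from skimage import io, color, filters, measure

    def count_birds(image_path, min_area=50, max_area=2000):
        """Count bright, bird-sized blobs in an overhead photo."""
        img = io.imread(image_path)
        gray = color.rgb2gray(img)                # work on a single intensity channel
        threshold = filters.threshold_otsu(gray)  # global threshold between birds and background
        binary = gray > threshold                 # assumes the birds are brighter than the ground
        labels = measure.label(binary)            # connected-component labeling
        regions = measure.regionprops(labels)
        # Keep only components whose pixel area is plausible for one bird (limits are guesses).
        birds = [r for r in regions if min_area <= r.area <= max_area]
        return len(birds)

    if __name__ == "__main__":
        print(count_birds("colony_flyover.jpg"))  # hypothetical file name

In real imagery, of course, the hard parts are exactly the ones this toy skips: uneven lighting, overlapping birds, and clutter that produces bird-sized blobs that aren't birds.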


This study has its limitations, of course.

For one thing, the specific task used is pretty much the best possible case for such an aerial census.  Unrealistically ideal, if you ask me.

Aside from the perfect observing conditions, the colony is easily visible (on an open, flat, uniform surface), and the ‘birds’ are completely static.  In addition, the population is uniform (only one species), and the targets are not camouflaged in any way.

How many real-world situations are this favorable?  (Imagine using a UAV in a forest, at night, or along a craggy cliff.)

To the degree that the situation is less than perfect, the results will suffer.  In many cases, the imagery will be poorer, and the objects to be counted less distinct and recognizable. Also, if there are multiple species, very active birds, or visual clutter such as shrubs, it will be harder to distinguish the individuals to be counted.

For that matter, I’m not sure how easy it will be to acquire training sets for the recognizer software.  This study had a very uniform nesting layout, so it was easy to get a representative subsample to train the algorithm.  But if the nests are sited less uniformly, and mixed with other species and visual noise, it may be difficult to train the algorithm, at least without much larger samples.


Still, this technique is certainly a good idea when it can be made to work.  UAVs are a great “force multiplier” for ecologists, giving each scientist much greater range. Properly designed (by which I mean quiet) UAVs should be pretty unobtrusive, especially compared to human observers.

The same basic infrastructure can be used for many kinds of surface observations, not just bird colonies.  It seems likely that UAV surveying will be a common scientific technique in the next few decades.

The image analysis also has the advantage that it can be repeated and improved.  If the captured images are archived, then it will always be possible to go back with improved analytics and make new assessments from the samples.  In fact, image archives are becoming an important part of the scientific record, and a tool for replication, cross validation, and data reuse.


  1. Jarrod C. Hodgson, Rowan Mott, Shane M. Baylis, Trung T. Pham, Simon Wotherspoon, Adam D. Kilpatrick, Ramesh Raja Segaran, Ian Reid, Aleks Terauds, and Lian Pin Koh, Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution, 2018. http://dx.doi.org/10.1111/2041-210X.12974
  2. University of Adelaide, #EpicDuckChallenge shows we can count on drones, in University of Adelaide – News. 2018. https://www.adelaide.edu.au/news/news98022.html

 

 

Singaporean Robot Swans

Evan Ackerman calls attention to a project at the National University of Singapore that is deploying robotic water quality sensors designed to look like swans.

The robots cruise surface reservoirs, monitoring the water chemistry and uploading the data to the cloud via wifi as it is collected.  (Singapore has wifi everywhere!)  The robots are encased in imitation swans, which is intended ‘to be “aesthetically pleasing” in order to “promote urban livability.”’ I.e., to look nice.

This is obviously a nice bit of work, and a good start.  The fleet of autonomous robots can maneuver to cover a large area, and concentrate on hot spots when needed, all at a reasonable cost. I expect that the datasets will be amenable to data analysis and machine learning, which can mean a continuous improvement in knowledge about the water quality.
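
As a toy example of the kind of analysis such a fleet enables, here is a sketch that flags anomalous readings with a rolling z-score, the sort of simple screen that could direct the swans toward hot spots.  The column name, window, and threshold are all invented; I don't know anything about the actual NUSwan data formats.

    # Toy hot-spot detector for a stream of water-quality readings (illustrative only).
    import pandas as pd

    def flag_anomalies(readings, column="turbidity", window=50, z_cutoff=3.0):
        """Mark readings that deviate strongly from the recent rolling average."""
        rolling = readings[column].rolling(window, min_periods=10)
        z = (readings[column] - rolling.mean()) / rolling.std()
        readings = readings.copy()
        readings["anomaly"] = z.abs() > z_cutoff
        return readings

    if __name__ == "__main__":
        # Made-up data: one spike in an otherwise flat series.
        data = pd.DataFrame({"turbidity": [1.0] * 100 + [9.0] + [1.0] * 100})
        print(flag_anomalies(data).query("anomaly"))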

As far as the plastic swan bodies…I’m not really sold.

For starters, they don’t actually look like real swans.  They are obviously artificial swans.

Whether plastic swans are actually more aesthetically pleasing than other possible configurations seems like an open question to me.  I tend to think that a nicely designed robot might be just as pleasing as a fake swan, or even better.  And it would look like a water quality monitor, which is a good thing.

Perhaps this is an opportunity to collaborate with artists and architects to develop some attractive robots that say “I’m keeping your water safe.”


  1. Evan Ackerman, Bevy of Robot Swans Explore Singaporean Reservoirs, in IEEE Spectrum – Automation. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/bevy-of-robot-swans-explore-singaporean-reservoirs
  2. NUS Environmental Research Institute, New Smart Water Assessment Network (NUSwan), in NUS Environmental Research Institute – Research Tracks – Environmental Surveillance and Treatment. 2018. http://www.nus.edu.sg/neri/Research/nuswan.html

 

Robot Wednesday

Yet More Robot Zebrafish

It seems to be the Year of the Robot Zebrafish.  Just as our favorite lab species are so thoroughly characterized that they are now being “uploaded” to silicon, the widely studied zebrafish (Danio rerio) is being digitized.

This winter researchers at NYU report on a very advanced robot zebrafish, which is quite literally “biomimetic”: a detailed 3D animatronic fish.  These kinds of models are useful for learning about how animals interact with each other.  To achieve these goals, the model needs to look, smell, and behave just like a natural animal.  (Yes, even zebrafish can recognize a lame, unrealistic dummy.)

It’s not that difficult to create a visually accurate model, but achieving “realistic enough” behavior is very difficult.  It requires reproducing relevant motion, signals (including visual, auditory, chemical signals), and perception of relevant stimuli (again, potentially in several modalities).  Then, the model needs to act and react in real time in just the way a natural fish would.

In short, you have to really understand the fish, and create a complex real-time simulation. As the researchers note, many previous studies implemented the simulation only partially, for instance using “open loop” control, in which the stimulus does not respond to the live fish, or relying on human direction.  This new research is “closed loop”, and also allows 3D motion of the model.
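
To make the open-loop/closed-loop distinction concrete, here is a highly simplified control loop in the spirit of the setup (my sketch, not the authors' code): the open-loop controller just replays a pre-recorded trajectory, while the closed-loop controller recomputes the replica's target from the live fish position on every frame.  The tracking and motion functions are stand-ins for the real computer vision and gantry hardware.

    # Simplified contrast between open-loop and closed-loop control of a robotic fish replica.
    # All of the functions are stand-ins; a real system would use camera tracking and motors.
    import math, random

    def track_fish(t):
        """Pretend computer-vision estimate of the live fish position (x, y, z) at time t."""
        return (math.sin(t) + random.gauss(0, 0.05), math.cos(t), 0.2)

    def open_loop_step(recorded_trajectory, frame):
        """Open loop: replay a pre-recorded replica position, ignoring the live fish."""
        return recorded_trajectory[frame % len(recorded_trajectory)]

    def closed_loop_step(fish_pos, replica_pos, follow_distance=0.3, gain=0.2):
        """Closed loop: move the replica a fraction of the way toward a point near the fish."""
        target = (fish_pos[0] - follow_distance, fish_pos[1], fish_pos[2])
        return tuple(r + gain * (t - r) for r, t in zip(replica_pos, target))

    if __name__ == "__main__":
        recorded = [(math.sin(t / 10.0), math.cos(t / 10.0), 0.2) for t in range(100)]
        replica = (0.0, 0.0, 0.2)
        for frame in range(200):
            fish = track_fish(frame / 10.0)
            replica = closed_loop_step(fish, replica)  # swap in open_loop_step to compare
        print("final replica position:", replica)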

The apparatus is an aquarium with a digitally controlled zebrafish, where natural fish can swim and interact with the robot.  The research employs 3D printed model fish, a digitally controlled mechanical system (which is quite similar to the mechanism of a 3D printer or router), and 3D computer vision.

Sketch of the experimental apparatus. The drawing shows the experimental tank, robotic platform, lighting, cameras, and holding frame. For clarity, the black curtain on the front of the frame is omitted and the focal fish and the robotic stimulus are magnified. From [1]

The first studies investigate the basic question of how effective closed loop control may be.  We all “know” that 3D, closed loop simulation will be “more fishlike”, but did anyone check with the zebrafish?

In the event, the results showed that the full 3D closed loop was not necessarily as “authentic” as a 2D closed loop, at least in the limited conditions in the study. One factor is that the closed loop motion was partly based on recordings of natural behavior, which, wait for it, seemed natural to the fish.  But overall, the robot was never mistaken for a real fish in any condition.

“Although the new robotic platform contributed a number of hardware and software advancements for the implementation of biomimetic robotic stimuli, the larger shoaling tendency of zebrafish toward live conspecifics suggest that the replica was not perceived as conspecifics in any condition.” ([1], p. 12)

The researchers identify a number of limitations of the apparatus which probably detracted from the realism. Basically, the equipment used in this experiment probably wasn’t capable of mimicking natural motion precisely enough.  In addition, I would say that there is still much to be learned about what cues are important to the zebrafish.

However, this technology made it possible to quickly and precisely experiment with the real fish.  I’m confident that with improvements, this approach will enable systematic investigation of these questions.


  1. Changsu Kim, Tommaso Ruberto, Paul Phamduy, and Maurizio Porfiri, Closed-loop control of zebrafish behaviour in three dimensions using a robotic stimulus. Scientific Reports, 8(1):657, 2018. https://doi.org/10.1038/s41598-017-19083-2

 

Worm Brain Uploaded to Silicon?

Ever since the first electronic computers, we’ve been fascinated with the idea that a sufficiently accurate simulation of a nervous system could recreate the functions of a brain, and thereby recreate the mental experience of a natural brain inside a machine.  If this works, then it might be possible to “upload” our brain (consciousness?) into a machine.

This staple of science fiction hasn’t happened yet, not least because we have pretty limited understanding of how the brain works, or what you’d need to “upload”.  And, of course, this dream rests on naïve notions of “consciousness”.  (Hint: until we know the physical basis for human memory, we don’t know anything at all about the physical basis of “consciousness”.)

Neural simulations are getting a lot better, though, to the point where simulations have reproduced (at least some aspects of) the nervous system of simple organisms, including perennial favorites C. elegans (roundworms) and Drosophila (fruit flies). It would be possible to “upload” the state of a worm or fly into a computer, and closely simulate how the animal would behave.  Of course, these simple beasts have almost no “state” to speak of, so the simulations are not necessarily interesting.

This winter a research group from Technische Universität Wien reported a neat study that used a detailed emulation of the C. elegans nervous system as an efficient controller for a (simulated) robot [2].

The key trick is that they selected a specific functional unit of the worm’s nervous system, the tap-withdrawal (TW) circuit.  In a worm, this circuit governs a reflex movement away from a touch to the worm’s tail. This circuit was adapted to a classical engineering problem, controlling an inverted pendulum, which involves ‘reflexively’ adjusting to deviations from vertical.  The point is that the inverted pendulum problem is very similar to the TW problem.

In real life, the worm reacts to touch – and the same neural circuits can perform tasks in the computer. (From [1])

The study showed that this worm circuit achieves equivalent performance to other (human designed) controllers, using the highly efficient architecture naturally evolved in the worms.  Importantly, the natural neural system learned to solve the control problem without explicit programming.
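
The paper’s title [2] describes the training as “search-based reinforcement learning”.  The sketch below illustrates that general idea on a bare-bones cart-pole (inverted pendulum) simulation: randomly perturb the controller’s parameters and keep a perturbation whenever it balances the pole for longer.  The controller here is a plain linear policy, not the simulated TW circuit, and all of the constants are invented for illustration.

    # Minimal "search-based" learning of an inverted-pendulum (cart-pole) controller.
    # Illustrates the general idea only; it does not reproduce the worm's TW circuit.
    import numpy as np

    def cartpole_step(state, force, dt=0.02, g=9.8, m_cart=1.0, m_pole=0.1, length=0.5):
        """One Euler step of the classic cart-pole dynamics."""
        x, x_dot, theta, theta_dot = state
        sin_t, cos_t = np.sin(theta), np.cos(theta)
        total_mass = m_cart + m_pole
        temp = (force + m_pole * length * theta_dot**2 * sin_t) / total_mass
        theta_acc = (g * sin_t - cos_t * temp) / (
            length * (4.0 / 3.0 - m_pole * cos_t**2 / total_mass))
        x_acc = temp - m_pole * length * theta_acc * cos_t / total_mass
        return np.array([x + dt * x_dot, x_dot + dt * x_acc,
                         theta + dt * theta_dot, theta_dot + dt * theta_acc])

    def episode_length(weights, max_steps=500):
        """How many steps a bang-bang linear policy keeps the pole within bounds."""
        state = np.random.uniform(-0.05, 0.05, size=4)
        for step in range(max_steps):
            force = 10.0 if weights @ state > 0 else -10.0
            state = cartpole_step(state, force)
            if abs(state[0]) > 2.4 or abs(state[2]) > 0.21:  # cart off the track or pole fallen
                return step
        return max_steps

    # Random search: keep parameter perturbations that balance the pole for longer.
    best_w = np.zeros(4)
    best_score = np.mean([episode_length(best_w) for _ in range(5)])
    for _ in range(200):
        candidate = best_w + np.random.normal(0.0, 0.5, size=4)
        score = np.mean([episode_length(candidate) for _ in range(5)])
        if score > best_score:
            best_w, best_score = candidate, score
    print("best average balance:", best_score, "steps, weights", best_w)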

This is an interesting approach not because the worm brain solved a problem that hadn’t been solved in other ways.  It is interesting because the solution is a very effective (and probably optimal) program based on a design developed through natural evolution.

The general principle would be that naturally evolved neural circuits can be the basis for designing solutions to engineering problems.

It’s not clear to me how easy this might be to apply to other, more complicated problems.  It is necessary to identify (and simulate) isolated neural circuits and their functions, and map them to problems.  In most cases, by the time we understand these mappings, we probably already have efficient solutions, just like the TW-to-inverted-pendulum mapping in this study.

We’ll see what else they can do with this approach.

I also thought it was quite cool to see how well this kind of “upload” can be made to work with pretty standard, easily available software.  They didn’t need any super specialized software or equipment.  That’s pretty cool.


  1. Florian Aigner, Worm Uploaded to a Computer and Trained to Balance a Pole, in TU Wien – News. 2018. https://www.tuwien.ac.at/en/news/news_detail/article/125597/
  2. Mathias Lechner, Radu Grosu, and Ramin M. Hasani, Worm-level Control through Search-based Reinforcement Learning. arXiv, 2017. https://arxiv.org/abs/1711.03467

 

“Programmable Droplets” Demo from MIT

This is a really cool idea from MIT.

“Programmable droplets” uses a digitally controlled array of electric fields to push drops of liquid around.  The idea is to automatically mix chemicals precisely and repeatedly, to replace conventional laboratory methods.

There are plenty of automated lab robots out there, to be sure, but this particular approach is admirably simple and straightforward.  Not just less human labor, but fewer working parts and potentially less waste.

It is difficult for me to know exactly what has been implemented because it has not been published where I can read it, at least until July or so.  Sigh.  (I have really, really good access to technical publications, so it takes quite something to publish where I can’t get to it.  Congratulations.)


Obviously, there are a bunch of challenges.  The demo video shows rather large droplets, and I’m sure it will be necessary to not only use different (and smaller) droplets, but also to automatically monitor the size and integrity of drops on the surface.  (Maybe with video?)

I’m no chemist, but I have to wonder about the limits of drops on a surface.  I’m sure there are many chemicals that will be difficult to work with this way.  For one thing, it only works for liquids, and possibly only for compounds in water.  And I’m sure there will be tricky details to embed this into a sealed and sterile workflow with controlled temperature and pressure.

Still, it is très cool.


The video is heavily aimed at medical and biological applications, which is certainly a potential money maker.  But the technique is quite general, and since it is under software control it will be easy to drop in different modules for different kinds of mixing.

This also has the benefits of digitally controlled robotic operation. The mixing is controlled by the digital script, and, conversely, the exact operations are precisely recorded by the system.  This is useful for quality control and documentation, and also enables strong replicability.

This kind of digital control is also exciting because the knowledge (how to do the experiment) is represented digitally, which means it can be automatically generated, shared over networks, and easily manipulated to modify the process.  The exact protocol can be uploaded with the published paper. Tools and automated assistants can generate executable experiments.  And so on.
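
I can only guess at how the MIT protocols are actually encoded, but as a purely hypothetical illustration of what a digitally shareable mixing protocol might look like, here is a sketch: the protocol is just data (a list of moves over an electrode grid), and the same data can be executed, logged, and published.  Every name, field, and the grid layout below is made up.

    # Hypothetical encoding of a droplet-mixing protocol as plain data plus a tiny executor.
    # This is a guess at the general idea, not MIT's actual system.
    import json

    # A protocol is a list of steps over an electrode grid (cells are [row, column]).
    protocol = [
        {"op": "dispense", "droplet": "reagent_A", "cell": [0, 0]},
        {"op": "dispense", "droplet": "reagent_B", "cell": [0, 7]},
        {"op": "move",     "droplet": "reagent_A", "path": [[0, 1], [0, 2], [0, 3]]},
        {"op": "move",     "droplet": "reagent_B", "path": [[0, 6], [0, 5], [0, 4]]},
        {"op": "merge",    "droplets": ["reagent_A", "reagent_B"], "cell": [0, 4]},
    ]

    def execute(protocol, activate_electrode=print):
        """Run a protocol, driving the hardware hook and keeping a complete log of every step."""
        log = []
        for step in protocol:
            if step["op"] == "move":
                for cell in step["path"]:
                    activate_electrode(f"activate electrode {cell} for {step['droplet']}")
            log.append(step)  # the executed steps double as the experiment record
        return log

    log = execute(protocol)
    print(json.dumps(log, indent=2))  # the same data could be archived or shared with a paper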

Really cool, even though I can’t get the paper.


  1. Larry Hardesty, Programmable droplets: Using electric fields to manipulate droplets on a surface could enable high-volume, low-cost biology experiments, in MIT News. 2018. http://news.mit.edu/2018/programmable-droplets-enable-high-volume-low-cost-biology-experiments-0119

 

Robot Wednesday

Simulated Octopus Skin Is Cool

Octopuses are really cool.  (Me and octopuses go way back.  The first book report I ever wrote—in first grade—was about octopuses.)

One of many cool things is their awesome skin, which can change color to match their background.  Even cooler, the skin also varies its texture at the same time, to imitate a surface.

Last fall, researchers at Cornell and Woods Hole reported on a bio-inspired synthetic skin that morphs in the way that octopus skin does [3].  The design uses compressed air to push a rubbery skin into different textures.  This is a rugged and cheap technology, but achieving fine-grained control is difficult.  (Think about blowing up a toy balloon—easy to do, but not much control over the texture of the surface.)

To solve the problem, the researchers took inspiration from the structures in an octopus’ skin. The flexible skin of an octopus covers an array of muscular hydrostats that configure into finger-like papillae.  These muscles contract to push the skin into complex shapes. (Similar mechanisms are seen in some worms and crabs, for instance.)

 “We took initial inspiration from the form and function of cephalopod papillae” ([3], p. 1)

Following this inspiration, the researchers’ design uses a flexible skin and a fiber mesh, which controls the inflation, “[i]n the same way that a string wrapped around a balloon will alter its inflated shape.” ([3], p. 1) The material can be manufactured simply, and the researchers developed a simple and elegant model that maps between the shape and the configuration of the binary material.
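
I haven’t seen the paper’s actual shape model, but as a cartoon of the string-around-a-balloon idea, here is a toy mapping from a binary mesh pattern to a texture: cells marked as fiber-constrained stay flat, and free cells bulge in proportion to pressure.  The numbers mean nothing; the point is only that a binary pattern determines the inflated shape.

    # Toy map from a binary fiber-mesh pattern to an inflated texture (a cartoon, not the paper's model).
    import numpy as np

    def inflate(mesh, pressure=1.0, compliance=0.5):
        """mesh: 2D array, 1 = fiber-constrained (stays flat), 0 = free membrane (bulges)."""
        free = (mesh == 0).astype(float)
        return free * pressure * compliance  # crude height map

    # A ring of fibers around a free center produces a single papilla-like bump.
    pattern = np.ones((7, 7), dtype=int)
    pattern[2:5, 2:5] = 0
    print(inflate(pattern))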

Cool!

This technology is of interest for military camouflage, of course.  But it also is interesting for haptic interfaces, or even for furniture or clothing.  I could imagine gloves or shoes with texture dynamically adjusted to optimize grip of a surface. (And I hate to say it, but this tech seems promising for advanced dildonics.)

The bioinspired engineering may also offer insights into natural biological systems, and might be a laboratory for understanding the capabilities of the octopus. How do octopuses detect and imitate the background?  There is nothing like having an ‘emulator’ to experiment with. [2]

Nice work.

 

  1. Cecilia Laschi. 2017. “Helping robots blend into the background.” Science 358 (6360):169 http://science.sciencemag.org/content/358/6360/169.abstract
  2. Amy Nordrum, and Celia Gorman. 2017. “A new elastic skin morphs to produce different textures.” IEEE Spectrum – Robotics, October 18. https://spectrum.ieee.org/video/robotics/robotics-hardware/octopusinspired-camouflage-for-soft-robotics
  3. J. H. Pikul, S. Li, H. Bai, R. T. Hanlon, I. Cohen, and R. F. Shepherd. 2017. “Stretchable surfaces with programmable 3D texture morphing for synthetic camouflaging skins.” Science 358 (6360):210 http://science.sciencemag.org/content/358/6360/210.abstract

 

Robot Wednesday