Category Archives: Environmental Sensing

Penguin Feathers Tell All

One of the important tasks of field biology is to document and understand the movements of animals, which reveal many aspects of behavior, including nesting, mating, what they eat, and what eats them. But it isn’t at all easy to track animals in the wild.

For centuries, this difficult problem was tackled through personal observation and with tags. The former is possible only in fortunate circumstances, and the latter requires capture, release, and recapture, which is difficult, expensive, and lossy. But 21st century technology is now available (and cheap enough) for field biologists to use.

In recent years, electronic location tags have become small and cheap, opening a new age of animal tracking. With a small radio tag attached, almost any animal can be tracked on land, at sea, or in the air. This still requires capture and release, or at least touching the animal to tag it. And tags are cheap, but not free.

Another cool advance is the use of chemical analysis of tissue to infer the travels and history of an animal. These techniques have advanced to the point that one discarded feather can speak volumes—without harming the animal.

This month Michael J. Polito and colleagues report on some successful experiments tracking Penguins through this method [2]. The study fitted Penguins with location tracking tags and, when the birds were recaptured, took one tail feather from each.

The chemical analysis detected the isotopes of Carbon in the feathers, which differ between regions of the ocean because each region offers different plankton and fish to eat. The study showed that this method was as accurate as the location tags in identifying which waters each bird visited that winter.
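For intuition, here is a minimal sketch of the matching step: compare a feather’s carbon isotope signature to reference values for candidate ocean regions and pick the closest. The δ13C baselines below are invented for illustration; the actual study analyzed amino-acid-specific isotopes with far more statistical care.

```python
# Toy sketch: assign a feather's carbon isotope signature to the nearest
# known ocean-region baseline. The delta-13C reference values below are
# invented for illustration, not taken from the Polito et al. study.

REGION_BASELINES = {
    "Weddell Sea": -25.1,
    "Scotia Sea": -23.4,
    "Bellingshausen Sea": -27.0,
}

def infer_foraging_region(feather_d13c: float) -> str:
    """Return the region whose baseline delta-13C is closest to the sample."""
    return min(REGION_BASELINES, key=lambda r: abs(REGION_BASELINES[r] - feather_d13c))

print(infer_foraging_region(-23.9))  # closest baseline is the Scotia Sea
```

A real analysis would use compound-specific values and probabilistic assignment rather than a simple nearest match, but the principle is the same.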


This means that catching a sample of Penguins once (rather than twice) and plucking one feather (rather than attaching a tracker) can reveal where they fed during the dark winter.

  1. Sarah Gabbott, Penguin feathers record migration route, in BBC News - Science & Environment. 2017.
  2. Michael J. Polito, Jefferson T. Hinke, Tom Hart, Mercedes Santos, Leah A. Houghton, and Simon R. Thorrold, Stable isotope analyses of feather amino acids identify penguin migration strategies at ocean basin scales. Biology Letters, 13 (8) 2017.

US NSF Funds Antarctic Science Drones

All around the world, Unoccupied Aircraft Systems (AKA drones) are becoming useful scientific instruments. With the technological and economic push-pull of military and consumer demand, drones are becoming ubiquitous and cheap. Cheap enough for poverty-stricken scientists to use.

Small drones have many advantages besides cost. They can carry cameras and other instruments to extend the view of science teams by many kilometers. They fly low, and can, indeed, touch down if needed.   With advances in control systems, it is becoming reasonable to operate flocks of them, to cover even more ground.

Many groups around the world are booting up this technology (e.g., reports by the US Marine Mammal Commission [2] and a coalition in New Zealand [1]).

This week the US National Science Foundation announced funding of the Drones in Marine Science and Conservation lab at Duke University, which is specifically aimed at monitoring animals in Antarctica.

The advantages are obvious. Antarctica is huge, far away, and hard to get to. Satellites are blinded by cloud cover, and limited in resolution. Aircraft can only operate a few days per year, and are awfully expensive. Drones offer the advantages of aerial surveying at a reasonable cost.

As the video makes clear, the basic use is similar to civilian and military scouting, with the advantage that the penguins will neither shoot nor sue.  🙂

These drones are a bit more complicated than the toys under the Christmas tree, because they are equipped with a variety of instruments, potentially radar, lidar, multispectral cameras, and chemical samplers. As the NSF article points out, they “can even be used to sample breath from individual whales”.

The thrust of the NSF funding is to pull together all the rest of the picture, namely data analysis, visualization, and archiving the data. The project also contemplates training and other assistance to help future projects that want to employ drones.

This is pretty neat.

  1. Lorenzo Fiori, Ashray Doshi, Emmanuelle Martinez, Mark B. Orams, and Barbara Bollard-Breen, The Use of Unmanned Aerial Systems in Marine Mammal Research. Remote Sensing, 9 (6) 2017.
  2. Marine Mammal Commission, Development and Use of UASs by the National Marine Fisheries Service for Surveying Marine Mammals. Bethesda, 2016.


Robot Wednesday

Remote Sensing Penguin Guano

There is so much we don’t know about the Earth and the biosphere. Even for relatively big and easy to see species such as birds, it is hard to know how and where they live, or even how many individuals exist. There are only so many biologists, and humans can only go and see so much.

Remote sensing of the planet from space has given important insights about large-scale processes that can’t be seen easily from a human perspective. For instance, a few images from space make absolutely clear how important dust storms in Africa are for the Amazon forests in South America.

In the past, it has been difficult to learn much about animal populations, because individuals are small and elusive. Biologists are getting better at detecting and tracking animals, especially mass movements of them.

This month NASA calls attention to a successful long term project that uses satellite imagery to locate colonies of Penguins [3]. Penguins are, of course, far too small to be reliably detected from most satellite imagery. However, Penguins live in colonies, and produce immense amounts of guano, which can be seen from space.

In fact, Penguin colonies could be seen from space 30 years ago [2], and space imagery and analysis have gotten a lot better since then.

The basic technique is to detect the color of guano-covered rocks, and to infer how many Penguins live there from the area covered. Cross-checking on the ground has confirmed that this indirect and remote measure is a pretty good estimate of how many Penguins there are and where they nest.
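The inference step is simple arithmetic, sketched below with placeholder numbers (the pixel size and nest density here are assumptions for illustration, not values from the cited studies): convert guano-classified pixels to area, then scale by an assumed nesting density.

```python
# Back-of-envelope sketch of the guano-area method: count pixels classified
# as guano, convert to ground area, and divide by an assumed nesting density.
# Both constants are placeholders, not figures from the cited studies.

PIXEL_AREA_M2 = 0.25   # e.g. 0.5 m resolution imagery -> 0.25 m^2 per pixel
NESTS_PER_M2 = 0.5     # assumed packing density of nests on guano-covered rock

def estimate_breeding_pairs(guano_pixel_count: int) -> int:
    """Estimate breeding pairs from the number of guano-classified pixels."""
    guano_area_m2 = guano_pixel_count * PIXEL_AREA_M2
    return round(guano_area_m2 * NESTS_PER_M2)

print(estimate_breeding_pairs(80_000))  # 80,000 pixels -> 10,000 pairs
```

The real studies calibrate these factors against ground counts, which is exactly the cross-checking described above.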

As the researchers note, Penguins live on sea ice, which means they are a sensitive indicator of changing ice conditions. As sea ice melts in parts of Antarctica, we can document how Penguins relocate in response. Penguins also eat krill and fish, so they are a visible indicator of the health of these foods in an area.

Mathew Schwaller, Heather Lynch and colleagues have completed a global census of Adelie Penguins using imagery from several satellites [1]. They use machine learning techniques to identify the visual signature of nesting areas. Based on the very characteristic nesting habits of Adelies, it is possible to estimate the number of Penguins based on the area. Naturally, the satellite data is combined with on-site investigations and other reports, in order to validate the remote sensing and the estimation.

From [1], Figure 1: Map of extant Adélie Penguin colonies, as well as penguin colonies not found in imagery and presumed extinct. Solid bars represent sections of coastline in which populations are generally increasing in abundance, and dashed lines those in which populations are generally decreasing. Areas with no bar are either a mix of increasing and decreasing populations, are not changing in abundance, or do not have sufficient data to assess population change (see Supplemental Material Appendix A). Right: example of high-resolution imagery from Devil Island (−63.797°, −57.290°; location indicated by black arrow). Areas identified in the analysis as guano are shaded in light green. Imagery © 2014 by DigitalGlobe, Inc.
One huge advantage of the satellite data is that there is continuous coverage of the whole world, so it is possible to track changes in Penguin populations. For instance, the 2014 report indicates that over the last twenty-some years, nesting sites in West Antarctica have dwindled. This is where sea ice is shrinking. In the same period, new nesting sites have appeared in East Antarctica, where sea ice has increased. Overall, the total population of Adelies seems to have increased in recent years, even as the birds have migrated to more favorable ice.

Ideally, the census can be maintained for a number of years to accumulate a much more detailed baseline, to improve the technique, and refine the understanding of the Penguin population. This census is only one species, so it remains to be seen how similar techniques might track other species.

  1. Heather J. Lynch and M. A. LaRue, First global census of the Adélie Penguin. The Auk, 131 (4):457-466, 2014.
  2. Heather J. Lynch and Mathew R. Schwaller, Mapping the Abundance and Distribution of Adélie Penguins Using Landsat-7: First Steps towards an Integrated Multi-Sensor Pipeline for Tracking Populations at the Continental Scale. PLOS ONE, 9 (11):e113301, 2014.
  3. Adam Voiland, Penguin Droppings Are Fertile Ground for Science: Image of the Day. NASA Earth Observatory. 2017.

PS.  Wouldn’t “Penguin Guano” be a good name for a band? How about ‘Adelie Census’?



Space Saturday

Antarctic Ice Losses

As everyone knows, Antarctica is covered with ice. A lot of ice. Ice that is many kilometers deep. Enough ice that, should it all melt, oceans would rise tens of meters. With the retreat of sea ice in the Arctic and glaciers in many places in the Northern hemisphere, a lot of attention is focused on Antarctic ice.

This spring (which is fall in the south), there has been evidence of yet another dramatic calving, as a crack on the Larsen C ice shelf suddenly grew. (I note that Larsen A and B already broke off in recent decades.)

This activity was observed by ESA’s Sentinel-1 satellites. Observation from space is pretty much the only way to know what is going on in the winter down there.

The current location of the rift on Larsen C, as of May 1 2017. Labels highlight significant jumps. Tip positions are derived from Landsat (USGS) and Sentinel-1 InSAR (ESA) data. Background image blends BEDMAP2 Elevation (BAS) with MODIS MOA2009 Image mosaic (NSIDC). Other data from SCAR ADD and OSM.

This accelerated change comes after the warm season of Antarctic summer, and may signal a break-up this year. If so, the ice shelf will be 10% smaller, the smallest extent ever observed for it. This is certainly a big event.

Separation of this ice will probably not affect sea level (the ice is already floating on the water). But there is growing evidence that the ice is melting at an accelerating pace at that location, and the melting may well extend to nearby ice on land. The latter would contribute to sea level rise.

I never used to expect to see the ice caps melt or the great Anthropogenic sea rise (AKA, The Great Glub!). But, who knows? The pace of melting is faster than expected and seems to be accelerating, so I might live long enough to see it.

  1. Martin O’Leary, Adrian Luckman, and Project MIDAS, A new branch of the rift on Larsen C, in Project MIDAS Blog. 2017.


Space Saturday

Weidensaul on “The New Migration Science”

Of all the cool things about birds (they fly! they sing! they have feathers! they are living dinosaurs!) one of the most profound is their astonishing seasonal migrations.

Scott Weidensaul writes for the Cornell Lab of Ornithology about technologies that are coming on line that enable scientists to gain unprecedented information about bird migrations.

[T]oday really is a truly exceptional time for migration science, with so many new avenues for documenting the journeys of birds.

First on the list are twenty-first-century leg bands: one-gram geolocation recorders. Some larger birds can carry a satellite tag that tracks their travel and reports by radio. A cheaper and lighter option is a recording tag that logs the data, to be recovered when the bird is recaptured.

A third option is tiny radio transmitters that can be picked up by a network of collaborating receivers. With standardized signals and networked databases, a receiver can pick up and report any pings in its area, no matter who tagged the animal. The bird does not have to be recaptured, so there is a much higher probability of encountering the tagged individuals.
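The data-sharing idea can be sketched simply: every receiver logs every ping it hears, keyed by tag ID, and pooling those logs reconstructs a rough flight path without ever recapturing the bird. The record fields below are illustrative, not the actual schema of any tracking network.

```python
# Sketch of the shared-database idea behind networked radio tags: any
# receiver logs every ping it hears, keyed by tag ID, so pooled logs
# yield a flight path with no recapture. Field names are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Ping:
    tag_id: str
    receiver: str
    timestamp: float  # seconds since epoch

def track_by_tag(pings):
    """Group all detections by tag, sorted in time -> a rough flight path."""
    tracks = defaultdict(list)
    for p in pings:
        tracks[p.tag_id].append(p)
    for path in tracks.values():
        path.sort(key=lambda p: p.timestamp)
    return tracks

pings = [Ping("A17", "tower-north", 2.0), Ping("A17", "tower-south", 1.0)]
route = [p.receiver for p in track_by_tag(pings)["A17"]]
print(route)  # ['tower-south', 'tower-north']
```

The crucial point is that the receivers don’t care who deployed the tag; the standardized ID makes every detection useful to everyone.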

Weidensaul reports that both DNA and chemical isotope analyses can be made from a single feather or scrap of tissue. DNA can help sort out subpopulations, and isotope analysis can identify geographical history, e.g., of what the bird has eaten or drunk recently.

Recent improvements in data processing have enabled the routine use of NEXRAD weather radar to detect migrating flocks of birds each night. High resolution weather radar can also detect individual birds and reveal details of behavior. These studies, combined with remote sensing of vegetation and water, are enabling a detailed understanding of critical way stations where migratory birds rest for the day, and then continue.

With decades of archived NEXRAD, scientists are also studying trends over time. (The main trend is “down”, as we all might expect.)

Digital networks enable the combination of data from all these sources. The Internet has also automated the centuries-old traditions of collaboration among birders, creating massive crowdsourced datasets of observations.

Weidensaul reports current efforts to deploy cameras to automatically identify birds in cities. With today’s powerful visual analytics, it seems likely that inexpensive digital cameras will soon routinely identify and report individual birds.

Finally, inexpensive microphones, on mobile devices or otherwise, can record high-quality digital sound, which will soon enable a detailed picture of all the unseen birds in an area.

All of these digital technologies were developed for purposes other than ornithology. Almost no one develops complex and expensive technology just for observing birds. But birders will not be denied! These are some excellent examples of repurposing technology, and using powerful general purpose tools such as image and signal processing algorithms and machine learning.

And, of course, birders have been collaborating and crowd sourcing for centuries, long before computer scientists got into the game. Birders are some of the original citizen scientists, and, just as our feathered friends have persisted from dinosaur days, the global collaborative community of bird enthusiasts has survived centuries.  Now we have picked up digital technology and put it to good use.

  1. Bird Studies Canada, Motus Wildlife Tracking. 2017.
  2. Cornell Lab of Ornithology, eBird – Birding in the 21st Century. 2017.
  3. Scott Weidensaul, The New Migration Science, in All About Birds. 2017, Cornell Lab of Ornithology: Ithaca.

Spectroscope At The Grocery Store

OK, I’ve been beefing about “cargo cult” apps that use mobile devices and sensors to do DIY environmental and medical analysis.  Unfortunately, it’s getting harder and harder to even know how real things are.

Case in point: consider Tekla S. Perry’s report for IEEE Spectrum about the “SCiO Food Analyzer” app. First of all, this isn’t a trivial toy (like, say, a “smart” hairbrush). The makers are building a tiny infrared spectroscope that attaches to, or soon will be built into, a mobile phone or other device.

Is this real, or is this just something that looks real?  It’s hard to tell.

My rule of thumb is that spectroscopy is pretty magical, so this has got to be an interesting device. The question is: does this device actually work? And how do we know that?

The suggested use case is examining food in the store to get a better idea of its quality. The IR scanner can do some kinds of chemical analysis, reporting on carbohydrate, fat, and sugar content, for instance. The app uses unspecified “algorithms” to relate the measurements to the flavor of produce, as well as the levels of carbs.

The article reports on a successful demonstration of the technology, which impressed the reporter. The app isn’t intended to tell you what an unknown item is; instead you tell it “this is an apple,” and it tells you the sugar content and how it falls within the range of apples it knows about. I.e., the algorithm predicts how the fruit will taste, based on the readings.
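As a rough sketch of that workflow (with entirely invented numbers, since the real algorithms are unpublished): a per-category model maps an IR reading to a sugar estimate, then places that estimate within the known range for the category the user named.

```python
# Guesswork sketch of the "tell it it's an apple" workflow: a per-category
# linear model maps a (drastically simplified) IR reading to a predicted
# sugar content, then ranks it against the known range for that category.
# All numbers are invented; SCiO's actual algorithms are unpublished.

MODELS = {
    # category: (slope, intercept, typical sugar range in g/100g)
    "apple": (20.0, 2.0, (9.0, 15.0)),
    "pear":  (18.0, 3.0, (8.0, 13.0)),
}

def assess(category: str, ir_absorbance: float):
    """Predict sugar content and its position within the category's range."""
    slope, intercept, (lo, hi) = MODELS[category]
    sugar = slope * ir_absorbance + intercept
    position = (sugar - lo) / (hi - lo)  # 0 = low end, 1 = high end
    return sugar, position

sugar, pos = assess("apple", 0.5)
print(f"{sugar:.1f} g/100g, {pos:.0%} of typical apple range")
```

Note what this structure implies: the answer is only as good as the training range for the named category, which is exactly the worry about kiwis labeled “apple” raised below.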

It was all pretty magical, pointing a gadget at food and getting an instant analysis. To be fair, I can’t verify the accuracy of what I was seeing on the screen; I didn’t take the fruits and cheeses back to a laboratory to confirm the analysis using more traditional technology. But it certainly seemed real, real enough that I would be pretty excited to have this kind of technology built into my smart phone.

Evidence? There are no obvious citations in the article. The company web site offers some generic descriptions of the technology, but no validation study, published or otherwise.

This being Silicon Valley, there is lots of information about awards and press reports, as well as news about funding and company alliances. Apparently, attracting venture capital and phone manufacturers is supposed to tell me that the results are scientifically valid. Sigh.

The lack of peer-reviewed evidence is a concern. For one thing, the device is offered in the areas of food safety (and possibly drug safety), which are potentially dangerous if users misinterpret the results or rely on them further than they should. (“My phone didn’t say it was contaminated, so I thought it was OK to eat it.”)

It’s not that the technology is unbelievable, or implausible. But the fact that it could work does not mean that this particular device does work.

There are many questions that I’d want addressed.

The IR scan has many obvious limitations. I’m pretty sure it won’t work on frozen food, nor through foil or other IR-opaque packaging. I suspect it won’t work for most cooked foods. And I don’t know what kinds of errors it may be vulnerable to. (Dust? Water on the lens? Sugar water on the packaging? Fingers in the way of the scan? Deliberate hacking?)

The unspecified algorithms are surely some form of machine learning. What exactly were they taught? What are the limits of the data? If it knows about apples, what about pears?

For example, there are hundreds of varieties of apples. Have they sampled all of them? How well does the system deal with a new variant? What happens if I point it at a kiwi fruit and ask it if this “apple” is fresh?

The basic learning task is not just a chemical analysis, but also relating the chemistry to the quality of the produce. What heuristics are used, and how valid are they? In addition to variation in produce, how much variation is found among people’s tastes? How is this accounted for in the algorithm? Just how useful are the results?

And, of course, whatever it does, how reliable and accurate is it?

Having made a living as a software guy, I know very well that demos are hardly the same thing as actual validation.

Finally, I thought it was kind of funny that the motivating problem was that the food in the local store tastes blah, and “he resigned himself to occasionally buying tasteless produce or traveling 30 miles to a grocer he discovered that he could trust.”

This device addresses this lack of trust by…I’m not sure. I guess it lets you avoid the stuff you don’t want, though it doesn’t do much to get better food into your local store.

But the funny part is that the lack of trust in the store is solved by an app that does a lot of fancy stuff–that we are asked to trust on faith.

At the moment, it’s hard to know just how well this “magical” product works. Is this real, or cargo cult? I can’t say, and that means I must assume it doesn’t work until proven otherwise.

It is more than a little worrying that venture capital seems to have replaced openly published research as the method for validating technology. We know that will lead to disaster.

  1. Consumer Physics, “SCiO: The world’s first pocket size molecular sensor”. 2017.
  2. Tekla S. Perry, What Happened When We Took the SCiO Food Analyzer Grocery Shopping, in IEEE Spectrum – View From The Valley. 2017.


CoralNet: Computer Beats Crowd

In recent years there has been an efflorescence of “crowdsourced science”, a la Galaxy Zoo and its large family of descendants.

These projects all share one basic rationale: some tasks are hard for computers, yet easy for people. Galaxy Zoo is an early, successful, and influential project that asks people to perform several image classification tasks, such as categorizing the shapes of galaxies. The idea is that computer algorithms are ineffective at this task compared to humans, and there is so much data to process that professional astronomers cannot even dream of looking at it all.

Their studies indicate that large numbers of untrained people (i.e., volunteers from the Internet) provide data that is useful and comparable to alternatives such as machine processing. (Crowdsourcing may also give faster turnaround.) Similar methods have been applied to a variety of image processing tasks in a number of domains, including the interpretation of handwriting in old documents.
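The aggregation behind such projects is often little more than majority voting per item, sketched here. (Real pipelines such as Galaxy Zoo’s weight volunteers and model label difficulty, but the core idea is the same.)

```python
# Minimal sketch of crowdsourced-label aggregation: take the majority
# label per item and keep the vote share as a rough confidence score.
from collections import Counter

def aggregate(votes):
    """votes: list of labels from different volunteers for one item."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

label, conf = aggregate(["spiral", "spiral", "elliptical", "spiral"])
print(label, conf)  # spiral 0.75
```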

In all these cases, the whole enterprise hinges on the claim that human processing beats the computer, at least at the price point (which is generally around zero dollars). These claims are clearly contingent on the specific task, and on the state of technology (and funding).

For example, recent advances in face recognition algorithms (driven, no doubt, by well-financed national security needs) have dramatically changed this calculus in the analysis of digital imagery of human faces. Low-cost, off-the-shelf software can probably beat human performance in most cases.

This is actually one of the continuing technological stories of the early twenty-first century: the development of algorithms that meet and exceed human perception and judgment. Part of the “big” news in “Big Data” is the ways it can outperform humans.

One example of these developments is CoralNet, from U. C. San Diego [1, 2].

It is now possible to survey large areas of coral reef quickly, generating large amounts of data. From this data, it is important to identify the types of coral and other features, in order to understand the ecology of the reef and its associated species, and to monitor changes over time. It isn’t feasible to hand-annotate this data, so automated methods are needed.

The CoralNet system annotates digital imagery of coral reefs, identifying the type of coral and state of the reef. The basic idea is to use machine learning techniques to train the computer to reproduce the classifications of human experts. How well does that work?
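Conceptually, the pipeline looks like the toy sketch below: experts label example image patches, and the system predicts the expert label for new patches. A simple nearest-neighbor rule stands in here for CoralNet’s actual feature extraction and learning machinery.

```python
# Conceptual sketch of the CoralNet approach: learn to reproduce expert
# annotations from image features. A toy 1-nearest-neighbor classifier
# stands in for the real feature extraction and learning pipeline.
import math

def train(examples):
    """examples: list of (feature_vector, expert_label) pairs."""
    return list(examples)

def predict(model, features):
    """Return the label of the training example closest to `features`."""
    return min(model, key=lambda ex: math.dist(ex[0], features))[1]

expert_points = [
    ((0.8, 0.2), "hard coral"),
    ((0.2, 0.7), "algae"),
    ((0.5, 0.5), "sand"),
]
model = train(expert_points)
print(predict(model, (0.75, 0.25)))  # hard coral
```

The features here are made-up two-dimensional points; the real system works from texture and color descriptors extracted around annotated pixels.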

The Silicon Valley approach would be to assert that they have “disrupted” coral identification, and rush out a beta. Real scientists, however, actually study the question, and publish the results.

In the case of CoralNet, there have been several studies over the past few years. For example, Oscar Beijbom and colleagues published a detailed analysis of the performance of human experts and the automated system [2]. Additional details appear in Beijbom’s thesis [1].

The study found variability among human analysts (to be expected, but often overlooked), and determined that the automated system performed comparably to human raters. These papers are a good example of the careful work needed to validate digitally automated science.
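A standard tool for this kind of comparison is Cohen’s kappa, which scores agreement between two annotators beyond what chance alone would produce, so a machine can be scored exactly as a second human expert would be. (This is a generic sketch, not the specific statistics used in [2].)

```python
# Cohen's kappa: agreement between two annotators, corrected for chance.
# kappa = (observed agreement - expected agreement) / (1 - expected).
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (observed - expected) / (1 - expected)

human   = ["coral", "coral", "algae", "sand", "coral", "algae"]
machine = ["coral", "coral", "algae", "sand", "algae", "algae"]
print(round(cohens_kappa(human, machine), 2))  # 0.74
```

Because human raters disagree with each other too, the fair benchmark is whether machine-vs-expert kappa falls within the range of expert-vs-expert kappa.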

Since the 2014 study, the software has been improved and updated. CoralNet 2 improved the speed to the point that it is 10 to 100 times faster than human classification. This speed up is significant, making data available quickly enough to understand changes to the reefs. Combined with automated data collection (e.g., autonomous submarines), it is now possible to continuously monitor reefs around the world.

It seems obvious to me that crowdsourcing a la Zooniverse would not be warranted in this case. The computer processing is now good enough that human raters, even thousands of them, are not needed.

I note that even in the domain of ocean ecology, there are many examples of similar analysis tasks. For example, the “Seafloor Explorer” project crowdsourced the identification of images of the seafloor, identifying materials and species. This is basically the same task that CoralNet automates, though looking for different targets.

I’m pretty sure that machine learning algorithms could match or exceed the crowdsourced results of Seafloor Explorer. (It may or may not be feasible to develop such a system, of course.)

The point is that crowdsourcing science is not a panacea, nor are there any problems that, for certain and always, will be done better by Internet crowds. My own suspicion is that crowdsourcing (at least the “Galaxy Zoo” kind) will fade within a decade, as machine learning conquers one perceptual task after another.

And since I brought it up, I’ll also note the challenges these techniques pose for reproducibility. Human crowdsourcing is, by definition, impossible to reproduce exactly. Classification via machine learning may be difficult to reproduce as well, especially if the algorithm is updated with new examples.

  1. Oscar Beijbom, Automated Annotation of Coral Reef Survey Images, Ph.D. Thesis in Computer Science. 2015, University of California, San Diego: San Diego.
  2. Oscar Beijbom, Peter J. Edmunds, Chris Roelfsema, Jennifer Smith, David I. Kline, Benjamin P. Neal, Matthew J. Dunlap, Vincent Moriarty, Tung-Yung Fan, Chih-Jui Tan, Stephen Chan, Tali Treibitz, Anthony Gamst, B. Greg Mitchell, and David Kriegman, Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation. PLOS ONE, 10 (7):e0130312, 2015.