Category Archives: Technology

Drones Counting Ducks Down Under

One of the oldest citizen science projects is bird watching.  For more than a century, enthusiastic birders have amassed vast datasets of avian sightings.  To date, technology has enhanced but not displaced this proud nerd army. Photography, GPS, and databases have greatly improved the data from birders, but nothing has replaced boots on the ground.


This month, a research project at the University of Adelaide reported a demonstration of a UAV-mounted imaging system that, for once, beats human birders [1].

Specifically, the study compared the accuracy of human observers and a small survey quadcopter at counting birds in a nesting colony.  To establish a known ground truth, the tests used artificial colonies, populated by hundreds of simulated birds: repurposed decoys laid out to mimic actual nesting sites.

They dubbed it “#EpicDuckChallenge”, though it doesn’t seem especially “epic” to me.

The paper compares the accuracy of human counters on the ground, human counts from the aerial imagery, and computer analysis of the aerial imagery.

First of all, the results show a pretty high error rate for the human observers, even for the experienced ecologists in the study. Worse, the errors are widely scattered, which suggests that estimates of population change over time will be unreliable.

The study found that counting from the UAV’s aerial photos is much, much more accurate than counting by humans on the ground. The UAV imagery has the advantage of being overhead (rather than at human eye level), and it also holds still for analysis.

However, counting birds in an image is still tedious and error-prone.  The study shows that machine learning can match or beat humans counting from the same images.

Together, the combination of low-cost aerial images and effective image processing algorithms gave very accurate results, with low variability. This means that this technique would be ideal for monitoring populations over time, because repeated flyovers would be reliably counted.
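
To make the idea concrete, here is a minimal sketch of one way to count bird-sized blobs in an overhead photo, using simple thresholding and connected components in Python.  To be clear, this is not the pipeline from the paper, and the file name and area bounds are invented for illustration.

    # Minimal sketch: count bird-sized bright blobs in an aerial photo.
    # Not the paper's method; the file name and area bounds are invented.
    import cv2

    def count_birds(image_path, min_area=50, max_area=500):
        """Count connected bright regions plausibly sized for one bird."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Otsu thresholding separates light decoys from darker ground.
        _, binary = cv2.threshold(img, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Label connected components, keep those in a bird-sized range.
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        return sum(1 for i in range(1, n_labels)  # label 0 = background
                   if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area)

    print(count_birds("colony_flyover.jpg"))

A real pipeline would need a trained classifier to reject shadows, shrubs, and other clutter, which is exactly the limitation discussed below.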


This study has its limitations, of course.

For one thing, the specific task used is pretty much the best possible case for such an aerial census.  Unrealistically ideal, if you ask me.

Aside from the perfect observing conditions, the colony is easily visible (on an open, flat, uniform surface), and the ‘birds’ are completely static.  In addition, the population is uniform (only one species), and the targets are not camouflaged in any way.

How many real-world situations are this favorable?  (Imagine using a UAV in a forest, at night, or along a craggy cliff.)

To the degree that the situation is less than perfect, the results will suffer.  In many cases, the imagery will be poorer, and the objects to be counted less distinct and recognizable. Also, if there are multiple species, very active birds, or visual clutter such as shrubs, it will be harder to distinguish the individuals to be counted.

For that matter, I’m not sure how easy it will be to acquire training sets for the recognizer software.  This study had a very uniform nesting layout, so it was easy to get a representative subsample to train the algorithm.  But if the nests are sited less uniformly, and mixed with other species and visual noise, it may be difficult to train the algorithm, at least without much larger samples.


Still, this technique is certainly a good idea when it can be made to work.  UAVs are a great “force multiplier” for ecologists, giving each scientist much greater range. Properly designed (by which I mean quiet) UAVs should be pretty unobtrusive, especially compared to human observers.

The same basic infrastructure can be used for many kinds of surface observations, not just bird colonies.  It seems likely that UAV surveying will be a common scientific technique in the next few decades.

The image analysis also has the advantage that it can be repeated and improved.  If the captured images are archived, then it will always be possible to go back with improved analytics and make new assessments from the samples.  In fact, image archives are becoming an important part of the scientific record, and a tool for replication, cross validation, and data reuse.


  1. Jarrod C. Hodgson, Rowan Mott, Shane M. Baylis, Trung T. Pham, Simon Wotherspoon, Adam D. Kilpatrick, Ramesh Raja Segaran, Ian Reid, Aleks Terauds, and Lian Pin Koh, Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution, 2018. http://dx.doi.org/10.1111/2041-210X.12974
  2. University of Adelaide, #EpicDuckChallenge shows we can count on drones, in University of Adelaide – News. 2018. https://www.adelaide.edu.au/news/news98022.html


Grownups Get Real About Blockchains

The grown-ups have found out about blockchains, and are starting to make realistic assessments of the technology.  As usual, they are sucking all the fun out of things.

The US National Institute of Standards and Technology (NIST) issued an informative report, which is an excellent overview of blockchain technology [2].  Much of the report is straightforward, but NIST is careful to point out important technical limitations.

“There is a high level of hype around the use of blockchains, yet the technology is not well understood. It is not magical; it will not solve all problems. As with all new technology, there is a tendency to want to apply it to every sector in every way imaginable.” ([2], p. 6)

I think the most important section of the report is Chapter 9, “Blockchain Limitations and Misconceptions”.  The authors explain many basic points, including the ambiguous nature of “who controls the blockchain” (everyone is equal, but devs are more equal than others), and the hazy accountability of potentially malicious users.

Technically, the blockchain has limited capacity, especially storage. Overall, it is difficult to estimate the resource usage of a blockchain because it is implemented on many independent nodes.

Most important of all, they parse the Nakamotoan concept of “trust”.  It is true that there is no third party that must be trusted (at least in permissionless blockchains), but there are many other elements that must be trusted, including the basic fairness of the network and the quality of the software (!).

The report also calls attention to the fact that blockchains do not implement either key management or identity management. Identity is masked behind cryptographic keys, and if you lose your key, there is no way to either fix it or revoke it.  These are either features or bugs, depending on what you are trying to do and the kinds of risks you can stand.
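
A toy illustration of that point, using only the Python standard library.  To be clear, everything here is invented for illustration, and the hash-based key derivation is just a stand-in for real elliptic-curve math:

    # Toy sketch: on a blockchain, 'identity' is just key material.
    # The sha256 derivation stands in for real elliptic-curve math.
    import hashlib
    import secrets

    private_key = secrets.token_bytes(32)              # the only credential
    public_key = hashlib.sha256(private_key).digest()  # stand-in derivation
    address = hashlib.sha256(public_key).hexdigest()[:40]

    print("address:", address)
    # Lose private_key, and no authority can re-derive it or reassign
    # the address: anything it controls is unreachable, forever.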

Overall, many of the limitations described by NIST are end-to-end requirements:  no matter how a blockchain works, it only addresses part of the total, end-to-end transaction.

“The use of blockchain technology is not a silver bullet.” ([2], p. 7)


On the same theme, Bailey Reutzel reports in Coindesk on an IBM briefing on the end-to-end engineering of blockchain systems [1].  The talk itself is not published, but Coindesk reports that IBM warns potential customers about the end-to-end security challenges of using their Hyperledger technology.

As noted many times in this blog, there have been many hacks and oopsies in the cryptocurrency world, and most if not all of them have nothing to do with the blockchain and its protocols.

IBM approaches the challenge with a thorough threat analysis that looks at the whole system. This is, in fact, exactly what you need to do with a conventional non-blockchain system, no?

It seems clear that whatever a blockchain may achieve, it doesn’t “disrupt” IBM’s role as a heavyweight business consultant.

In the Coindesk notes, there is a hint at one more interesting point to think about: the global extent and “infinite” lifetime of the blockchain. Nominally, the blockchain maintains every transaction ever recorded, forever.  This means that, unlike most data systems, a worst-case breach somewhere in the system might expose data far and wide, back to the beginning of time. Whew!


Still, both NIST and IBM agree that there are potential use cases for the blockchain that are worth the trouble, including public records and supply chains. (And IBM will be glad to show you how to do it.)

Blockchains may be inscrutable, but they ain’t magic.


  1. Bailey Reutzel (2018) IBM Wants You to Know All the Ways Blockchain Can Go Wrong. Coindesk, https://www.coindesk.com/ibm-wants-know-ways-blockchain-can-go-wrong/
  2. Dylan Yaga, Peter Mell, Nik Roby, and Karen Scarfone, Blockchain Technology Overview. The National Institute of Standards and Technology (NIST) Draft NISTIR 8202, Gaithersburg, MD, 2018. https://csrc.nist.gov/CSRC/media/Publications/nistir/8202/draft/documents/nistir8202-draft.pdf


Cryptocurrency Thursday

Singaporean Robot Swans

Evan Ackerman calls attention to a project at the National University of Singapore that is deploying robotic water quality sensors designed to look like swans [1].

The robots cruise surface reservoirs, monitoring the water chemistry and streaming the data to the cloud via wifi as it is collected.  (Singapore has wifi everywhere!)  The robots are encased in imitation swans, which is intended ‘to be “aesthetically pleasing” in order to “promote urban livability.”’ I.e., to look nice.
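
The reporting loop is presumably something like this sketch; to be clear, the endpoint URL and the read_sensors() interface are invented for illustration, not NUSwan’s actual API.

    # Hypothetical telemetry loop for a water quality robot: sample the
    # sensors, then post each reading to a cloud endpoint over wifi.
    # The URL and read_sensors() are invented, not the NUSwan API.
    import json
    import time
    import urllib.request

    ENDPOINT = "https://example.org/nuswan/readings"  # placeholder

    def read_sensors():
        """Stand-in for the real sensor interface."""
        return {"ph": 7.2, "turbidity_ntu": 3.1, "chlorophyll_ug_l": 4.8}

    while True:
        reading = {"timestamp": time.time(), **read_sensors()}
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # upload one reading
        time.sleep(60)               # sample once a minute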

This is obviously a nice bit of work, and a good start.  The fleet of autonomous robots can maneuver to cover a large area, and concentrate on hot spots when needed, all at a reasonable cost. I expect that the datasets will be amenable to data analysis and machine learning, which can mean continuous improvement in knowledge about the water quality.

As far as the plastic swan bodies…I’m not really sold.

For starters, they don’t actually look like real swans.  They are obviously artificial swans.

Whether plastic swans are actually more aesthetically pleasing than other possible configurations seems like an open question to me.  I tend to think that a nicely designed robot might be just as pleasing as a fake swan, or even better.  And it would look like a water quality monitor, which is a good thing.

Perhaps this is an opportunity to collaborate with artists and architects to develop some attractive robots that say “I’m keeping your water safe.”


  1. Evan Ackerman, Bevy of Robot Swans Explore Singaporean Reservoirs, in IEEE Spectrum – Automation. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/bevy-of-robot-swans-explore-singaporean-reservoirs
  2. NUS Environmental Research Institute, New Smart Water Assessment Network (NUSwan), in NUS Environmental Research Institute – Research Tracks – Environmental Surveillance and Treatment. 2018. http://www.nus.edu.sg/neri/Research/nuswan.html


Robot Wednesday

Yet More Robot Zebrafish

It seems to be the Year of the Robot Zebrafish.  Just as our favorite lab species are so thoroughly studied that they are now being “uploaded” to silicon, the widely studied zebrafish  (Danio rerio) is being digitized.

This winter, researchers at NYU reported on a very advanced robot zebrafish, which is very literally “biomimetic”—a detailed 3D animatronic fish [1].  These kinds of models are useful for learning about how animals interact with each other.  For that to work, the model needs to look, smell, and behave just like a natural animal.  (Yes, even zebrafish can recognize a lame, unrealistic dummy.)

It’s not that difficult to create a visually accurate model, but achieving “realistic enough” behavior is very difficult.  It requires reproducing relevant motion, signals (including visual, auditory, chemical signals), and perception of relevant stimuli (again, potentially in several modalities).  Then, the model needs to act and react in real time in just the way a natural fish would.

In short, you have to really understand the fish, and create a complex real-time simulation. As the researchers note, many previous studies have only partially implemented the simulation, for instance using “open loop” control, in which the replica’s motion is pre-scripted or human-directed rather than driven by the live fish’s behavior.  This new research is “closed loop”, and also allows 3D motion of the model.
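
In outline, one iteration of such a closed loop might look like the sketch below.  The positions would come from the 3D vision system; the gain, distance, and interfaces here are hypothetical, and the paper’s actual control laws are much richer.

    # Minimal sketch of one closed-loop iteration: steer the replica to
    # hold a shoaling distance from the live fish. The constants and
    # interfaces are hypothetical, not the paper's control law.
    import numpy as np

    GAIN = 0.3        # fraction of the error corrected per step
    SHOAL_DIST = 0.1  # meters: follow the fish, but do not collide

    def control_step(replica_pos, fish_pos):
        """Move the replica toward a point SHOAL_DIST short of the fish."""
        offset = fish_pos - replica_pos
        dist = np.linalg.norm(offset)
        if dist < SHOAL_DIST:      # already close enough: hold position
            return replica_pos
        target = fish_pos - (offset / dist) * SHOAL_DIST
        return replica_pos + GAIN * (target - replica_pos)

    # Each camera frame: positions (in meters) from the 3D tracker.
    replica = np.array([0.0, 0.0, 0.10])
    fish = np.array([0.4, 0.2, 0.15])
    replica = control_step(replica, fish)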

The apparatus is an aquarium with a digitally controlled zebrafish, where natural fish can swim and interact with the robot.  The research employs 3D printed model fish, a digitally controlled mechanical system (which is quite similar to the mechanism of a 3D printer or router), and 3D computer vision.

(Figure: Sketch of the experimental apparatus, showing the experimental tank, robotic platform, lighting, cameras, and holding frame. For clarity, the black curtain on the front of the frame is omitted, and the focal fish and the robotic stimulus are magnified. From [1])

The first studies investigate the basic question of how effective closed loop control may be.  We all “know” that 3D, closed loop simulation will be “more fishlike”, but did anyone check with the zebrafish?

In the event, the results showed that the full 3D closed loop was not necessarily as “authentic” as a 2D closed loop, at least in the limited conditions in the study. One factor is that the closed loop motion was partly based on recordings of natural behavior, which, wait for it, seemed natural to the fish.  But overall, the robot was never mistaken for a real fish in any condition.

“Although the new robotic platform contributed a number of hardware and software advancements for the implementation of biomimetic robotic stimuli, the larger shoaling tendency of zebrafish toward live conspecifics suggest that the replica was not perceived as conspecifics in any condition.” ([1], p. 12)

The researchers identify a number of limitations of the apparatus which probably detracted from the realism. Basically, the equipment used in this experiment probably wasn’t capable of mimicking natural motion precisely enough.  In addition, I would say that there is still much to be learned about what cues are important to the zebrafish.

However, this technology made it possible to quickly and precisely experiment with the real fish.  I’m confident that with improvements, this approach will enable systematic investigation of these questions.


  1. Changsu Kim, Tommaso Ruberto, Paul Phamduy, and Maurizio Porfiri, Closed-loop control of zebrafish behaviour in three dimensions using a robotic stimulus. Scientific Reports, 8(1):657, 2018. https://doi.org/10.1038/s41598-017-19083-2


Worm Brain Uploaded to Silicon?

Ever since the first electronic computers, we’ve been fascinated with the idea that a sufficiently accurate simulation of a nervous system could recreate the functions of a brain, and thereby recreate the mental experience of a natural brain inside a machine.  If this works, then it might be possible to “upload” our brain (consciousness?) into a machine.

This staple of science fiction hasn’t happened yet, not least because we have pretty limited understanding of how the brain works, or what you’d need to “upload”.  And, of course, this dream rests on naïve notions of “consciousness”.  (Hint: until we know the physical basis for human memory, we don’t know anything at all about the physical basis of “consciousness”.)

Neural simulations are getting a lot better, though, to the point where simulations have reproduced (at least some aspects of) the nervous system of simple organisms, including perennial favorites C. elegans (roundworms) and Drosophila (fruit flies). It would be possible to “upload” the state of a worm or fly into a computer, and closely simulate how the animal would behave.  Of course, these simple beasts have almost no “state” to speak of, so the simulations are not necessarily interesting.

This winter a research group from Technische Universität Wien report a neat study that used a detailed emulation of the C. elegans nervous system as an efficient controller for a (simulated) robot [2].

The key trick is that they selected a specific functional unit of the worm’s nervous system, the tap-withdrawal (TW) circuit.  In a worm, this circuit governs a reflex movement away from a touch to the worm’s tail. This circuit was adapted to a classical engineering problem, controlling an inverted pendulum, which involves ‘reflexively’ adjusting to deviations from vertical.  The point is that the inverted pendulum problem is very similar to the TW problem.

(Figure: In real life, the worm reacts to touch – and the same neural circuits can perform tasks in the computer. From [1])

The study showed that this worm circuit achieves equivalent performance to other (human designed) controllers, using the highly efficient architecture naturally evolved in the worms.  Importantly, the natural neural system learned to solve the control problem without explicit programming.
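
For intuition, here is a toy version of the task, with a two-unit rectified “reflex” standing in for the TW circuit.  To be clear, this is not the paper’s neural model: their parameters were found by search-based reinforcement learning, while the gains here are hand-picked so that the toy balances.

    # Toy inverted pendulum balanced by a reflex-style controller: two
    # opposing rectified 'sensory' drives push against the lean, loosely
    # analogous to tap-withdrawal. Hand-picked gains, not learned ones.
    import math

    g, L, dt = 9.81, 1.0, 0.01   # gravity, pendulum length, time step

    def reflex_torque(theta, omega, gain=40.0):
        """Push opposite to the lean, like withdrawing from a tap."""
        drive = theta + 0.2 * omega      # lean plus a bit of velocity
        lean_right = max(drive, 0.0)     # rectified, neuron-like units
        lean_left = max(-drive, 0.0)
        return gain * (lean_left - lean_right)

    theta, omega = 0.05, 0.0             # small initial lean (radians)
    for _ in range(1000):                # simulate ten seconds
        accel = (g / L) * math.sin(theta) + reflex_torque(theta, omega)
        omega += accel * dt
        theta += omega * dt

    print(f"final lean: {theta:+.5f} rad")  # ~0 if the reflex balances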

This is an interesting approach not because the worm brain solved a problem that hadn’t been solved in other ways.  It is interesting because the solution is a very effective (and probably optimal) program based on a design developed through natural evolution.

The general principle would be that naturally evolved neural circuits can be the basis for designing solutions to engineering problems.

It’s not clear to me how easy this might be to apply to other, more complicated problems.  It is necessary to identify (and simulate) isolated neural circuits and their functions, and map them to problems.  In most cases, by the time we understand these mappings, we probably have efficient solutions, just like the TW-to-inverted-pendulum mapping in this study.

We’ll see what else they can do with this approach.

I also thought it was quite cool to see how well this kind of “upload” can be made to work with standard, easily available software.  They didn’t need any specialized software or equipment.


  1. Florian Aigner, Worm Uploaded to a Computer and Trained to Balance a Pole, in TU Wien – News. 2018. https://www.tuwien.ac.at/en/news/news_detail/article/125597/
  2. Mathias Lechner, Radu Grosu, and Ramin M. Hasani, Worm-level Control through Search-based Reinforcement Learning. arXiv, 2017. https://arxiv.org/abs/1711.03467


Cognitive Dissonance, Thy Name Is Ethereum

Ethereum was awarded the designation of CryptoTulip of 2017, and no small part of that distinction was due to its ongoing efforts to deal with the catastrophic results of buggy “smart contracts”.

The DAO disaster of 2016 was “fixed” via an ad hoc hard fork that had the tiny side effect of creating a second, rump Ethereum currency.  Since that time, Ethereum has done several more forks to respond to problems.  And in 2017 a little oopsie resulted in millions of dollars’ worth of Ether being locked in inaccessible accounts.  This goof has not yet been addressed by a hard fork or any other technical fix.

The underlying problem, of course, is that Nakamotoan cryptocurrencies are designed to be “write once”, with the ledger being a permanent, unchangeable record.  This feature is intended to prevent “the man” from rewriting history to cheat you out of your money.  (This is a key part of the Nakamotoan definition of a “trustless” system.)

Ethereum has implemented executable contracts on top of this “immutable” data, which is where a lot of the problems come from.  Software is buggy, and “smart contracts” inevitably have errors or just plain produce incorrect or unintended results, such as theft.  But there is no way to correct the unmodifiable ledger, except by violating the write-once principle, i.e., a hard fork to rewrite history.
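
The bind is easy to see in a toy hash chain, where each block commits to the hash of its predecessor, so “fixing” any old record invalidates every block after it.  (This is a minimal sketch of the general principle, not Ethereum’s actual block format.)

    # Toy hash chain: editing history breaks every subsequent link,
    # which is why a 'fix' requires a fork that rewrites the chain.
    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def make_chain(transactions):
        chain, prev = [], "0" * 64
        for tx in transactions:
            block = {"tx": tx, "prev": prev}
            prev = block_hash(block)
            chain.append(block)
        return chain

    def verify(chain):
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev:
                return False
            prev = block_hash(block)
        return True

    chain = make_chain(["A pays B 5", "B pays C 2", "C pays D 1"])
    print(verify(chain))             # True
    chain[0]["tx"] = "A pays B 500"  # 'correct' an old transaction...
    print(verify(chain))             # False: all later blocks now invalid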

True Nakamotoists deeply believe in the unchangeable ledger not only as an engineering design but as the logical foundation of the new, decentralized world economy.  But Ether-heads have (mostly) acquiesced to multiple ad hoc forks to work around grievous bugs, which to my mind completely trash the whole point of the Nakamotoan ledger. The CryptoTulip Award citation noted “the tremendous cognitive dissonance Ethereum has engendered”.


It is very interesting, therefore, to see current discussions proposing to regularize this recovery process [2]. The idea, of course, is to reduce the risk and delay of ad hoc fixes with a more open proposal and review process.  Unfortunately, this process publicly endorses the very practice that the ledger is supposed to preclude.

Unsurprisingly, this proposal has been controversial, for many obvious reasons.

In addition to the obvious problem with the whole idea of ever rewriting the ledger, the Ethereum community is dealing with questions about how “decentralized” decision making should work.

Theoretically, anyone on the Internet can have a stake in decisions about Ethereum software and protocols.  However, in the crypto world—and “open source” in general—some people are more equal than others.  Active programmers, AKA, “developers”, have influence and often veto power over technical developments.  And operators of large mining operations have veto power in their ability to adopt or reject particular features.

In the earlier ad hoc forks, the devs decided and then implemented the fork. There was little discussion, and the only alternative was the nuclear option of continuing to use the deprecated fork—which many people did. The result was two Ethereums, further muddled by additional changes and forks.

The proposed new process requires public discussion of forks, possibly including video debates. Critics complain (with good reason) that this is likely to introduce “politicians” into the process. I would say that it also will create factions and partisan maneuvering.  It is not inconceivable that (gasp) vote buying and other corruption might arise.

In short, this public decision-making process will be openly political.  What a development. The governance of Ethereum is discovered to be political!

Politics (from Greek πολιτικά, “affairs of the cities”) is the process of making decisions that apply to members of a group.

The explicit acknowledgement of human decision making creates a tremendous cognitive dissonance with the Nakamotoan concept of a “trustless” system, where all decisions are by “consensus”.  (In practice, “consensus” means “if you disagree, you can split off your own code”.)

But it also clashes with the core Ethereum idea of “smart contracts”, which are imagined to implement decentralized decision making with no human involvement. The entire idea of the DAO was to create an “unstoppable” enterprise, where all decisions were implemented by apolitical code.  When Ethereum forked to undo the DAO disaster, it essentially undermined the basic rationale for “smart contracts”, and for Ethereum itself.

And now, they want to have humans involved in the decision making!

The very essence of this dissonance is captured in a quote from Rachel Rose O’Leary:

“For now, no further action will likely be taken on the proposal until ethereum’s process for accepting code changes, detailed in EIP-1, has been clarified.” [1]

In other words, EIP-867 is so completely inconsistent with the decision-making process it isn’t even possible to talk about it.  I guess they will continue to muddle through, ad hoc, violating the spirit of Nakamotoism.

I think that Ethereum is managing to radically “disrupt” itself and the whole concept of Nakamotoan cryptocurrency.


  1. Rachel Rose O’Leary (2018) Ethereum Devs Call for Public Debate on Fund Recovery. Coindesk, https://www.coindesk.com/ethereum-devs-call-public-debate-fund-recovery/
  2. Dan Phifer, James Levy, and Reuben Youngblom, Standardized Ethereum Recovery Proposals (ERPs). Ethereum Improvement Proposal, 2018. https://github.com/ethereum/EIPs/pull/867
  3. Rachel Rose O’Leary (2018) Ethereum Developer Resigns as Code Editor Citing Legal Concerns. Coindesk,  https://www.coindesk.com/ethereum-developer-resigns-as-code-editor-citing-legal-concerns/


Cryptocurrency Thursday