
Drones Counting Ducks Down Under

One of the oldest citizen science projects is bird watching.  For more than a century, enthusiastic birders have amassed vast datasets of avian sightings.  To date, technology has enhanced but not displaced this proud nerd army. Photography, GPS, and databases have vastly improved the data from birders, but nothing has replaced boots on the ground.


This month, a research project at the University of Adelaide reported a demonstration of a UAV-mounted imaging system that, for once, beats human birders [1].

Specifically, the study compared the accuracy of human observers with that of a small survey quadcopter on the task of counting birds in a nesting colony.  In order to have a known ground truth, the tests used artificial colonies, populated by hundreds of simulated birds: repurposed decoys laid out to mimic actual nesting sites.

They dubbed it “#EpicDuckChallenge”, though it doesn’t seem especially “epic” to me.

The paper compares the accuracy of human counts made on the ground, human counts from the aerial imagery, and computer analysis of the aerial imagery.

First of all, the results show a pretty high error for the human observers, even for the experienced ecologists in the study. Worse, the error is pretty scattered, which suggests that estimates of population change over time will be unreliable.

The study found that using aerial photos from the UAV is much, much more accurate than humans on the ground. The UAV imagery has the advantage of being overhead (rather than human eye level), and also holds still for analysis.

However, counting birds in an image is still tedious and error prone.  The study shows that machine learning can tie or beat humans counting from the same images.

Together, the combination of low-cost aerial images and effective image processing algorithms gave very accurate results, with low variability. This means that this technique would be ideal for monitoring populations over time, because repeated flyovers would be reliably counted.
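To give a sense of what the algorithmic counting involves, here is a minimal sketch of a counting pipeline (my own illustration, not the authors’ code; the file name and size thresholds are made-up assumptions).  With identical decoys on a flat, uniform background, simple thresholding plus connected-component counting already goes a long way:

    import cv2  # OpenCV

    # Load an aerial orthophoto of the colony (hypothetical file name).
    img = cv2.imread("colony_orthophoto.jpg", cv2.IMREAD_GRAYSCALE)

    # Separate bird-sized blobs from the background (Otsu threshold).
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Remove speckle so each decoy becomes a single blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Count connected components, filtering by plausible bird size in pixels.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    birds = [i for i in range(1, n_labels)  # label 0 is the background
             if 50 < stats[i, cv2.CC_STAT_AREA] < 5000]
    print("Estimated count:", len(birds))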


This study has its limitations, of course.

For one thing, the specific task used is pretty much the best possible case for such an aerial census.  Unrealistically ideal, if you ask me.

Aside from the perfect observing conditions, the colony is easily visible (on an open, flat, uniform surface), and the ‘birds’ are completely static.  In addition, the population is uniform (only one species), and the targets are not camouflaged in any way.

How many real-world situations are this favorable?  (Imagine using a UAV in a forest, at night, or along a craggy cliff.)

To the degree that the situation is less than perfect, the results will suffer.  In many cases, the imagery will be poorer, and the objects to be counted less distinct and recognizable. Also, if there are multiple species, very active birds, or visual clutter such as shrubs, it will be harder to distinguish the individuals to be counted.

For that matter, I’m not sure how easy it will be to acquire training sets for the recognizer software.  This study had a very uniform nesting layout, so it was easy to get a representative subsample to train the algorithm.  But if the nests are sited less uniformly, and mixed with other species and visual noise, it may be difficult to train the algorithm, at least without much larger samples.


Still, this technique is certainly a good idea when it can be made to work.  UAVs are a great “force multiplier” for ecologists, giving each scientist much greater range. Properly designed (by which I mean quiet) UAVs should be pretty unobtrusive, especially compared to human observers.

The same basic infrastructure can be used for many kinds of surface observations, not just bird colonies.  It seems likely that UAV surveying will be a common scientific technique in the next few decades.

The image analysis also has the advantage that it can be repeated and improved.  If the captured images are archived, then it will always be possible to go back with improved analytics and make new assessments from the samples.  In fact, image archives are becoming an important part of the scientific record, and a tool for replication, cross validation, and data reuse.


  1. Jarrod C. Hodgson, Rowan Mott, Shane M. Baylis, Trung T. Pham, Simon Wotherspoon, Adam D. Kilpatrick, Ramesh Raja Segaran, Ian Reid, Aleks Terauds, and Lian Pin Koh, Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution, 2018. http://dx.doi.org/10.1111/2041-210X.12974
  2. University of Adelaide, #EpicDuckChallenge shows we can count on drones, in University of Adelaide – News. 2018. https://www.adelaide.edu.au/news/news98022.html


Yet More Robot Zebrafish

It seems to be the Year of the Robot Zebrafish.  Just as our favorite lab species are so thoroughly studied that they are now being “uploaded” to silicon, the widely studied zebrafish  (Danio rerio) is being digitized.

This winter researchers at NYU report on a very advanced robot zebrafish, which is very literally “biomimetic”—a detailed 3D animatronic fish.  These kinds of models are useful for learning about how animals interact with each other.  For that purpose, the model needs to look, smell, and behave just like a natural animal.  (Yes, even zebrafish can recognize a lame, unrealistic dummy.)

It’s not that difficult to create a visually accurate model, but achieving “realistic enough” behavior is very difficult.  It requires reproducing relevant motion, signals (including visual, auditory, chemical signals), and perception of relevant stimuli (again, potentially in several modalities).  Then, the model needs to act and react in real time in just the way a natural fish would.

In short, you have to really understand the fish, and create a complex real-time simulation. As the researchers note, many previous studies have implemented the simulation only partially, for instance using “open loop” control, i.e., behavior that is preprogrammed or human-directed rather than responsive to the live fish.  This new research is “closed loop”, and also allows 3D motion of the model.

The apparatus is an aquarium with a digitally controlled zebrafish, where natural fish can swim and interact with the robot.  The research employs 3D printed model fish, a digitally controlled mechanical system (which is quite similar to the mechanism of a 3D printer or router), and 3D computer vision.

Sketch of the experimental apparatus. The drawing shows the experimental tank, robotic platform, lightings, cameras, and holding frame. For clarity, the black curtain on the front of the frame is omitted and the focal fish and the robotic stimulus are magnified. From [1]
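To see what “closed loop” means operationally, here is a minimal sketch of one control cycle (my own guess at the flow; the camera, gantry, and behavior-model interfaces are hypothetical stand-ins, not the authors’ code):

    import time

    def closed_loop_trial(camera, gantry, behavior_model, duration_s=600, hz=10):
        """Track the live fish, choose the replica's response, move the replica."""
        dt = 1.0 / hz
        for _ in range(int(duration_s * hz)):
            fish_xyz = camera.track_fish()                 # stereo vision -> (x, y, z)
            target_xyz = behavior_model.respond(fish_xyz)  # e.g., approach or flee
            gantry.move_to(target_xyz)                     # 3D-printer-style actuation
            time.sleep(dt)

An open-loop trial would simply replace behavior_model.respond() with a prerecorded trajectory that ignores the live fish entirely.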

The first studies investigate the basic question of how effective closed loop control may be.  We all “know” that 3D, closed loop simulation will be “more fishlike”, but did anyone check with the zebrafish?

In the event, the results showed that the full 3D closed loop was not necessarily as “authentic” as a 2D closed loop, at least in the limited conditions in the study. One factor is that the closed loop motion was partly based on recordings of natural behavior, which, wait for it, seemed natural to the fish.  But overall, the robot was never mistaken for a real fish in any condition.

“Although the new robotic platform contributed a number of hardware and software advancements for the implementation of biomimetic robotic stimuli, the larger shoaling tendency of zebrafish toward live conspecifics suggests that the replica was not perceived as conspecifics in any condition.” ([1], p. 12)

The researchers identify a number of limitations of the apparatus which probably detracted from the realism. Basically, the equipment used in this experiment probably wasn’t capable of mimicking natural motion precisely enough.  In addition, I would say that there is still much to be learned about what cues are important to the zebrafish.

However, this technology made it possible to quickly and precisely experiment with the real fish.  I’m confident that with improvements, this approach will enable systematic investigation of these questions.


  1. Changsu Kim, Tommaso Ruberto, Paul Phamduy, and Maurizio Porfiri, Closed-loop control of zebrafish behaviour in three dimensions using a robotic stimulus. Scientific Reports, 8(1):657, 2018. https://doi.org/10.1038/s41598-017-19083-2


Worm Brain Uploaded to Silicon?

Ever since the first electronic computers, we’ve been fascinated with the idea that a sufficiently accurate simulation of a nervous system could recreate the functions of a brain, and thereby recreate the mental experience of a natural brain inside a machine.  If this works, then it might be possible to “upload” our brain (consciousness?) into a machine.

This staple of science fiction hasn’t happened yet, not least because we have pretty limited understanding of how the brain works, or what you’d need to “upload”.  And, of course, this dream rests on naïve notions of “consciousness”.  (Hint: until we know the physical basis for human memory, we don’t know anything at all about the physical basis of “consciousness”.)

Neural simulations are getting a lot better, though, to the point where simulations have reproduced (at least some aspects of) the nervous systems of simple organisms, including perennial favorites C. elegans (roundworms) and Drosophila (fruit flies). It would be possible to “upload” the state of a worm or fly into a computer, and closely simulate how the animal would behave.  Of course, these simple beasts have almost no “state” to speak of, so the simulations are not necessarily interesting.

This winter a research group from Technische Universität Wien report a neat study that used a detailed emulation of the C. elegans nervous system as an efficient controller for a (simulated) robot [2].

The key trick is that they selected a specific functional unit of the worm’s nervous system, the tap-withdrawal (TW) circuit.  In a worm, this circuit governs a reflex movement away from a touch to the worm’s tail. This circuit was adapted to a classical engineering problem, controlling an inverted pendulum, which involves ‘reflexively’ adjusting to deviations from vertical.  The point is that the inverted pendulum problem is very similar to the TW problem.
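To make the analogy concrete, here is the inverted pendulum (cart-pole) problem with a hand-tuned reflex controller standing in for the TW circuit (a minimal sketch with made-up gains; the paper instead runs the simulated TW neurons and tunes their parameters by search-based reinforcement learning):

    import math

    def cart_pole_step(theta, omega, force, dt=0.02, g=9.8, l=0.5, m=0.1, M=1.0):
        """One Euler step of the standard cart-pole pole dynamics."""
        sin_t, cos_t = math.sin(theta), math.cos(theta)
        temp = (force + m * l * omega ** 2 * sin_t) / (m + M)
        alpha = (g * sin_t - cos_t * temp) / (l * (4.0 / 3.0 - m * cos_t ** 2 / (m + M)))
        return theta + dt * omega, omega + dt * alpha

    def reflex_controller(theta, omega, k_p=40.0, k_d=5.0):
        """'Withdraw' from the disturbance: push against the lean and its rate."""
        return k_p * theta + k_d * omega

    theta, omega = 0.05, 0.0     # start slightly off vertical
    for _ in range(500):         # ten simulated seconds
        theta, omega = cart_pole_step(theta, omega, reflex_controller(theta, omega))
    print(f"final angle: {theta:.4f} rad")   # stays near zero when balanced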

In real life, the worm reacts to touch – and the same neural circuits can perform tasks in the computer. (From [1])

The study showed that this worm circuit achieves performance equivalent to other (human-designed) controllers, using the highly efficient architecture naturally evolved in the worms.  Importantly, the natural neural circuit learned to solve the control problem through search-based reinforcement learning, without explicit programming.

This is an interesting approach not because the worm brain solved a problem that hadn’t been solved in other ways.  It is interesting because the solution is a very effective (and probably optimal) program based on a design developed through natural evolution.

The general principle would be that naturally evolved neural circuits can be the basis for designing solutions to engineering problems.

It’s not clear to me how easy this might be to apply to other, more complicated problems.  It is necessary to identify (and simulate) isolated neural circuits and their functions, and map them to problems.  In most cases, by the time we understand these mappings, we probably already have efficient solutions, just like the TW-to-inverted-pendulum mapping in this study.

We’ll see what else they can do with this approach.

I also thought it was quite cool to see how well this kind of “upload” can be made to work with standard, easily available software.  They didn’t need anything super specialized in the way of software or equipment.


  1. Florian Aigner, Worm Uploaded to a Computer and Trained to Balance a Pole, in TU Wien – News. 2018. https://www.tuwien.ac.at/en/news/news_detail/article/125597/
  2. Mathias Lechner, Radu Grosu, and Ramin M. Hasani, Worm-level Control through Search-based Reinforcement Learning. arXiv, 2017. https://arxiv.org/abs/1711.03467


Cognitive Dissonance, Thy Name Is Ethereum

Ethereum was named CryptoTulip of 2017, and no small part of that distinction was due to its ongoing efforts to deal with the catastrophic results of buggy “smart contracts”.

The DAO disaster of 2016 was “fixed” via an ad hoc hard fork that had the tiny side effect of creating a second, rump Ethereum currency.  Since that time, Ethereum has done several more forks to respond to problems.  And in 2017 a little oopsie resulted in millions of dollars’ worth of Ether being locked in inaccessible accounts.  This goof has not yet been addressed by a hard fork or any other technical fix.

The underlying problem, of course, is that Nakamotoan cryptocurrencies are designed to be “write once”, with the ledger being a permanent, unchangeable record.  This feature is intended to prevent “the man” from rewriting history to cheat you out of your money.  (This is a key part of the Nakamotoan definition of a “trustless” system.)

Ethereum has implemented executable contracts on top of this “immutable” data, which is where a lot of the problems come from.  Software is buggy, and “smart contracts” inevitably have errors or just plain produce incorrect or unintended results, such as theft.  But there is no way to correct the unmodifiable ledger, except by violating the write-once principle, i.e., a hard fork to rewrite history.
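The root of the dilemma is easy to demonstrate with a toy hash-chained ledger (a minimal sketch, vastly simpler than Ethereum’s actual data structures): each block commits to its predecessor’s hash, so “fixing” any old entry in place breaks the verification of everything after it.

    import hashlib

    GENESIS = "0" * 64

    def block_hash(prev_hash, data):
        return hashlib.sha256((prev_hash + data).encode()).hexdigest()

    def build_chain(entries):
        chain, prev = [], GENESIS
        for data in entries:
            h = block_hash(prev, data)
            chain.append({"data": data, "prev": prev, "hash": h})
            prev = h
        return chain

    def verify(chain):
        prev = GENESIS
        for blk in chain:
            if blk["prev"] != prev or blk["hash"] != block_hash(prev, blk["data"]):
                return False
            prev = blk["hash"]
        return True

    chain = build_chain(["alice pays bob 10", "buggy contract locks funds", "carol pays dan 2"])
    assert verify(chain)
    chain[1]["data"] = "funds recovered"   # try to "fix" history in place
    print(verify(chain))                   # False: block 1 and everything after it fail

The only way to make the edited history check out again is to recompute every downstream hash and persuade the whole network to accept the new chain, which is exactly what a hard fork is.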

True Nakamotoists deeply believe in the unchangeable ledger not only as an engineering design but as the logical foundation of the new, decentralized world economy.  But Ether-heads have (mostly) acquiesced to multiple ad hoc forks to work around grievous bugs, which to my mind completely trash the whole point of the Nakamotoan ledger. The CryptoTulip Award citation noted “the tremendous cognitive dissonance Ethereum has engendered”.


It is very interesting, therefore, to see current discussions proposing to regularize this recovery process [2]. The idea, of course, is to reduce the risk and delay of ad hoc fixes with a more open proposal and review process.  Unfortunately, this process publicly endorses the very practice that the ledger is supposed to preclude.

This proposal has been controversial, for many obvious reasons.

In addition to the obvious problem with the whole idea of ever rewriting the ledger, the Ethereum community is dealing with questions about how “decentralized” decision making should work.

Theoretically, anyone on the Internet can have a stake in decisions about Ethereum software and protocols.  However, in the crypto world—and “open source” in general—some people are more equal than others.  Active programmers, AKA, “developers”, have influence and often veto power over technical developments.  And operators of large mining operations have veto power in their ability to adopt or reject particular features.

In the earlier ad hoc forks, the devs decided and then implemented the fork. There was little discussion, and the only alternative was the nuclear option of continuing to use the denigrated fork—which many people did. The result was two Ethereums, further muddled by additional changes and forks.

The proposed new process requires public discussion of forks, possibly including video debates. Critics complain (with good reason) that this is likely to introduce “politicians” into the process. I would say that it also will create factions and partisan maneuvering.  It is not inconceivable that (gasp) vote buying and other corruption might arise.

In short, this public decision-making process will be openly political.  What a development. The governance of Ethereum is discovered to be political!

Politics (from Greek πολιτικά, “affairs of the cities”) is the process of making decisions that apply to members of a group.

The explicit acknowledgement of human decision making creates a tremendous cognitive dissonance with the Nakamotoan concept of a “trustless” system, where all decisions are by “consensus”.  (In practice, “consensus” means “if you disagree, you can split off your own code”.)

But it also clashes with the core Ethereum idea of “smart contracts”, which are imagined to implement decentralized decision making with no human involvement. The entire idea of the DAO was to create an “unstoppable” enterprise, where all decisions were implemented by apolitical code.  When Ethereum forked to undo the DAO disaster, it essentially undermined the basic rationale for “smart contracts”, and for Ethereum itself.

And now, they want to have humans involved in the decision making!

The very essence of this dissonance is captured in a quote from Rachel Rose O’Leary:

“For now, no further action will likely be taken on the proposal until ethereum’s process for accepting code changes, detailed in EIP-1, has been clarified.” [1]

In other words, EIP-867 is so completely inconsistent with the decision-making process it isn’t even possible to talk about it.  I guess they will continue to muddle through, ad hoc, violating the spirit of Nakamotoism.

I think that Ethereum is managing to radically “disrupt” itself and the whole concept of Nakamotoan cryptocurrency.


  1. Rachel Rose O’Leary (2018) Ethereum Devs Call for Public Debate on Fund Recovery. Coindesk, https://www.coindesk.com/ethereum-devs-call-public-debate-fund-recovery/
  2. Dan Phifer, James Levy, and Reuben Youngblom, Standardized Ethereum Recovery Proposals (ERPs). Ethereum Improvement Proposal, 2018. https://github.com/ethereum/EIPs/pull/867
  3. Rachel Rose O’Leary (2018) Ethereum Developer Resigns as Code Editor Citing Legal Concerns. Coindesk,  https://www.coindesk.com/ethereum-developer-resigns-as-code-editor-citing-legal-concerns/


Cornell Report on Cryptocurrency “Decentralization”

One of the outstanding features of Nakamotoan blockchains is the “decentralized” protocol—a peer-to-peer (overlay) network produces consistent updates to the shared data with no privileged leader or controller [2].  This property is a significant technical feature of Bitcoin and its extended family, and has even more symbolic and cultural significance for crypto enthusiasts.

“Decentralization” is supposed to impart technical robustness (there is no single point of failure), and political independence (there is no “authority” to be manipulated or shut down).  The absence of a “central” node also means that the protocol is “trustless”—there is no central service that must be trusted in order to do business. (I.e., you only need to trust your counterparties, not the rest of the network.)

In short, Nakamotoan blockchains and cryptocurrencies are all about being “decentralized”.

But what does “decentralized” mean?

In fact, the notion of “decentralized”, as well as the many related concepts, are poorly defined. In the context of a computer network, “centralized” can mean many things.  Indeed, a network transaction may depend on a number of physical and virtual layers, with different degrees of centralization involved simultaneously.  For example, a wi-fi network has various routers, links, switches, firewalls, and so on.  Even the simplest point to point link may pass through a number of shared channels and chokepoints that are technically “central” services, though the overlying service is decentralized, or centralized in a different way.  (Does that sound confusing?  In practice, it truly is.)

However, Nakamotoan “decentralization” is mostly about the logical organization of digital networks, as developed in so called “peer-to-peer” networks.  A classic Internet service is “centralized” in the sense that  client (user) nodes connect with a single server, which manages the whole system.  Clients trust the service to implement the protocol and protect all the data.  Note that so-called “centralized” services often run on many computers, even in many locations.  They are logically a single server, even if not physically a single node. (Does that sound confusing?  In practice, it is.)

Nakamotoan systems replace a single “trusted” service with a peer-to-peer protocol based on cryptography and economic incentives.  One of the critical design features is the use of algorithms that are impossible for a single node to hack.  This is important because in a conventional “centralized” service, once a server is suborned (or subpoenaed), the whole network is controlled.

In contrast, Bitcoin is designed so that the system cannot be controlled unless the attacker controls more than 50% of all the participating nodes.  In this design, security is assured by having a very large number of independent nodes in the network. This widespread participation is made possible by making the code openly available and letting anyone connect to the network.

While the cryptography has a relatively straightforward technical basis, other aspects of this security guarantee are harder to pin down; they are empirical features of the network that may or may not hold at any given moment.

For example, everything depends on the Bitcoin network being “owned” by many, many independent people and organizations.  If one party controlled 51% of the network, it would effectively control all of Bitcoin.  And what matters is 51% of the computing power, not the number of computers.

The point—and I do have one—is that while the Bitcoin protocol is designed to work in a decentralized network, the protocol only works correctly if the network really is “decentralized” in the right ways.  And there is no formal definition of those “right ways”, nor much proof that various cryptocurrency networks actually are decentralized in the right way.


This winter Cornell researchers report on an important study of precisely these questions on the real (as opposed to theoretical or simulated) Bitcoin and Ethereum networks [1].

“there have been few measurement studies on the level of decentralization they achieve in practice” ([1], p. 1)

This study required a technical system to capture data about nodes of the relevant overlay networks (i.e., real life Bitcoin or Ethereum nodes).  In addition, the study examined key technical measures of the nodes, to discern how the overall capabilities are distributed (i.e., the degree of decentralization).  These measures include network bandwidth (data transmission), geographic clustering (related to “independence”), latency (a key to fairness and equal access), and the distribution of ownership of mining power.  The last is an especially important statistic, to say the least.

The Cornell research showed that both Bitcoin and Ethereum have distinctly unequal distributions of mining power.  In the study, a handful of the largest mining operations control a majority of the mining power on the network.  (Since some authorities own or collaborate with multiple mining operations, these counts underestimate the actual concentration of power.)  In other words, these networks are highly centralized on this essential aspect of the protocol.  The researchers note that a small non-Nakamotoan network (a Byzantine quorum system of size 20) would effectively be more decentralized—at far less cost than the thousands of Nakamotoan nodes ([1], p. 11).

“Although miners do change ranks over the observation period, each spot is only contested by a few miners. In particular, only two Bitcoin and three Ethereum miners ever held the top rank.” ([1], p. 10)
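One simple way to quantify this kind of concentration is to ask how few entities, together, control a majority of the mining power (the shares below are made up for illustration, not the paper’s measurements):

    def min_entities_for_majority(shares):
        """Smallest number of top miners whose combined share exceeds 50%."""
        total = 0.0
        for count, share in enumerate(sorted(shares, reverse=True), start=1):
            total += share
            if total > 0.5:
                return count
        return None  # no majority coalition possible

    # Hypothetical mining-power shares: a few big pools plus a long tail.
    shares = [0.22, 0.18, 0.14, 0.11, 0.08, 0.07, 0.05] + [0.01] * 15
    print(min_entities_for_majority(shares))   # -> 3

For a network with thousands of nodes, a single-digit answer is a stark verdict on “decentralization”.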

These findings are not a surprise to anyone observing the flailing failure of the “consensus” mechanism over the last two years, let alone the soaring transaction fees and demented reddit ranting.  Cryptocurrency systems are designed to be decentralized, but they are, in fact, dominated by a few large players.

By the way, the two networks studied here are likely the largest and most decentralized cryptocurrency networks.  Other nets use similar technology but have far fewer nodes and often far more concentrated ownership and power.  So these two are the good cases.  Other networks will be worse.


The general conclusion here is that Nakamoto’s protocol pays huge costs in equipment, power consumption, and decision-making efficiency to achieve the supposed benefits of a “decentralized” system.  Yet the resulting networks are actually highly centralized, though in opaque and hidden ways.  I think this is a fundamental flaw in the engineering design, and also in the philosophical underpinnings of Nakamotoan social theory.

I’d love to see similar careful studies of other underpinnings of Nakamotoism, including the supposed properties of “openness”, “trustlessness”, and “transparency”.

A very important study.  Nice work.


  1. Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert van Renesse, and Emin Gün Sirer, Decentralization in Bitcoin and Ethereum Networks. arXiv, 2018. https://arxiv.org/abs/1801.03998
  2. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf


Awesome 3D Display from BYU

For computer interfaces, one of the mountains we must climb is the free standing, 3D, interactive visual display.  Real 3D, holographic movies.  Hollywood aside, we’re still working on it.

This winter researchers from Brigham Young University report on a new technique they call an Optical Trap Display.  It uses lasers to trap a tiny particle in the air and illuminate it with light at selected wavelengths—i.e., in full color [1].  By ‘painting’ a volume of air with this laser-guided point, a three dimensional image can be created, floating in air, visible from almost every angle.  Cool!

This isn’t the only open-air display, but it is a very, very impressive advance. If I understand correctly, this is sort of like mist displays, except the lasers are grabbing and manipulating the particles, rather than projecting on randomly drifting mist.  A simple but powerful advance.
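The basic physical constraint is persistence of vision: the single trapped particle has to retrace the entire image path fast enough that the eye fuses it into one steady picture. A back-of-envelope calculation (my illustrative numbers, not the paper’s):

    # How fast must the particle move to redraw a figure before the eye notices?
    path_length_m = 0.5     # hypothetical total path length of a drawn figure
    refresh_hz = 10         # rough minimum refresh rate for apparent persistence
    required_speed = path_length_m * refresh_hz
    print(f"particle speed needed: {required_speed:.1f} m/s")   # 5.0 m/s

This hints at why such displays start small: longer paths demand proportionally faster, harder-to-control particle motion.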

Awesome!

The researchers indicate that this technique is vulnerable to air currents, which can push the particle out of control of the laser.  So it won’t be easy to use outdoors. And they report that “Higher beam power is correlated with better trapping until the particle begins to disintegrate.” (p. 487), which sounds like a cool failure mode.  (Everything was fine until my pixel exploded….)

Nice work, all.


  1. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, A photophoretic-trap volumetric display. Nature, 553:486, 2018. http://dx.doi.org/10.1038/nature25176


Biometric Authentication for Mobile Devices

Alina N. Filina and Konstantin G. Kogos from National Research Nuclear University, Moscow, report a method for continuous authentication on mobile devices [1].  They propose to use non-invasive behavioral biometrics to verify, on an ongoing basis, that the correct person is holding the device.

“Continuous authentication allows you to grant rights to the user, without requiring from him any unusual activities.” ([1], p. 69)

The basic idea is to use the sensors on the device to detect gestures, and use machine learning to identify a unique, individual “signature”.  This is used in combination with other context (e.g., whether the network is trusted or not), to detect when the correct person is holding the device.
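A minimal sketch of that recipe (my own stand-in, not F&K’s actual features or model; the window size, features, and thresholds are all assumptions): train a one-class model on the owner’s ordinary motion, then score fresh sensor windows continuously, relaxing the threshold a bit in trusted contexts.

    import numpy as np
    from sklearn.svm import OneClassSVM

    def window_features(accel):
        """accel: (n, 3) window of x/y/z accelerometer samples -> feature vector."""
        mag = np.linalg.norm(accel, axis=1)
        return np.concatenate([accel.mean(axis=0), accel.std(axis=0),
                               [mag.mean(), mag.std()]])

    # Enrollment: windows recorded during the owner's normal handling (stand-in data).
    owner_windows = [np.random.randn(128, 3) for _ in range(200)]
    X = np.stack([window_features(w) for w in owner_windows])
    model = OneClassSVM(nu=0.01, gamma="scale").fit(X)

    def check(window, network_trusted):
        """One continuous-authentication test, combining biometrics and context."""
        score = model.decision_function([window_features(window)])[0]
        threshold = -0.05 if network_trusted else 0.0   # relax on trusted networks
        return score > threshold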

Continuous authentication is a great idea, and some kind of biometrics might be useful to achieve this.

But I have doubts about F&K’s approach.

First, I have to wonder if the method can be accurate enough to be practical.  Machine learning based recognition always has some percentage of false positives and negatives.  In this application, the former would grant access when it shouldn’t, and the latter would block access to the authorized user. This is particularly problematic in the continuous authentication scenario, which repeatedly tests your identity. Imagine the inconvenience of your device locking you out every so often just because the recognizer has a 1% chance of a false rejection and re-tests you every few minutes.
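The arithmetic here is unforgiving.  A quick back-of-envelope check (my illustrative numbers, not from the paper):

    p_frr = 0.01            # per-check false-rejection rate
    checks_per_hour = 12    # one identity test every five minutes
    p_lockout = 1 - (1 - p_frr) ** checks_per_hour
    print(f"P(at least one false rejection per hour) = {p_lockout:.1%}")   # ~11.4%

Even a seemingly excellent 1% false-rejection rate, tested every five minutes, locks the legitimate owner out roughly once every eight hours on average.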

Second, the supposedly unobtrusive behaviors used to recognize the person require active interaction.  The researchers point out the need to detect contexts such as setting the device on a table, which produces no motion, idiosyncratic or not.  This case and others should not lock out the user.

The general point about using active behaviors is that, in order to be unobtrusive, the training samples should be selected from the user’s “common” or “normal” behavior.  And to support continuous checking, the training samples must cover an array of behaviors that make up a substantial proportion of normal use.  It is not clear to me how to identify and capture such training samples.

Third, this method is vulnerable to changes in user behavior.  If the user enters a new environment or begins a new activity, will his phone block him out?  There is also a problem if the user is injured or incapacitated.  For example, if the user is hurt, his movements may be altered, which could lock out the device.  (This is especially problematic should the user be prevented from calling for medical assistance because his device doesn’t recognize him.)

I would think that the sample behaviors used to authenticate should be difficult to mimic. The method rests on the assumption that users can be distinguished with high probability.  The current study does not explore how effectively the method discriminates users, nor how it would fare against imitation or replay attacks.  (I note that a robot might be used to generate replays.)

I’ll also point out that this method requires that all the sensors and data be continuously collected.  This is an immense amount of trust to place in the device, and an invitation to intrusive tracking.  This might be appropriate for high security environments which are already heavily monitored, but less desirable for broad consumer use.


This is an interesting study, but I think it needs a lot more development to show that it will really work.


  1. Alina N. Filina and Konstantin G. Kogos. 2017. “Mobile authentication over hand-waving.” 2017 International Conference “Quality Management, Transport and Information Security, Information Technologies” (IT&QM&IS), 24-30 Sept. 2017. http://ieeexplore.ieee.org/document/8085764/