
Book Review: “A Horse Walks into a Bar” by David Grossman


I hadn’t read anything by David Grossman, but this novel won the Man Booker International Prize in 2017, and Grossman is widely acclaimed in Europe.  So, it’s worth a look, no?

‘A Horse’ is set at a one-man standup show in Israel. The show isn’t exactly a jokefest, as the performer unloads a confused and unhappy recollection.  The crowd isn’t exactly an audience, as the performer seems to have invited people from his story to the occasion.

This peculiar performance, uninhibited in substance and style, is very difficult to grok. What is he up to?  Why is he subjecting us and himself to this nasty stuff?  And why is he doing it in this forum?

The mystery makes the story compelling, I suppose. But the unsympathetic character made it hard to continue to the (underwhelming) end.

Honestly, I didn’t really enjoy the story, and I’m not sure what the Man Booker people saw in this book particularly.


  1. David Grossman, translated by Jessica Cohen, A Horse Walks Into a Bar, New York, Alfred A. Knopf, 2017.

 

Sunday Book Reviews

Galactic Positioning System Using Pulsars

One of the challenges of space travel is knowing where you are.  On Earth, we have learned to use many methods, including the position of stars, land features, and ocean currents.  And we have built our own guides, including lighthouses, radio beacons, and Global Positioning System satellites.

But once away from Earth, most of this knowledge is little help.  The stars are still there, of course, but distances and velocities are huge, and the tiniest measurement error amounts to a gigantic miss.  (Plus, everything is moving around.  Plus there is all that relativity going on.)
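To get a feel for the scale of the problem, here is a back-of-the-envelope calculation (my own illustrative numbers, not taken from the sources cited below): a bearing error of just one milliarcsecond, applied over the distance to the nearest star, puts you nearly 200,000 km off target.

```python
import math

# Illustrative arithmetic only: how far off is a position fix if the bearing
# to a reference star is wrong by a tiny angle, at interstellar distances?
LIGHT_YEAR_M = 9.4607e15                                # metres per light year
angle_error_rad = math.radians(1.0 / 3600.0 / 1000.0)   # 1 milliarcsecond
distance_m = 4.2 * LIGHT_YEAR_M                         # roughly the distance to Proxima Centauri

# Small-angle approximation: cross-track miss = distance * angle (in radians)
miss_m = distance_m * angle_error_rad
print(f"miss: {miss_m / 1000:,.0f} km")                 # roughly 190,000 km
```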

In the absence of a positioning system, spacecraft operate in ways that hark back to preindustrial sailing:  continuous observation and course corrections to close in on the target, step by step.

In the fifty-some years of the space age, a number of ideas have been proposed for ways to locate spacecraft [3].  In particular, it would be ideal to find some kind of celestial signal(s) that can serve as beacons by which to determine our position.  One obvious possibility is pulsars, distinctive periodic objects discovered in the 1960s.

There are many pulsars, and their potential use for navigation depends on the granularity of the periodic signal.  In the last few decades, astronomers have observed visual and radio pulsars, X-ray pulsars, and most importantly, millisecond X-ray pulsars.

X-ray pulsars have signals that rival Earth-orbiting GPS, and could theoretically give accuracy to about 30 m anywhere in the galaxy!  (Possibly even under the ice of an ocean world.)
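The basic geometry is the same as for GPS: each pulsar’s pulse train arrives slightly earlier or later at the spacecraft than it would at a reference point (say, the solar system barycenter), and that offset measures the spacecraft’s displacement along the line of sight to that pulsar.  With three or more pulsars in different directions, the offsets pin down a 3D position.  Below is a deliberately simplified sketch of that idea; it ignores relativity, timing noise, and pulse-phase ambiguity, and all the numbers are made up for illustration.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Unit vectors toward three (made-up) pulsars, and a made-up spacecraft
# position relative to a reference point such as the solar system barycenter.
pulsar_dirs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.3, 0.3, 0.9],
])
pulsar_dirs /= np.linalg.norm(pulsar_dirs, axis=1, keepdims=True)
true_position = np.array([1.2e7, -3.4e7, 5.6e6])        # metres

# Idealized measurement: each pulse arrives early/late by (direction . position) / c
# relative to the arrival time predicted at the reference point.
toa_offsets = pulsar_dirs @ true_position / C           # seconds

# Recover the position from the time-of-arrival offsets by least squares.
estimate, *_ = np.linalg.lstsq(pulsar_dirs, toa_offsets * C, rcond=None)
print(np.allclose(estimate, true_position))             # True
```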

Indeed, this winter NASA demonstrated actual navigation using these pulsars  [1].

[Animation: NICER scans the sky, highlighting the mission’s main features. Credit: NASA’s Goddard Space Flight Center]

This is pretty cool, and is potentially very useful for spacecraft navigation [2].  Even on Earth, it might be useful to have an astronomical backup to potentially vulnerable satellites (though I don’t know how well pulsars work on or under the Earth’s surface). If a practical pulsar positioning system were available, there would be less concern about the danger of hacking or destroying the GPS satellites and receivers, at least by humans.

 

If pulsars are indeed useful for space navigation, then we are not likely to be the only ones, or the first, to understand this.  Clément Vidal of the Vrije Universiteit Brussel points out that any extraterrestrial navigators would probably understand and make use of these naturally occurring navigation beacons [3].

But are pulsars natural?  Or are they engineered to be a galactic positioning system?  Vidal makes a very interesting case that this question is very much unanswered at this time.

Vidal notes that these millisecond X-ray pulsars (MSXPs) have many unique features that make them “especially suitable as navigation beacons” ([3], p. 6).  The signal is intense and penetrates most interference, so it is easy to detect nearly everywhere. They have stable signals with unique, identifiable signatures, and there aren’t too many of them, so you can find the ones you are looking for.  And so on.

Vidal notes that they in fact have very similar properties to the GPS navigation systems we have built.

In the absence of a comprehensive theory of the origins and distribution of pulsars, they stand out as an anomaly begging for explanation.

He recalls the history of the discovery of pulsars. They were completely unexpected, and initially thought to be artificial sources.  It became clear that pulsars are not messaging systems, and current theories account for them as rare and peculiar natural phenomena.

At the very least, pulsars are unusual and, it is now clear, very useful, which raises the question of whether they are used or even created by intelligent ETs.

Vidal makes the critical point that positioning systems are quite different from message transmission, so the idea that pulsars might be used for navigation sheds a very different light on arguments that they are a natural phenomenon.  For instance, the prodigious energy consumption of a pulsar, which seems unreasonable for a (low bit rate) transmission system, might be economically justified by the long-term and widespread benefits of a navigation system.

Vidal notes that the question of whether pulsars are natural or artificial doesn’t have to be a binary choice.  It is possible that some pulsars are natural, and others artificial.  And it is conceivable that some (natural?) pulsars are manipulated to create precisely engineered positional beacons. In the latter case, the peculiar natural processes and prodigious energy expenditure would be harnessed for artificial purposes. Carl Sagan suggested the memorable analogy of communication via smoke signals, which modulate the otherwise undirected products of a natural energy source.

Vidal outlines a variety of research questions that might test the hypothesis that MSXPs are deliberately created to be beacons. These include looking at the distribution and characteristics of MSXPs.  Are they distributed in ways that are especially useful for navigation?  For example, to support navigation among the stars, the beams should mostly point in the galactic plane (i.e., where the stars are).  Do MSXPs sufficiently cover the galaxy, i.e., enough to be useful for navigation?  If so, the case that they are engineered is stronger.

He has a variety of intriguing ideas about the temporal evolution of pulsars.  Perhaps binary stars are good sites for controlling a pulsar, so beacons will predominantly be found in binary systems.  It may also be possible to observe signatures that betray the creation of new beacons, or to identify networks of related beacons, perhaps encoded in identifying patterns in the pulses.

Vidal notes that it is difficult to synchronize or recalibrate clocks in a large decentralized system.  If MSXPs are used as beacons, are they synchronized?  If so, this might show up as ‘glitches’ or other transient anomalies in the pulsars, and perhaps we could even observe a calibration in progress.

There are also interesting questions about how MSXPs might be used.  How would you navigate near the speed of light?  A system of galactic navigation beacons surely should support this.

For that matter, a practical navigation system would need a coordinate system, and would need to update position information periodically, just as Earth orbital GPS broadcasts ephemerides now and again.  Can we detect such signals, and decode the coordinate system(s)?  Pulsars are also candidates to be a source of stable timestamps, and therefore of galactic metadata!  Should SETI discover a message, will it be necessary to decode the coordinates of the sender and receiver?  (Real Star Dates!!)

Pulsar-based navigation would enable precise spaceflight, which would be very useful for directed panspermia – spreading life across the stars.  Precise positioning would also be useful for propulsion via radiation pressure, i.e., aiming the long-distance lasers. For that matter, the pulsars themselves might be used for (weak) propulsion, perhaps to directly push DNA to nearby star systems.


Overall, Vidal raises an intriguing case and some good ideas for astronomical research. If nothing else, he shows that SETI really should be on the lookout for something like a distributed network of navigation beacons, rather than just point to point messages.

I should note that I was pleasantly surprised by how accessible Vidal’s article is.  I expected a short, dense astronomy paper, but it is very readable and chock-a-block with really cool ideas.


  1. Lori Keesey and Clare Skelly, NASA Team First to Demonstrate X-ray Navigation in Space, in NASA – Technology. 2018. https://www.nasa.gov/feature/goddard/2018/nasa-team-first-to-demonstrate-x-ray-navigation-in-space
  2. David Schneider, What If GPS Stood for “Galactic Positioning System”?, in IEEE Spectrum – Tech Talk. 2018. https://spectrum.ieee.org/tech-talk/aerospace/space-flight/what-if-gps-stood-for-galactic-positioning-system
  3. Clément Vidal, Pulsar Positioning System: A quest for evidence of extraterrestrial engineering. arXiv, 2017. https://arxiv.org/abs/1704.03316

 

PS:  Some great names for bands:

SETI-XNAV
Pulsar Positioning System
Galactic Positioning System

 

Space Saturday

 

 

Drones Counting Ducks Down Under

One of the oldest citizen science projects is bird watching.  For more than a century, enthusiastic birders have amassed vast datasets of avian sightings.  To date, technology has enhanced but not displaced this proud nerd army. Photography, GPS, and databases have vastly improved the data from birders, but nothing has replaced boots on the ground.


This month, a research project at the University of Adelaide reported a demonstration of a UAV-mounted imaging system that, for once, beats human birders [1].

Specifically, the study compared the accuracy of humans versus a small survey quadcopter on a task of counting birds in a nesting colony.  In order to have a known ground truth, the tests used artificial colonies, populated by hundreds of simulated birds (repurposed decoys) laid out to mimic actual nesting sites.

They dubbed it “#EpicDuckChallenge”, though it doesn’t seem especially “epic” to me.

The paper compares the accuracy of human counters on the ground, human counts from the aerial imagery, and computer analysis of the aerial imagery.

First of all, the results show a pretty high error for the human observers, even for the experienced ecologists in the study. Worse, the error is pretty scattered, which suggests that estimates of population change over time will be unreliable.

The study found that using aerial photos from the UAV is much, much more accurate than humans on the ground. The UAV imagery has the advantage of being overhead (rather than human eye level), and also holds still for analysis.

However, counting birds in an image is still tedious and error prone.  The study shows that machine learning can tie or beat humans counting from the same images.

Together, the combination of low-cost aerial images and effective image processing algorithms gave very accurate results, with low variability. This means that this technique would be ideal for monitoring populations over time, because repeated flyovers would be reliably counted.
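The counting step itself need not be exotic.  The paper’s approach used machine learning, but the simplest version of the idea is just: threshold the image so the birds stand out from the background, then count blobs of a plausible size.  The sketch below is my own toy illustration of that approach, not the authors’ pipeline, and all the tuning parameters are invented.

```python
import numpy as np
from scipy import ndimage

def count_birds(gray_image, threshold=0.7, min_pixels=20, max_pixels=400):
    """Toy counter: birds are assumed to be bright blobs on darker ground.

    gray_image: 2D array of floats in [0, 1].
    threshold, min_pixels, max_pixels: invented tuning parameters.
    """
    mask = gray_image > threshold                    # keep only bright pixels
    labels, n_blobs = ndimage.label(mask)            # group touching pixels into blobs
    sizes = ndimage.sum(mask, labels, range(1, n_blobs + 1))
    # Count only blobs within a plausible size range for a single bird.
    return int(np.sum((sizes >= min_pixels) & (sizes <= max_pixels)))

# Synthetic example: three bright "birds" on a dark background.
img = np.zeros((200, 200))
for (r, c) in [(40, 50), (100, 120), (160, 30)]:
    img[r:r + 6, c:c + 6] = 1.0
print(count_birds(img))  # 3
```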


This study has its limitations, of course.

For one thing, the specific task used is pretty much the best possible case for such an aerial census.  Unrealistically ideal, if you ask me.

Aside from the perfect observing conditions, the colony is easily visible (on an open, flat, uniform surface), and the ‘birds’ are completely static.  In addition, the population is uniform (only one species), and the targets are not camouflaged in any way.

How many real-world situations are this favorable?  (Imagine using a UAV in a forest, at night, or along a craggy cliff.)

To the degree that the situation is less than perfect, the results will suffer.  In many cases, the imagery will be poorer, and the objects to be counted less distinct and recognizable. Also, if there are multiple species, very active birds, or visual clutter such as shrubs, it will be harder to distinguish the individuals to be counted.

For that matter, I’m not sure how easy it will be to acquire training sets for the recognizer software.  This study had a very uniform nesting layout, so it was easy to get a representative subsample to train the algorithm.  But if the nests are sited less uniformly, and mixed with other species and visual noise, it may be difficult to train the algorithm, at least without much larger samples.


Still, this technique is certainly a good idea when it can be made to work.  UAVs are a great “force multiplier” for ecologists, giving each scientist much greater range. Properly designed (by which I mean quiet) UAVs should be pretty unobtrusive, especially compared to human observers.

The same basic infrastructure can be used for many kinds of surface observations, not just bird colonies.  It seems likely that UAV surveying will be a common scientific technique in the next few decades.

The image analysis also has the advantage that it can be repeated and improved.  If the captured images are archived, then it will always be possible to go back with improved analytics and make new assessments from the samples.  In fact, image archives are becoming an important part of the scientific record, and a tool for replication, cross validation, and data reuse.


  1. Jarrod C. Hodgson, Rowan Mott, Shane M. Baylis, Trung T. Pham, Simon Wotherspoon, Adam D. Kilpatrick, Ramesh Raja Segaran, Ian Reid, Aleks Terauds, and Lian Pin Koh, Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution, 2018. http://dx.doi.org/10.1111/2041-210X.12974
  2. University of Adelaide, #EpicDuckChallenge shows we can count on drones, in University of Adelaide – News. 2018. https://www.adelaide.edu.au/news/news98022.html

 

 

Grownups Get Real About Blockchains

The grown-ups have found out about blockchains, and are starting to make realistic assessments of the technology.  As usual, they are sucking all the fun out of things.

The US National Institute of Standards and Technology (NIST) issued an informative draft report, which is an excellent overview of blockchain technology [2].  Much of the report is straightforward, but NIST is careful to point out important technical limitations.

“There is a high level of hype around the use of blockchains, yet the technology is not well understood. It is not magical; it will not solve all problems. As with all new technology, there is a tendency to want to apply it to every sector in every way imaginable.” ([2], p. 6)

I think the most important section of the report is Chapter 9, “Blockchain Limitations and Misconceptions”.  The authors explain many basic points, including the ambiguous nature of “who controls the blockchain” (everyone is equal, but devs are more equal than others), and the hazy accountability of potentially malicious users.

Technically, the blockchain has limited capacity, especially storage. Overall, it is difficult to estimate the resource usage of a blockchain because it is implemented on many independent nodes.

Most important of all, they parse the Nakamotoan concept of “trust”.  It is true that there is no third party that must be trusted (at least in permissionless blockchains), but there are many other elements that must be trusted including the basic fairness of the network and the quality of the software (!).

The report also calls attention to the fact that blockchains implement neither key management nor identity management. Identity is masked behind cryptographic keys, and if you lose your key, there is no way to recover or revoke it.  These are either features or bugs, depending on what you are trying to do and the kinds of risks you can stand.
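To make the key management point concrete, here is a minimal sketch of what “identity” amounts to on a typical permissionless blockchain.  It uses the generic Python `ecdsa` package and a simplified address scheme, not any particular chain’s actual format: an account is just a random keypair, the address is derived from the public key, and the private key is the only thing that can authorize a transaction.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1   # pip install ecdsa

# An "account" is nothing but a randomly generated keypair.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# A pseudonymous address derived from the public key (simplified; real chains
# add their own encodings and checksums).
address = hashlib.sha256(public_key.to_string()).hexdigest()[:40]

# Only the holder of the private key can produce a valid signature...
tx = b"pay 1 coin to someone"
signature = private_key.sign(tx)
assert public_key.verify(signature, tx)

# ...so losing that one secret means losing the funds: there is no password
# reset, and no authority that can revoke or reissue the key.
print("address:", address)
```

If the key is stolen, the thief effectively is the account, which is exactly the accountability problem NIST highlights.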

Overall, many of the limitations described by NIST are end-to-end requirements:  no matter how a blockchain works, it only addresses part of the total, end-to-end transaction.

“The use of blockchain technology is not a silver bullet.” ([2], p. 7)


On the same theme, Bailey Reutzel reports in CoinDesk on an IBM briefing about the end-to-end engineering of blockchain systems [1].  The talk itself is not published, but CoinDesk reports that IBM warns potential customers about the end-to-end security challenges of using its Hyperledger-based technology.

As noted many times in this blog, there have been many hacks and oopsies in the cryptocurrency world, and most if not all of them have nothing to do with the blockchain and its protocols.

IBM approaches the challenge with a thorough threat analysis that looks at the whole system. This is, in fact, exactly what you need to do with a conventional non-blockchain system, no?

It seems clear that whatever a blockchain may achieve, it doesn’t “disrupt” IBM’s role as a heavyweight business consultant.

In the Coindesk notes, there is a hint at one more interesting point to think about: the global extent and “infinite” lifetime of the blockchain. Nominally, the blockchain maintains every transaction ever recorded, forever.  This means that, unlike most data systems, a worst-case breach somewhere in the system might expose data far and wide, back to the beginning of time. Whew!


Still, both NIST and IBM agree that there are potential use cases for the blockchain that are worth the trouble, including public records and supply chains. (And IBM will be glad to show you how to do it.)

Blockchains may be inscrutable, but they ain’t magic.


  1. Bailey Reutzel (2018) IBM Wants You to Know All the Ways Blockchain Can Go Wrong. Coindesk, https://www.coindesk.com/ibm-wants-know-ways-blockchain-can-go-wrong/
  2. Dylan Yaga, Peter Mell, Nik Roby, and Karen Scarfone, Blockchain Technology Overview. National Institute of Standards and Technology (NIST), Draft NISTIR 8202, Gaithersburg, MD, 2018. https://csrc.nist.gov/CSRC/media/Publications/nistir/8202/draft/documents/nistir8202-draft.pdf

 

 

Cryptocurrency Thursday

Singaporean Robot Swans

Evan Ackerman calls attention to a project at the National University of Singapore that is deploying robotic water quality sensors designed to look like swans [1].

The robots cruise surface reservoirs, monitoring the water chemistry and uploading the data to the cloud via wifi as it is collected.  (Singapore has wifi everywhere!)  The robots are encased in imitation swan bodies, which is intended ‘to be “aesthetically pleasing” in order to “promote urban livability.”’ I.e., to look nice.

This is obviously a nice bit of work, and a good start.  The fleet of autonomous robots can maneuver to cover a large area, and concentrate on hot spots when needed, all at a reasonable cost. I expect that the datasets will be amenable to data analysis and machine learning, which can mean a continuous improvement in knowledge about the water quality.

As far as the plastic swan bodies…I’m not really sold.

For starters, they don’t actually look like real swans.  They are obviously artificial swans.

Whether plastic swans are actually more aesthetically pleasing than other possible configurations seems like an open question to me.  I tend to think that a nicely designed robot might be just as pleasing or even better than a fake swan.  And it would look like a water quality monitor, which is a good thing.

Perhaps this is an opportunity to collaborate with artists and architects to develop some attractive robots that say “I’m keeping your water safe.”


  1. Evan Ackerman, Bevy of Robot Swans Explore Singaporean Reservoirs, in IEEE Spectrum – Automation. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/bevy-of-robot-swans-explore-singaporean-reservoirs
  2. NUS Environmental Research Institute, New Smart Water Assessment Network (NUSwan), in NUS Environmental Research Institute – Research Tracks -Environmental Surveillance and Treatment 2018. http://www.nus.edu.sg/neri/Research/nuswan.html

 

Robot Wednesday

Yet More Robot Zebrafish

It seems to be the Year of the Robot Zebrafish.  Just as our favorite lab species are so thoroughly studied that they are now being “uploaded” to silicon, the widely studied zebrafish  (Danio rerio) is being digitized.

This winter researchers at NYU report on a very advanced robot zebrafish, which is very literally “biomimetic”—a detailed 3D animatronic fish [1].  These kinds of models are useful for learning about how animals interact with each other.  To achieve these goals, the model needs to look, smell, and behave just like a natural animal.  (Yes, even zebrafish can recognize a lame, unrealistic dummy.)

It’s not that difficult to create a visually accurate model, but achieving “realistic enough” behavior is very difficult.  It requires reproducing relevant motion, signals (including visual, auditory, chemical signals), and perception of relevant stimuli (again, potentially in several modalities).  Then, the model needs to act and react in real time in just the way a natural fish would.

In short, you have to really understand the fish, and create a complex real-time simulation. As the researchers note, many previous studies have implemented the simulation only partially, often with “open loop” control, i.e., employing human direction.  This new research is “closed loop”, and also allows 3D motion of the model.

The apparatus is an aquarium with a digitally controlled zebrafish, where natural fish can swim and interact with the robot.  The research employs 3D printed model fish, a digitally controlled mechanical system (which is quite similar to the mechanism of a 3D printer or router), and 3D computer vision.

[Figure: Sketch of the experimental apparatus, showing the experimental tank, robotic platform, lighting, cameras, and holding frame. For clarity, the black curtain on the front of the frame is omitted and the focal fish and the robotic stimulus are magnified. From [1]]
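To see what “closed loop” means here in the simplest possible terms, consider the toy sketch below (my own illustration, not the authors’ controller): at each time step the tracker reports where the live fish is, the software chooses a target for the replica relative to that fish, and a proportional controller moves the replica toward the target.  In the real apparatus the sensing comes from 3D computer vision and the behavior rules are based on recorded zebrafish motion.

```python
import numpy as np

def fish_position(t):
    """Stand-in for the 3D tracker: a live fish swimming a slow circle."""
    return np.array([np.cos(0.2 * t), np.sin(0.2 * t), 0.5])

def target_for_replica(fish_pos):
    """Toy behavior rule: hold station at a fixed offset from the live fish."""
    return fish_pos + np.array([0.1, 0.0, 0.05])

replica_pos = np.zeros(3)   # current position of the gantry-mounted replica
GAIN, DT = 1.5, 0.05        # invented controller gain and time step (s)

for step in range(200):
    fish = fish_position(step * DT)        # "sense": where is the live fish now?
    target = target_for_replica(fish)      # "decide": where should the replica be?
    replica_pos += GAIN * (target - replica_pos) * DT   # "act": move toward it

print("final tracking error:", np.linalg.norm(replica_pos - target))
```

An open-loop version would simply replay a pre-recorded trajectory, ignoring what the live fish actually does.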

The first studies investigate the basic question of how effective closed loop control may be.  We all “know” that 3D, closed loop simulation will be “more fishlike”, but did anyone check with the zebrafish?

In the event, the results showed that the full 3D closed loop was not necessarily as “authentic” as a 2D closed loop, at least in the limited conditions in the study. One factor is that the closed loop motion was partly based on recordings of natural behavior, which, wait for it, seemed natural to the fish.  But overall, the robot was never mistaken for a real fish in any condition.

“Although the new robotic platform contributed a number of hardware and software advancements for the implementation of biomimetic robotic stimuli, the larger shoaling tendency of zebrafish toward live conspecifics suggest that the replica was not perceived as conspecifics in any condition.” ([1], p. 12)

The researchers identify a number of limitations of the apparatus that probably contributed to this lack of realism. Basically, the equipment used in this experiment probably wasn’t capable of mimicking natural motion precisely enough.  In addition, I would say that there is still much to be learned about what cues are important to the zebrafish.

However, this technology made it possible to quickly and precisely experiment with the real fish.  I’m confident that with improvements, this approach will enable systematic investigation of these questions.


  1. Changsu Kim, Tommaso Ruberto, Paul Phamduy, and Maurizio Porfiri, Closed-loop control of zebrafish behaviour in three dimensions using a robotic stimulus. Scientific Reports, 8 (1):657, 2018/01/12 2018. https://doi.org/10.1038/s41598-017-19083-2

 

Serendipity: Antibiotics From The Soil

One of the great themes of early twenty-first-century science is the search for natural biological systems that can be exploited as human technology.  At the molecular level, there are vast ecologies of microbes that contain amazing biochemistry and nanotechnology. And they are everywhere. With a head start of millions of years of evolution, this is surely a good place to look for new (to humans) materials and processes.

This winter a team at Rockefeller University reported a “New antibiotic family discovered in dirt”, as the BBC put it [1, 2]. Cool!

Down there in the soil, it’s a wild kingdom of life and chemistry at many scales: microbes, fungi, insects, and so on.  There are zillions of beasties in there, all eating and being eaten.   Some of the chemical warfare going on involves repelling or killing other microbes—which is exactly what medical antibiotics are called upon to do.  So how do we learn what these critters already know?

This is not my area of expertise, but I gather that this is a difficult challenge because there is so much complicated stuff in even a small sample of soil. (Probably more than 1000 species of bacteria per gram of soil!)  It is easy to accumulate a gigantic sample, but it’s too much to be able to search all the DNA at random.

“Due to the complexity of soil metagenomes, it remains challenging to shotgun sequence deep enough to generate data that are broadly useful.” ([2], p. 1)

The study in question guided the search, looking for particular types of genes that are known to generate calcium-dependent antibiotics.  These genes would come from the DNA of microorganisms that have naturally evolved antibiotics, presumably as defenses.  Once identified, the genes can be inserted into artificial genomes to generate the chemical products.
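In spirit, the screening step is a targeted search rather than brute-force sequencing: scan an enormous pile of sequence data for a short, conserved signature associated with the gene family of interest, and keep only the hits for follow-up.  The sketch below illustrates the idea; the “signature” pattern and the reads are placeholders I made up, not the actual primers or motifs used in the study.

```python
import re

# Placeholder signature for a conserved protein motif, written as a regular
# expression. This is NOT the real motif targeted in the study.
SIGNATURE = re.compile(r"D[A-Z]D[A-Z]{2}DG")   # e.g., an aspartate-rich pattern

def screen(reads):
    """Return the IDs of sequences that contain the conserved signature."""
    return [read_id for read_id, seq in reads if SIGNATURE.search(seq)]

reads = [
    ("soil_read_001", "MKTLLDADVADGAKRLI"),   # contains the pattern -> keep
    ("soil_read_002", "MGSSHHHHHHSSGLVPR"),   # no pattern -> discard
]
print(screen(reads))   # ['soil_read_001']
```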

The researchers describe their methods for assaying a large sample of DNA from soils.  They found evidence of many uncharacterized genes, and focused on one abundant but previously unknown group.  This gene group was activated and produced a new (to humans) antibiotic, which they show is effective against drug-resistant strains of common bacteria.

This is really cool, and potentially a really, really important life-saver.

And it is important to remember that there were potentially many more novel antibiotics even in this one sample. And this group is only looking for one particular type of antibiotic.

There is so much more to be found, even in a few handfuls of soil!


  1. BBC News, New antibiotic family discovered in dirt, in BBC News – Health. 2018. http://www.bbc.com/news/health-43032602
  2. Bradley M. Hover, Seong-Hwan Kim, Micah Katz, Zachary Charlop-Powers, Jeremy G. Owen, Melinda A. Ternei, Jeffrey Maniko, Andreia B. Estrela, Henrik Molina, Steven Park, David S. Perlin, and Sean F. Brady, Culture-independent discovery of the malacidins as calcium-dependent antibiotics with activity against multidrug-resistant Gram-positive pathogens. Nature Microbiology, 2018/02/12 2018. https://doi.org/10.1038/s41564-018-0110-1