Tribal Forest Management Shines

This winter there have been various reports about sustainable forest management under the auspices of Native American tribes [3].  I’m pretty sure the sustainable management practices used are well known, but the Native Americans apply them well, and for their own reasons.  There are deep cultural connections and traditions of sustaining the whole forest, which in turn sustains the people.

This “discovery” that tribes are capable of taking care of land and forests will not come as a surprise to many Indians.  Where they have been able to govern their own lands, many tribes have outstanding records of sustainable harvests going back a century and more.  Indeed, many of the government thefts of Indian land were carried out to enable the quick sacking of previously healthy forests by white corporations.

In recent decades, many tribes have been reacquiring land.  This is a good thing for the people, and it will likely benefit the forests and wildlife as well.

Even ten years ago, Alison Berry published an interesting comparison of two forests side by side in Montana [1, 2].  She compared the economic and ecological costs and results of a federally managed forest with the adjacent forest managed by the Confederated Salish and Kootenai Tribes.

The report notes that some of the key differences in management lie in the objectives.  The US National Forests have a muddled mission, with multiple goals that do not include making a profit from timber sales.  Timber sale accounts have not been kept since the 1990s, but the sales were losing a lot of money back when accounts were kept.

In comparison, the tribal forest management aims to provide sustained income for the tribe. And the sovereign tribal government keeps careful account of the costs and yields, for obvious reasons.

The tribal forest had lower costs, not least due to fewer and lower-paid workers.  It also had higher revenue, in part because the federal forest sold more salvaged wood, which fetches lower prices (at least in part a side effect of fire prevention policies).  Tribal management also seems to have much less legal exposure than the federal operation, at least partly due to the sovereignty of the tribe.

Berry finds that the tribal forest does much better at “balancing” forest production with other uses, including sustaining fish and wildlife.  The federal forest, ironically, does better at fish and wildlife, while lagging in timber production.

“In comparison with the CSKTs, the Lolo National Forest harvested much more timber from 1998 to 2005, yet it made far less money. A primary reason for the Lolo’s weaker economic performance is that Forest Service managers have less incentive or ability to generate income compared to tribal managers.” ([1], p. 17)

Overall, it’s a complex picture, except for one aspect:  the local tribal government has both the incentive and the means to manage the forest productively and sustainably.  The federal managers have little incentive and less means to do so.  As Berry remarks, “Clearly, there is no need to ‘protect Indians and their resources from Indians.’ Rather, it is the federal agencies that need to improve resource management.” ([1], p. 20)

This is certainly a welcome reversal from the racist paternalism that has marked US relations with Indian tribes over the centuries.  It seems that it would make sense to hire tribes to protect and preserve public resources from the feckless government.

Now, Berry is writing for the Property and Environment Research Center (PERC), which is “The home of free market environmentalism”.   From this perspective, she emphasizes the economic incentives and results of the tribal managers, who are, after all, managing their own forests for their own benefit.  Federal forests are managed by bureaucrats for a variety of stakeholders.

Berry floats ideas for giving local managers more flexibility, including reining in lawsuits.  In my observation, this kind of “flexibility” generally leads to both inconsistency and wrong-headed policies.  She also seems to favor the smaller and lower-paid staff of the tribal organization, which may be good for making a profit off the timber, but generally is not a good thing for workers.

Berry has a point about the economic incentives and wage advantages of tribal management.  These, of course, are hardly “Indian” things.  Almost anyone would manage their own land better than someone else’s land.

However, this analysis underplays a key cultural factor.  Indian tribes have long and deep traditions of sustainable land use, and they consider land management to be a collective activity, for the benefit of the whole tribe.  These attitudes and related practices provide deep motivation to take care of the forest and its products, and they generally foster highly effective practices.

So, there is more than economics here, and more than just tribal self-interest.  This is a matter of sovereignty and identity and, most likely, pride.

I’m not a huge fan of outsourcing management of public lands, but if you suggested a program to contract out forest management to local tribes, I’d listen very carefully.  In fact, the latest farm bill has just such provisions, so we’ll see.


  1. Alison Berry, Two Forests Under the Big Sky: Tribal v. Federal Management. Property and Environment Research Center, Bozeman, MT, 2009. https://www.perc.org/wp-content/uploads/old/ps45.pdf
  2. Alison Berry, Two Forests Under the Big Sky: Tribal v. Federal Management, in PERC – Policy Reports. 2009. https://www.perc.org/2009/07/01/two-forests-under-the-big-sky-tribal-v-federal-management/
  3. Brian Bull, Native American Tribes Gaining Recognition For Timber And Forestry Practices, in KLCC – News. 2019. https://www.klcc.org/post/native-american-tribes-gaining-recognition-timber-and-forestry-practices

QuadrigaCX: Early Charge for CryptoTulip of the Year

The CryptoTulip of the Year competition is off to a fast start!

Canadian crypto exchange QuadrigaCX (QCX) exploded and cratered, with an unprecedented oopsie.  As even the mainstream media have reported, the big cheese of QCX died suddenly (in India), and no one seems to know where his secret encryption keys are.  This means that a bunch of customers’ cryptocurrency is locked up in accounts that no one can get to.  It’s not even clear whether anyone knows all the accounts.

Oops!

In hindsight, everyone is wondering just what was going on. Why would you have a multimillion dollar service under the control of one guy, with no backup plan?  And why would anyone entrust their money to such a system—not that anyone really understood that was how it was set up.
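
For what it’s worth, the standard mitigation for exactly this single-keyholder failure has been known for decades: split the key so that any k of n shares can reconstruct it, and no single death (or defection) locks the funds.  Here is a minimal sketch of Shamir’s secret sharing in Python; the prime, the share counts, and the integer-encoded “key” are toy placeholders, illustrating the technique rather than production key management (and certainly not claiming anything about how QCX was built).

```python
import random

P = 2**127 - 1  # a Mersenne prime; any prime larger than the secret works

def split(secret: int, k: int, n: int):
    """Shamir k-of-n sharing: evaluate a random degree-(k-1) polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

key = 0xC0FFEE                     # stand-in for an integer-encoded private key
shares = split(key, k=2, n=3)      # one share each for, say, three officers
assert recover(shares[:2]) == key  # any two of the three can unlock the funds
```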

The dark comedy continues, as the company managed to accidentally send yet more bitcoins to an account they can’t access [1].

Oops!  Again.

Amazingly enough, lawsuits are raining from the sky.

Yessir, definitely an innovative and disruptive technology!


QuadrigaCX has to be a strong candidate for CryptoTulip of the Year for 2019, not so much because of the epic and innovative oopsies, but because of the so-not-Nakamotoan nature of the oopsie.

The entire point of Nakamotoan cryptocurrency is to be decentralized, so that the failure or corruption of a single institution or person does not cripple the system.

At the typical levels of irrational exuberance surrounding crypto technology, there is little room for downers like planning for the possible death of a key individual.  (“We’re disrupting money over here!  We have no time for legacy concerns like death and taxes!”)

In hindsight, it is obvious that QCX was highly centralized, and therefore highly non-Nakamotoan.

But, wait. Bitcoin is decentralized, no?  And there was no error in the Bitcoin protocols or blockchain, right?  So everything should be fine, true?

Obviously not.

QCX also gets CryptoTulip points for being such a useful object lesson.  Essentially, QCX shows us that the alleged properties of Nakamotoan technology—decentralization, trustlessness, anonymity—cannot be assumed to be true for a real system built on top of the technology.

Once again, we see that regardless of the Nakamotoan protocol, the actual real world system includes lots more than the blockchain and “consensus” protocol.  And the rest of the system typically has “centralized” components, and “trusted” parties, and so on. The bad news is, the whole chain still is only as strong as its weakest link.

In fact, generally speaking, the trustlessness of the Nakamotoan protocol means that other parts of the system have to be trusted, including the users (who can very well lose their own keys) and exchanges (which, shockingly enough, have to enforce tax and money laundering laws).  And so on.

Basically, Nakamoto’s design has pushed “undesirable” properties (such as “centralization” and “trust”) out of the core protocol and into the rest of the system.  I think there is a sort of a conservation law for requirements, here.  The total amount of “trust” is inelastic: you can move it around, but you can’t get rid of it.

Psychologically, you may wish that you didn’t need to “trust” third parties, but the fact is that the world is mostly “third parties,” and you have to figure out whom to trust, and how.  Like it or not.

(There Ain’t No Such Thing As A Trustless System. (TANSTAATS?))


So let’s put QuadrigaCX on the board for potential CryptoTulip of the Year for 2019.  There’s plenty of time left, of course, and who knows what great “oopsies” may happen in the rest of 2019.  (Ethereum is still working on a traumatic upgrade hard fork, so that will be interesting.)


  1. Nikhilesh De, QuadrigaCX Lost Another $500K in Bitcoin By Mistake, in CoinDesk. 2019. https://www.coindesk.com/quadriga-inadvertently-sent-btc-to-dead-ceos-cold-wallet-ey-report

 

Cryptocurrency Thursday

Virtual Reality for Bees

Virtual Reality (VR) has always been, at bottom, an applied psychology experiment.  Intercepting sensory inputs, replacing them with synthetic input, and seeing just how “real” the experience can be requires an end-to-end understanding of how the senses work and how sensory data is processed—deep problems of psychological science.

Of course, Homo (“let’s talk about me”) sapiens was first interested in monkeying about with human experience.  But these days Virtual Reality has been explored for other species, including flies, mice, fish, and bacteria (!).  These studies are interesting because they demand that we take a subject’s-eye view, trying to understand what non-humans actually sense about the world.  (This kind of decentration is not the strong suit of our species.)

This winter researchers from Berlin report on work that combines two of my favorite topics, VR and bees [1].  The VR was not intended to entertain the bees; it was to let them move as naturally as possible while being monitored.  The subject bees were placed on a tiny VR treadmill, surrounded by a miniature conical CAVE that presents “realistic” visual stimuli, coordinated with the bee’s movement.

The goal was to examine how bees learn.  The virtual environment was designed to represent a maze, a classic, if not beloved, experimental task.  Bees, unlike rats, are difficult to run through a physical maze.  (If rats could fly, they’d never put up with mazes, either!)  In addition, the goal is to measure neural activity during the operant learning, which ain’t easy unless you can keep the subject in one place.

So the VR is used to move the visual world around the bee as it walks on the treadmill.  The bee stays in place, hooked up to electrodes, and the digital scenery rolls by.
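
The core trick is a closed loop: the treadmill reports the bee’s attempted motion, and the renderer applies the inverse of that motion to the virtual scene.  Here is a minimal sketch of one update step in Python (2-D geometry only; the actual apparatus, stimulus control, and rendering in [1] are far more involved):

```python
import math

def update_scene(objects, d_forward, d_turn):
    """One closed-loop VR step. The bee is fixed at the origin facing +x.
    If the treadmill reports that the bee 'walked' d_forward and turned by
    d_turn (radians), every virtual object moves by the inverse rigid
    transform, so the stationary bee perceives self-motion through the scene."""
    c, s = math.cos(-d_turn), math.sin(-d_turn)
    moved = []
    for x, y in objects:
        x -= d_forward                                # world slides backward past the bee
        moved.append((x * c - y * s, x * s + y * c))  # and rotates opposite its turn
    return moved

# a "maze wall" drifts closer as the bee walks toward it
scene = [(10.0, 0.0)]
for _ in range(5):
    scene = update_scene(scene, d_forward=1.0, d_turn=0.0)
print(scene)  # [(5.0, 0.0)]
```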

Several standard conditions demonstrated that the bees learned to associate colors and odors with rewards.  The neural studies showed that different areas fired in response to the rewarded or non-rewarded target color.  This confirmed that the same “mushroom body” area is modified by this operant conditioning as in earlier studies of classical conditioning, and for visual as well as odor stimuli.  (It would be shocking if this were not true!)

The study was limited by the classic philosophical conundrum of operant learning:  the theory postulates a temporal sequence of presumed neural activity. In principle, the brain responds to neural activity signaling the stimuli, “decides” to approach or avoid the stimuli, and triggers motor signals.

However, it is very difficult to pinpoint this “decision” point, and the clever VR and neural recordings did not make it possible.

“In the search for neural correlates of operant learning, one would like to analyze the point of decision where the animal initiates a walk toward one of the two colors. Unfortunately, such decision points were not obvious since every walking trajectory included several turns or stops making it impossible to isolate attempts to walk toward the color” ([1], p. 17)

This study definitely showed neural plasticity directly correlated with sensory experience.  Unfortunately, little is known about the actual role of the observed neural activity in behavior.  It is important to keep in mind that these are relatively coarse-grained measurements, and that we do not know the actual mechanisms of either the changes themselves (the correlates of learning) or of how the firing influences other parts of the brain (the presumed recall and decision making).  For that matter, we don’t even know for sure whether the neural activity seen is primary (i.e., the site of the learning) or some kind of secondary side effect (e.g., routing of signals from elsewhere).

Using VR does seem to give a cleaner experimental environment for this kind of study.  But it does not magically solve all the hard problems of actually understanding a bee’s brain.


  1. Hanna Zwaka, Ruth Bartels, Sophie Lehfeldt, Meida Jusyte, Sören Hantke, Simon Menzel, Jacob Gora, Rafael Alberdi, and Randolf Menzel, Learning and Its Neural Correlates in a Virtual Environment for Honeybees. Frontiers in Behavioral Neuroscience, 12:279, 2019. https://www.frontiersin.org/article/10.3389/fnbeh.2018.00279

Lithium Mines Seen From Space

Where does Lithium come from?

I have known about the light metal element Lithium for decades as the mysteriously effective mood stabilizer, which has led to synthetic analogs.  In this role, Lithium has saved countless lives, even if we have only the sketchiest idea of why it works.

But in recent decades, most of us have been carrying Lithium in our pockets and bags, in the form of nearly ubiquitous Lithium ion batteries. In this case, we do know how it works, and we also have learned just how exciting Lithium can be when it meets water.  Boom!  If mobile devices weren’t so darned necessary to modern life, we’d never allow such crazily dangerous technology in our homes.

But where do we get Lithium from?

As it happens, there is one main source, in the fittingly exotic high desert of the Atacama.  This area is one of the most remote and hostile places on the planet, home to many important astronomical observatories, and the home of flamingos, alpacas, and other rare animals.

In places there are mineral sands which, in the exceptionally dry conditions, have accumulated water-soluble minerals that in most places would quickly wash into the sea or be incorporated into plants.

Including Lithium.

The NASA Earth Observatory presented satellite imagery of industrial harvesting of Lithium in large evaporation ponds in Chile [1].  The article reports that underground brine is pumped to the surface, where evaporation concentrates it.  The concentrated mineral brine is trucked to plants to be purified into Lithium and other products, which are shipped to China and the rest of the world.
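
The concentration step is simple mass balance.  As a rough illustration (the real grades vary; these are assumed ballpark numbers), if the brine comes up at about 0.2% lithium and the processing plants want feedstock near 6%, the fraction of the brine’s water that has to evaporate is

```latex
f \;=\; 1 - \frac{c_0}{c_1} \;\approx\; 1 - \frac{0.002}{0.06} \;\approx\; 0.97
```

That is, the desert sun has to carry off roughly 97% of the water, which is why the ponds are so huge, and why this only works in a place as relentlessly dry as the Atacama.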

(Satellite image from [1])

Of course, there is concern that these sources may be exhausted as the demand for batteries for electronics and electric vehicles grows.  In particular, it is very probable that these industrial processes will deplete the underground brine faster than the limited inflow of snow melt can replenish it.

As in so many cases, we are indirectly burning water for fuel, which does not seem sustainable for too long.


  1. Adam Voiland, Where Batteries Begin, in The Earth Observatory. 2019. https://earthobservatory.nasa.gov/images/144393/where-batteries-begin

 

Workbar on What is Coworking [repost]

[This was posted earlier here]

Boston-based Workbar coworking talks about “What is Coworking?” [1].

Hey!  That’s my line!

So, what’s their take on the question?

Their subtitle gives a hint: “Benefits, Perks and the Most Important Facts You Need to Know About Shared Workspaces”.

The crux of the matter, in their view, is “Coworking offers many advantages that have proven to help companies and individual professionals grow.”   (This is not exactly the Coworking Manifesto).

Workbar does get the most important point:  “coworking is not only about sharing a physical space to get your work done. Most professionals using a coworking space enjoy the sense of community….”

They list five key features:

  1. The Community Aspect
  2. Creativity and Productivity
  3. Effortless Networking
  4. Lower Costs and Flexibility
  5. All-Inclusive Services and Perks

They correctly identify that community thing as number one, and they make the common assertion that connecting with others increases happiness and productivity (numbers two and three).

The other two points are arguments for Workbar’s specific approach.  In their view, coworking competes on price and convenience.  Obviously, your mileage may differ—these aspects shade into other workspaces, such as home offices and public cafes.

Workbar has a second list of benefits, “Five ways coworking makes your day great” [Infographic].

This list overlaps with the first one, but item number one is “No More Distractions”, which means “get out of the house”.  Item three is “Professional Space”, e.g., for meeting clients.


Finally, Workbar sees coworking as something of interest to “an increasing number of large companies”.  Clearly, this is an important potential market for Workbar.  But I remain extremely skeptical of how well it can work to have, say, IBM and Microsoft workers salted into a room full of freelancers.

“Today an increasing number of large companies are asking employees to work at coworking spaces or at least offering them the option to work from a remote shared workspace on a part-time or full-time basis.”

Sure, it’s cost effective, and might be popular with workers.  (I mean, who wouldn’t like the flexibility of a freelancer with the security of a real job?)  But I have to question whether these workers can really be fully part of a community of independent workers.

Is, say, IBM going to let its workers share their knowledge and activities with random non-IBMers?  They have never been easy about that in the past, with good reason.

And should freelance workers freely commune and help out workers from, say, Microsoft?  This might be great for Microsoft, but I personally don’t like giving away knowledge to mega corporations who give me nothing in return.

And will IBM and Microsoft employees be able to talk to each other?  That’s generally not allowed, for good reason.

Look, the idea of a coworking community is that it is a community of like-minded peers.  And corporate workers may be “like-minded”, but they cannot be peers with people outside their organization.  And vice versa.

In short, conventional employees, especially of large corporations, are not going to fit, and may tend to break the community that is so critical for coworking. So I have to strongly disagree with the notion that coworking is going to work the same way for companies as it does for independent workers.

However, I can see that companies will like flexible, inexpensive, even Bring-Your-Own workspace.  And I imagine that some workers may like working near these communities, if not exactly among them as peers.

But I really don’t think this is a formula for good community.


  1. Workbar. What Is Coworking | Learn the Many Benefits of Coworking | Workbar. 2019, https://www.workbar.com/what-is-coworking/.

(For more, see the book, “What is Coworking?”)

 

What is Coworking?

 

Book Reviews: Two Cool Books on Botany

Brilliant Green by Stefano Mancuso and Alessandra Viola
The Hidden Life of Trees by Peter Wohlleben

This isn’t your grandfather’s botany anymore!

In recent decades, we have learned a whole lot of stuff about life of all kinds. Entire new kingdoms of life.  Extremophiles.  Possible exospecies.  Metagenomics, genetic exchanges.  More and more details about nanoscale features of matter that blur the boundary of “life”.

But nowhere have there been cooler discoveries—including rediscoveries of long-overlooked knowledge—than in plants.  This is ironic, because all life, especially human life, depends on plants.  You’d think we would know them better.

Two recent books recount some of the most interesting new understandings of our leafy cohabitants.  In English, a “vegetable” is a metaphor for a crippled, senseless, unthinking, speechless person, alive, but nothing more.  We now know that real vegetables are far from this kind of “vegetable”.

These realizations are leading to a recognition that the plant kingdom is an alien intelligence, living among us. There are many reasons we don’t understand plants, the biggest being time: plants live long, slow, stately, Ent-like lives.  They also live underground, where surface dwellers cannot see a thing.  And plants are really, really different than animals, so our intuitions simply lead us astray when applied to plants.

To understand plants, we really need to step outside of normal human perception and thinking.  In part, this is aided by scientific instruments that extend our senses.  But it also means that we need to get over ourselves and think more universally.  Does “hearing” mean “listening with an ear and a brain”?  Or does it mean “detecting and responding to vibrations”?  And so on.

These two books are passionate summaries of many new and old findings, painting the picture of plant life that is complex, intelligent, and social.  The authors are nakedly pro-plant, coming down strongly for ethical treatment of plants in ways that go beyond traditional religious teachings and current legal frameworks.


Brilliant Green by Stefano Mancuso and Alessandra Viola

This book is presented as “gee, whiz!  Nobody knew that plants could X”.

Well, actually, I already knew about a lot of this material, so the rhetoric was lost on me.

On the other hand, I can say that most of this is definitely based in sound science, or at least reasonable inference from incomplete understanding.  I.e., it’s real stuff, however it is presented.

The basic thrust is that plants have “senses”, and they behave and communicate.  And since these capabilities are deployed for survival and optimization, plants are arguably “intelligent”.

Why is this even a little controversial?  Aside from religious objections (if “intelligence” is attributed to a “soul”, then there is a serious theological question about what kind of “soul” a plant has, and what that might mean), the big thing is that plants are so, so different from animals that we have to agree on the definitions of the words.

For example, if “hearing” is defined as what humans do via their ears, then plants (and snakes and sharks and lots of other things) don’t qualify as having a sense of “hearing”.  But if “hearing” is defined as detecting and responding to sound and vibration, then lots of species, including plants, definitely have a sense of “hearing”.  A completely different implementation of the concept, but the same concept.

And so on.

Also, as everybody who has ever read Tolkien knows, plants operate at a totally different time scale from “hasty” hobbits and humans. Plants also live largely underground, where we can’t sense them.

And so on.

But if “intelligence” is problem solving, then everything that has solved the problem of staying alive is “intelligent”.

Mancuso and Viola give a detailed summary of plant intelligence, and it’s really interesting.

“The most recent studies of the plant world have demonstrated that plants are sentient (and thus are endowed with senses), that they communicate (with each other and with animals), sleep, remember, and can manipulate other species.  For all intents and purposes, they can be described as intelligent.  The roots constitute a continuously advancing front line, with innumerable command centers, so that the whole root system guides the plant like a kind of collective brain—or rather a distributed intelligence—which, as the plant grows and develops, acquires information important to its nutrition and survival.” ([1], p. 156)

 

There are flaws in this book.  The “gee whiz” tone is borderline annoying.  The philosophical and historical background is very Eurocentric and overall sketchy.  The citations and endnotes are far too sparse for my tastes, though there certainly is good material there.

I was particularly struck by the repeated claims that plants are 99% of life on Earth, compared to less than one percent animals. This number is basically wrong, because 99% of life is microorganisms, and most of them are not really plant or animal. And the large scale plants and animals are heavily symbiotic with and hosts to vast legions of microorganisms.  Plants are cool, and there are more of them than animals, but they are still only a tiny fraction of all life.


The Hidden Life of Trees by Peter Wohlleben

This is a super cool book.  Sensei Wohlleben is a career forester in Germany, definitely a tree lover.  But also, he really, really knows trees, or more importantly, forests.

This book is jam-packed with interesting facts about trees and forests.  He has a rather poetic style (even in the English translation), but everything is backed up with scientific findings as well as personal experience.

Some of this material I had already heard, including the amazing underground networks of roots and fungi.  Other things I hadn’t thought about, like how shedding leaves helps a tree survive high winds in winter.

There are also some things I “knew” that are probably wrong.  In grade school we were taught theories about how trees suck water up to their high, leafy crowns.  These hand-wavy just-so stories turn out to be physically impossible.  (E.g., capillary action can’t lift water higher than a meter or so, and the supposed pull from transpiration isn’t anywhere near strong enough to raise water 50 or 100 meters into the tree—and doesn’t happen at night, anyway.)
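
The capillary claim is easy to check with Jurin’s law.  Taking water’s surface tension γ ≈ 0.073 N/m, perfect wetting (cos θ ≈ 1), and a xylem vessel radius of about 25 µm (a rough, assumed figure; real vessels vary widely), the maximum capillary rise is

```latex
h \;=\; \frac{2\gamma\cos\theta}{\rho g r}
  \;\approx\; \frac{2 \times 0.073}{1000 \times 9.8 \times 25 \times 10^{-6}}
  \;\approx\; 0.6\ \mathrm{m}
```

Nowhere near the 50 to 100 meter crown of a mature tree, just as the book says.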

Sensei Wohlleben appears calm and wise (very much like “his” forest) and is unafraid to write in a very Romantic style.  He delivers his science in a very animistic, almost mystical fashion, speculating on how trees “feel”, what they “think”, “learn”, and “remember”, and how they “talk” to each other.  He documents how trees care for their “children” and help their “friends”.  And he frequently empathizes with the “pain” caused by injuries and assaults.

We aren’t meant to take these terms literally; they are beautiful metaphors for the wondrous features and behavior of trees and forests.  They also help us bridge the cognitive gap between our own lives and the very slow, long lives of trees.

By the end, he is willing to own his deep empathy with forests, and we are ready for some breathtakingly radical sentiments: “I, for one, welcome breaking down the moral barriers between animals and plants. When the capabilities of vegetative being become known, and their emotional lives and needs are recognized, then the way we treat plants will gradually change, as well.” ([2], p. 244)

Animal rights, hell.  Vegetative being rights!

This leads to a complete rethinking of “forestry” policy.  It should not be about producing wood for people to use; it should be “analogous to our treatment of animals—whether we spare the trees unnecessary suffering when we do.” ([2], p. 242)

It isn’t too surprising that, from the point of view of trees, humans and human activities are almost all bad news.  Many human interventions are poorly thought out (if any thought was applied at all) and usually counter-productive. The best thing for forests is to leave them alone, though that’s pretty much impossible by now.

And speaking of humans, I hadn’t realized just how bad “urban forests” are for the poor trees that make them up.  Urban trees live very unnatural lives in many ways, and generally don’t live as well as trees in less disturbed forests.  I love the trees in my yard and in my town.  But I am looking at them with new sympathy for the cruel, short lives we have condemned them to.

I learned a lot from this book and I enjoyed reading the beautiful prose. All science should be so artfully phrased.  This is not a book to zip through.  I found it necessary to read slowly and carefully.  Like a tree.


 

Plant intelligence is alien—hard for us to understand or even detect.  This reminds us of one of the answers to Fermi’s Paradox (“Where is everybody?”): the aliens are all around us, but we don’t recognize them.

There is so much cool stuff here.  I find myself wanting to start a new career (or several new careers), exploring plant intelligence and plant societies.  We need a community model of a plant root tip.  We need detailed lexicons of the signaling systems of plants.  We need methodologies to non-destructively monitor the behavior and communication of plants.  So much great science to be done!

There is also a great source of bioinspired design here.  Mancuso and Viola point out the possibility of “robot plants”, inspired by the decentralized, swarm intelligence of plants.

It is interesting how both these books come around to the same basic philosophical position favoring expanded ethical consideration for plants.  These books help make the case for the Swiss declaration on the dignity of plants.

I for one welcome this argument.  Let’s move away from the purely human centered “pragmatic” arguments about all the benefits we accrue from plants. Of course, we need to take care of our food web and everything else we depend on.  But human need is not the only thing we should care about.

As Wohlleben put it:

“The real question is whether we help ourselves only to what we need from the forest ecosystem, and—analogous to our treatment of animals—whether we spare the trees unnecessary suffering when we do.” ([2], p. 242)



  1. Stefano Mancuso and Alessandra Viola, Brilliant Green: The Surprising History and Science of Plant Intelligence, Washington, Island Press, 2015.

  2. Peter Wohlleben, The Hidden Life of Trees: What They Feel, How They Communicate, Vancouver, Greystone Books, 2016.

 

Sunday Book Reviews

 

Automated Bug Finding and Fixing

Way back when I was a young programmer, we dreamed of not only automated code generation, but also automated code analysis.  The former was hard, the latter was even harder.  If we are going to create software to optimize everything else, then surely we can optimize our own dogfood.

Automatic detection of bugs is very difficult for the simple reason that software has so many possible paths it might follow.  And, almost by definition, a bug is a rare event, a path not often traveled, often a very obscure path way off the main roads.  (A bug that happens every time is not a “bug”; it is “software that is wrong,” and such software never even gets used.)

Finding bugs in software is basically searching all the possible things that it might do, to discover the bad ones.  The good news is that there are almost certainly lots of bugs to find.  The bad news is that the haystack is so enormous that it’s very difficult to search.
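
Just how enormous?  A crude count makes the point.  Ignoring loops, a program with n independent branch points has up to 2^n distinct execution paths, so even a modest 300 branches allows

```latex
2^{300} \approx 2 \times 10^{90}
```

paths, comfortably more than the number of atoms in the observable universe (around 10^80).  Exhaustive enumeration is hopeless; any practical tool has to prune, sample, or summarize.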

There are two generic approaches to finding the buggy needles.  One is to somehow trace paths, checking each one.  The other is some form of button mashing:  bombard the code with slews of input to discover what happens.

Both of these methods are difficult to scale up.  Real software has so many paths that it is impossible to cover them all.  For that matter, a lot of real software has unknown paths, especially from external events.  Anything connected to the Internet could, in principle, receive a packet at any moment from any other software on the Internet.  Rather a large area to search!
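
For concreteness, here is the “button mashing” approach at its most naive, in Python.  (Real fuzzers such as AFL add coverage feedback and input mutation; this toy, with its made-up parse() target, mostly demonstrates why blind bombardment scales so badly.)

```python
import random

def naive_fuzz(target, trials=100_000, max_len=16):
    """Throw random byte strings at `target`; record any input that crashes it."""
    crashes = []
    for _ in range(trials):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(max_len + 1)))
        try:
            target(data)
        except Exception as exc:        # an uncaught exception is our "crash"
            crashes.append((data, exc))
    return crashes

def parse(data: bytes):
    """Toy target with one rare buggy path."""
    if data[:4] == b"BUG!":
        raise RuntimeError("rare path reached")

# The odds of randomly producing the four magic bytes are about 256**-4,
# roughly one in four billion: 100,000 blind trials will almost never hit it.
print(len(naive_fuzz(parse)))  # almost certainly 0
```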

But Moore’s Law applies here.  So progress has been made.

This winter, David Brumley reports on “Mayhem, the Machine That Finds Software Vulnerabilities, Then Patches Them” [2].  This system was entered in DARPA’s 2016 Cyber Grand Challenge, which was a contest to see what unassisted automated systems could accomplish.  Brumley reports that despite crashing less than halfway through, they won.

For someone who encountered the problem back when our computers used steam, brass, and rivets, this program is unbelievable.  It analyzes binary programs, discovers bugs, and generates patches for the bugs.  I literally do not know how this could possibly work!  But apparently it does.  (Definitely a case of “Clarke’s Third Law”.)

So what could possibly be wrong with this picture?

The system works in part by translating the executable code into an abstract description that can be rapidly “executed” to discover anomalies.  Creating and validating this translation is no easy task, and it must be done for every executable format, and those formats grow and change all the time.  And, by the way, you have to prove that both the hardware and the symbolic execution software are correct—a nasty logical recursion.
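
The other half of the toolbox is symbolic execution: treat the input as a symbol, collect the branch conditions along a path, and ask a constraint solver for a concrete input that reaches it.  A minimal illustration with the Z3 solver’s Python API (this shows the general technique, not a claim about Mayhem’s actual internals; the branch condition is invented):

```python
from z3 import BitVec, Solver, sat

# Suppose the recovered code contains:   if (x * 3 + 7 == 0x1234) crash();
# Rather than guessing x, model it symbolically and solve the path condition.
x = BitVec("x", 32)           # a symbolic 32-bit input

s = Solver()
s.add(x * 3 + 7 == 0x1234)    # constraint collected along the "crash" branch

if s.check() == sat:
    print("input reaching the buggy branch:", s.model()[x])  # e.g., 1551
```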

Bear in mind that both hardware and software are constantly changing.  And I do mean constantly, like every minute of every day.  This is why we need automated systems like Mayhem, but it also means that such a system will need to run over and over on the same software every time something important changes.  Human debuggers are familiar with this, and automated debuggers face the same challenge—can we finish fixing it before it changes and we have to start over?

As I noted, this kind of bug checking really can’t find bugs that arise outside the program itself.  Some of the most dangerous bugs have been hidden side effects or evil interactions when otherwise harmless code runs in certain environments, or beside certain other software, or with certain combinations of inputs.  No program checker can deal with these cases.

For that matter, most software runs on top of operating systems and uses libraries and services.  In principle, this means that correctness depends on a huge, hard-to-define constellation of software.  For example, a tiny little ‘hello world’ on a Microsoft laptop actually uses millions of lines of code.

I’ll also note that the automatic patches generated will need to be checked.  Human debuggers have a tendency to see the trees and not the forest, generating a patch that “fixes” the symptom without fixing the cause.  The automated patcher will surely have this challenge as well.  (Note that it is pretty common to patch a bug and have it reappear in the next version of the software, because the source of the bug has not been fixed.)

To be fair, this technology is not really intended to be applied to all software all the time.  DARPA and others are interested in superhardening key infrastructure at the core of systems and networks. This is a relatively small amount of software, and generally tightly controlled so it does not change in uncontrolled ways.  Improving the quality of these core services gives a more secure base to deal with other, possibly more buggy software.

One final point.  I’m sure that DARPA and others are well aware of the “mental” hazards of a great tool like this.  To the degree that Mayhem works so magically, there is a tendency to trust it, and even to trust it exclusively.  Push it through Mayhem, do the patches, and we’re good and done, right?  Well obviously, we should still be thinking carefully and skeptically about software, even more so when it has been “magically” improved.  I expect that one of the features of Mayhem will have to be “explanations” of its findings and fixes, which engineers will have to review carefully.  (And we’ll probably make tools for automatically mining and understanding such explanations….)

Make no mistake, this is absolutely awesome software that will be incredibly valuable.  But it certainly isn’t going to put software debuggers out of business.


  1. David Brumley, Mayhem, the Machine That Finds Software Vulnerabilities, Then Patches Them, in IEEE Spectrum – Software. 2019. https://spectrum.ieee.org/computing/software/mayhem-the-machine-that-finds-software-vulnerabilities-then-patches-them
  2. David Brumley, The White-Hat Hacking Machine: Meet Mayhem, winner of the DARPA contest to find and repair software vulnerabilities. IEEE Spectrum, 56 (02):30-35, 2019.

