How flowering plants conquered the world

Back in the days of the dinosaurs, there were lots of ferns and other plants. (Yum, yum.)

But in the Cretaceous a new line, the angiosperms—flowers—emerged and flourished, and soon dominated the Earth. (A whole lot of new yummy!)

Just how did these new life forms emerge and achieve success so rapidly and completely? What features or circumstances enabled these plants to outcompete other plants?

This month Kevin A. Simonin and Adam B. Roddy report a new theory of what is special about angiosperms [2].

The underlying concepts depend on deep biochemical details of photosynthesis, which depends on the availability of water inside the leaves. Leaves are exposed to air, from which CO2 is absorbed, but maintaining photosynthesis requires that the leaves not dry out. So, “increasing leaf surface conductance to CO2 also requires increasing rates of leaf water transport in order to avoid desiccation” ([2], p. 2).

Water transport is, in turn, limited by the size of cells in the plant. “[L]eaves with many small stomata and a high density of veins can maintain higher rates of gas exchange than leaves with fewer, larger stomata and larger, less numerous veins” ([2], p. 2).

The third piece of the argument is that of the many factors that influence the size of cells in a plant, the minimum size of a cell is constrained by the size of its nuclear material, which is primarily its genome. Plants vary greatly in genome size, and species with larger genomes generally have larger cells.

The idea, then, is that plants with smaller genomes can develop smaller cells, with higher density. This leads to higher water transport and higher photosynthesis.
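To make the scaling concrete, here is a back-of-the-envelope calculation (my own sketch, not from the paper) using a standard anatomical formula for maximum stomatal conductance. The leaf parameters below are assumed, merely plausible magnitudes; the interesting part is the ratio: halving every linear dimension of a stoma while quadrupling the density keeps the total pore area the same but doubles the conductance, because the diffusion path through each pore is shorter.

```python
import math

D_W = 2.49e-5    # diffusivity of water vapor in air, m^2/s (~25 C)
V_AIR = 0.0244   # molar volume of air, m^3/mol (~25 C)

def g_max(density, pore_area, pore_depth):
    """Anatomical maximum stomatal conductance, mol m^-2 s^-1.
    density: stomata per m^2; pore_area: max pore area, m^2; pore_depth: m."""
    end_correction = (math.pi / 2) * math.sqrt(pore_area / math.pi)
    return (D_W / V_AIR) * density * pore_area / (pore_depth + end_correction)

# Hypothetical leaf with few, large stomata...
g_big = g_max(density=100e6, pore_area=100e-12, pore_depth=20e-6)
# ...versus one with linear dimensions halved and density quadrupled,
# so the total pore area per unit leaf area is unchanged:
g_small = g_max(density=400e6, pore_area=25e-12, pore_depth=10e-6)
print(g_small / g_big)  # ~2: same pore area fraction, double the conductance
```

Smaller cells really do buy more gas exchange per unit leaf area, which is the physiological payoff of genome downsizing.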

S&R support this hypothesis with a survey of 400 (contemporary) plant species. The data show a strong correlation between genome size and cell size and density, and, as a consequence, with rates of gas exchange. These relationships hold across all plants.

The evolutionary story, then, is about the ‘strategic’ downsizing of genomes. Over time, plants evolve larger and smaller genomes through various mechanisms. S&R argue that in the Cretaceous, some species developed smaller genomes, and “genome downsizing expands the range of final cell size that is possible” ([2], p. 8). This plasticity increases the potential breadth of habitats, and allows higher maximum productivity.

The bottom line is that Cretaceous angiosperms, and only the angiosperms, developed smaller genomes, which “allowed them to outcompete other plants in almost every terrestrial ecosystem” ([2], p. 9), even in the face of worldwide declines in atmospheric CO2 levels. The rest, as they say, is evolutionary history.


Flowers are one of the signatures of planet Earth (we could as well call it “Planet Flower”), and we humans deeply love and connect with them.  They are also entwined with the development of animal life.  In the millennia after they emerged, they coevolved with pollinators (bees!) and herbivores (dinosaurs!), and spread to fill the land with color and scent and munchy goodness.

But this study suggests that the original success over other plants is due to very fundamental biochemical and mechanical processes.  That’s pretty cool.

Our current Anthropocene Age is pressing hard on these glorious life forms. Loss of habitat and encroachment of human activities are threatening many species and whole ecosystems. Humans coevolved with plants, and it is far from clear how humans will fare in future with a dramatically changed plantscape.

  1. Helen Briggs, How flowering plants conquered the world, in BBC News – Science & Environment. 2018.
  2. Kevin A. Simonin and Adam B. Roddy, Genome downsizing, physiological novelty, and the global dominance of flowering plants. PLOS Biology, 16 (1):e2003706, 2018.


PS.  Some good names for bands:

Rapid genome downsizing
Diffusivity of Water in Air
The Gymnosperms



Slow Down, Work Better?

The contemporary “Gig Economy” is said to be the New Way of Working. Freelance workers are “free” to hustle for gigs and work as much or as little as they want.

But people are still people, and work still sucks, mostly.

But workers are on their own.

It isn’t too surprising to me that both the Coworking Movement and the Freelancers Union are coming to talk about mental health.  Liz Elam includes “wellness” and dealing with loneliness as a top megatrend in coworking.

And this month, Sensei Tyra Seldon muses on “slowing down” in the Freelancers Union Blog.

I admit that my reaction to her headline, “Can slowing down make you more productive?” was, “I hope the answer is, ‘yes’?” For one thing, going slow is definitely in my personal wheelhouse. :-) But also, advancing faster by moving slower is a natural strength of older workers, who face brutal challenges in the gig economy.

Anyway, what Sensei Seldon is actually talking about is not so much working slower, as living simpler.  In particular, she’s talking about turning it off.

She starts with the ubiquitous problem of digital distraction. Recording how she spends her time yielded alarming results: lots of activity, much of it irrelevant.

Whereas I thought my 60-hour weeks were signs of my being a dedicated entrepreneur and being uber productive, this reality check proved otherwise.

She did the obvious experiment, i.e., turning it off. Spending more time in face-to-face conversations. She also started to redefine “productivity” to include “things that were meaningful and valuable”, such as meditation, prayer, and journaling.

And she liked it.

Even better, she worked better.

I don’t think I can fully go back to the person who I was

I’m not in the least surprised by Seldon’s experience.  There is a large and growing literature that tells us that constant digital engagement is bad for you in many ways. (here, here, here, here, here, here)

It is also true that one of the principal reasons contemporary coworking was created is to meet the need for face-to-face interaction. Today’s workers are well connected digitally, but many are more socially isolated than ever. It is important not just to unplug to take care of yourself; we also have to take care of each other. The best way to do that is to talk face-to-face.

These problems have been around for a long time. Working in a conventional organization is generally just as bad as, or worse than, freelancing in this regard. In a conventional job, it isn’t easy to tell your boss that you don’t look busy because you are doing something more important than her deliverables.

The best thing here is that Freelancers actually can unplug and focus on more than being “busy”.  In this, the contemporary Gig Economy is directly attacking one of the most critical problems facing contemporary workers.  If Freelancing and Coworking end up actually helping people  live a better life, then they will be counted as great and successful innovations in working.

  1. Tyra Seldon, Can slowing down make you more productive?, in Freelancers Union Blog. 2018.

Narayanan and Clark on Bitcoin’s academic roots

For an old grey-headed programmer, Bitcoin has always been a bit weird technology.

The big thing, of course, is that it is deliberately designed to be slow. My whole career has been basically about trying to make software go faster, so the computation that has no purpose except to take a long time just feels wrong.  I understand it intellectually, but it’s just not right, deep down.

The other thing about Bitcoin is that none of the pieces is new, though the specific way they are used is. For example, I was doing peer-to-peer networks (with hash addresses) before the Nakamoto paper [1], so there was no news there.

So what, exactly, is new about Bitcoin?

I was very pleased to read Arvind Narayanan and Jeremy Clark’s recent article reviewing “Bitcoin’s academic pedigree” [2]. N&C review the academic papers that present many of the key technical features used in Nakamotoan cryptocurrencies.

“[B]y tracing the origins of the ideas in bitcoin, we can zero in on Nakamoto’s true leap of insight—the specific, complex way in which the underlying components are put together.” (p. 38)

They point to six lines of technical innovation from the 1980s and 90s that are critical to Nakamotoan cryptocurrencies:

  1. Linked Timestamping, Verifiable Logs
  2. Digital Cash
  3. Proof of work
  4. Byzantine Fault Tolerance
  5. Public Keys as Identities
  6. Smart Contracts

Figure 1. Chronology of key ideas found in bitcoin. (from [2], p. 38)
In some cases, Nakamoto acknowledges the academic predecessors, and in others he doesn’t. In part that is because some of the ideas were so widely known that they seem “obvious” and “common knowledge”, even if they were first written about only in the last forty years. It is also possible that Nakamoto may have reinvented some of the concepts, perhaps inadvertently reverse engineering from example systems known to him, without tracing their origins.

Nakamoto was obviously following up on earlier concepts for digital money, including hashcash, which used a form of proof-of-work based on hashing. N&C note that there was a lot of academic interest in proof-of-work, and several lines of work seem to have independently converged on ideas about using hashing as proof-of-work in peer-to-peer networks. In the last fifteen years, these efforts have been recognized to be the same idea, and the terminology, including the term “proof-of-work”, has been standardized.
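The hashcash-style puzzle is simple to sketch. The toy Python below is my own minimal illustration, not Bitcoin’s actual block-header format: find a nonce whose hash falls below a target, so the work is expensive to do but trivial to verify.

```python
import hashlib

def proof_of_work(message: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(message || nonce) falls
    below a target, i.e. starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Finding the nonce takes ~2**16 hash attempts on average;
# checking it takes exactly one hash.
nonce = proof_of_work(b"block header", difficulty_bits=16)
```

The asymmetry between finding and checking is the whole point: it is what lets strangers on a network audit each other’s “work” cheaply.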

Nakamoto also uses widely known public-key cryptography to implement secure but anonymous digital signatures. The use of public keys as identifiers is central to Bitcoin, and Bitcoin is one of the most successful implementations of that concept. However, Nakamoto actually punts the problem of key management, which has certainly led to issues, as well as to the development of alternative cryptocurrencies that deal with keys and identity in different ways.
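The “public keys as identities” idea is equally compact: an identity is just a hash of a public key, with no registry or authority involved. The sketch below is deliberately simplified (a stand-in key value, and SHA-256 alone to stay dependency-free); real Bitcoin addresses hash with SHA-256 followed by RIPEMD-160 and encode the result with Base58Check.

```python
import hashlib

# Stand-in bytes for a compressed public key (hypothetical value; a
# real key would come from an elliptic-curve keypair).
public_key = bytes.fromhex("02" + "ab" * 32)

# The identity is simply a hash of the key. Anyone holding the matching
# private key can prove ownership by signing; nobody has to register.
identity = hashlib.sha256(public_key).hexdigest()
print(identity[:16])  # a short pseudonymous identifier
```

Lose the private key, though, and the identity (and any coins tied to it) is gone forever, which is exactly the key-management problem Nakamoto punted.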

N&C argue that Nakamoto’s contribution, his “genius”, was “the intricate way in which they fit together” these pieces from academic and practical research. Nakamoto’s system is a triad, with each piece supporting the logical flaw in the other pieces (p. 42).

  • Secure ledger: prevents double spending and ensures the currency has value; needs distributed consensus.
  • Distributed consensus (mining): ensures the security of the ledger; needs to be incentivized, i.e., by a valuable currency.
  • Valuable currency: incentivizes the honesty of nodes; needs a secure ledger.

This is an extremely useful insight, which explains why it has been so difficult to describe the “one big idea” underlying Bitcoin.  In fact, it is a clever combination of big ideas, glued together in a specific way that works pretty well in practice.

It would be an interesting follow up to this paper to identify the “innovations”, if any, in various alternative and derivative cryptocurrencies. There have been a number of alternatives to the Nakamotoan proof-of-work proposed and explored. There have been alternatives to the peer-to-peer topology of the consensus network, as well as many different ideas about incentives. In short, there is probably a landscape of contemporary cryptocurrency design, with many neighbors in Bitcoin’s neighborhood.

I would add that there is a social dimension to the Bitcoin story (besides incentives). Bitcoin succeeded beyond the simple merits of its technology because it hit a particular time and place (the 2009 global crash) and had a supremely effective salesman (“Satoshi Nakamoto”, and the legions of enthusiastic Nakamotoans) who told and retold and still tell the story.

This combination of a clever technology built “just right” from existing concepts, arriving at the right moment, announced by a supreme salesman reminds me of NCSA Mosaic.  I remember that when I first saw the Mosaic browser, I immediately knew all the pieces it was built from.  Yet it was a new wrinkle, combining the familiar technologies, “just right”.  It also hit at the right moment (the Internet was exploding) and found a cheerleader in Larry Smarr—one of the greatest sales-beings I have ever encountered.

Bitcoin too succeeded by having a clever combination of technologies (including the strategically critical “leaving out” of key management), a fortunate historical moment, and an able storyteller.  (We can also see parallels in the overheated claims and financial bubbles of the early WWW and Bitcoin.)

This is a great paper, well worth the read.  N&C give us a better idea of the “genius” of Satoshi Nakamoto, and also insight into ongoing technical and social developments.

  1. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.
  2. Arvind Narayanan and Jeremy Clark, Bitcoin’s academic pedigree. Communications of the ACM, 60 (12):36-45, November 2017.


Cryptocurrency Thursday

IoT Isn’t Even Close To Trustworthy

I have already beefed about the current version of the Internet of Things, which is of dubious value and badly engineered, to boot. (Here, here, here, here, here, here)

The most visible face of these developments are the network connected home “Assistants”, such as Alexa, Siri, Google Home, and so on. Aside from the extremely questionable rationale (Why do I need a voice interface to my refrigerator? Why do I need my refrigerator to connect to the entire freaking Internet?) there are famous cases that illustrate that these beasts are deeply invasive.

Last fall, Hyunji Chung and colleagues at the US National Institute of Standards and Technology (NIST) wrote about the trustworthiness of these systems.

“[S]uch interactions should be solely between you and the device assisting you. But are they? How do you know for sure?” (p. 100)

These are complicated, network connected systems which are not trivial to understand and evaluate. But they are in our homes, so everyone needs to know just how far to trust them.

The researchers sketch the “ecosystem” of network connected components and services. The very fact that they are complex enough to warrant the term, “ecosystem”, is the fundamental problem.

“[W]e performed cloud-native artifact analysis, packet analysis, voice-command tests, application analysis, and firmware analysis” (p. 101)

Uh, oh. Does anyone besides me see a problem with deploying such a system unsupervised in private homes?

The threat envelope is huge. The basic logic of the assistant is implemented mainly in “the cloud”, with components on local devices that communicate with the cloud. Many assistants have third party apps as well. They report that the Alexa “Skill Store” has 10,000 such voice-actuated apps.

The point of the analysis is, of course, risk assessment. They identify many, many risks—basically, everything that might threaten the Internet.

  • Wiretapping
  • Compromised devices
  • Malicious voice commands
  • Eavesdropping

Wireless communication is, of course, a weakness. The researchers report the appalling fact that not all the communications are encrypted. Even when encrypted, traffic sniffing can still reveal considerable information about the devices and users.

Obviously, devices may be hacked. In this case, there is no expert IT department to defend the network, detect intrusions, or patch bugs. One has to think that home devices are relatively defenseless, and certain to be cracked over time.

One reason I don’t like voice commands is that they are hard to secure. Even the best voice recognition systems are vulnerable to mistakes, and low-cost, consumer-maintained systems probably aren’t top of the line. (And who wants their Alexa to reject commands because it isn’t certain that you are really you?)

And, of course, every link is a potential channel for someone to listen in on your life.

This article makes clear that these systems have a lot of potential issues, even if they are configured correctly and work as designed. Unfortunately, personal and home devices are not likely to be carefully configured or monitored. I have a PhD in computer science and have done my share of sysadmin work, and I have not the remotest clue how to securely set up and maintain one of these systems.

These researchers carefully don’t answer the question, “can I trust you?” But it is very clear that the answer is “no”.

I’m afraid that people are taking these devices on faith. They are sold as appliances, and they look like appliances, so they must be as safe as a consumer appliance, right?

Well, no.

This is a really great article, and everyone should read it before turning on any cloud service, let alone installing an “assistant” in their home.

And if you don’t understand what this article says, then you definitely shouldn’t install one of these assistants in your home.

  1. Hyunji Chung, Michaela Iorga, Jeffrey Voas, and Sangjin Lee, “Alexa, Can I Trust You?”. Computer, 50 (9):100-104, 2017.

“Wearable” Sensors for Plants

I saw the headline about “wearable sensors for plants”, so I had to have a look.

Of course, the word “wearable” is kind of dumb here.

However, the technology is actually pretty cool: “a simple and versatile method for patterning and transferring graphene-based nanomaterials onto various types of tape to realize flexible microscale sensors.” [2]

Printing various patterns on tape can create sensors that measure strain, pressure, or moisture, for instance.  The sticky tape can attach to anything, including leaves of plants.  This is a cheap way to whip up and add sensors to the real world, including agricultural crops.
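As an illustration of how the strain version of such a sensor would be read out (my own sketch, not from the paper): strain changes the resistance of the printed trace, and the standard strain-gauge relation ΔR/R = GF · ε recovers the strain. The gauge factor and base resistance below are hypothetical placeholders.

```python
# Hypothetical calibration constants for a printed conductive trace;
# the gauge factor and base resistance are placeholders, not values
# from the paper.
GAUGE_FACTOR = 2.5
R0 = 1000.0  # unstrained resistance, ohms

def strain_from_resistance(r_measured: float) -> float:
    """Invert the strain-gauge relation dR/R0 = GF * strain."""
    return (r_measured - R0) / (R0 * GAUGE_FACTOR)

print(strain_from_resistance(1005.0))  # 0.002, i.e. 0.2% strain
```

A moisture or pressure variant would follow the same pattern: measure a change in an electrical property, then invert a calibration curve.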

Pretty cool, even if plants don’t actually “wear” them.

  1. Liang Dong, Engineers make wearable sensors for plants, enabling measurements of water use in crops, in Iowa State University – News Service. 2018.
  2. Seval Oren, Halil Ceylan, Patrick S. Schnable, and Liang Dong, High-Resolution Patterning and Transferring of Graphene-Based Nanomaterials onto Tape toward Roll-to-Roll Production of Tape-Based Wearable Sensors. Advanced Materials Technologies, 2 (12):1700223-n/a, 2017.


Dark Energy Survey Data Available

If the fate of the Antarctic ice is the single most important question about our own planet, then, looking outward, the most important question surely must be “What is Dark Energy?”

For the past decade, the Dark Energy Survey (DES) has been measuring vast swaths of the visible sky, with the goal of better understanding dark energy. The DES is an awesome project, and a worldwide collaboration: the paper that ‘splains the data dump has about 200 authors listed.

I’m particularly fond of this project not only because of the sheer romantic appeal (we basically have no idea about the physics of 95% of our universe), but also because the data is collected every night in Chile, and shot up the spine of the Americas to the National Center for Supercomputing Applications, my old institution. (I used to have an office just down the hall from the team who built that part of the data system.)

After the first three years of data collection, the DES has just dropped a huge public “Data Release 1”.  Come and get it!

I haven’t really looked at the data in any detail, though I can confirm that it is definitely open to the public.

I’ll note that this is yet another example of the challenges of “citizen science”. Anyone can have this data, and can do whatever they want with it.  Should we expect a flood of cool discoveries from the Internet “crowd”?  I wouldn’t bet on it.

The data is not pretty pictures, and doing science with it requires quite a bit of technical knowledge. In fact, just understanding how the data was created requires a ton of background. The researchers have put a lot of work into creating solid, useful data [1].

This just goes to show that real science (as opposed to Hollywood or Washington science) isn’t just looking at a screen and saying, “aha”.  Making data available is great, but it neither makes scientists redundant, nor necessarily generates more knowledge.

  1. T. M. C. Abbott, F. B. Abdalla, S. Allam, et al. (The Dark Energy Survey Collaboration), The Dark Energy Survey Data Release 1. 2018.



Book Review: “Quillifer” by Walter Jon Williams

Quillifer by Walter Jon Williams

Williams’s new novel is labeled “Book One”, and, as expected, the story introduces a new character, Quillifer (only one name), and a new fantasy world. There will surely be sequels.

The fantasy world has horse-and-gunpowder technology plus magic, a variety of interesting religions and institutions, intriguing architecture, and the country has just entered a civil war. Williams describes the economy in some detail, including a charmingly elaborate guild system.  There is also naval and land combat, in considerable detail.

The world is worked out in juicy detail. The buildings and cities are described in such detail that I suspect he has built them in a computer simulation. The same goes for the garments.

Quillifer himself is a bit of a rogue, though he’s good hearted and generally nice.  (Bearing in mind that this is a first person narrative.) His pleasant small town life is overturned by war, and he goes out into the world.  Stuff happens, more stuff happens, and so on.  By the end of “Book One”, he has garnered some fame and fortune, but we aren’t in any way sure what is going to happen next.

Quillifer has a smart mouth. This leads to fun and trouble, especially when he crosses paths with rich powerful men.

Quillifer really likes women, and they seem to like him. This leads to fun and trouble, especially when he attracts the attentions of rich and powerful women.

Williams is a really good writer, and this meets our expectations.   Overall, I liked this book a lot and look forward to more of Quillifer in the future.

  1. Walter Jon Williams, Quillifer, New York, Simon & Schuster, 2017.


Sunday Book Reviews
