Ethereum faces classic software engineering problems

The Ethereum developers have been rediscovering (but certainly not reinventing) software engineering. Under the benign dictatorship of Vitalik Buterin, Ethereum has struggled to maintain professional-quality software without a conventional top-down, closed organization.

This month we read about their struggle with planning and scheduling "hard forks": significant software updates that are incompatible with earlier code [1]. In conventional software development, these are managed through a central distribution, and users must take the update or lose compatibility. In cryptoland, "hard forks" are "voted on", and if a significant fraction of the network does not accept the change, the network splits. And there may be competing "forks" that address a problem in different ways. Sigh. This is no way to run a railroad.
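
For readers who don't live in softwareland, the mechanics are mundane: clients typically gate the new consensus rules on a block number, and any node that hasn't upgraded stops agreeing with the rest of the network at that height. The toy sketch below uses a made-up fork block and is emphatically not real Ethereum client code; it just shows why the split happens.

```python
# Toy sketch of hard-fork activation (not real Ethereum client code).
# Consensus rules switch at a block height; nodes that haven't upgraded
# keep applying the old rules and reject blocks built under the new ones.

FORK_BLOCK = 7_280_000  # hypothetical activation height


def accepts_block(block_number: int, block_uses_new_rules: bool,
                  client_upgraded: bool) -> bool:
    """A client accepts a block only if it was produced under the rules
    that this client believes are active at that height."""
    expects_new_rules = client_upgraded and block_number >= FORK_BLOCK
    return block_uses_new_rules == expects_new_rules


# At the fork height, an upgraded node and a non-upgraded node disagree
# about the very same block -- and the chain splits.
print(accepts_block(FORK_BLOCK, block_uses_new_rules=True, client_upgraded=True))   # True
print(accepts_block(FORK_BLOCK, block_uses_new_rules=True, client_upgraded=False))  # False
```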

While this approach is "disruptive", at least in the sense of "less organized and much harder" than conventional software project management, it hardly "reinvents" software development. Amazingly enough, all the hard problems of software maintenance are found in cryptocurrency software, and they still have to be solved.

Cryptoland is already famous for its non-functional planning and decision-making. What changes should be made? More importantly, what changes must be made, versus might be made? What are the effects and implications of a proposed change? And so on.

The “hard forks” problem is basically the difficult question of compatibility.  Some changes are so drastic that you basically have to throw away all the old software and data—it’s effectively a whole new product.  These changes are painful for users, and the more users you have—the more successful you are—the more difficult such upgrades become.  There is too much sunk cost in the old software to blithely toss it out and redo it.

This is not just conservatism or laziness. As the Ethereum folks recognize, there are a lot of people using the software who simply do not have the money, people, or expertise to port their stuff to a new version, let alone to do so over and over again. And if they do try to keep up, they may spend most of their time just chasing the releases, with no time for their own software or business. (Been there, done that.)

In the case of Ethereum, they also face a classic software dilemma. They are working on "Ethereum 2.0", which is a pretty complete rework of the basic Ethereum protocol. In principle, everything will be wonderful in 2.0, but it will be quite a while before 2.0 is ready: there have already been a couple of years of discussion, and it will probably be several more years before it is done.

In the meantime, there are many changes that might be made to the current version of the Ethereum core software. Some of these may be critical fixes, others are good ideas, and others are, well, who knows? But all of these changes will be obsolete when the great day arrives and Ethereum 2.0 ships. (Although some changes might be applicable to both old and new, so they will have to be done twice.)

So just how frequent should these "hard forks" be? Too few, and the software may suffer. Too many, and downstream developers will be overwhelmed. And everything done before V2.0 will essentially be thrown away.

Coindesk reports that the developers are discussing setting a regular schedule of forks, every six months, or even every three months.  Of course, if history is a guide, they probably can’t hit such a target anyway, presumably because the process of testing and preparing (and, I hope, documenting) the code takes longer than hoped.  (Definitely been there, done that.)

The ultimate kicker is that, unlike conventional software, every one of these updates can be a political disaster, potentially causing a schism in the network, with untold consequences for users. Software maintenance is hard enough without having to worry about civil wars among different interest groups.

It is good to see that the Ethereum folks seem to have some understanding of these challenges, and are taking them seriously. It is reported that the God Emperor of Ethereum, VB, wants a conservative policy for changes, doing only those necessary for the survival of Ethereum until 2.0 is out. On the other hand, the schedule for 2.0 is uncertain and seems to slip farther into the future every year, so there is a desire to improve what exists, rather than wait for what might exist someday.

However "innoruptive" you think cryptocurrency/blockchain is, it's still software, and it's still hard. And the more you succeed, the harder it gets.

Welcome to the software biz!


  1. Christine Kim (2019) Ethereum Core Developers Debate Benefits of More Frequent Hard Forks. Coindesk, https://www.coindesk.com/ethereum-core-developers-debate-benefits-of-more-frequent-hard-forks

 

Cryptocurrency Thursday

Hacking Tesla’s Autopilot

The folks that brought you the Internet are rushing to get you into a self-driving, network connected car.

What could possibly go wrong?

Setting aside the “disruption” of this core economic and cultural system, there have been quite a few concerns raised about the safety of these contraptions.  Automobiles are one of the most dangerous technologies we use, not least because we use them a lot.  Pretty much everything that can go wrong does go wrong, eventually.

Well, buckle up, 'cause it's as bad as anyone might have thought.

This spring the Tencent Keen Security Lab reported on successful hacks of Tesla cars [2]. The focus was on the Autopilot self-driving system. In fact, they were able to root the car, monkey with the steering and other controls, and reverse engineer the computer vision lane detection to create a simple hack that could cause the vehicle to suddenly change lanes.

“In our research, we believe that we made three creative contributions:
1. We proved that we can remotely gain the root privilege of APE and control the steering system.
2. We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
3. We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.” ([2], p.1)

Gulp.

Rooting the car is obviously bad for many reasons, and in this case they used their access to discover weaknesses in the other systems.  Taking over the steering is, well, just about as bad as it could get.  Tesla’s response is that this isn’t a “real” problem, because the driver can always override at any time.  But doesn’t that defeat the purpose of the autopilot?

The lane-changing hack is interesting, if rather academic. They found a case where just the right paint on the road could fool the algorithm. As Evan Ackerman puts it, "Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane" [1]. But this is really a rare case, and would probably be overridden if oncoming traffic were present. As Ackerman comments, though, this brittleness is worrying because "the real world is very big, and the long tail of very unlikely situations is always out there."
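
For the curious, the digital cousin of this trick is the textbook "adversarial example": nudge each input pixel a tiny amount in whichever direction increases the model's error. The sketch below is the standard fast-gradient-sign method against a generic classifier, not Tencent's actual pipeline or Tesla's network; `model`, `image`, and `label` are placeholders.

```python
# Standard FGSM adversarial perturbation (a generic illustration, not the
# Tencent Keen Lab attack). `model`, `image`, and `label` are placeholders.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, image, label, eps=0.01):
    """Shift each pixel by +/-eps in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation that small is nearly invisible to a person, yet it can flip the model's answer; the stickers do the same thing with paint in the physical world, which is exactly why that long tail worries Ackerman.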

To be fair to Tesla, a big part of the problem is that the car is full of software that is connected to a public network. The hackers got in through the automatic updating system over the network. Tesla is hardly the only car designed this way, and more companies are moving to remote updates for on-board software. Sigh.

There are many other autopilot and self-driving systems under development (including at Tencent). They will have similar vulnerabilities. If the received wisdom from software engineering holds true for these systems, there will be at least a bug or two lurking in every thousand lines of code; multiply that by the millions of lines in a modern vehicle, and that is tens of thousands of bugs to exploit!
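
The arithmetic is crude but sobering. Both figures below are assumptions (a deliberately optimistic defect density, and the roughly hundred million lines of code often attributed to a modern car); the order of magnitude is the point.

```python
# Back-of-the-envelope bug count; both figures are assumptions.
lines_of_code = 100_000_000   # order of magnitude often quoted for a modern car (assumed)
defects_per_kloc = 1          # deliberately optimistic defect density (assumed)

latent_bugs = (lines_of_code / 1000) * defects_per_kloc
print(f"{latent_bugs:,.0f} latent bugs")  # 100,000
```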

I'll also note that a key part of the attack was that the root access allowed them to examine the system in detail and at leisure. This raises a big question about proposals to release or open-source the software for cars and other systems. Yes, this can lead to the rapid discovery of flaws. But it also means that hackers can have their way with the system. And it certainly doesn't mean that bugs will be fixed quickly out in the field.

My own view is that cars and other life-threatening technology should never, ever be connected to a public network, and should never do software updates over public networks.  That’s less convenient and costlier for the manufacturer, but that’s just tough.


  1. Evan Ackerman, Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane. 2019: IEEE Spectrum — Cars That Think. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
  2. Tencent Keen Security Lab, Experimental Security Research of Tesla Autopilot. Tencent Keen Security Lab 2019-03, 2019. https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf

 

Robot Wednesday

Local Solar Power: Lots of Progress, but Some Pieces are Still Missing

In recent years, photovoltaic (PV) systems have become cheap enough that they now are cheaper than coal and competitive with natural gas.

Solar energy can be harvested at many scales, from giant arrays (potentially in outer space) to buildings and campuses, individual homes, and, of course, gadgets [1]. It's basically the same technology.

One of the most interesting things about solar power is that it really is possible to distribute the generating systems at many scales, including personal, neighborhood and community systems. For me, this means that it is possible to “put the tools in the hands of the workers”, so people can own their own electric power generation.  How can I not want this?

Achieving this vision requires meeting a number of challenges.

First, while PV gets cheaper to install every year and is very cheap to operate, building and installing a solar array still requires a significant financial investment. Second, any scheme to share electricity (solar or otherwise) requires distribution to the users, ideally via existing grid connections. And third, all of this requires political and economic structures to govern the technical systems.

In recent months I’ve reported on some developments that are addressing these challenges [2,3].

This being the US, any problem that can be solved by monkeying around with money is a problem we can solve, yessir. So, for instance, there is now a Clean Energy Credit Union (CECU), which offers all the advantages of an insured credit union and is dedicated to financing PV and other clean energy for consumers and small businesses [2].

Locally, there is also a bulk purchase program, which negotiates a good deal from a good provider, and then promotes installation of PV on homes and businesses [3].  In both these cases, the institutions help pool and direct local people’s money to local projects (and local workers).

These are good things, and help everyone “Think Heliocentrically, Act Locally”.  But this isn’t the end of the story.

The vast majority of people do not own their own home or business. For them, to date, the only way to get PV power is through public utilities, assuming the utility has renewable energy generation and can and will "sell it" to customers (e.g., through a check-off that requests renewable energy). In some places, including my local area, cities are generating renewable energy for government and sometimes for local consumers.

But how can average people, without a lot of money, invest in solar power, and reap the benefits of generating their own power?

We can see that the CECU is implementing the "consumer lending" model that helped get two cars in every garage, as well as a lot of people into houses with garages (for better or worse).

For generating power, we might look to other models from how people finance their housing, such as condominiums, timeshares, and cooperatives. The basic idea is for people who don't necessarily own property to pool money and build PV arrays nearby. The power generated is shared out to the investors, and any profits go to them. It is quite possible that an owner/customer's investment would be entirely paid back in a decade or so, from reduced utility bills.
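
That "decade or so" is just simple payback arithmetic. Every number in the sketch below is a hypothetical placeholder (share price, local sun, and retail rates all vary a lot), not a quote from any actual program.

```python
# Simple-payback sketch for a hypothetical 1 kW share in a community array.
# All numbers are assumed placeholders, not figures from any real project.
share_cost = 1800.0          # dollars for a 1 kW share (assumed)
annual_kwh_per_kw = 1300.0   # rough annual yield per kW in the Midwest (assumed)
retail_rate = 0.13           # dollars per kWh credited on the bill (assumed)

annual_savings = annual_kwh_per_kw * retail_rate   # ~$169 per year
payback_years = share_cost / annual_savings        # ~10.7 years

print(f"simple payback: {payback_years:.1f} years")
```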

There is more than one way to skin this particular cat, but I'm particularly interested in the local cooperative model for community solar projects. There have been electric co-ops for a century or more, usually in underserved rural areas. The same model can work for a small solar farm in town or on rooftops.

What does it take to do this kind of project? I'm still learning the ins and outs of how it might be done. In general, there are a variety of organizational and legal models [5]. Personally, this old bolshie heart beats fastest for a pure cooperative, à la the People Power Solar Cooperative [4]. Much depends on local laws.

Beyond legal charters, the key is, as usual, the right people and leadership: identifying and mobilizing the right people in the right way. Easy peasy!

I have a lot of work to do before I'll see any of this come true.

More later.


  1. Robert McGrath, Tiny Watts – Solar Power For Everyone, in Tiny Watts Blog. 2018. https://www.ases.org/tiny-watts-solar-power-for-everyone/
  2. Robert E. McGrath, A New Option to Finance A Clean Energy Future for Everyone, in The Public I: A Paper of the People. 2018. http://publici.ucimc.org/2018/12/a-new-option-to-finance-a-clean-energy-future-for-everyone/
  3. Robert E. McGrath, Think Heliocentrically, Act Locally, in The Public I: A Paper of the People. 2019. http://publici.ucimc.org/2019/04/think-heliocentrically-act-locally/
  4. People Power Solar Cooperative. People Power Solar Cooperative. 2019, https://www.peoplepowersolar.org/.
  5. Trebor Scholz and Nathan Schneider, eds. Ours to Hack and to Own: The Rise of Platform Cooperativism, A New Vision for the Future of Work and a Fairer Internet. OR Books: New York, 2017. http://www.orbooks.com/catalog/ours-to-hack-and-to-own/

Book Review: “Drearcliff Grange School” by Kim Newman

 The Secrets of Drearcliff Grange School (2015) by Kim Newman
The Haunting of Drearcliff Grange School (2018) by Kim Newman

British boarding schools are horrible and terrifying. British girls' schools just as much as boys'. And, as Amy Thomsett notes, when a place has a name like "Drearcliff", it's likely to be accurate. People generally want to mislead you into thinking it's not so "drear".

Drearcliff Grange is even more terrifying than an ordinary school, because it has a tranche of "Unusuals", girls with unique and supernatural abilities. It's hard enough for any teen to figure out who they are and how they fit in. Add in strange powers, each one unique, and you have a formula for angst to the max.

At Drearcliff, “coming out” is far, far more traumatic than mundane questions of sexual identity and preferences.  What does it mean to be human in this unusual way?  Are “unusuals” even human?

Every teenager has to make choices, and decide to be good or bad. For “unusuals” this process means looking right at the face of the evil that lurks inside each of us.  With great power comes great responsibility, and great risk.  How can a girl cope?

Drearcliff itself is not only "drear", it is more than a little weird. The history is obscure, and some of the faces in the class photos are the same for decades—and never age. Drearcliff seems to be a nexus of supernatural incursions and nefarious plots. It also seems to be a recruiting station for the Diogenes Club, and its graduates go on to prominent positions throughout the country, especially in the police and intelligence services.

Amy Thomsett is an unusual (she can float). She also is a moth enthusiast, so her secret identity becomes the Kentish Glory, and her gang of friends is The Moth Club.  The moths include girls with a variety of strange abilities and remarkable family connections on both sides of the law.

The Moth Club is dedicated to adventuring, and specifically to setting things right and true.  And boy, do they find plenty to tackle!

This being Kim Newman's world, we find that Drearcliff has connections to all kinds of interesting people from other stories, as well as to nineteenth- and twentieth-century popular culture.

Good stuff.


  1. Kim Newman, The Secrets of Drearcliff Grange School, New York, Titan Books, 2015.
  2. Kim Newman, The Haunting of Drearcliff Grange School, London, Titan Books, 2018.

 

Sunday Book Reviews

A Synthetic Genome?

One of the things—maybe the biggest thing—I learned in my decades of professional programming was "management by demo". Softwareland is full of a lot of talk: what could happen, how great it's gonna be, innovation, disruption, etc. That's all fine, but to convince me you really have something, you have to actually build it. Show it to me, and I'll start to believe you know what you are doing. (And maybe I'll be able to tell just how "innoruptive" it may really be.)

This rush to hack up a rigged demo is aggravating, but does actually represent a deep principle:  the test of a theory is whether you can actually use it to produce new things.  In the case of software, if you’ve got a new idea, the test is whether you can actually build it, not just  talk about it.

(And, by the way, delivering a cool demo is also an intense psychological high, the pure white powder direct into the vein. Watch this! You've never seen anything like it, because there never has been anything like it. Whoo.)

Genomes are another area with a lot of talk, and gazoogalbytes of data.  People actually win prizes simply for decoding all the genomes in a cubic meter of soil or sea water. And there are endless stories of “innoruption” and “disovation”, not to mention musings over what it all means.

This spring, researchers from Zürich reported a cool project that demonstrates just how well we understand genomics [1]. Much of what we have learned about genes has come from poking around in existing genomes, and genetic engineering has mostly been about tinkering with found genomes—editing, not composing.

This new study takes the technology to a logical limit: the project rewrote the entire genome of a bacterium, editing it so that the new version was functionally identical to the original. To the degree that this was successful, it demonstrates a complete understanding of all the genes, as well as the ability to generate a functioning genetic code.

In early research, attempting to synthetically reproduce a genome proved tricky.  Tiny errors in the wrong place render the whole sequence inoperative. It is critical to know what does what, which generally reveals that some sequences are “don’t care”—unused or redundant information.

This kind of inquiry leads to the goal of creating the shortest possible equivalent genome, including everything that matters, and omitting everything that doesn’t. This, in turn, can open the way to the addition of new codes, e.g., to engineer a new organism.

The methods described in the paper remind me of computer memory and disk compaction. Duplicates and unneeded blocks are identified and can be overwritten by a single copy of all the live data. The result is logically the same, packed into the minimally required space.
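
To make the analogy concrete, here is a toy compaction routine, which has nothing to do with the actual genome pipeline: keep one copy of each live block, drop the free and duplicate ones, and remember where everything moved.

```python
# Toy memory/disk compaction, offered only as an analogy to the genome rewrite.
def compact(blocks):
    """blocks: list of payloads, with None marking free space.
    Returns (packed, forward) where forward maps old index -> new index."""
    packed, seen, forward = [], {}, {}
    for old_index, payload in enumerate(blocks):
        if payload is None:                 # free block: drop it
            continue
        if payload not in seen:             # first copy of this data: keep it
            seen[payload] = len(packed)
            packed.append(payload)
        forward[old_index] = seen[payload]  # duplicates point at the surviving copy
    return packed, forward


packed, forward = compact(["A", None, "B", "A", None, "C", "B"])
# packed == ["A", "B", "C"]: logically the same data, in the minimum space.
```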

"we used sequence rewriting to reduce the number of genetic features present within protein-coding sequences from 6,290 to 799." ([1], p. 8)

This doesn’t seem to be “synthesizing” a genome, but it is a pretty thorough rewrite, and only one step away from generating a completely new sequence.

Much of this paper is beyond my puny understanding of genetics. So I'm taking much of it without the ability to independently assess the arguments or procedures.

I don't really understand how to assess whether the synthetic genome is "equivalent" to the natural one, so I read those sections with interest. If I understand the paper correctly, this was assessed by inserting segments of the synthetic chromosome into an organism alongside its natural genome. If the synthetics are equivalent, they act as redundant backup copies of each other.

When more than one copy of a gene is present, some of them will be disrupted and disabled and the organism will continue, using the redundant copy.  In the absence of extra copies, the gene will either be repaired or  the organism will die.  Either way, non-redundant genes will be retained in correct form, while redundant ones will accumulate damaged copies.  (Note that this is a test for the indispensability of a gene, whether we fully know what it actually does or not.)

The paper argues that this method should be able to reveal many details of what groups of genes do. The paper reports that quite a few codons were shown to have functions not previously noted in the databases. I.e., this technique revealed cases where the best knowledge from earlier studies was incomplete or incorrect.

This goes to show that, as in a lot of things, there is a big difference between inferring how something works and being able to build a functional instance. That's why we do demos and prototypes, no?

The most controversial finding was that "This result suggests that, in most essential genes, the primary mRNA sequence, the secondary structure, or the codons [...]" ([1], p. 9)

This finding is, as they say, “surprising”.  Very surprising, even to an amateur like me.

There is plenty of evidence that a lot of the "non-essential" stuff has significant effects on the function of the genes. While some of the "non-essential" stuff may indeed be non-functional leftovers or accidents, it is also possible that these elements have functions not apparent from the tests used in this study.

One point to consider is the possibility of rare combinations of environmental conditions that activate some of the "unused" code to modify the "usual" behavior. There may also be undiscovered meta-information encoded in the "unneeded" sequences. And so on.

If so, then it will be interesting to test these synthesized organisms in different ways, over longer periods of time, in the presence of other organisms, and so on.

This is really neat stuff, and only the beginning of some very interesting exploration.


  1. Jonathan E. Venetz, Luca Del Medico, Alexander Wölfle, Philipp Schächle, Yves Bucher, Donat Appert, Flavia Tschan, Carlos E. Flores-Tinoco, Mariëlle van Kooten, Rym Guennoun, Samuel Deutsch, Matthias Christen, and Beat Christen, Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality. Proceedings of the National Academy of Sciences:201818259, 2019. http://www.pnas.org/content/early/2019/03/29/1818259116.abstract

Evidence of Fallout from the Chicxulub Impact?

There is quite a bit of news about the first reports of a fossil bed that appears to record a massive die-off due to falling debris—right at about the time of the Chicxulub impact. This could be a snapshot of the actual event, showing the devastation hundreds of kilometers from the crater.

The first official paper describing the site has just been published [1]. There is also a popular article in The New Yorker, published simultaneously ("The Day the Dinosaurs Died" [2]). The latter gives us the Hollywood version, which is romantic, if not totally convincing.

But let’s look at the scientific paper.

The finds are in the Hell Creek area, which is one of the richest fossil beds, particularly for Cretaceous dinosaurs. At the time of the deposits, it was the northern end of a sea that extended from the Gulf of Mexico. The deposit has been tagged 'Tanis' (more Hollywood), and is reported to lie between known Cretaceous and Paleogene strata—right where traces of Chicxulub are found worldwide.

“the Cretaceous and Paleogene strata are separated by the Event Deposit” ([1], p. 3)

However, this deposit isn't a centimeter-thin layer of distinctive fallout like that found all over the world. It is a thick and complicated jumble, 1.3 m deep. The deposits seem to record multiple violent surges and retreats, which carried live and recently dead animals and plants. Much of the material is charred, indicating it was burned or burning at the time of deposit.

There are also many ejecta characteristic of a meteor strike. These range in size up to a centimeter or so. The molten blobs resemble the Chicxulub ejecta found elsewhere, and the size distribution varies over the depth as would be expected from heavier particles falling first. There are also spherules captured in amber, which were preserved in their original form, and some found in the gills of fish. The latter suggests that the fish were alive during the fallout.

The jumbled remains include many plants and animals in incredibly complete preservation.  This is interpreted as indicating that they were killed in a massive flood, or series of floods, rather than deposited after death.  In any case, the materials described are incredibly, absurdly, insanely rich!

The researchers speculate that this represents flooding caused by the Chicxulub impact, which likely caused magnitude 11 earthquakes, as well as the rain of hot molten rock that set fire to forests. They hypothesize a timeline that puts these events within the first hours after the impact—making it a picture of the actual event as it happened. Which is, like, wow! The greatest find since Schliemann at Troy!

They also suggest that the well-preserved carcasses on the top surface show no signs of scavenging, which suggests that there weren't any animals left alive.

"The absence of scavenging despite the shallow burial of plentiful, large carcasses and the lack of root traces along the upper surface of the Event Deposit may suggest a depleted local biodiversity after deposition" ([1], p. 8)

This paper mainly describes the deposit and the overall picture. It has very little information about the plants and animals found, or about the evidence of the ecology at the time of the event.

One of the questions not answered yet is the presence of dinosaurs, and generally what was living at the time of the disaster.  The popular article indicates that one dinosaur bone was found, at the top of the deposit [2].  This might be evidence that the dinosaurs were, indeed, still around at the moment of the impact, which would settle a long-standing controversy as to whether the dinosaurs were actually wiped out by the strike, or were already dying out from other causes.

There are said to be more papers in the pipeline, which will presumably fill in these details.


This is clearly a spectacular site. It seems too good to be true, which is why people are being cautious. The swashbuckling backstory also raises worries about overly optimistic interpretation. It is a fascinating find, though it might not be exactly what DePalma et al. think it is at this time.

It is good to see that considerable expertise has already been brought in (the coauthors are a solid group), and we can expect a lot more careful attention now that they are finally publishing.

As they note, there is so much material here, it will take decades to sift through it.  I think we’re all eager for more.


  1. Robert A. DePalma, Jan Smit, David A. Burnham, Klaudia Kuiper, Phillip L. Manning, Anton Oleinik, Peter Larson, Florentin J. Maurrasse, Johan Vellekoop, Mark A. Richards, Loren Gurche, and Walter Alvarez, A seismically induced onshore surge deposit at the KPg boundary, North Dakota. Proceedings of the National Academy of Sciences:201817407, 2019. http://www.pnas.org/content/early/2019/03/27/1817407116.abstract
  2. Douglas Preston, The Day the Dinosaurs Died, in The New Yorker. 2019. p. 52-65. https://www.newyorker.com/magazine/2019/04/08/the-day-the-dinosaurs-died

Another great name for a band:

Chicxulub ejecta

 

