Tag Archives: Alyssa Hertig

Trustless Software Engineering for Bitcoin?

One of the foundational principles of Nakamotoan cryptocurrencies is the faith in Open Source Software.  Specifically, open source communities, especially cryptography and security communities, build trust in software by publishing and sharing the source code.  The idea is that many eyeballs assure both quality and authenticity.  In addition, you always have the option to build it yourself, so you can know exactly what you are getting, in principle.  (There is also a psychological benefit in this transparency: we’re all in it together, and everybody knows everything.)

Bitcoin and many cryptocurrencies use this approach, too. The idea is that you know that there aren’t any secret gotchas hidden in the code, and also, you hope that lots of people have checked for bugs.

In other words, open source is a crucial mechanism for creating “trust” in the software and protocols, without having to trust a single centralized source.  “Trustless trust” is the essence of Nakamotoism.

Problem solved, right?


Software depends on a massive web of dependencies (it’s much more complex than the conventional term, “stack”, suggests), and you are essentially trusting everything that your software depends on.  Even if you build the software yourself, you are trusting software libraries, the operating system, and the build tools (and see [2] about “trusting trust”).  Frankly, it’s hard to even know what you are trusting, because it’s hard to know what these dependencies are.

In the case of Bitcoin, you might decide to download the source code, check it, and build your own copy.  (Note that you might well need other software, such as a wallet or an API, which is yet more code to trust.)  Doing such a build is, well, quite an experience.  I remember building a very early alpha version of GNU C++ from source.  It took a week.  The last step was to rebuild the compiler using the version of the compiler you had just built with another compiler….

Nevertheless, this build process gives you a certain trust that you know what the code really does.  At least theoretically.

However, even this build requires you to trust a bunch of other entities: the source of the code, the tools used to download and build, and even the tools used to read the code.  You also have to trust the operating system (which is many millions of lines of code), the network, and so on.

Yoiks!  What’s a programmer to do?

Normal people tend to break out of this mental spiral by deciding to trust someone.  For instance, most of us trust our operating system and language tools.  And our own computer.  And our own network (though that’s a closer call).  And so on.

But all this “trust” just doesn’t cut it for true Nakamotoans.  The whole notion of Bitcoin is to have a “trustless” system.  So what can be done?

Alyssa Hertig reports on work by Carl Dong, who is implementing support to enable “reproducible builds” for Bitcoin core code [1].  The basic idea is to document (in executable form) all the dependencies and steps in the build, so you can be assured that what you have is what you think you have.  Or at least, to assure that you can build it exactly the same way over and over.
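One concrete way to check this property (a generic sketch, not necessarily Dong’s actual tooling) is to build the same source twice, independently, and compare hashes of the resulting artifacts: a reproducible build must be byte-identical.  A minimal Python sketch, with hypothetical file paths:

```python
import hashlib

def artifact_digest(path: str) -> str:
    """SHA-256 of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproducible(path_a: str, path_b: str) -> bool:
    """Two independent builds count as 'reproducible' only if the
    outputs are byte-identical (hence hash-identical)."""
    return artifact_digest(path_a) == artifact_digest(path_b)
```

Note that this only tells you the build is deterministic; it says nothing about whether the source itself is trustworthy.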

This is clever and useful stuff.  It is called configuration management, and it is a standard part of software engineering.  Serious software engineering projects have been doing this for decades, especially for security sensitive and mission critical code.  So, yes, this is a good idea.

Does this make the build process more “trustless”?  No, of course not.  We are still “trusting” all kinds of stuff, mostly in ways we don’t know about (and can’t do anything about anyway).

“[U]nfortunately it is a somewhat infeasible task to remove trusted third parties from the build process completely,”  (Carl Dong, quoted in [1])

The thing is, the software build depends not only on the software you use directly, but on the whole shebang:  the operating system, the network, and, I’m sorry to have to remind you, the hardware in all the computers and networks involved. The configuration management described is a very good thing, but it’s really only the tip of the iceberg of the dependencies.

Configuration management makes the Bitcoin build sort of trustless, but not completely.  It’s better, but is it good enough?  I dunno.  Obviously, Dong thinks it is worth the trouble.

But this raises a different question: is there any such thing as “trustless” computing?

Personally, I think the answer is no.

It’s obviously infeasible to even understand the dependencies of any non-trivial computing system.  Worse, the entire concept of a “trustless” system is basically about “trusting” logical proofs.  E.g., using cryptographic signatures to validate software is actually based on a proof that it is very, very, very unlikely that this is a forgery.   If you trust the proof, you trust the signature.  But should you trust the proof?  That’s hard to know for sure.

A true Nakamotoan believes that a mathematical proof is more trustworthy than any human being (other than yourself).  And I hate to be the one who tells you this, but that means you are placing faith in unprovable and unknowable abstractions.

My own view is that this whole deal about “trust” should be taken as a sort of Bayesian problem, in which we have to take estimates of trustworthiness, evaluate evidence, and make decisions about “how much trust is enough”.  It is never going to be possible to have 100% trust, and, more important, “trust” is a subjective value in the Bayesian sense.

In this view, technical measures such as Nakamotoan consensus and the configuration management discussed by Hertig are confidence building exercises that push our posterior trust higher.

If my priors are different than yours (either more or less trusting), a given engineering practice will have greater or lesser impact on my posterior than yours.
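This effect of priors is easy to make concrete with Bayes’ rule in odds form.  In the sketch below, the likelihood ratio of 20 is an arbitrary illustrative number, standing in for how strongly some piece of evidence (an audit, a reproducible build) favors “trustworthy” over “not”:

```python
def posterior_trust(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# The same evidence (LR = 20, purely illustrative) moves different
# priors to different posteriors:
skeptic = posterior_trust(0.10, 20.0)   # about 0.69
believer = posterior_trust(0.60, 20.0)  # about 0.97
```

Same engineering practice, same evidence, different resulting “trust”, which is exactly the point.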

From this perspective, we can see that much of the talk about “trustless” systems is rather misguided, or at least poorly defined.

“Trust” is a subjective value held by a human, and Nakamotoan “trustless” systems are actually systems designed to have specific kinds of trust (math proofs) and not others (third parties on the network). Claims about “trustless” are arguments that must be taken in the context of your own priors.

In this light, technical innovations have a significant psychological component in them, and that means that they have different value for different people.

I would say that Nakamotoans have extremely high prior distrust of governments and corporations, high prior trust of cryptography, and high prior trust in “markets”, among other things.  Nakamotoan cryptocurrencies deploy a suite of technical measures that aim to push these priors to a high posterior “trust” in the overall system, without trusting governments or corporations.  (If this paragraph is logically muddled, it’s probably because Nakamotoans use the term “trust” in more than one way.)

Non-Nakamotoans may start with different priors, and so the same technical measures may result in different posterior “trust” in the system.  For example, a low prior trust of “markets” may mean that Nakamotoan consensus protocols do not produce high posterior trust in the overall system.

In short, There Ain’t Any Such Thing As A Trustless System.  But… good engineering is always better than poor engineering.

  1. Alyssa Hertig (2019) ‘Building’ Bitcoin’s Software Just Got a Bit More Trustless. Coindesk, https://www.coindesk.com/building-bitcoins-software-just-got-a-bit-more-trustless
  2. Ken Thompson, Reflections on trusting trust. Communications of the ACM, 27 (8):761-763, August 1984. https://dl.acm.org/citation.cfm?doid=358198.358210


Cryptocurrency Thursday

Lition – Local P2P Power Market

One of the perennial use cases for blockchain is P2P electricity markets—direct purchase of power from the producer.  This is often intended to support local community generation, usually from rooftop or other small PV arrays.  Blockchain transactions fit nicely into a market that manages automated meters and routing.

Community solar generation and purchase is a very tempting idea for many reasons.  It is a way to build up local clean energy resources and jobs, and to offer consumers a cost-effective option to purchase green energy.  It also helps people who can’t generate their own power (e.g., because they rent an apartment) invest in local sources.  And some people may be able and eager to generate far more power than they consume, which they can sell to neighbors.

This scheme can work at scales from rooftops up to fairly large fields of generators.  In fact, there aren’t really any technical barriers.  The key problems to solve are financial and legal.

Delivering power from one house to another requires infrastructure, and building new infrastructure would be expensive and insane.  There already is infrastructure, but it is highly regulated and not open to just anyone.  The default business model is to sell and buy power via the utility, which charges a lot for access.

The use case for blockchain here is to bypass the utility financially, allowing anyone to purchase electricity from anyone (i.e., “peer-to-peer”).  As is always the case, it is perfectly possible to build a P2P system with conventional technology.  But this kind of simple asset purchase is just the kind of thing that blockchains can do pretty well, at least conceptually.

So this is a compelling case for blockchain and surely a real world need.  Why is it taking so long for blockchain (or any) P2P power markets to come true?

Alyssa Hertig reports on the experience of an emerging system in Germany, called Lition [1].  Lition is built on Ethereum and uses “Smart Contracts” to implement an exchange for direct consumer purchases of power.  They have 700 users across Germany.

“Once a user finds the energy they want to buy, they make a payment in euros to Lition. Behind the scenes, an ethereum smart contract detects this payment and automatically sends the customer their energy.”

Does this concept need a blockchain?  Not really.  We have similar markets where I live.  But it probably is pretty cheap to implement this with Ethereum, and the cryptographic signatures and protocols make the system pretty secure (assuming that the customer- and producer-facing code is secure, which it probably isn’t).
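Stripped of the blockchain, the heart of such an exchange is plain order matching.  A toy sketch of that logic (greedy price-priority fill; the names and numbers are made up, and this is not Lition’s actual mechanism):

```python
from dataclasses import dataclass

@dataclass
class Offer:
    producer: str
    kwh: float
    price_eur_per_kwh: float

def match_purchase(offers: list[Offer], kwh_wanted: float) -> list[tuple[str, float, float]]:
    """Greedily fill a purchase from the cheapest offers first.

    Returns (producer, kwh taken, cost in EUR) for each fill.
    """
    fills = []
    remaining = kwh_wanted
    for o in sorted(offers, key=lambda o: o.price_eur_per_kwh):
        if remaining <= 0:
            break
        take = min(o.kwh, remaining)
        fills.append((o.producer, take, take * o.price_eur_per_kwh))
        remaining -= take
    return fills
```

Any conventional server could run this; the blockchain only changes who operates and audits it.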

Lition makes some interesting claims.  It makes a carefully qualified claim to be the first “P2P energy trading solution that is fully licensed and commercially live in a mass market (Germany)”.  This has to be qualified because there are many other similar projects in other places, in various stages of development.  Lition does seem to be in the biggest market I’ve seen, although 700 users in Germany is scarcely a success story.

They make other intriguing claims, including, “Private data is stored on private sidechains. Quantum-computer safe.”  I’m not totally sure what that means.  I assume that their sidechain uses what they hope is quantum-safe cryptography.  (The main Ethereum blockchain, of course, is definitely not quantum safe.)

While a P2P power exchange is very Nakamotoan in spirit, Lition has a number of non-Nakamotoan features.  The aforementioned “sidechains” are a bag on the side of the main blockchain, effectively a pretty conventional distributed data store with a blockchain layer.  For that matter, the exchange is operated by a “centralized” organization.  Note that they also take and make payment in euros, one of the fiat-iest of fiat currencies.

The reason for the centralized organization illustrates the heart of the problem.  Lition has got as far as it has by working within the legal structures of the German power grid.  When they say they are “licensed”, that means that they are an officially recognized legal entity, entitled to buy and sell power across the grid.  This policy structure is the key to Lition’s very existence, and it has nothing to do with blockchain and everything to do with politics in Germany.

Hertig reports that the Ethereum blockchain is actually unsatisfactory for this use.  I suspect that it was easy to boot up a working system, but they have found that it is slow.  They also do not need a public blockchain: 99+% of it is not their business, i.e., the blocks carry everyone else in the world’s transactions, so it is mostly spam from the point of view of the electricity market.  And being oriented to clean energy, the Lition people are reported to be uncomfortable with the ghastly wastefulness of Nakamotoan “mining”, and rightly so.

Consequently, the company is allied with SAP (the epitome of a “centralized” organization, if there ever was one!) to create a “private” blockchain.  In this, they join many serious businesses seeking the benefits of low overhead transactions without the waste and latency of a public blockchain.

It is highly probable that the resulting system will not use Ethereum or any generic blockchain.  For one thing, a public blockchain is way, way overkill for the needs of the system.  The Ethereum version would let me purchase power from German producers or sell to them, even though there is no way for me to actually transfer the electricity to and from the German power grid (and it would probably not be legal to do so).  So why pay the overhead of a global system, when it can only be used locally?

It seems very likely that this won’t be implemented with Ethereum, though the ultimate system might have many features similar to Ethereum.  For example, they might implement a private blockchain with executable contracts similar to (but more efficient than) Ethereum.  (But then again, conventional databases have had executable scripts forever.)
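That parenthetical is easy to demonstrate: a database trigger is, in effect, an “executable contract” that has been around forever.  A toy version of the pay-then-deliver flow in stock SQLite (the 0.25 EUR/kWh price is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payments (buyer TEXT, eur REAL);
CREATE TABLE deliveries (buyer TEXT, kwh REAL);
-- The trigger plays the role of the 'smart contract': when a
-- payment lands, credit the corresponding energy automatically.
CREATE TRIGGER deliver AFTER INSERT ON payments
BEGIN
    INSERT INTO deliveries VALUES (NEW.buyer, NEW.eur / 0.25);
END;
""")
con.execute("INSERT INTO payments VALUES ('alice', 30.0)")
kwh = con.execute("SELECT kwh FROM deliveries WHERE buyer = 'alice'").fetchone()[0]
```

The blockchain version adds decentralized execution and audit, not the basic capability.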

Will Lition succeed?  They might, though it’s not clear that blockchain will ultimately be critical to success.  Success will depend on the availability of producers and the acceptance of consumers.  Those will depend on many factors, such as the cost of electricity from other sources, public policies, and the design of the user experience.  (Most people are not interested in spending more than a minute or two on their electricity bills, so using Lition has to be really, really simple.)

Not A CryptoTulip!

I’ll note that Lition is not really a strong candidate for the CryptoTulip of the Year.  This is a real use case, and they are serious about solving it.  Above all, they are interested in solving the problem, and willing to abandon blockchain technology where it isn’t helping the solution.

They aren’t irrationally exuberant, they are rationally critical.  So Lition gets praise, but cannot win the CryptoTulip Award.

  1. Alyssa Hertig, Ethereum Energy Project Now Powers 700 Households in 10 Cities, in Coindesk. 2018. https://www.coindesk.com/ethereum-energy-project-now-powers-700-households-in-10-cities/


Blockchain Thursday

CryptoTulip 2018: Bitcoin vs Ethereum

The heavyweights are battling it out for the CryptoTulip 2018 award!

Defending CryptoTulip Award winner Ethereum has thrashed all year on basic governance and scaling issues, with no resolution in sight.  Ethereum is also the platform of choice for other notable CryptoTulip contenders, including EOS, FOAM, and the plethora of ICOs.  (Ethereum:  the Tulip that other Tulips grow on!)

But the grand patriarch of the Nakamotoan family, Bitcoin, is not to be denied.  With a stunningly non-Nakamotoan bug fix (at least we hope the bug is really fixed!), and the neverending scaling debates, Bitcoin has done its own thrashing.  (And, by the way, the bug in Bitcoin has been copied into any number of other “coins”, so it affects a whole extended family of cryptocurrencies.)  (Bitcoin: the Tulip from which all other Tulips are descended!)

This month we see further action from both these contenders for the non-coveted CryptoTulip of the Year Award.

This month, a cunning plan was floated to help Bitcoin scale.  It is now clear that this great idea is not only not strictly Nakamotoan, it also relies on a wrinkle in the consensus protocol that many consider to be a bug [2].  Yessiree, let’s turn a bug into a feature!

Astonishingly enough, many people want to fix the bug (which would kill the scaling concept), and many people want to keep the ‘bug’ (and use it to improve the network).  It also seems that the Bitcoin code has grown complex enough that it isn’t even easy to plug this hole in the protocol if you wanted to, at least not without the side effect of splitting the network.  (I don’t understand the details of these fixes; I’m relying on second-hand info.)

A tough choice—valid data, or scaling up the network.  Bitcoin is certainly contending for the CryptoTulip Award this year.

Of course, Ethereum continues to thrash along on its own path.  The long awaited, much debated “Constantinople” upgrade—way overdue, still contested, and somewhat trimmed down—is entering live testing.

Or, it would be, if only people would do the tests.  At this stage, the proposed new version was booted on a test network, but “stalled”, not processing more data.  It is reported that there aren’t enough miners who are running the code.

However, the testing did succeed in revealing a serious bug “which caused two different iterations of the same software upgrade to run on testnet.”  I don’t really understand this problem, but it doesn’t sound ready for release, to me.  But what do I know?

This test run doesn’t bode well for the November release date.  Enthusiasm seems to be low, quality seems iffy.  So just when, if ever, will this upgrade really happen?

Update 20 October 2018:  The release has been pushed back to January 2019 or later.

With this disaster, Ethereum certainly might repeat as CryptoTulip of the Year this year.  We’ll see what happens.

Both these contenders are showing that the software is buggy and the maintenance process unworkable.  Plus, significant fractions of the “community” don’t even want the upgrades to happen at all.

This is a technical and governance horror show, and it is not the universe Nakamoto envisioned!

But we all still believe, really believe, in our CryptoTulips!

That’s the essence of CryptoTulip Mania, isn’t it?

  1. Alyssa Hertig (2018) ‘Bitcoin Bug’ Exploited on Crypto Fork as Attacker Prints 235 Million Pigeoncoins. Coindesk, https://www.coindesk.com/bitcoin-bug-exploited-on-crypto-fork-as-attacker-prints-235-million-pigeoncoins/
  2. Alyssa Hertig (2018) Not Everyone Wants to Fix Bitcoin’s ‘Time Warp Attack’ – Here’s Why. Coindesk, https://www.coindesk.com/not-everyone-wants-to-fix-bitcoins-time-warp-attack-heres-why/
  3. Christine Kim (2018) Ethereum’s Next Blockchain Upgrade Faces Delay After Testing Failure. Coindesk, https://www.coindesk.com/ethereums-next-blockchain-upgrade-faces-delay-after-testing-failure/


Cryptocurrency Thursday

Big Bitcoin Bug

As the CryptoTulip of the Year contest enters the final stretch, we now hear from the arch-patriarch of all Tulips:  Bitcoin.

This month we learned that there was a huge, massive bug in the oldest, stablest, “most secure” cryptocurrency of them all, Bitcoin.  (There are also an unknown number of copycats who use code from the Bitcoin source base, so the bugs may affect other systems, too.)

Actually, there were two bugs, one a possible denial of service attack, and another that could allow double spending.  Nothing major, just a potential for a crippling shutdown and/or counterfeit coinage!  The bugs were accidentally introduced two years ago!


The bugs themselves aren’t especially notable. All software has bugs.  Bitcoin is software.  Ergo, Bitcoin has bugs.

The interesting and Tulip-y thing is how it was handled.

Notably, the “open source”, “transparent” development team took it upon themselves to keep quiet about the most serious part of the problem until there was a patch [1].  This is, of course, perfectly standard and reasonable behavior for proprietary code.  The developers took responsibility for the welfare of the code and its users, and tried to get the patch out before the details of the flaw were explained to potential attackers.

This is a sensible process, but it is not exactly a Nakamotoan process.  Bear in mind that many enthusiasts advocate the principle that “the code is the law”.  By that principle, for a while it was perfectly proper, even “intended”, that people might be able to ravage Bitcoin through these loopholes in “the law”.  And the unelected developers in fact took it upon themselves, without consultation or notice, to change “the law” to preclude these highly profitable moves.

Naturally, this being cryptoland, the unannounced bug was, in fact, soon unofficially leaked by non-cooperative folks.  (Thanks for helping, guys.)  And even according to the official announcement, only half the affected systems had been patched so far, probably.  The bug notice itself essentially begs people to install the fix; no one can do more than that.

Apparently, many Bitcoinistas believed their own propaganda about how ‘secure’ this stuff is, and about how invincible ‘open source’ code is.  So some people were “shocked” by this bug [2].  In response, there have been naïve calls for more and better testing, as if any software ever has enough testing, or good enough testing.  (And, by the way, decentralized, asynchronous network protocols are really, really hard to test.)

There have also been calls for multiple implementations, which is a good idea until it isn’t a good idea.

As Alyssa Hertig reports, “developers don’t necessarily agree on exactly what needs to be done.”

At this point, we might ask, “Is this bug really patched?”  Who knows?

“At this time we believe over half of the Bitcoin hashrate has upgraded to patched nodes. We are unaware of any attempts to exploit this vulnerability.” (from [1])

Not exactly a ringing assurance.

This episode shows just how vulnerable this technology really is.  There can and surely will be huge bugs, but they can be patched only through the indirect and voluntary cooperation of many anonymous operators.  And, as we have seen with Ethereum and the DAO, a bug can be exploited in seconds, but may take years to fix.

The CryptoTulip award will surely have to consider this episode.

Bitcoin was lucky this time (as far as we know).  With billions on the line, it’s only a matter of time before this CryptoTulip explodes.

  1. Bitcoin Core, CVE-2018-17144 Full Disclosure. Bitcoin Core Notice, 2018. https://bitcoincore.org/en/2018/09/20/notice/
  2. Alyssa Hertig (2018) In Wake of ‘Major Failure,’ Bitcoin Code Review Comes Under Scrutiny. Coindesk, https://www.coindesk.com/in-wake-of-major-failure-bitcoin-code-review-comes-under-scrutiny/
  3. Alyssa Hertig (2018) The Latest Bitcoin Bug Was So Bad, Developers Kept Its Full Details a Secret. Coindesk, https://www.coindesk.com/the-latest-bitcoin-bug-was-so-bad-developers-kept-its-full-details-a-secret/


Cryptocurrency Thursday

FOAM: Decentralized Localization Using Ethereum

FOAM is a technology that seeks to use blockchain and Ethereum contracts to create mapping and location based services.  The project wants to address a complex of perceived problems: GPS is spoofable, maps are owned by big actors, and location services aren’t private.  In addition, they think that “people lie about their location” (Ryan John King, the co-founder and CEO of FOAM, quoted in Coindesk [3]).  The solution deploys blockchain technology and Nakamotoan philosophy [2].

Looking at their materials, it is clear that FOAM is mainly focused on replicating Internet location-based services, not on navigation or engineering or geoscience.  The geospatial model is a two-dimensional map of the surface of the Earth.

The location service depends on many local low-power radio beacons instead of satellites.  They imagine an ad hoc mesh of locally operated beacons, which are recognized and validated via Nakamotoan-style consensus rather than a central authority (such as a government space agency).  These beacons are used to triangulate positions.  Good behavior and trustworthiness of the beacons is supposedly assured by cryptocurrency tokens, in the form of incentives, notably buy-ins and security deposits.
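For what it’s worth, the triangulation step itself is straightforward geometry: given three beacons at known positions and measured distances, subtracting the circle equations pairwise leaves a 2x2 linear system.  A minimal sketch (plain 2-D, no noise handling, which a real system would certainly need):

```python
def trilaterate(beacons, dists):
    """2-D position from three beacons (x, y) and measured distances.

    Subtracting the circle equations pairwise gives a 2x2 linear
    system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("beacons are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The hard part, of course, is not the algebra but trusting the beacons and the distance measurements.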

They imagine this to be used to construct datasets of “Points of Interest”, which are “where are the stores, cafes, restaurants and malls, where a fleet of vehicles in a ride sharing program like Uber should be anticipating if demand is shifting or surging, or which traffic bottlenecks drivers should avoid on an app such as Waze.”  These are stored and validated through a decentralized protocol. “[G]ranting control over the registries of POI to locally-based markets and community forces, allowing the information provided to be validated by those who contribute to the relevant locality.”

These datasets are to be created through bottom up efforts, presumably incentivized by desire to operate local services. “FOAM hopes that the Cartographers and users will contribute the necessary individual work, resources, and effort themselves to contribute to the ongoing community-driven growth and supplement this important cartography project.”

Interestingly, the crypto token-based incentive system relies on negative incentives, namely buy ins and “security deposits” that can be forfeited by consensus. I’m not sure I’ve seen another Nakamotoan project with this sort of punishment based (dis-)incentive.  (I’ll note that psychologists generally find that the threat of punishment does not engender trust.)
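As a sketch of the mechanism being described (invented names and numbers; FOAM’s actual token logic lives in Ethereum contracts): operators lock a deposit to register a beacon, and a successful community challenge forfeits it.  Note that the only lever here is taking the stake away:

```python
class BeaconRegistry:
    """Toy model of a deposit-backed registry: operators stake a
    deposit to list a beacon; a successful challenge slashes it."""

    def __init__(self, min_deposit: float):
        self.min_deposit = min_deposit
        self.deposits: dict[str, float] = {}

    def register(self, beacon_id: str, deposit: float) -> bool:
        """Listing requires at least the minimum buy-in."""
        if deposit < self.min_deposit:
            return False
        self.deposits[beacon_id] = deposit
        return True

    def slash(self, beacon_id: str) -> float:
        """Forfeit the deposit of a beacon voted dishonest;
        returns the amount forfeited (0.0 if not registered)."""
        return self.deposits.pop(beacon_id, 0.0)
```

Whether the threat of losing a deposit actually produces honest beacons is exactly the unproven “hope” discussed above.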

Obviously, this entire concept will depend on the development of the localization network and the datasets of “Points of Interest”.  As far as I can see, realizing this is based on “hope” that people will contribute.  I’d call this “faith-based engineering.”

We can pause to reflect on the irony of this “trustless” system that appears to be entirely based on “hope” and the threat of punishment.

As for the actual technology, it is, of course, far short of a “map of the world”.  The local beacons are fine for a dense urban setting, but there is little hope of coverage in open space, and no chance that it will be useful at sea, up in the air, inside significant structures, or underground.  Sure, there are ways to deploy beacons indoors and other places, but it isn’t easy, and doesn’t fit the general use cases (Points of Interest).

Ad hoc networks aren’t immune to jamming or interference, either, and are essentially defenseless against determined opposition.  In classic fashion, the protocol “routes around” interference, discarding misbehaving nodes and corrupted data. Unfortunately, this means that the response to a determined and sustained attack is to shut down.

The incentive system is unusual, though the notion of a “security deposit” is widely used.  How well will it work?  (How well do security deposits work?)  It’s hard to say, and there doesn’t seem to be much analysis of potential attacks.  The notion that the loss of security deposits and other incentives will guarantee honest and reliable operation remains a theoretical “hope”, with no evidence backing it.

The system depends on a “proof of location”, but it isn’t clear just how this will work in a small, patchy network. In particular, assumptions about the security of the protocol may not be true for small, local groups of nodes—precisely the critical use case for FOAM.

Finally, I’ll note that the system is built on Ethereum, which has had numerous problems. To the degree that FOAM uses Ethereum contracts, we can look forward to oopsies, as well as side effects from whatever emergency forks become necessary.

Even if there are no serious bugs, Ethereum is hardly designed for real time responses, or for datasets at the scale of “the whole world”.  Just what requirements will FOAM put on the blockchain, consensus, and Ethereum virtual machine?  I don’t know, and I haven’t seen any analysis of the question.

This is far from an academic question.  Many location services are extremely sensitive to time, especially to lag.  Reporting “current position” must be really, really instantaneous.  Lags of minutes or even seconds can render position information useless.

Can a blockchain based system actually deliver such performance?

Overall, FOAM really is “a dream”, as Alyssa Hertig says.  A dream that probably will never be realized.

  1. Foamspace Corp, FOAM Whitepaper. Foamspace Corp, 2018. https://foam.space/publicAssets/FOAM_Whitepaper_May2018.pdf
  2. Foamspace Corp, The Consensus Driven Map of the World, in FOAM Space. 2017. https://blog.foam.space/
  3. Alyssa Hertig (2018) FOAM and the Dream to Map the World on Ethereum. Coindesk, https://www.coindesk.com/foam-dream-map-world-ethereum/


Cryptocurrency Thursday

Nakamoto’s Fifty One Percent Problem

“As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they’ll generate the longest chain and outpace attackers.” (Nakamoto, 2009 [3])

At the very core of Nakamotoan cryptocurrencies is a consensus protocol that relies on the principle that a decentralized voting system spread across a very large network cannot be manipulated, because the majority of nodes are “honest” and not cooperating to game the voting.

The system works as long as “honest nodes control a majority of CPU power.”

On the other hand, if more than 50% of the CPU power on the network coordinate, they can manipulate Nakamotoan protocols, and produce “dishonest” results.  This is the dreaded “51% attack”. Depending on the situation and the details of the specific protocol, this attack can result in canceled payments, improper payments, double spending, or other forms of theft via manipulation of the ledger–pretty much exactly what Nakamotoan blockchains are supposed to preclude.

It is important to note that the all too many troubles reported about cryptocurrencies (hacking, fraud, theft, and criminal activities) have nothing to do with a 51% attack.  Indeed, in most of these cases the protocol is working just fine, securely implementing nefarious activities that are enabled by dishonesty and breaches in other parts of the system.

Indeed, actual 51% attacks have been so rare that they seemed purely theoretical.  However, as Alyssa Hertig points out, with the growth in the number of cryptocurrencies, these attacks have become more common [2].

So what is going on here?

The Nakamotoan project is based in large part on what amount to probabilistic claims about the participating nodes of the network.  The consensus protocol is secured as long as the “majority of the CPU power is honest” condition remains true.  (We may pause to note the irony of a “trustless” protocol depending on the trustworthiness of vast numbers of independent computers on the Internet….)

Bitcoin and its extended family seem to work, and people have come to have confidence in these decentralized networks.  But why would we believe that this condition can be met in a real implementation?

Confidence is based on intuitive beliefs about the Internet and cryptocurrency networks on the Internet.  The basic intuition is that 1) the Internet has a huge number of independent nodes with huge aggregate computing power, and 2) the computing power is distributed approximately evenly across the Internet.  The idea is that it is impractical to round up zillions of independent mom-and-pop nodes to make up a majority.

The size of the Internet is indisputable, though it is important to remember that cryptocurrencies operate on only a fraction of the nodes. Dreams of a universal blockchain, computed and protected by every computer everywhere are a long way from reality, and probably unrealistic. (Remember, many of the nodes are mobile devices and dedicated systems that are not necessarily available for computations.)

There are many blockchains and cryptocurrencies, and their networks are not necessarily very large.  Actually, some “private” blockchains are not only small, but also not open to the public, so we don’t really know anything about them.

Whatever the number of nodes, the CPU power is never evenly spread, not even approximately.  In this assumption, we see the Libertarian instincts of Nakamotoists overriding knowledge about networks.  Statistically, networks are never egalitarian; they are always hierarchical [4].  Cryptocurrency networks are no different, and no sensible person would expect them to be.

In the real world, therefore, cryptocurrencies fail to meet one or both of these intuitive criteria.

Only the largest cryptocurrencies, such as Bitcoin or Ethereum, have truly massive numbers of nodes.  For many alternative cryptocurrencies or blockchains it is hard to know just how big the network might be, though they obviously start out small.  At the extreme, a small network means that only a relative handful of nodes could collude to control it.

Even the largest networks are characterized by huge concentrations of CPU power in the hands of a few operations.  There may be millions of ordinary Joes out there, but there are gigantic server farms with millions of times Joe's CPU power.  Even Bitcoin, the largest of them all, has a handful of operations that together control 51% of the computing power and therefore could, in theory, control the network.  This has been evident in the governance stalemate over scaling, with the interests of a few operators successfully blocking upgrades that disadvantage them.
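To see why concentration matters so much, here is a toy calculation (the hash-rate shares are hypothetical round numbers, not measured figures): how few operators need to collude to pass 50%?

```python
def min_coalition(shares_pct, threshold_pct=50):
    """Smallest number of miners whose combined hash-rate share (in whole
    percent) strictly exceeds threshold_pct. Greedy: recruit the biggest
    miners first. Returns None if even everyone together falls short."""
    total = 0
    for i, share in enumerate(sorted(shares_pct, reverse=True), start=1):
        total += share
        if total > threshold_pct:
            return i
    return None

# Hypothetical concentrated network: a few big pools plus many small miners.
concentrated = [22, 18, 13, 10] + [1] * 37   # sums to 100%
# The idealized egalitarian network of the intuition above.
egalitarian = [1] * 100
```

On the concentrated (and realistic-shaped) distribution, three operators suffice; on the egalitarian one, it takes 51. The honest-majority assumption is only as strong as the flatness of the distribution.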

The upshot is that the likelihood of a 51% attack is unknown (and can depend a lot on obscure details of the implementation), but it is clearly higher when the network is small and computing power is concentrated. In these cases, “honest nodes control a majority of CPU power” is more of a hope than a solid assumption.

So we shouldn’t be surprised that these problems are increasing, especially in smaller networks, and especially in opaque networks [2].  This is definitely a case where being “just as good as Bitcoin (or Ethereum or whatever)” is scarcely a guarantee that it will work just as well.

  1. Alyssa Hertig (2018) Blockchain’s Once-Feared 51% Attack Is Now Becoming Regular. Coindesk, https://www.coindesk.com/blockchains-feared-51-attack-now-becoming-regular/
  2. Alyssa Hertig (2018) Verge’s Blockchain Attacks Are Worth a Sober Second Look. Coindesk, https://www.coindesk.com/verges-blockchain-attacks-are-worth-a-sober-second-look/
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf
  4. Mark Buchanan, Nexus: Small Worlds and the Groundbreaking Theory of Networks, New York, W. W. Norton and Company, 2002.


Cryptocurrency Thursday

Yet Another “Blockchain for Provenance” System

In the short decade since the Nakamoto paper [5], cryptocurrency enthusiasts have put forward a variety of use cases for blockchains and cryptocurrencies.  It is notable that most of the exciting use cases aren’t actually in the canonical paper itself, and most of them have yet to prove out in the real world. (And the most successful use cases are the ones not put forward as good examples: extortion, dark commerce, money laundering, etc.)

One of the perennial favorite use cases is Provenance: tracing goods from source to consumer.  For companies, this is “logistics” or “supply chain”; for ordinary consumers, it is about quality control.  It is the same problem that scientists (and everyone else) face with data quality—where did this data come from, and what has been done to it?  In the latter form, this is called “provenance”, and we were struggling with solutions a long time ago (before Nakamoto, Ante Bitcoin) [3].

This month yet another company touted this use case at the Ethereal Summit in NYC [1].  The presentation by Viant traced a tuna from Fiji all the way to the conference sushi plates.  The fish was tagged with RFID, and records of its sale and transportation are on the Ethereum blockchain, so everyone can check that the fish they are eating is “moral”. (How it can be “moral” to harvest increasingly rare wild animals and fly them halfway around the world beats me.)

This is the yuppie version of Provenance (making sure that my luxury goods are authentic and “moral”), but the technology is the same as any supply chain.

Looking at Viant’s web site, they seem to have a reasonable grasp of the problem.  They have a logical model of provenance that includes “four pivotal aspects of an asset: Who, What, When, and Where”.  The model includes “Actors” and actions, and “Roles” that define permissions.  IMO, this is the right stuff (see [3]).
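As a rough illustration of what such a model amounts to (the field names below are my own, not Viant's actual schema), each step in an asset's history is a record answering the four questions, attributed to an actor acting in a role:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: records are append-only, never edited
class ProvenanceEvent:
    """One step in an asset's history. Fields are illustrative, not any
    vendor's real schema: Who (actor) did What (action), When, and Where,
    to which asset, acting in which permission-defining Role."""
    actor: str      # Who
    action: str     # What
    timestamp: str  # When (ISO 8601 string for simplicity)
    location: str   # Where
    asset_id: str
    role: str       # e.g. harvester, distributor, retailer

catch = ProvenanceEvent(
    actor="FV Seaward", action="caught",
    timestamp="2018-05-01T06:00:00Z", location="Fiji EEZ",
    asset_id="tuna-0001", role="harvester")
```

A chain of such events, each attested by its actor, is the provenance trace; the hard part is trusting the attestation, not storing the record.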

They also have RFIDs to tag and geo-track assets, and apps to implement operations (e.g., sales to distributors).  These are certainly the right technology, and they are lucky to have ubiquitous mobile devices and “the cloud” to implement the concepts we pioneered in the late twentieth century [4].

So what does blockchain technology bring to the table?

First of all, it is used as a shared database, essentially a bulletin board.  The cryptographically signed and immutable records provide an unfudgeable trace of the object’s life.  And the blockchain is available to anyone, so ordinary consumers can get the authenticated traces of the object. (More likely, third parties will create apps that deliver the information to consumers; no normal person monkeys around with the blockchain itself.)
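The immutability comes from chaining: each record's hash covers the previous record's hash, so altering any record invalidates everything after it. A minimal sketch of the idea, using a bare hash chain in place of the full Ethereum machinery:

```python
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a record together with the previous link, forming a chain."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash
    return hashlib.sha256(payload).hexdigest().encode()

def append(chain, record):
    """Append (record, link-hash) to the chain."""
    prev = chain[-1][1] if chain else b"genesis"
    chain.append((record, record_hash(record, prev)))

def verify_chain(chain):
    """Recompute every link. Tampering with any record breaks its own
    link and, transitively, every later one; that is the whole guarantee."""
    prev = b"genesis"
    for record, stored in chain:
        if record_hash(record, prev) != stored:
            return False
        prev = stored
    return True
```

Note what this does and does not give you: it proves the records haven't been altered since they were written, not that they were true when written.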

The second feature is the use of Ethereum “smart contracts” to process the transactions. This technology lets the company post standard scripts for, say, transfer of an asset. The script is available anywhere, and executes the same way for everyone.
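A toy version of the transfer logic such a script might enforce (illustrative Python, not Solidity and not Viant's actual code): the same rule, "only the current owner can sell", executes identically for every participant, and every action is logged.

```python
class AssetRegistry:
    """Toy stand-in for an on-chain asset-transfer contract: shared state
    plus rules that run the same way for everyone."""

    def __init__(self):
        self.owner = {}   # asset_id -> current owner
        self.log = []     # append-only action history

    def register(self, asset_id, owner):
        if asset_id in self.owner:
            raise ValueError("asset already registered")
        self.owner[asset_id] = owner
        self.log.append(("register", asset_id, owner))

    def transfer(self, asset_id, seller, buyer):
        # The contract's rule: only the recorded owner may transfer.
        if self.owner.get(asset_id) != seller:
            raise ValueError("seller does not own asset")
        self.owner[asset_id] = buyer
        self.log.append(("transfer", asset_id, seller, buyer))
```

The value is uniformity and auditability of the rules, not magic: the contract can only check what is on the chain, not whether the fish in the box matches the RFID tag.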

These features are, of course, available from conventional databases and file systems as well.  But the Ethereum blockchain is available to everyone, and is maintained by the Ethereum network rather than dedicated servers.  This is the third advantage of the blockchain—deployment (no need for server farms), availability (no server access required) and maybe cost (TBD).

It is interesting to point out one feature of Nakamotoan blockchains that is not really used here:  trustlessness.  While the system boasts that it is decentralized and therefore “trustless”, this is misleading.

Provenance is literally all about trust. The point of tracing the object is to assure that it is what it is supposed to be, and that requires knowing who did what, etc.  Furthermore, it needs to establish a trusted trace, with each actor and action attested by a trusted source.

Using a blockchain, or indeed any digital system, is not sufficient to achieve this, and the company will tell you so.  The RFID can be removed or destroyed.  Actors can make mistakes or be suborned.  On the blockchain, false records look the same as correct records (and can never be removed).  Trust involves real-world protocols, including authentication of identities.

In this area, the blockchain may actually be a liability. The “trustless” data cannot be trusted.  Part of what the company is doing with the “smart contracts” is overlaying a network of trusted records on the trustless blockchain.

There are other potential drawbacks of using a blockchain in this use case.

Let’s talk about privacy.  Think about it. It’s not clear just how “moral” it is for anyone in the world to know where every bit of sushi came from and ended up.  Individual fishing captains don’t necessarily want any kid on the Internet snooping on their business, not to mention rival captains and possible criminal gangs.  And the caterer doesn’t necessarily want random people, competitors, or criminals tracking their business. And so on.

Second, there is no way to correct mistakes. Even if the software is always correct (which is unlikely), people make mistakes and are dishonest. If bad information gets onto the blockchain, it can’t be removed or corrected.

So, imagine that a bad actor somehow gets a bunch of bad fish entered as OK fish.  The blockchain shows that this is “moral tuna”, even though it isn’t.  Even if we find out about the fraud, the blockchain could still have the evil records forever.

One last point.  Viant is one of I don’t know how many companies trying to implement this kind of Provenance.  With all these variations out there, it will be extremely important to have interoperability standards, so you can combine tracking from a number of sources.  (See the W3C PROV working group.)

Using standards would seem to be both obvious and compatible with the philosophy of decentralization.  After all, if the only way to do tracking is to use Viant’s proprietary data model and software, then a key advantage of the decentralized blockchain is out the window.

Overall, Viant and others are doing the right thing.  It remains to be seen whether using a blockchain will be a net win.  And all of them should implement the standards we started developing back at the turn of the century.

  1. Alyssa Hertig (2018) Moral Food: A Fish’s Trek From ‘Bait to Plate’ on the Ethereum Blockchain. Coindesk, https://www.coindesk.com/moral-food-a-fishs-trek-from-bait-to-plate-on-the-ethereum-blockchain/
  2. Robert E. McGrath, Semantic Infrastructure for a Ubiquitous Computing Environment, in Computer Science. 2005, University of Illinois, Urbana-Champaign: Urbana. http://hdl.handle.net/2142/11057
  3. Robert E. McGrath and Joe Futrelle, Reasoning about Provenance with OWL and SWRL, in AAAI 2008 Spring Symposium “AI Meets Business Rules and Process Management”. 2008: Palo Alto.
  4. Robert E. McGrath, Anand Ranganathan, Roy H. Campbell, and M. Dennis Mickunas. Incorporating “Semantic Discovery” into Ubiquitous Computing Environments. In Ubisys 2003, 2003.
  5. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf


Cryptocurrency Thursday