Category Archives: “About Cryptocurrency Narratives”

Grownups Get Real About Blockchains

The grown-ups have found out about blockchains and are starting to make realistic assessments of the technology.  As usual, they are sucking all the fun out of things.

The US National Institute of Standards and Technology (NIST) issued an informative report, which is an excellent overview of blockchain technology [2].  Much of the report is straightforward, but NIST is careful to point out important technical limitations.

“There is a high level of hype around the use of blockchains, yet the technology is not well understood. It is not magical; it will not solve all problems. As with all new technology, there is a tendency to want to apply it to every sector in every way imaginable.” ([2], p. 6)

I think the most important section of the report is Chapter 9, “Blockchain Limitations and Misconceptions”.  The authors explain many basic points, including the ambiguous nature of “who controls the blockchain” (everyone is equal, but devs are more equal than others), and the hazy accountability of potentially malicious users.

Technically, the blockchain has limited capacity, especially storage. Overall, it is difficult to estimate the resource usage of a blockchain because it is implemented on many independent nodes.

Most important of all, they parse the Nakamotoan concept of “trust”.  It is true that there is no third party that must be trusted (at least in permissionless blockchains), but there are many other elements that must be trusted including the basic fairness of the network and the quality of the software (!).

The report also calls attention to the fact that blockchains do not implement either key management or identity management. Identity is masked behind cryptographic keys, and if you lose your key, there is no way to either fix it or revoke it.  These are either features or bugs, depending on what you are trying to do and the kinds of risks you can stand.

Overall, many of the limitations described by NIST are end-to-end requirements:  no matter how a blockchain works, it only addresses part of the total, end-to-end transaction.

“The use of blockchain technology is not a silver bullet.” ([2], p. 7)

On the same theme, Bailey Reutzel reports in Coindesk on an IBM briefing on the end-to-end engineering of blockchain systems [1].  The talk itself is not published, but Coindesk reports that IBM warns potential customers about the end-to-end security challenges of using their Hyperledger technology.

As noted many times in this blog, there have been many hacks and oopsies in the cryptocurrency world, and most if not all of them have nothing to do with the blockchain and its protocols.

IBM approaches the challenge with a thorough threat analysis that looks at the whole system. This is, in fact, exactly what you need to do with a conventional non-blockchain system, no?

It seems clear that whatever a blockchain may achieve, it doesn’t “disrupt” IBM’s role as a heavyweight business consultant.

In the Coindesk notes, there is a hint at one more interesting point to think about: the global extent and “infinite” lifetime of the blockchain. Nominally, the blockchain maintains every transaction ever recorded, forever.  This means that, unlike most data systems, a worst-case breach somewhere in the system might expose data far and wide, back to the beginning of time. Whew!

Still, both NIST and IBM agree that there are potential use cases for the blockchain that are worth the trouble, including public records and supply chains. (And IBM will be glad to show you how to do it.)

Blockchains may be inscrutable, but they ain’t magic.

  1. Bailey Reutzel (2018) IBM Wants You to Know All the Ways Blockchain Can Go Wrong. Coindesk.
  2. Dylan Yaga, Peter Mell, Nik Roby, and Karen Scarfone, Blockchain Technology Overview. The National Institute of Standards and Technology (NIST), Draft NISTIR 8202, Gaithersburg, MD, 2018.



Cryptocurrency Thursday

Cognitive Dissonance, Thy Name Is Ethereum

Ethereum was awarded the designation of CryptoTulip of 2017, and no small part of that distinction was due to its ongoing efforts to deal with the catastrophic results of buggy “smart contracts”.

The DAO disaster of 2016 was “fixed” via an ad hoc hard fork that had the tiny side effect of creating a second, rump Ethereum currency.  Since that time, Ethereum has done several more forks to respond to problems.  And in 2017 a little oopsie resulted in millions of dollars worth of Ether being locked in inaccessible accounts.  This goof has not yet been addressed by a hard fork or any other technical fix.

The underlying problem, of course, is that Nakamotoan cryptocurrencies are designed to be “write once”, with the ledger being a permanent, unchangeable record.  This feature is intended to prevent “the man” from rewriting history to cheat you out of your money.  (This is a key part of the Nakamotoan definition of a “trustless” system.)

Ethereum has implemented executable contracts on top of this “immutable” data, which is where a lot of the problems come from.  Software is buggy, and “smart contracts” inevitably have errors or just plain produce incorrect or unintended results, such as theft.  But there is no way to correct the unmodifiable ledger, except by violating the write-once principle, i.e., a hard fork to rewrite history.
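The source of that “write once” property is worth spelling out: each block commits to the hash of its predecessor, so changing any historical record invalidates every later block unless the entire suffix of the chain is recomputed, which is essentially what a hard fork does. A minimal sketch (the block fields here are illustrative, not Ethereum’s actual block format):

```python
import hashlib
import json

def block_hash(block):
    # Hash the canonical JSON form of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev": prev_hash, "txs": transactions}

# Build a tiny three-block chain.
genesis = make_block("0" * 64, ["coinbase -> alice"])
b1 = make_block(block_hash(genesis), ["alice -> bob: 5"])
b2 = make_block(block_hash(b1), ["bob -> carol: 2"])

# The chain links verify...
assert b1["prev"] == block_hash(genesis)
assert b2["prev"] == block_hash(b1)

# ...until history is rewritten: any change to the genesis block
# breaks the link from its successor, so tampering is detectable.
genesis["txs"] = ["coinbase -> mallory"]
assert b1["prev"] != block_hash(genesis)
```

The only way to make the tampered chain verify again is to recompute and redistribute every block after the change, which is why a hard fork is such a heavyweight (and philosophically fraught) operation.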

True Nakamotoists deeply believe in the unchangeable ledger not only as an engineering design but as the logical foundation of the new, decentralized world economy.  But Ether-heads have (mostly) acquiesced to multiple ad hoc forks to work around grievous bugs, which to my mind completely trash the whole point of the Nakamotoan ledger. The CryptoTulip Award citation noted “the tremendous cognitive dissonance Ethereum has engendered”.

It is very interesting, therefore, to see current discussions proposing to regularize this recovery process [2]. The idea, of course, is to reduce the risk and delay of ad hoc fixes with a more open proposal and review process.  Unfortunately, this process publicly endorses the very practice that the ledger is supposed to preclude.

This proposal has been controversial, for many obvious reasons.

In addition to the obvious problem with the whole idea of ever rewriting the ledger, the Ethereum community is dealing with questions about how “decentralized” decision making should work.

Theoretically, anyone on the Internet can have a stake in decisions about Ethereum software and protocols.  However, in the crypto world—and “open source” in general—some people are more equal than others.  Active programmers, AKA, “developers”, have influence and often veto power over technical developments.  And operators of large mining operations have veto power in their ability to adopt or reject particular features.

In the earlier ad hoc forks, the devs decided and then implemented the fork. There was little discussion, and the only alternative was the nuclear option of continuing to use the denigrated fork—which many people did. The result was two Ethereums, further muddled by additional changes and forks.

The proposed new process requires public discussion of forks, possibly including video debates. Critics complain (with good reason) that this is likely to introduce “politicians” into the process. I would say that it also will create factions and partisan maneuvering.  It is not inconceivable that (gasp) vote buying and other corruption might arise.

In short, this public decision-making process will be openly political.  What a development. The governance of Ethereum is discovered to be political!

Politics (from Greek πολιτικά, “affairs of the cities”) is the process of making decisions that apply to members of a group.

The explicit acknowledgement of human decision making creates a tremendous cognitive dissonance with the Nakamotoan concept of a “trustless” system, where all decisions are by “consensus”.  (In practice, “consensus” means “if you disagree, you can split off your own code”.)

But it also clashes with the core Ethereum idea of “smart contracts”, which are imagined to implement decentralized decision making with no human involvement. The entire idea of the DAO was to create an “unstoppable” enterprise, where all decisions were implemented by apolitical code.  When Ethereum forked to undo the DAO disaster, it essentially undermined the basic rationale for “smart contracts”, and for Ethereum itself.

And now, they want to have humans involved in the decision making!

The very essence of this dissonance is captured in a quote from Rachel Rose O’Leary:

“For now, no further action will likely be taken on the proposal until ethereum’s process for accepting code changes, detailed in EIP-1, has been clarified.” [1]

In other words, EIP-867 is so completely inconsistent with the decision-making process it isn’t even possible to talk about it.  I guess they will continue to muddle through, ad hoc, violating the spirit of Nakamotoism.

I think that Ethereum is managing to radically “disrupt” itself and the whole concept of Nakamotoan cryptocurrency.

  1. Rachel Rose O’Leary (2018) Ethereum Devs Call for Public Debate on Fund Recovery. Coindesk.
  2. Dan Phifer, James Levy, and Reuben Youngblom, Standardized Ethereum Recovery Proposals (ERPs). Ethereum Improvement Proposal, 2018.
  3. Rachel Rose O’Leary (2018) Ethereum Developer Resigns as Code Editor Citing Legal Concerns. Coindesk.



Cryptocurrency Thursday

Cornell Report on Cryptocurrency “Decentralization”

One of the outstanding features of Nakamotoan blockchains is that they use a “decentralized” protocol—a peer-to-peer (overlay) network produces consistent updates to the shared data with no privileged leader or controller [2].  This property is a significant technical feature of Bitcoin and its extended family, and it has even more symbolic and cultural significance for crypto enthusiasts.

“Decentralization” is supposed to impart technical robustness (there is no single point of failure), and political independence (there is no “authority” to be manipulated or shut down).  The absence of a “central” node also means that the protocol is “trustless”—there is no central service that must be trusted in order to do business. (I.e., you only need to trust your counterparties, not the rest of the network.)

In short, Nakamotoan blockchains and cryptocurrencies are all about being “decentralized”.

But what does “decentralized” mean?

In fact, the notion of “decentralization”, along with many related concepts, is poorly defined. In the context of a computer network, “centralized” can mean many things.  Indeed, a network transaction may depend on a number of physical and virtual layers, with different degrees of centralization involved simultaneously.  For example, a wi-fi network has various routers, links, switches, firewalls, and so on.  Even the simplest point-to-point link may pass through a number of shared channels and chokepoints that are technically “central” services, though the overlying service is decentralized, or centralized in a different way.  (Does that sound confusing?  In practice, it truly is.)

However, Nakamotoan “decentralization” is mostly about the logical organization of digital networks, as developed in so called “peer-to-peer” networks.  A classic Internet service is “centralized” in the sense that  client (user) nodes connect with a single server, which manages the whole system.  Clients trust the service to implement the protocol and protect all the data.  Note that so-called “centralized” services often run on many computers, even in many locations.  They are logically a single server, even if not physically a single node. (Does that sound confusing?  In practice, it is.)

Nakamotoan systems replace a single “trusted” service with a peer-to-peer protocol based on cryptography and economic incentives.  One of the critical design features is the use of algorithms that are impossible for a single node to hack.  This is important because in a conventional “centralized” service, once a server is suborned (or subpoenaed), the whole network is controlled.

In contrast, Bitcoin is designed so that the system cannot be controlled unless an attacker controls more than 50% of the network’s total mining power.  In this design, security is assured by having a very large number of independent nodes in the network. This widespread participation is made possible by making the code openly available and letting anyone connect to the network.
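Nakamoto’s own paper [2] quantifies this threshold with a gambler’s-ruin argument: an attacker holding fraction q of the hash power catches up from z blocks behind with probability (q/p)^z, where p = 1 − q, and that probability collapses to certainty once q reaches half. A quick rendering of the formula:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the total hash
    power ever overtakes an honest chain that is z blocks ahead
    (the gambler's-ruin result from the Bitcoin whitepaper)."""
    p = 1.0 - q          # honest fraction of hash power
    if q >= p:
        return 1.0       # a majority attacker always wins eventually
    return (q / p) ** z

# A 10% attacker almost never rewrites history buried six blocks deep,
# but past 50% the security guarantee evaporates entirely.
print(catch_up_probability(0.10, 6))
print(catch_up_probability(0.51, 6))
```

Note how discontinuous the guarantee is: below the halfway mark, deep history is effectively safe; at or above it, no depth of confirmation helps at all.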

While the cryptography has a relatively straightforward technical basis, other aspects of this security guarantee are less easy to define and they are actually empirical features of the network that may or may not be realized at any given moment.

For example, everything depends on the Bitcoin network being “owned” by many, many independent people and organizations.  If one party controlled 51% of the network, they would control the ledger, and in effect the currency.  And in fact, the relevant measure is 51% of the computing power, not 51% of the computers.

The point—and I do have one—is that while the Bitcoin protocol is designed to work in a decentralized network, the protocol only works correctly if the network really is “decentralized” in the right ways.  And there is no formal definition of those “right ways”, nor much proof that various cryptocurrency networks actually are decentralized in the right way.

This winter, Cornell researchers reported an important study of precisely these questions on the real (as opposed to theoretical or simulated) Bitcoin and Ethereum networks [1].

“[T]here have been few measurement studies on the level of decentralization they achieve in practice.” ([1], p. 1)

This study required a technical system to capture data about nodes of the relevant overlay networks (i.e., real life Bitcoin or Ethereum nodes).  In addition, the study examined key technical measures of the nodes, to discern how the overall capabilities are distributed (i.e., the degree of decentralization).  These measures include network bandwidth (data transmission), geographic clustering (related to “independence”), latency (a key to fairness and equal access), and the distribution of ownership of mining power.  The last is an especially important statistic, to say the least.

The Cornell research showed that both Bitcoin and Ethereum have distinctly unequal distributions of mining power.  In the study, a handful of the largest mining operations control a majority of the mining power on the network.  (Since some parties own or collaborate with multiple mining operations, these counts underestimate the actual concentration of power.)  In other words, these networks are highly centralized in this essential aspect of the protocol.  The researchers note that a small non-Nakamotoan network (a Byzantine quorum system of size 20) would effectively be more decentralized—at far less cost than the thousands of Nakamotoan nodes ([1], p. 11).
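Concentration measures of this kind are easy to compute once blocks can be attributed to miners. A toy sketch with made-up pool names (not the study’s actual data), showing the share mined by the top k pools and the minimum number of miners that together command a majority:

```python
from collections import Counter

def top_k_share(block_producers, k):
    """Fraction of observed blocks mined by the k largest miners."""
    counts = Counter(block_producers)
    top = sum(n for _, n in counts.most_common(k))
    return top / len(block_producers)

def min_majority_miners(block_producers):
    """Smallest number of miners that together mined >50% of blocks."""
    counts = sorted(Counter(block_producers).values(), reverse=True)
    total, running = sum(counts), 0
    for i, n in enumerate(counts, start=1):
        running += n
        if running > total / 2:
            return i

# A hypothetical window of 100 blocks: four pools dominate,
# twenty solo miners find one block each.
blocks = (["poolA"] * 30 + ["poolB"] * 25 + ["poolC"] * 15 +
          ["poolD"] * 10 + ["solo%d" % i for i in range(20)])
print(top_k_share(blocks, 4))       # share held by the top four pools
print(min_majority_miners(blocks))  # miners needed for a majority
```

In this made-up window, just two pools suffice for a majority, which is the shape of result the Cornell measurements report for the real networks.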

“Although miners do change ranks over the observation period, each spot is only contested by a few miners. In particular, only two Bitcoin and three Ethereum miners ever held the top rank.” ([1], p. 10)

These findings are not a surprise to anyone observing the flailing failure of the “consensus” mechanism over the last two years, let alone the soaring transaction fees and demented reddit ranting.  Cryptocurrency systems are designed to be decentralized, but they are, in fact, dominated by a few large players.

By the way, the two networks studied here are likely the largest and most decentralized cryptocurrency networks.  Other nets use similar technology but have far fewer nodes and often far more concentrated ownership and power.  So these two are the good cases.  Other networks will be worse.

The general conclusion here is that Nakamoto’s protocol pays huge costs in equipment, power consumption, and decision-making efficiency to achieve the supposed benefits of a “decentralized” system.  Yet the resulting networks are actually highly centralized, though in opaque and hidden ways.  I think this is a fundamental flaw in the engineering design, and also in the philosophical underpinnings of Nakamotoan social theory.

I’d love to see similar careful studies of other underpinnings of Nakamotoism, including the supposed properties of “openness”, “trustlessness”, and “transparency”.

A very important study.  Nice work.

  1. Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert van Renesse, and Emin Gün Sirer, Decentralization in Bitcoin and Ethereum Networks. arXiv, 2018.
  2. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.


Cryptocurrency Thursday

Zcash Symbolic Security Means Very Little

This week I was struck by the Coindesk headline, “Latest Zcash Ceremony Used Chernobyl Nuclear Waste” [1].

Wow!  Even cryptocurrency cheerleaders recognize this silliness for what it is: an elaborate symbolic gesture, showing their determination to prevent dark forces from somehow compromising their key generation.

The Zcash ceremony used the randomness of radioactive decay to generate random numbers, which eliminates any question of hacked or tapped software. In addition to performing the ceremony in the sky, the magic was given additional potency by using radioactive material from Chernobyl.  Sigh.
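To be fair, the underlying trick is legitimate physics: the intervals between decay events are unpredictable, so comparing successive intervals yields random bits, and a von Neumann debiasing step strips out any residual bias. A toy simulation, with an exponential sampler standing in for the actual detector (the parameters are illustrative, not Zcash’s procedure):

```python
import random

def bits_from_intervals(intervals):
    """Turn pairs of decay inter-arrival times into raw bits:
    emit 1 if the first interval is longer, 0 if shorter, skip ties."""
    bits = []
    for a, b in zip(intervals[::2], intervals[1::2]):
        if a != b:
            bits.append(1 if a > b else 0)
    return bits

def von_neumann_debias(bits):
    """Classic debiasing: map the pair 01 -> 0 and 10 -> 1,
    discard 00 and 11, removing any constant bias in the source."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated exponential inter-arrival times stand in for a Geiger counter.
rng = random.Random(42)
intervals = [rng.expovariate(1.0) for _ in range(10000)]
raw = bits_from_intervals(intervals)
clean = von_neumann_debias(raw)
print(len(clean), sum(clean) / len(clean))  # roughly half ones
```

The physics is fine; the question raised below is whether securing this one input buys much, given everything else that must also be trusted.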

This is not the first such ritual from the Zcash gang. The whole point of Zcash is that it is supposed to be really, really secure from all instantiations of “the man”.  These technological spirit dances are part of their nearly religious narrative about “privacy”.  (“There are enemies everywhere.  But we are smarter than ‘them’.  Trust us.”)

Of course, it’s all mainly symbolic.

In the security game, you have to cover all threats, and your moves in the game must defeat particular moves by perceived and real adversaries.

What threat is this elaborate move supposed to defeat?  Basically, this move counters general fears that networks and software have secret backdoors by which dark forces can steal keys and breach security.  The ceremony also prevents “the devs” from using or implanting such backdoors in the software.

In short, this mystical theater is supposed to increase “trust” in the software, by assuring that the random numbers are really random and private, and therefore algorithms that use them can be relied upon to be secure.

(I’m pretty sure that serious folks such as the NSA, GCHQ, and other national agencies, have been doing equivalent rituals for decades, albeit with a lot more paperwork and acronyms.)

This process is all fine, I guess, and you could say it is ‘necessary’ to secure the software.

But the important question is, is this sufficient to secure it and achieve “trust”?

Obviously not.

For one thing, the biggest security and trust problems in cryptocurrency are not in the algorithms but in the peripherals and users.  The same week as Coindesk reported this shamanistic ceremony in the Midwestern skies, it reported on a multimillion dollar heist, corporate shenanigans, and government anti-corruption actions.

The point is that when all these other holes exist, you can secure the random numbers all you want, but it won’t make the system trustworthy.

As I have said before, and will say again, trust is an end-to-end property.

I’ll add a couple of other points.

If you are wound up about sneak attacks via the random number generator, you should probably be even more worried about backdoors in the chips themselves.  As we have seen this month, the global Intel monoculture has some grievous flaws buried inside [2].  I’m pretty sure that the reported side-channel leaks probably subvert Zcash keys just like everyone else’s.

And for that matter, quantum computing will probably overwhelm their algorithms, no matter how random the random numbers.

So basically, I view this Zcash ceremony mainly as PR, with limited impact on security.

On the other hand, the PR may have been effective for developing “trust”, at least in the sense of faith-based, “I’m one of you”, kind of trust.

  1. Nikhilesh De (2018) Latest Zcash Ceremony Used Chernobyl Nuclear Waste. Coindesk.
  2. Graz University of Technology, Meltdown and Spectre. 2018.
  3. Andrew Miller, [zapps-wg] Powers of Tau. 2018.



Cryptocurrency Thursday

Ethereum explores actual software engineering

Cryptocurrency software generally has rather spotty quality.  Aside from the usual woes of ‘open source’ software, including a rush to market, minimal budgets, endless happy talk from the business side, and inexperienced programmers, cryptocurrencies suffer from their ‘decentralized’ governance model, which makes serious engineering difficult.

As a result, the major cryptocurrencies all have grievous performance problems, and, of course, awesomely dumb bugs.

Ethereum (winner of the 2017 CryptoTulip of the Year award) has provided a veritable clinic on the shortcomings of decentralized management of software.

Now, it is true that Ethereum has an actual human founder (Vitalik Buterin), who does intervene to nudge (or even bulldoze) the community in certain directions.  But engineering changes are still done in the decentralized mode:  build it first, and then see if everyone agrees to use the modified software.  Astonishingly enough, letting users decide on every engineering detail (retroactively) is at best awkward, and at worst disastrous.

As widely noted, Ethereum has been experiencing performance issues due to the success of the CryptoKitties game.  In particular, the transactions for this one application have filled up the shared ledger, and sucked down processing time for all the nodes of the network.

Let’s be clear: these technical problems are perfectly normal, and, in fact, they are a sign that the software is successful and maturing.  There are any number of technical moves that might be made to increase capacity or ration usage or both.  Unfortunately, these normal engineering decisions, required to save the whole system, will disadvantage some individual users.  And with the decentralized governance approach, anything that cannot command near-unanimous approval cannot be implemented.

Amazingly enough, faced with potential collapse, Ethereum is seeing something almost unthinkable: actually serious software engineering, beyond happy talk and hand-wringing on Reddit.

This month saw excitement over actual optimizations of the Ethereum system. Among other amazing fixes, various temporary files have been eliminated, vastly decreasing the storage needed.  Other fixes deal with gross inefficiencies in handling the data structures in memory.  It’s hard work, but pays off with much better performance.

Buterin himself is exploring serious redesigns including “stateless” clients and sharding.  These designs replace the mindless replication of all the data on every node with more clever ways to split up the data and work. These approaches are well known and well tried: they have been in use for decades in systems such as massively multiplayer online games (think World of Warcraft and other similar systems).  I’m sure they can be made to work pretty well for Ethereum.
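The core idea of sharding is simple enough to sketch: partition the state deterministically, for instance by hashing account addresses, so every node can agree on which shard holds what without any coordinator, and each node need only store and validate its own slice. A hypothetical illustration (the shard count and account names are made up; real Ethereum proposals are far more involved):

```python
import hashlib

NUM_SHARDS = 64  # illustrative; real proposals use different counts

def shard_for(address: str) -> int:
    """Assign an account to a shard by hashing its address.
    Deterministic, so all nodes agree without coordination."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Every node computes the same assignment independently.
for acct in ["alice", "bob", "carol"]:
    print(acct, "-> shard", shard_for(acct))
```

The hard engineering, of course, is not the assignment function but handling transactions that cross shard boundaries, which is where the multiplayer-game systems mentioned above earned their decades of experience.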

Obviously, these are good ideas. They were good ideas twenty years before Ethereum was built.  It’s about time that these systems with millions and millions of dollars riding on them got up to reasonable levels of engineering.

I could be grumpy and point out that, back when I was a lad, we used to think things through before we released the product, not after several years and hundreds of millions of dollars worth of goofs.

More important, though, is the observation that the optimizations that are rolling out now have been implemented by companies and individuals not constrained by a decentralized ‘consensus’ mechanism. The client-side software is more or less ‘open source’, but it isn’t governed by the same ‘everybody or nobody’ consensus rules.  Hence, it is possible to change the code relatively radically and relatively quickly.

In contrast, the server side stuff (e.g., sharding) is moving slowly. It’s worse than design by committee, it’s design by … who knows?  And even if a good plan emerges, it still has to survive the consensus process.  This could take years.

We’ll see.

As I said, this is a becoming case study in the difficulties of engineering decentralized systems.

  1. Alexey Akhunov (2018) “Roadmap for Turbo-Geth.” Medium, January 6.
  2. Vitalik Buterin (2017) “The Stateless Client Concept.” EtherResearch, October.
  3. Rachel Rose O’Leary (2018) Blockchain Bloat: How Ethereum Clients Are Tackling Storage Issues. Coindesk.



Cryptocurrency Thursday


Narayanan and Clark on Bitcoin’s academic roots

For an old grey-headed programmer, Bitcoin has always been a bit of a weird technology.

The big thing, of course, is that it is deliberately designed to be slow. My whole career has been basically about trying to make software go faster, so a computation whose only purpose is to take a long time just feels wrong.  I understand it intellectually, but it’s just not right, deep down.

The other thing about Bitcoin is that none of the pieces are new, though the specific way they are used is. For example, I was doing peer-to-peer networks (with hash addresses) before the Nakamoto paper [1], so there was no news there.

So what, exactly, is new about Bitcoin?

I was very pleased to read Arvind Narayanan and Jeremy Clark’s recent article reviewing “Bitcoin’s academic pedigree” [2].  N&C review the academic papers that present many of the key technical features used in Nakamotoan cryptocurrencies.

[B]y tracing the origins of the ideas in bitcoin, we can zero in on Nakamoto’s true leap of insight—the specific, complex way in which the underlying components are put together.” (p. 38)

They point to six lines of technical innovation from the 1980s and 90s that are critical to Nakamotoan cryptocurrencies:

  1. Linked Timestamping, Verifiable Logs
  2. Digital Cash
  3. Proof of work
  4. Byzantine Fault Tolerance
  5. Public Keys as Identities
  6. Smart Contracts

Figure 1. Chronology of key ideas found in bitcoin. (From [2], p. 38.)

In some cases, Nakamoto acknowledges the academic predecessors, and in others he doesn’t. In part that is because some of the ideas were so widely known that they seem “obvious” and “common knowledge”, even if they were first written about only in the last forty years.  It is also possible that Nakamoto may have reinvented some of the concepts, perhaps inadvertently reverse engineering them from example systems known to him, without tracing their origins.

Nakamoto was obviously following up on earlier concepts for digital money, including hashcash, which used a form of proof-of-work based on hashing.  N&C note that there was a lot of academic interest in proof-of-work, and several lines of work seem to have independently converged on ideas about using hashing as proof-of-work in peer-to-peer networks. In the last fifteen years, these efforts have been recognized to be the same idea, and the terminology, including the term “proof-of-work”, has been standardized.
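The hashcash idea itself fits in a few lines: find a nonce whose hash, combined with the message, falls below a target, so producing the proof costs many hash evaluations on average while checking it costs one. A simplified sketch (Bitcoin’s actual header format and double-SHA-256 are omitted):

```python
import hashlib
from itertools import count

def mine(header: str, difficulty_bits: int) -> int:
    """Hashcash-style proof of work: find a nonce such that
    SHA-256(header + nonce) has at least `difficulty_bits` leading
    zero bits.  Expected work is about 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(header: str, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, regardless of difficulty.
    digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine("block data", 16)   # ~65,536 hashes expected
assert verify("block data", nonce, 16)
```

The asymmetry between `mine` and `verify` is the whole point: work is expensive to produce, cheap to check, and the difficulty knob tunes how expensive.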

Nakamoto also uses widely known public-key cryptography to implement secure but anonymous digital signatures. The use of public keys as identifiers is central to Bitcoin, and Bitcoin is one of the most successful implementations of that concept. However, Nakamoto actually punts on the problem of key management, which has certainly led to issues, as well as to the development of alternative cryptocurrencies that deal with keys and identity in different ways.
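The “public keys as identities” idea can be sketched in one function: an address is simply a hash of a public key, something anyone can check signatures against, yet it reveals nothing about who holds the matching private key. This is a deliberately simplified sketch (Bitcoin actually uses SHA-256 followed by RIPEMD-160 plus a checksummed Base58 encoding, and the key below is a placeholder, not a real EC point):

```python
import hashlib

def address_from_pubkey(pubkey: bytes) -> str:
    """Derive a pseudonymous identifier by hashing a public key.
    Losing the private key means losing the identity: there is no
    registry to appeal to, which is the key-management gap noted above."""
    return hashlib.sha256(pubkey).hexdigest()[:40]

# A made-up key for illustration only.
pubkey = b"\x04" + b"\x11" * 64
print(address_from_pubkey(pubkey))
```

The one-way hash is what makes identities both verifiable and unrecoverable, which is exactly the feature-or-bug trade-off discussed elsewhere in these posts.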

N&C argue that Nakamoto’s contribution, his “genius”, was “the intricate way in which they fit together” these pieces from academic and practical research. Nakamoto’s system is a triad, with each piece shoring up a logical gap in the others (p. 42):

  1. Secure ledger: prevents double spending and ensures the currency has value; needs distributed consensus.
  2. Distributed consensus (mining): ensures the security of the ledger; needs to be incentivized, i.e., by a valuable currency.
  3. Valuable currency: incentivizes the honesty of nodes; needs a secure ledger.

This is an extremely useful insight, which explains why it has been so difficult to describe the “one big idea” underlying Bitcoin.  In fact, it is a clever combination of big ideas, glued together in a specific way that works pretty well in practice.

It would be an interesting follow-up to this paper to identify the “innovations”, if any, in various alternative and derivative cryptocurrencies. A number of alternatives to the Nakamotoan proof-of-work have been proposed and explored.  There have been alternatives to the peer-to-peer topology of the consensus network, as well as many different ideas about incentives. In short, there is probably a landscape of contemporary cryptocurrency design, with many neighbors in Bitcoin’s neighborhood.

I would add that there is a social dimension to the Bitcoin story (besides incentives).  Bitcoin succeeded beyond the simple merits of its technology because it hit a particular time and place (the 2009 global crash) and had a supremely effective salesman (“Satoshi Nakamoto”, and the legions of enthusiastic Nakamotoans) who told and retold and still tell the story.

This combination of a clever technology built “just right” from existing concepts, arriving at the right moment, announced by a supreme salesman reminds me of NCSA Mosaic.  I remember that when I first saw the Mosaic browser, I immediately knew all the pieces it was built from.  Yet it was a new wrinkle, combining the familiar technologies, “just right”.  It also hit at the right moment (the Internet was exploding) and found a cheerleader in Larry Smarr—one of the greatest sales-beings I have ever encountered.

Bitcoin too succeeded by having a clever combination of technologies (including the strategically critical “leaving out” of key management), a fortunate historical moment, and an able storyteller.  (We can also see parallels in the overheated claims and financial bubbles of the early WWW and Bitcoin.)

This is a great paper, well worth the read.  N&C give us a better idea of the “genius” of Satoshi Nakamoto, and also insight into ongoing technical and social developments.

  1. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.
  2. Arvind Narayanan and Jeremy Clark, Bitcoin’s academic pedigree. Communications of the ACM, 60 (12):36-45, November 2017.


Cryptocurrency Thursday

Bitcoin’s Unsustainable Energy Consumption

Nakamotoan cryptocurrencies are designed to suck CPU cycles, which means sucking electricity. The entire concept rests on a deliberately inefficient computation, which prevents replay and other attacks by being too difficult (expensive) to replicate [3].

This is a clever and successful technique that yields a robust, global ledger at relatively low cost, at least to the end-users.

But there is a massive side effect, and that is the power consumption of the decentralized network.  Any given node is just a computer, but the whole idea is that there are zillions of nodes.

How many?  A lot.

And they use a lot of electricity.

Eric Holthaus says that the Bitcoin network consumes 100,000 times the power of the top 500 supercomputers, more than many countries, and may be on track to use more power than the USA [2].  Congratulations, Bitcoin.  You are out-consuming the most notorious energy hogs on the planet.
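A back-of-envelope calculation shows why the numbers get so large: the energy bill scales directly with total network hashrate and hardware efficiency, neither of which the protocol caps. Every figure below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope energy per confirmed block.  All three inputs
# are assumed round numbers for illustration, not measured values.
network_hashrate = 25e18   # hashes per second across the network (assumed)
joules_per_hash = 0.1e-9   # hardware efficiency, ~0.1 J per gigahash (assumed)
block_interval = 600       # Bitcoin targets one block per ~600 seconds

joules_per_block = network_hashrate * joules_per_hash * block_interval
kwh_per_block = joules_per_block / 3.6e6   # 3.6 MJ per kWh
print(f"{kwh_per_block:,.0f} kWh per block (under these assumptions)")
```

Under these assumed inputs the answer lands in the hundreds of thousands of kilowatt-hours per ten-minute block; plug in your own estimates, but the structure of the calculation is why the totals rival national grids.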

Personally, this kind of thing deeply offends my engineer’s soul.  Inefficient energy usage is bad, but deliberately wasting electricity is bad engineering, and, well, evil.

Apologists for cryptocurrency point out that this usage is equivalent to the conventional banking system.  They also say that some miners are using clean energy.  And so on.

These rhetorical points hinge on the assumption that Bitcoin is a good thing, along with the implicit claim that Bitcoin is displacing conventional financial systems.  (There is no sign of that displacement happening any time soon.)

And, of course, most cryptomining isn’t clean energy, and where it is using renewables it is displacing other users from public sources.  It is hard to be happy about precious electricity pouring down a rathole.

It is important to remember that this isn’t just Bitcoin.  There are many cryptocurrencies and blockchains.  They are smaller than Bitcoin, but they add to the load, and any of them can grow if it becomes “successful”.

These days there is also great interest in “smart contracts”, and Distributed Autonomous Organizations.  These CryptoTulips use the same power-sucking technology to implement their applications.  In principle, these things are doing useful work, though the accounting is really screwy.

Part of the point of the blockchain is that transaction costs are low compared to conventional systems.  But one reason the cost is low is that the cost of the electricity is not borne by the people using and profiting from the transactions.  That kind of unaccountability is a formula for overconsumption of the unaccounted resources.

I’ll also note that there are increasing levels of derivative activity built on top of cryptocurrency.  The mania for “Initial Coin Offerings” (unregulated securities on the blockchain) and futures trading means that there is considerable additional resource usage beyond the basic cryptocurrency itself.  Much of this activity uses conventional technology for most of the work.  So the energy footprint isn’t “Bitcoin versus Conventional”, it is “Conventional + Bitcoin versus Conventional”.

The bottom line is that this all seems unsustainable.  Something will have to give.  There will have to be less cryptocurrency mining, less of everything else, or a lot more power generation.  Even if you think cryptocurrency is a great thing, the latter two choices are rather bad for humans and the world.

  1. Michael J. Casey, Bitcoin Mining Wastes Energy? What If That’s Good? Coindesk, January 9, 2018.
  2. Eric Holthaus, Bitcoin could cost us our clean-energy future, in Grist. 2017.
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.
  4. Adam Rogers, The Hard Math Behind Bitcoin’s Global Warming Problem, in Wired – Science. 2017.


Cryptocurrency Thursday