Tag Archives: Christine Kim

OpenLibra – “Not run by Facebook”

Libra seems to be sucking all of the air out of the crypto world.  Right now, Libra and all of its spinoffs have to be leading the competition for the not-at-all-coveted Crypto Tulip of the Year Award for 2019.

This month the Libra circus opened yet another ring: OpenLibra (which isn’t even launched, but has already seen controversy) [1, 2].

I don’t really understand Libra in much detail.  Since I don’t do Facebook, it’s pretty irrelevant to me, and I plan to keep it that way.

But, if Libra is opaque to me, OpenLibra is opaque-squared.

Basically, it’s a fork of the Libra code that is “Not run by Facebook.”  As far as I can tell, OpenLibra will use Libra as its asset (which is supposed to be tethered to some kind of “stable” basket of assets), but will have its own codebase.  The plan seems to be to replicate the work of creating and maintaining the software, aiming for exactly the same results. And the value will depend on whoever manages Libra’s “reserve” of assets, not to mention whoever manages the assets in the reserve.

Why is this worth the trouble?


There isn’t a huge amount of information explaining the OpenLibra project.

The web page has a brief manifesto that outlines the perceived “dangers” posed by Libra’s governance, i.e., perceived ownership by Facebook and its collaborators.

Libra

  • “will be distributed but not decentralized.
  • Will require permissions to interact with.
  • Will not have privacy guarantees.
  • Will be run by a plutocracy.”

Obviously, being controlled by a monopolistic corporation is anti-democratic, to say the least.  These folks will take the profits and run the system to produce profits for themselves. Not a good deal for the customers.

Finally, the OpenLibra manifesto complains about the real possibility of “Surveillance finance. One’s ability to engage financially (e.g. borrow in Libra) will potentially be determined by their social graph and online activity.”

So, long story short, there are certainly potential problems with Libra.

How is OpenLibra a solution for these problems?


As far as I can tell, the main point of OpenLibra is to open up both the “permission to use” and the control of the software.  The latter is necessary to guarantee the former.

OpenLibra is intended to be a fork of the proprietary Libra code, compatible in every way, including, it seems, running “smart contracts” that transact in Libra.  In fact, the main point seems to be to let you use Libra without the permission of the Libra Association (i.e., the corporate masters).

In short, OpenLibra “trusts” everything about Libra except Facebook’s management of it.  OpenLibra aims to have a more Nakamotoan ‘decentralized’ governance, while gaining all of the value created by the permissioned Libra system.

“In Libra we trust, in Facebook we don’t.” (Lucas Geiger quoted in [2])

Phrased that way, this sounds parasitic, and seems to be trying to get a free ride.


OpenLibra seems to mainly address concerns about governance.  We may wonder how well it will address those problems, given the history of governance in cryptocurrency.  (How long will it be until there is a fork of the fork?)

But aren’t there many other problems with Libra?

The most peculiar thing about OpenLibra is that they seem to be rather complacent about using the Libra currency itself, apparently trusting in the tethering to the “reserve”. OpenLibra is entirely dependent on Libra for its value. They worry about Facebook controlling access to the system, yet apparently do not worry about the mysterious and opaque management of the reserve.

I don’t really understand how this can work, or why anyone thinks it is a good idea.

There are other risks.  For one thing, maintaining a fork is risky.  Even if there aren’t bugs (which there will be), OpenLibra may be vulnerable to hacking simply because there aren’t enough participants, or simply because the network is open.

There are other unknowns.  Regardless of any technical compatibility, there is no guarantee that OpenLibra contracts and transactions will be accepted equally with Libra.  I can imagine contracts that are written to say “this is only valid if executed on certified Libra systems”.  Running that on OpenLibra might or might not “work”, but might not be honored by all the parties.

Why would such contracts be written? All it will take is one buggy contract on OpenLibra that results in theft or losses.  The Libra network would want to protect itself by simply invalidating anything on the OpenLibra network.

Or, if Libra succeeds in gaining regulatory approval, it could well include a requirement to only honor transactions on the permissioned network.  OpenLibra transactions could be banned as illegal, violating the rules of the Libra network.

I’m not totally sure these scenarios make sense, because I really don’t understand how OpenLibra would interact with Libra, or how Libra itself will work.  But I think you can see the point that technical interoperability is necessary but not sufficient to ride on top of Libra. (Have they talked to a lawyer?)


This is all getting to be quite a tower of speculation.  Libra is pretty unknown, and looking pretty iffy.  OpenLibra is a poorly defined,  very iffy layer on top of Libra.

It’s iffy all the way down.

It’s Crypto Tulips all the way down!


  1. Christine Kim (2019) ‘Members’ of OpenLibra Disavow Project Days After Its Devcon Unveiling. Coindesk, https://www.coindesk.com/members-of-openlibra-disavow-project-days-after-its-devcon-unveiling
  2. Christine Kim (2019) New Libra Fork Will Create Permissionless Stablecoin Free of Corporate Control. Coindesk, https://www.coindesk.com/new-libra-fork-will-create-permissionless-stablecoin-free-of-corporate-control
  3. OpenLibra. OpenLibra: An open platform for financial inclusion. Not run by Facebook. 2019, https://www.openlibra.io/.

 

Cryptocurrency Thursday

Ethereum Release Schedule Pushed Back

This is actually good news, because Ethereum is acting like a real software project, complete with slipped deadlines and reduced deliverables.

Christine Kim summarizes the situation for Coindesk: the “Istanbul” hard fork has been coming “soon” for quite a while, with a proposed date of August 14 [1].  But—surprise—it’s taking longer than hoped.

So the developers have split the changes into two batches.  The first batch of six are relatively small, uncontroversial, and ready now.  These will come out in October.

The other changes are the big ones, which will implement the ProgPOW protocol changes. Given the magnitude and the implications of these changes, it’s hardly surprising that they aren’t ready yet.  So they are now “Part 2” and have been pushed to “first quarter of next year”.  Who knows when they will actually be ready?

I’m a little surprised that the ProgPOW changes are still planned, because they are not only controversial, they are a stopgap “solution” to a problem that will disappear when “Ethereum 2.0” comes out. (2.0 will feature the non-Nakamotoan Proof of Stake consensus protocol, making ProgPOW irrelevant.)

Ethereum 2.0 is nowhere in sight, though, and who knows when it may be ready, if ever.  There is a reasonable possibility that it won’t be accepted by a lot of miners. So there is still time for the ProgPOW stopgap.  And, who knows, 2.0 may never be adopted, so ProgPOW might be permanent.

If this sounds messy and difficult, all I can say is “welcome to real software engineering”.  Coding is usually the easiest part of making software.  Meeting deadlines (and budgets) is much harder.

And building the right thing at the right time is the hardest thing of all.

Is Ethereum’s road map the right way to go?  I’m not sure.  Can they actually follow the plan?  We’ll see. I’m sure it’s going to take more time than they hope.


  1. Christine Kim (2019) Ethereum Coders Approve 6 Changes for Upcoming Istanbul Hard Fork. Coindesk, https://www.coindesk.com/ethereum-coders-approve-6-changes-for-upcoming-istanbul-hard-fork

 

Cryptocurrency Thursday

Facebook’s Libra: Crypto Tulip of the Year?

The Cryptotulip of the Year judges have taken note.  A new 800-pound cryptotulip enters the competition!

Everyone is talking about Facebook’s Libra cryptocurrency and payment system.  As with everything FB does, it’s a big idea simply because of its massive captive user population.  Move over Nakamoto, because whatever cryptocurrency used to be, it is now about Libra.

I haven’t plowed through the official white paper, but the summaries I’ve seen indicate that Libra mashes up features borrowed from many existing cryptocurrency projects [1, 2].  As such, it looks like it is “school of Nakamoto”, but not particularly orthodox.

Of course, as the product of a giant monopolistic corporation, it certainly flies in the face of the libertarian ethos of fundamentalist Nakamotoism.  Replacing a government monopoly on money with a corporate monopoly on money is not what Satoshi’s folks were aiming for.

It should be noted that important aspects of Libra are TBD.  The information to date is hazy about “governance”, which isn’t surprising because cryptocurrencies generally have no clue on this front.  They also envision moving to Proof of Stake sometime in the future, following Ethereum’s path.   In this, they certainly capture the look and feel of contemporary cryptocurrency, no?

Libra appears to follow the Nakamotoan concept of pseudo-anonymity (ID by public key), which is interesting insofar as it seems to violate FB’s own policies.  And anyone who thinks that they are anonymous on FB-run platforms deserves everything they get.
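For concreteness, here is a tiny sketch of what “ID by public key” means: the on-ledger identity is just a hash derived from a public key, with no name attached.  This is purely illustrative (random bytes stand in for a real public key), and is not Libra’s actual address format.

```python
import hashlib
import secrets

# Illustrative only: random bytes stand in for a real public key.
# A real wallet would derive the public key from an actual keypair.
public_key = secrets.token_bytes(32)

# The on-chain "identity" is a hash of the public key -- no name, email,
# or account number is attached to it on the ledger.
address = hashlib.sha256(public_key).hexdigest()[:40]
print("pseudonymous address:", address)

# Anyone can see this address's transactions, but linking it to a person
# requires outside information -- hence "pseudo" anonymity.
```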

The network is definitely proprietary, and, in fact, the “open” API is a minimal subset of the functionality.  The hoi polloi will get read-only access to the blockchain for now; only the big wheels will be able to build real apps.

Oh, and by the way, they created their own programming language and virtual machine.  Because, I guess, the world needs yet another, incompatible, programming environment.  Sigh.

Frankly, Libra doesn’t look fully baked to me.  Libra is supposed to be a stable coin, backed by some kind of basket of assets.  There are other hints about their monetary policies (such as “burning” coins in an effort to maintain a stable money supply).  Other systems have had limited success with these approaches, and we have little idea of just what FB will really do.
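For illustration, here is a toy sketch of the reserve-backed mint-and-burn mechanism hinted at: coins enter circulation only when assets of equal value enter the reserve, and redeemed coins are burned.  This is a generic model of the idea, not Libra’s actual design; all the names here are invented.

```python
class ReserveBackedCoin:
    """Toy model of a reserve-backed stablecoin (illustrative, not Libra's design)."""

    def __init__(self):
        self.reserve_value = 0.0   # value of the asset basket held in reserve
        self.supply = 0.0          # coins in circulation

    def mint(self, deposit_value):
        # Coins are only created when assets of equal value enter the reserve.
        self.reserve_value += deposit_value
        self.supply += deposit_value
        return deposit_value

    def burn(self, coins):
        # Redeeming coins removes them from circulation and pays out reserve
        # assets, keeping the supply matched to the reserve.
        assert coins <= self.supply
        self.supply -= coins
        self.reserve_value -= coins
        return coins

bank = ReserveBackedCoin()
bank.mint(100.0)   # 100 units of assets in, 100 coins out
bank.burn(30.0)    # 30 coins redeemed, 30 units of assets out
print(bank.supply, bank.reserve_value)  # 70.0 70.0
```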

Even without regulatory problems (and I guarantee you that there will be massive pushback), it’s not clear how successful this will be. What would success look like?  What is it for, anyway?  Who wants it, who will use it?

Of course, even a limited success would still be the most successful crypto project ever, because FB has billions of users.  It could put every other cryptocurrency out of business. On the other hand, this could be the biggest non-event in the history of crypto—just another alt-coin, but with a really, really big bankroll.

The CryptoTulip Award judges (me) are already seeing a burst of rhetorical enthusiasm, pro-Libra, anti-Libra, and hard-to-classify.  And we are very impressed with the “Nakamotoan” coating around FB’s (probably evil) monopolistic ambitions.  So Libra already seems to be the one to beat for this year’s Crypto Tulip of the Year.


  1. Brady Dale (2019) Libra White Paper Shows How Facebook Borrowed From Bitcoin and Ethereum. Coindesk, https://www.coindesk.com/libra-white-paper-shows-how-facebook-borrowed-from-bitcoin-and-ethereum
  2. Christine Kim and Ian Allison (2019) Facebook’s Libra Cryptocurrency: A Technical Deep Dive. Coindesk, https://www.coindesk.com/facebooks-libra-cryptocurrency-a-technical-deep-dive
  3. Libra Association, Welcome to the official Libra White Paper. Libra Association, 2019. https://libra.org/en-US/white-paper/

 

Cryptocurrency Thursday

More Ethereum Software Engineering

Perennial CryptoTulip of the Year favorite Ethereum continues to stand out in this year’s competition.

Bitcoin and other cryptocurrencies are whipsawed by insane volatility (strong dollar kind of means weak Bitcoin, no?), fraud and crime are rampant, and Craig (“I am Satoshi”) Wright’s Theater of the Absurd is in its summer run (with some competition from the addled fugitive John McAfee).

Meanwhile, in Ethereumland, they continue to explore how to do engineering with no one in charge to make decisions.

Core Development Experimenting with Planning

“They” (it’s kind of hazy just who is in charge) have an upgrade (“fork”) scheduled for October, but they are still trying to figure out what will be in it [1].

Programmers always have the attitude, “You can tell me when or you can tell me what, but you can’t tell me both.”   Ethereum is experimenting with the “tell me when” approach:  set a deadline, and then see what can be done by that time.  Will this work?  We’ll see.

The deadline for proposals has passed but–surprise!–they need to sift through them to figure out what can and should be included in the update.  Christine Kim reports that—surprise, again!—this is not an easy process.  Only one proposal is a “definite go”.  As her headline says, “The Real Discussion About Ethereum’s Next Hard Fork Is About to Begin”.

If and when “they” decide what will be in this October release, the candidate code is supposed to be integrated into test systems by about mid-July.   We’ll see.

All this looks like real software engineering.

There is even a checklist of “readiness” or lack thereof.   However, as Kim points out, “the envisioned timeline for Istanbul is a rather new creation that has never [been] replicated by previous ethereum hard forks”.  I.e., they’ve never actually tried this approach before.

(I have.  So have thousands of other professional software developers.)

In my decades of professional software engineering, I rarely met a deadline.  It is always a challenge to get everything done on time.  Usually, there has to be triage: what must be done, what would be good to have if possible, and what can be left out if necessary.

In my own experience, there is often need for somebody, a manager or executive, to “make the call” on this triage.  We’ll see how it works by “consensus”.

End-User Software

No normal humans should ever see the “core” software discussed above.  Out in the world, users deal with clients, services, and apps. All this other software needs to be kept up, too. The “core” developers can’t make that happen, it’s up to others, including the users.  How is that working out?

Daniel Palmer reports that this is potentially disastrous, because “Unpatched Ethereum Clients Pose 51% Attack Risk” [2].

Ethereum and most cryptocurrencies are used via user devices, often via mobile apps.  This client software is loaded on zillions of devices, under the control of users, AKA normal people.  Security flaws can enable these clients to be monkeyed with and, as Palmer notes, potentially hijacked in ways that threaten the core protocols.

Palmer is referring to a report from Security Research Labs, which shows that large numbers of clients have yet to install security patches issued earlier this year [3].  The unfixed bugs open the system to the infamous 51% attack:  bad nodes manipulating the consensus process to fiddle the ledger.

The SRL report indicates that part of the problem is that these software products are not easy to update, which is kind of a damning finding.  I mean, the point of cryptocurrency is security, so you’d think that the software should at least be well engineered.  (Not that safe and reliable auto updates are easy to implement—far from it.)

(We may also pause to contemplate the fact that there are three clients that account for the vast majority of user connections.  This is scarcely “decentralized”, regardless of how the “core” software works.)

You begin to see why Apple and Google police their app stores so annoyingly.  They impose standards to try to make sure that third parties don’t wreck the whole system.

Success Means Maintenance

This year Ethereum is experiencing all the real software engineering challenges that come with success.  The more you succeed, the more maintenance you have to do, and the more careful you have to be about your upgrades—thousands of users and millions of dollars are at stake, so you need things to be as smooth as possible.

The core developers are attempting to adopt a professional-grade development process, with accountability and predictable disruptions, but without the benefit of any final authority to make the hard calls.

At the same time, the end-to-end system is slow to pick up crucial patches, potentially threatening the whole shebang.

We’ll see how this works out.  Will they make the October deadline?  Will client bugs cause a disaster?  Stay tuned.


  1. Christine Kim (2019) The Real Discussion About Ethereum’s Next Hard Fork Is About to Begin. Coindesk, https://www.coindesk.com/the-real-discussion-about-ethereums-next-hard-fork-is-about-to-begin
  2. Daniel Palmer (2019) Unpatched Ethereum Clients Pose 51% Attack Risk, Says Report. Coindesk, https://www.coindesk.com/unpatched-ethereum-clients-pose-51-attack-risk-says-report
  3. Security Research Labs, The blockchain ecosystem has a patch problem in Security Research Labs – Bites. 2019. https://srlabs.de/bites/blockchain_patch_gap/

 

 

Cryptocurrency Thursday

Ethereum 2.0 Is 1000 LOC?

Perennial favorite for CryptoTulip of the Year, Ethereum is running strong for this year’s award.  Even without any other developments (and there are plenty), Ethereum 2.0 alone would make this community the odds on favorite for this year.

From my experience with complex software development projects, I’ve been taking the Ethereum 2.0 project schedule with a healthy dose of skepticism.  All men die, all projects are late.

So, I was surprised to read Christine Kim’s report that the first phase of Ethereum 2.0 is nearly ready [2] (!).  Actually, the report is somewhat confusing to a software guy, because the headline says that the “code” could be “finalized” (not sure what that means) in June, but the actual developer quotes say that this would be a “spec freeze”.  A specification freeze is certainly an important step, but could be quite far from meaning the code is finished.


To review, this phase of the project has the two key features of Ethereum 2.0:  Proof of Stake and Sharding.

Ethereum’s Proof of Stake replaces the Nakamotoan Proof of Work protocol with a much less computationally expensive protocol, basically one dollar, one vote.
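To make “one dollar, one vote” concrete, here is a minimal sketch of a stake-weighted lottery: the chance of proposing the next block is proportional to the value staked.  This is a generic illustration (names and numbers invented), not Ethereum 2.0’s actual validator-selection algorithm.

```python
import random

# Stake-weighted selection: the probability of proposing the next block is
# proportional to staked value ("one dollar, one vote").
# Illustrative stakes only -- not real validators or real amounts.
stakes = {"alice": 32.0, "bob": 320.0, "carol": 3200.0}

def pick_proposer(stakes):
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_proposer(stakes)] += 1

print(counts)  # carol is picked ~10x as often as bob, ~100x as often as alice
```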

“Sharding” is a protocol to break up the network into localized islands, which communicate with each other.  This dramatically increases throughput and decreases latency by introducing a hierarchy into the Nakamotoan P2P network.

Sharding is scarcely new; it has been widely used in large-scale distributed systems for decades (e.g., see [1]).  Sharding is a classic engineering trade-off of complexity for latency. Updates are processed by local nodes, and then a summary of the changes is propagated to other shards.  Most operations happen faster, though the global state may not be completely consistent for a while.
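Here is a hedged sketch of that basic idea: accounts are partitioned across shards (here, by hashing the account ID), each shard processes its own updates, and only periodic summaries cross shard boundaries.  This is a generic illustration of sharding, not Ethereum 2.0’s actual design.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(account: str) -> int:
    # Deterministically assign each account to a shard.
    h = hashlib.sha256(account.encode()).digest()
    return h[0] % NUM_SHARDS

# Each shard keeps its own local state and processes its own transactions.
shards = [{"balances": {}, "tx_count": 0} for _ in range(NUM_SHARDS)]

def apply_local_tx(account: str, amount: int):
    s = shards[shard_of(account)]
    s["balances"][account] = s["balances"].get(account, 0) + amount
    s["tx_count"] += 1

for acct, amt in [("alice", 5), ("bob", 7), ("carol", 3), ("alice", 2)]:
    apply_local_tx(acct, amt)

# Periodically, each shard publishes a compact summary (not every transaction)
# that other shards and a coordinating chain can consume.
summaries = [{"shard": i, "tx_count": s["tx_count"]} for i, s in enumerate(shards)]
print(summaries)
```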

These are both very sensible technical designs for Ethereum.  They are also pretty non-Nakamotoan.  To be sure, this is still a decentralized system, in the Nakamotoan sense.  But it isn’t as simple, and I suspect the complexity may make it at least a little fragile.  And, of course, PoS isn’t as “democratic” as PoW, though both are tilted toward wealthy participants.  (PoS is explicit about this bias, whereas PoW ignores it.)


The report has the interesting tidbit that the key stuff, the new Proof of Stake protocol and sharding together, could be about 1,000 lines of code!  (Does this include the “Hobbits” protocol? [3]  I’m not sure.)  To me, that’s totally believable and a sign that the programmers are competent.  Important code doesn’t have to be complicated or long.  In most software, the bulk of the code is error checking and error response—code that rarely, if ever, runs.   (This is one reason why boasting about how many LOC you have produced tells me that you aren’t a very good programmer, however fast you can type.)

Sifting through the tea leaves, I think that the specification and initial code are being developed together, so the “spec freeze” will be accompanied by a test version of the software.   Excellent work.

At this milestone, will it be “done”?  Not really.

It will be ready to test at that point.

And testing this kind of distributed system is hard. Very hard.  Very, very, hard.

The challenge is not just to make sure that the code works as specified, but to be confident that it doesn’t do what it isn’t supposed to do.  This means testing error detection and exceptions and anomalies such as overloads and network failures.  This will take a lot of work, and, in the likely event that problems are uncovered, the code may need to be changed.  Which would mean more testing, and so on.  Who knows when it will be done?
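As one small example of what that looks like, here is a hedged sketch of a fault-injection style unit test: simulate a network failure and assert that the code fails safely instead of silently corrupting state.  The names are invented for the illustration; real distributed-system testing involves far more than this.

```python
import unittest

class NetworkDown(Exception):
    pass

class FlakyTransport:
    """Test double that simulates a failed network link."""
    def send(self, msg):
        raise NetworkDown("simulated link failure")

def broadcast_block(transport, block, ledger):
    # The property under test: on a send failure, the local ledger
    # must not be updated (fail safe, no half-applied state).
    try:
        transport.send(block)
    except NetworkDown:
        return False
    ledger.append(block)
    return True

class FaultInjectionTest(unittest.TestCase):
    def test_network_failure_does_not_corrupt_ledger(self):
        ledger = []
        ok = broadcast_block(FlakyTransport(), {"height": 1}, ledger)
        self.assertFalse(ok)
        self.assertEqual(ledger, [])  # nothing was half-applied

if __name__ == "__main__":
    unittest.main()
```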

To date, the Ethereum 2.0 effort looks to me like a professional engineering project, and that is a good thing.  Unfortunately, that may or may not matter, because it will have to be deployed through the Nakamotoan “consensus” process, which means that users may choose to continue to run the old version for a long time.  (As I have pointed out, the deployment of IPV6 began two decades ago, and still isn’t complete.)

So, this all could come a cropper, despite the best efforts of Vitalik’s happy elves.

We’ll see how this plays out.


  1. Thor Alexander, ed. Massively Multiplayer Game Development 2. Charles River Media, Inc.: Hingham, MA, 2005.
  2. Christine Kim (2019) Code For Ethereum’s Proof-of-Stake Blockchain to Be Finalized Next Month. Coindesk, https://www.coindesk.com/code-for-ethereums-proof-of-stake-blockchain-to-be-finalized-next-month
  3. Christine Kim (2019) Ethereum 2.0’s Nodes Need to Talk – A Solution Is ‘Hobbits’. Coindesk, https://www.coindesk.com/testing-ethereum-2-0-requires-basic-signaling-a-solution-is-hobbits

 

 

Cryptocurrency Thursday

Ethereum faces classic software engineering problems

The Ethereum developers have been rediscovering (but certainly not reinventing) software engineering.  Under the benign dictatorship of Vitalik Buterin, Ethereum has struggled to maintain professional quality software without a conventional top down, closed organization.  (See also this and this).

This month we read about their struggle with planning and scheduling “hard forks”, i.e., significant software updates that are incompatible with earlier code [1].  In conventional software development, these are managed through a central distribution, and users must take the update or lose compatibility.  In cryptoland, “hard forks” are “voted on”, and if a significant fraction of the network does not accept the change, the network splits.  And there may be competing “forks” that address a problem in different ways.  Sigh.  This is no way to run a railroad.

This approach is “disruptive”, at least in the sense of being “less organized and much harder” than conventional software project management, but it hardly “reinvents” software development.  Amazingly enough, all the hard problems of software maintenance are found in cryptocurrency software, and still have to be solved.

Cryptoland is already famous for its non-functional planning and decision-making.  What changes should be made?  More important, what changes must be made, versus might be made?  What are the effects and implications of a proposed change?  And so on. (See this and this and this.)

The “hard forks” problem is basically the difficult question of compatibility.  Some changes are so drastic that you basically have to throw away all the old software and data—it’s effectively a whole new product.  These changes are painful for users, and the more users you have—the more successful you are—the more difficult such upgrades become.  There is too much sunk cost in the old software to blithely toss it out and redo it.

This is not just conservatism or laziness.  As the Ethereum folks recognize, there are a lot of people using the software that simply do not have money, people, or expertise to port their stuff to a new version, let alone to do so over and over again.  And if they do try to keep up, they may spend most of their time just chasing the releases, with no time for their own software or business.  (Been there, done that.)

In the case of Ethereum, they also face a classic software dilemma.  They are working on “Ethereum 2.0”, which is a pretty complete rework of the basic Ethereum protocol.  In principle, everything will be wonderful in 2.0, but it will be quite a while before 2.0 is ready—it’s already a couple of years of discussions, and probably several more years before it might be done.

In the meantime, there are many changes that might be made to the current version of the Ethereum core software.  Some of these may be critical fixes, others are good ideas, and others are, well, who knows?  But all these changes will be obsolete when the great day comes and Ethereum 2.0 comes out.  (Although some changes might be applicable to both old and new—so they have to be done twice.)

So just how frequent should these “hard forks” be?  Too few, and the software may suffer.  Too many, and downstream developers will be overwhelmed.  And everything you do before V2.0 will essentially be thrown away.

Coindesk reports that the developers are discussing setting a regular schedule of forks, every six months, or even every three months.  Of course, if history is a guide, they probably can’t hit such a target anyway, presumably because the process of testing and preparing (and, I hope, documenting) the code takes longer than hoped.  (Definitely been there, done that.)

The ultimate kicker is that, unlike conventional software, every one of these updates can be a political disaster, potentially causing a schism in the network, with untold consequences for users.  Software maintenance is hard enough without having to worry about civil wars among different interest groups.

It is good to see that the Ethereum folks seem to have some understanding of these challenges, and are taking them seriously.  It is reported that the God Emperor of Ethereum, VB, wants a conservative policy for changes, doing only those necessary for the survival of Ethereum, until 2.0 is out.  On the other hand, the schedule for 2.0 is uncertain and seems to slip farther into the future every year, so there is a desire to improve what exists, rather than what might exist someday.

However “innoruptive” you think cryptocurrency/blockchain is, it’s still software, and it’s still hard.  And the more you succeed, the harder it gets.

Welcome to the software biz!


  1. Christine Kim (2019) Ethereum Core Developers Debate Benefits of More Frequent Hard Forks. Coindesk, https://www.coindesk.com/ethereum-core-developers-debate-benefits-of-more-frequent-hard-forks

 

Cryptocurrency Thursday

ProgPow: Ethereum Works Hard To Slow Down

Perennial Crypto Tulip leader Ethereum has yet another Tulip-y initiative:  ProgPOW (programmatic proof of work).

The general idea is that contemporary technology makes it reasonably easy to create custom chips to execute any given algorithm, usually with considerable speedup over general-purpose processors.  These specialized chips (known as ASICs, Application Specific Integrated Circuits) are only good for one thing, but they do that one thing much better than generic circuits.

Sounds great, right?

In cryptocurrency-land, this technology has been applied to optimize Nakamotoan proof of work algorithms. This is a problem because these ASIC systems are faster than general purpose computers, so they can win the race (i.e., complete the “work” faster) and get the payouts (i.e., mine more coins). This advantage results in concentrating mining “power” in relatively few, high cost, systems, which “centralizes” the cryptocurrency network, contrary to the Nakamotoan vision of a ubiquitous, democratic network.

Over the past few years, there have been efforts to fiddle with consensus protocols to defeat ASICs and promote the use of mass produced, generally available computers. This kind of protocol-hacking is never a pretty thing, and generally won’t work for long. Wan and Long are on target with the phrase,  “An Expensive Game of Whack-a-Mole”.

The “ProgPOW” in question is a new initiative cooking in the Ethereum developers community.  (It should be noted that Proof of Stake protocols, with all their flaws, are immune to ASIC arms races.  So if and when Ethereum 2.0  comes, it will not need ProgPOW.)

The overall idea of ProgPOW is to twiddle the computational work to make it work best on “commodity” GPUs  (vector processors), reducing any advantage to creating a specialized chip. The point is that anyone, or at least, a lot of people, can get these GPUs, so anyone can set up a competitive mining operation.  (I’m sure my friends who work at NVIDIA are grateful for the help selling their products.)
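As I understand it, the “programmatic” part is roughly this: the inner loop of the hashing work is itself generated from a periodically changing seed, drawing on operations commodity GPUs already do well, so a fixed-function ASIC can’t simply hardwire one circuit.  Below is a much-simplified conceptual sketch of that idea; it is not the actual ProgPoW algorithm, its operations, or its parameters.

```python
import hashlib
import random

# Conceptual sketch only -- NOT the real ProgPoW algorithm or its parameters.
# The idea: the mix of inner-loop operations changes on a schedule derived
# from the block number, and the operations are ones that commodity GPUs
# already handle well, so a fixed ASIC circuit can't hardwire one loop.

OPS = [
    lambda a, b: (a + b) & 0xFFFFFFFF,
    lambda a, b: (a * 33 + b) & 0xFFFFFFFF,
    lambda a, b: a ^ b,
    lambda a, b: ((a << 7) | (a >> 25)) & 0xFFFFFFFF,  # 32-bit rotate-left by 7
]

def make_program(period_seed, length=8):
    # A deterministic, pseudo-random sequence of operations for this period.
    rng = random.Random(period_seed)
    return [rng.choice(OPS) for _ in range(length)]

def toy_progpow_hash(header: bytes, nonce: int, block_number: int) -> bytes:
    program = make_program(block_number // 50)  # the "program" changes every 50 blocks
    seed = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    state = int.from_bytes(seed[:4], "big")
    for op in program:
        state = op(state, block_number & 0xFFFFFFFF)
    return hashlib.sha256(state.to_bytes(4, "big")).digest()

print(toy_progpow_hash(b"block-header", nonce=12345, block_number=9_000_000).hex())
```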

This month Dovey Wan and Martina Long critique this “whack-a-mole game” [2].

First of all, the actual hacks in question are aimed at relatively few specific GPU chips, and not only disadvantage ASICs as intended, but also disadvantage older, less expensive GPUs (and, for that matter, ordinary CPUs).  The result could easily create the same centralization in the hands of the favored GPUs.

Second, Wan and Long note that the supposed “problem” isn’t especially large.  They cite examples of ASICs that cost three times as much as a GPU, and deliver 2.5 – 4 times the performance.  There may be other cost and performance wrinkles that change the picture, but to the degree that this is representative, there doesn’t seem to be much of a glaring advantage to the example ASIC.
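Taking those cited figures at face value, the back-of-the-envelope arithmetic is worth spelling out: at three times the cost and 2.5 to 4 times the performance, the example ASIC delivers somewhere between roughly 0.8x and 1.3x the performance per dollar of a GPU.

```python
# Back-of-the-envelope check of the figures cited by Wan and Long:
# an ASIC that costs 3x a GPU and delivers 2.5x - 4x the performance.
gpu_cost, gpu_perf = 1.0, 1.0          # normalize the GPU to 1 unit each
asic_cost = 3.0 * gpu_cost
for asic_perf in (2.5, 4.0):
    ratio = (asic_perf / asic_cost) / (gpu_perf / gpu_cost)
    print(f"ASIC at {asic_perf}x performance: {ratio:.2f}x GPU performance per dollar")
# Prints 0.83x and 1.33x -- roughly break-even, hardly a glaring advantage.
```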

They point out that the largest pools are enabled not by access to expensive technology, but access to cheap (and often dirty) electricity.

Wan and Long make another interesting point.  The specialized ASICs have only one use, so they align the operators with the target cryptocurrency.  Breaching security or otherwise messing with the network would destroy the value of the chips, while general purpose GPUs could be redeployed to other uses.  In addition, to the extent they are expensive and hard to get, they are unlikely to be used to hack the network.  It makes more sense to use GPUs that are easy to get, and can be reused later.

Finally, they note that the manufacturing of GPUs is highly centralized, controlled by a tiny handful of companies.  ASICs have generally come from many sources, and so are less “centralized” in this sense.

And remember, as I said before, if and when Proof of Stake comes to be, all the ASICs (and, I assume GPUs) will be irrelevant.  As Wan and Long point out:

“With the switch to PoS planned for Ethereum in the near future, it doesn’t make economic sense for most miners to further massively invest in Ethereum ASICs for their brief lifespan.”

My own guess is that ProgPOW is pretty useless, even if Ethereum 2.0 does not come soon.  And there is a real possibility it will never be adopted at all. [1]


  1. Christine Kim (2019) Ethereum’s ProgPow Mining Change Approved Again, But Timeline Unclear. Coindesk, https://www.coindesk.com/ethereums-progpow-mining-change-approved-again-but-timeline-unclear
  2. Dovey Wan and Martina Long (2019) Ethereum’s ProgPoW Proposal: An Expensive Game of Whack-a-Mole. Coindesk, https://www.coindesk.com/ethereums-progpow-proposal-an-expensive-game-of-whack-a-mole

 

Cryptocurrency Thursday