
Big Bitcoin Bug

As the CryptoTulip of the Year contest enters the final stretch, we now hear from the arch-patriarch of all Tulips: Bitcoin.

This month we learned that there was a massive bug in the oldest, stablest, “most secure” cryptocurrency of them all, Bitcoin.  (There are also an unknown number of copycats who use code from the Bitcoin source base, so the bugs may affect other systems, too.)

Actually, there were two bugs: one a possible denial-of-service attack, and another that could allow double spending.  Nothing major, just the potential for a crippling shutdown and/or counterfeit coinage!  The bugs were accidentally introduced two years ago!


The bugs themselves aren’t especially notable. All software has bugs.  Bitcoin is software.  Ergo, Bitcoin has bugs.

The interesting and Tulip-y thing is how it was handled.

Notably, the “open source”, “transparent” development team took it upon themselves to keep quiet about the most serious part of the problem until there was a patch [1].  This is, of course, perfectly standard and reasonable behavior for proprietary code.  The developers took responsibility for the welfare of the code and its users, and tried to get the patch out before the details of the flaw were explained to potential attackers.

This is a sensible process, but it is not exactly a Nakamotoan process.  Bear in mind that many enthusiasts advocate the principle that “the code is the law”, which means that, for a while, it was perfectly proper, even “intended”, that people might be able to ravage Bitcoin through these loopholes in “the law”.  And the unelected developers in fact took it upon themselves, without consultation or notice, to change “the law” to preclude these highly profitable moves.

Naturally, this being cryptoland, the unannounced bug was, in fact, soon unofficially leaked by non-cooperative folks. (Thanks for helping, guys.) And even according to the official announcement, only half the affected systems had been patched so far, probably.  The bug notice itself essentially begs people to update with the bug fix.  And no more can be done.

Apparently, many Bitcoinistas believed their own propaganda about how ‘secure’ this stuff is, and about how invincible ‘open source’ code is.  So some people were “shocked” by this bug [2].  In response, there have been naïve calls for more and better testing, as if any software ever has enough, or good enough, testing.  (And, by the way, decentralized, asynchronous network protocols are really, really hard to test.)

There have also been calls for multiple implementations, which is a good idea until it isn’t a good idea.

As Alyssa Hertig reports, “developers don’t necessarily agree on exactly what needs to be done.”

At this point, we might ask, “Is this bug really patched?”  Who knows?

“At this time we believe over half of the Bitcoin hashrate has upgraded to patched nodes. We are unaware of any attempts to exploit this vulnerability.” (from [1])

Not exactly a ringing assurance.

This episode shows just how vulnerable this technology really is.  There can and surely will be huge bugs, but they can be patched only through the indirect and voluntary cooperation of many anonymous operators.  And, as we have seen with Ethereum and the DAO, a bug can be exploited in seconds, but may take years to fix.

The CryptoTulip award will surely have to consider this episode.

Bitcoin was lucky this time (as far as we know).  With billions on the line, it’s only a matter of time before this CryptoTulip explodes.

  1. Bitcoin Core, CVE-2018-17144 Full Disclosure. Bitcoin Core Notice, 2018. https://bitcoincore.org/en/2018/09/20/notice/
  2. Alyssa Hertig (2018) In Wake of ‘Major Failure,’ Bitcoin Code Review Comes Under Scrutiny. Coindesk, https://www.coindesk.com/in-wake-of-major-failure-bitcoin-code-review-comes-under-scrutiny/
  3. Alyssa Hertig (2018) The Latest Bitcoin Bug Was So Bad, Developers Kept Its Full Details a Secret. Coindesk, https://www.coindesk.com/the-latest-bitcoin-bug-was-so-bad-developers-kept-its-full-details-a-secret/


Cryptocurrency Thursday

FOAM: Decentralized Localization Using Ethereum

FOAM is a technology that seeks to use blockchain and Ethereum contracts to create mapping and location-based services.  The project wants to address a complex of perceived problems: GPS is spoofable, maps are owned by big actors, and location services aren’t private.  In addition, they think that “people lie about their location” (Ryan John King, the co-founder and CEO of FOAM, quoted in Coindesk [3]).  The solution deploys blockchain technology and Nakamotoan philosophy [2].

Looking at their materials, it is clear that FOAM is mainly focused on replicating Internet location-based services, not on navigation or engineering or geoscience.  The geospatial model is a two-dimensional map of the surface of the Earth.

The location service depends on many local low-power radio beacons instead of satellites. They imagine an ad hoc mesh of locally operated beacons, which are recognized and validated via Nakamotoan-style consensus rather than a central authority (such as a government space agency). These beacons are used to triangulate positions.  Good behavior and trustworthiness of the beacons is supposedly assured by cryptocurrency tokens, in the form of incentives, notably buy-ins and security deposits.
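For the record, triangulating from beacons isn’t mysterious. Here is a minimal 2-D trilateration sketch (my own illustration, not FOAM’s code): with three beacons at known coordinates and measured distances, subtracting one circle equation from the others leaves a small linear system.

```python
# Minimal 2-D trilateration sketch (illustrative only, not FOAM's code).
# Each beacon i at (xi, yi) with measured distance di gives a circle
# (x - xi)^2 + (y - yi)^2 = di^2; subtracting the first circle from the
# other two yields a linear 2x2 system in the unknown position (x, y).

def trilaterate(beacons, dists):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # Linearized system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero iff the beacons are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

Of course, real radio ranging is noisy, so a deployed system would need more than three beacons and a least-squares fit; this is just the geometry.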

They imagine this to be used to construct datasets of “Points of Interest”, which are “where are the stores, cafes, restaurants and malls, where a fleet of vehicles in a ride sharing program like Uber should be anticipating if demand is shifting or surging, or which traffic bottlenecks drivers should avoid on an app such as Waze.”  These are stored and validated through a decentralized protocol, “[G]ranting control over the registries of POI to locally-based markets and community forces, allowing the information provided to be validated by those who contribute to the relevant locality.”

These datasets are to be created through bottom-up efforts, presumably incentivized by a desire to operate local services. “FOAM hopes that the Cartographers and users will contribute the necessary individual work, resources, and effort themselves to contribute to the ongoing community-driven growth and supplement this important cartography project.”

Interestingly, the crypto token-based incentive system relies on negative incentives, namely buy-ins and “security deposits” that can be forfeited by consensus. I’m not sure I’ve seen another Nakamotoan project with this sort of punishment-based (dis-)incentive.  (I’ll note that psychologists generally find that the threat of punishment does not engender trust.)
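To make the scheme concrete, here is a toy sketch of such a punishment-based registry (every name and number here is made up for illustration, not FOAM’s actual contract): operators post a deposit to register a beacon, and a supermajority of challengers can vote to forfeit it.

```python
# Toy sketch of a punishment-based incentive scheme (hypothetical names
# and thresholds, not FOAM's actual contract): beacon operators post a
# security deposit, and a 2/3 supermajority of challengers can slash it.

class BeaconRegistry:
    MIN_DEPOSIT = 100  # tokens required to register (made-up figure)

    def __init__(self):
        self.deposits = {}  # operator -> tokens at stake

    def register(self, operator, deposit):
        if deposit < self.MIN_DEPOSIT:
            raise ValueError("deposit below minimum stake")
        self.deposits[operator] = deposit

    def challenge(self, operator, votes_against, votes_total):
        # Forfeit the deposit if at least 2/3 of voters judge the
        # beacon dishonest; return the amount slashed (0 if cleared).
        if operator not in self.deposits or votes_total == 0:
            return 0
        if votes_against / votes_total >= 2 / 3:
            return self.deposits.pop(operator)
        return 0
```

Note that the whole deterrent rests on the deposit being worth more than what cheating could earn, which is exactly the kind of economic assumption that is asserted rather than analyzed.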

Obviously, this entire concept will depend on the development of the localization network and the datasets of “Points of Interest”.  As far as I can see, realizing this is based on “hope” that people will contribute. I’d call this “faith-based engineering”.

We can pause to reflect on the irony of this “trustless” system that appears to be entirely based on “hope” and the threat of punishment.

As far as the actual technology, it is, of course, far short of a “map of the world”.  The local beacons are fine for a dense urban setting, but there is little hope of coverage in open space, and no chance that it will be useful at sea, up in the air, inside significant structures, or underground. Sure, there are ways to deploy beacons indoors and other places, but it isn’t easy, and doesn’t fit the general use cases (Points of Interest).

Ad hoc networks aren’t immune to jamming or interference, either, and are essentially defenseless against determined opposition.  In classic fashion, the protocol “routes around” interference, discarding misbehaving nodes and corrupted data. Unfortunately, this means that the response to a determined and sustained attack is to shut down.

The incentive system is unusual, though the notion of a “security deposit” is widely used. How well will it work?  (How well do security deposits work?)  It’s hard to say, and there doesn’t seem to be much analysis of potential attacks.  The notion that the loss of security deposits and other incentives will guarantee honest and reliable operation remains a theoretical “hope”, with no evidence backing it.

The system depends on a “proof of location”, but it isn’t clear just how this will work in a small, patchy network. In particular, assumptions about the security of the protocol may not be true for small, local groups of nodes—precisely the critical use case for FOAM.

Finally, I’ll note that the system is built on Ethereum, which has had numerous problems. To the degree that FOAM uses Ethereum contracts, we can look forward to oopsies, as well as side effects from whatever emergency forks become necessary.

Even if there are no serious bugs, Ethereum is hardly designed for real time responses, or for datasets at the scale of “the whole world”.  Just what requirements will FOAM put on the blockchain, consensus, and Ethereum virtual machine?  I don’t know, and I haven’t seen any analysis of the question.

This is far from an academic question.  Many location services are extremely sensitive to time, especially to lag.  Reporting “current position” must be really, really instantaneous.  Lags of minutes or even seconds can render position information useless.

Can a blockchain based system actually deliver such performance?

Overall, FOAM really is “a dream”, as Alyssa Hertig says.  A dream that probably will never be realized.

  1. Foamspace Corp, FOAM Whitepaper. Foamspace Corp, 2018. https://foam.space/publicAssets/FOAM_Whitepaper_May2018.pdf
  2. Foamspace Corp, The Consensus Driven Map of the World, in FOAM Space. 2017. https://blog.foam.space/
  3. Alyssa Hertig (2018) FOAM and the Dream to Map the World on Ethereum. Coindesk, https://www.coindesk.com/foam-dream-map-world-ethereum/


Cryptocurrency Thursday

Nakamoto’s Fifty One Percent Problem

“As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they’ll generate the longest chain and outpace attackers.” (Nakamoto, 2009 [3] )

At the very core of Nakamotoan cryptocurrencies is a consensus protocol that relies on the principle that a decentralized voting system spread across a very large network cannot be manipulated, because the majority of nodes are “honest” and not cooperating to game the voting.

The system works as long as “honest nodes control a majority of CPU power.”

On the other hand, if nodes controlling more than 50% of the CPU power on the network coordinate, they can manipulate Nakamotoan protocols and produce “dishonest” results.  This is the dreaded “51% attack”. Depending on the situation and the details of the specific protocol, this attack can result in canceled payments, improper payments, double spending, or other forms of theft via manipulation of the ledger, pretty much exactly what Nakamotoan blockchains are supposed to preclude.
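Just how hard it is to cheat with a given share of the hash power is not a matter of opinion: the Nakamoto paper itself works out the odds that an attacker with fraction q of the power ever catches up from z blocks behind [3]. Here is that calculation, transcribed into Python:

```python
# Attacker catch-up probability from section 11 of the Bitcoin
# whitepaper: q is the attacker's fraction of total hash power, z the
# number of confirmations the attacker starts behind.  The honest
# chain's expected lead is modeled as Poisson with mean z*q/p.
import math

def attacker_success(q, z):
    p = 1.0 - q
    if q >= p:
        return 1.0  # with a majority, the attacker eventually wins
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s
```

With 10% of the power the attacker’s chance collapses fast as confirmations pile up, but at 50% or more the probability is simply 1, which is the whole point: the math only protects you while the honest-majority assumption holds.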

It is important to note that the all too many troubles reported about cryptocurrencies (hacking, fraud, theft, and criminal activities) have nothing to do with a 51% attack. Indeed, in most of these cases, the protocol is working just fine, securely implementing the nefarious activities, which are enabled by dishonesty and breaches in other parts of the system.

Indeed, actual 51% attacks have been so rare that they seemed purely theoretical.  However, as Alyssa Hertig points out, with the growth in the number of cryptocurrencies, these attacks have become more common [2].

So what is going on here?

The Nakamotoan project is based in large part on what amount to probabilistic claims about the participating nodes of the network.  The consensus protocol is secured as long as the “majority of the CPU power is honest” condition remains true.  (We may pause to note the irony of a “trustless” protocol depending on the trustworthiness of vast numbers of independent computers on the Internet….)

Bitcoin and its extended family seem to work, and people have come to have confidence in these decentralized networks.  But why would we believe that this condition can be met in a real implementation?

Confidence is based on intuitive beliefs about the Internet and cryptocurrency networks on the Internet.  The basic intuition is that 1) the Internet has a huge number of independent nodes with huge aggregate computing power, and 2) the computing power is distributed approximately evenly across the Internet.  The idea is that it is impractical to round up zillions of independent mom-and-pop nodes to make up a majority.

The size of the Internet is indisputable, though it is important to remember that cryptocurrencies operate on only a fraction of the nodes. Dreams of a universal blockchain, computed and protected by every computer everywhere are a long way from reality, and probably unrealistic. (Remember, many of the nodes are mobile devices and dedicated systems that are not necessarily available for computations.)

There are many blockchains and cryptocurrencies, and their networks are not necessarily very large.  Actually, some “private” blockchains are not only small, but also not open to the public, so we don’t really know anything about them.

Whatever the number of nodes, the CPU power is never evenly spread, not even approximately.  In this assumption, we see the Libertarian instincts of Nakamotoists overriding knowledge about networks.  Statistically, networks are never egalitarian; they are always hierarchical [4].  Cryptocurrency networks are no different, and no sensible person would expect them to be different.

In the real world, therefore, cryptocurrencies fail to meet one or both of these intuitive criteria.

Only the largest cryptocurrencies, such as Bitcoin or Ethereum, have truly massive numbers of nodes.  For many alternative cryptocurrencies or blockchains it is hard to know just how big the network might be, though they obviously start out small.  At the extreme, a small network means that only a relative handful of nodes could collude to control it.

Even the largest networks are characterized by huge concentrations of CPU power in the hands of a few operations.  There may be millions of ordinary Joes out there, but there are gigantic server farms with millions of times Joe’s CPU power.  The largest of them all, Bitcoin, still has a handful of operations that control 51% of the computing power and therefore theoretically could control the network.  This has been evident in the governance stalemate over scaling issues, with the interests of a few operators successfully blocking upgrades that disadvantage them.

The upshot is that the likelihood of a 51% attack is unknown (and can depend a lot on obscure details of the implementation), but it is clearly more likely if the network is small and computing power concentrated. In these cases, “honest nodes control a majority of CPU power” is more of a hope than a solid assumption.

So we shouldn’t be surprised that these problems are increasing, especially in smaller networks, and especially in opaque networks [2].  This is definitely a case where being “just as good as Bitcoin (or Ethereum or whatever)” is scarcely a guarantee that it will work just as well.

  1. Alyssa Hertig (2018) Blockchain’s Once-Feared 51% Attack Is Now Becoming Regular. Coindesk, https://www.coindesk.com/blockchains-feared-51-attack-now-becoming-regular/
  2. Alyssa Hertig (2018) Verge’s Blockchain Attacks Are Worth a Sober Second Look. Coindesk, https://www.coindesk.com/verges-blockchain-attacks-are-worth-a-sober-second-look/
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf
  4. Mark Buchanan, Nexus: Small Worlds and the Groundbreaking Theory of Networks, New York, W. W. Norton and Company, 2002.


Cryptocurrency Thursday

Yet Another “Blockchain for Provenance” System

In the short decade since the Nakamoto paper [5] cryptocurrency enthusiasts have put forward a variety of use cases for blockchains and cryptocurrencies.  It is notable that most of the exciting use cases aren’t actually in the canonical paper itself, and most of them have yet to prove out in the real world. (And the most successful use cases are the ones not put forward as good examples–extortion, dark commerce, money laundering, etc.)

One of the perennial favorite use cases is Provenance: tracing goods from source to consumer.  For companies, this is “logistics” or “supply chain”; for ordinary consumers, this is about quality control.  This is the same problem that scientists (and everyone else) face with data quality: where did this data come from, and what has been done to it?  In the latter form, this is called “provenance”, and we were struggling with solutions a long time ago (before Nakamoto, Ante Bitcoin) [3].

This month yet another company touted this use case at the Ethereal Summit in NYC [1].  The presentation by Viant traced a tuna from Fiji all the way to the conference sushi plates.  Tagged with RFID, records of the sales and transportation of the fish are on the Ethereum blockchain, so everyone can check that the fish they are eating is “moral”. (How it can be “moral” to harvest increasingly rare wild animals and fly them halfway around the world beats me.)

This is the yuppie version of Provenance (making sure that my luxury goods are authentic and “moral”), but the technology is the same as any supply chain.

Looking at Viant’s web site, they seem to have a reasonable grasp of the problem.  They have a logical model of provenance that includes “four pivotal aspects of an asset: Who, What, When, and Where”.  The model includes “Actors” and actions, and “Roles” that define permissions.  IMO, this is the right stuff (see [3]).
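A minimal sketch of what a record in such a “Who, What, When, Where” model might look like (the field names and values are my illustration, not Viant’s actual schema):

```python
# Illustrative provenance record for a tracked asset, following the
# generic Who/What/When/Where model (field names are hypothetical,
# not Viant's schema).  Frozen so a record cannot be silently edited.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    who: str       # actor performing the action, e.g. "FijiFishingCo"
    what: str      # action on the asset: "caught", "sold", "shipped"
    when: str      # ISO-8601 timestamp of the event
    where: str     # geolocation, e.g. reported by the RFID reader
    asset_id: str  # the tagged object, e.g. an RFID serial number

# One step in a tuna's trace, from boat to sushi plate:
record = ProvenanceRecord("FijiFishingCo", "caught",
                          "2018-05-10T06:30:00Z", "-17.7,177.4",
                          "RFID-0001")
```

A full system would add the “Roles” layer (who is permitted to assert which actions) and a signature per record; the point here is just that the data model is ordinary structured data, blockchain or not.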

They also have RFIDs to tag and geo-track, and apps to implement operations (e.g., sales to distributors).  These are certainly the right technology, and they are lucky to have ubiquitous mobile devices and “the cloud” to implement these concepts we pioneered in the late twentieth century [4].

So what does blockchain technology bring to the table?

First of all, it is used as a shared database, essentially a bulletin board.  The cryptographically signed and immutable records provide an unfudgeable trace of the object’s life.  And the blockchain is available to anyone, so ordinary consumers can get the authenticated traces of the object. (More likely, any third party can create apps that deliver the information to consumers – no normal person monkeys around with the blockchain itself.)
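The reason such a trace is “unfudgeable” is simple: each record commits to the hash of its predecessor, so altering any earlier entry changes every later hash. A toy sketch of the idea (generic hash-chain logic, not Viant’s code):

```python
# Toy append-only hash chain (generic logic, not Viant's code): each
# entry stores the hash of its predecessor, so tampering with any
# earlier record breaks every later link and is immediately visible.
import hashlib
import json

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)  # canonical serialization
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "record": record, "hash": h})

def verify(chain):
    # Recompute every link; False as soon as any record was altered.
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"who": "boat", "what": "caught"})
append(chain, {"who": "dealer", "what": "bought"})
```

Note what this does and does not give you: tampering after the fact is detectable, but a false record entered correctly in the first place hashes just as cleanly as a true one.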

The second feature is the use of Ethereum “smart contracts” to process the transactions. This technology lets the company post standard scripts for, say, transfer of an asset. The script is available anywhere, and executes the same way for everyone.

These features are, of course, available from conventional databases and file systems as well.  But the Ethereum blockchain is available to everyone, and is maintained by the Ethereum network rather than dedicated servers.  This is the third advantage of the blockchain—deployment (no need for server farms), availability (no server access required) and maybe cost (TBD).

It is interesting to point out one feature of Nakamotoan blockchains that is not really used here:  trustlessness.  While the system boasts that it is decentralized and therefore “trustless”, this is misleading.

Provenance is literally all about trust. The point of tracing the object is to assure that it is what it is supposed to be, and that requires knowing who did what, etc.  Furthermore, it needs to establish a trusted trace, with each actor and action attested by a trusted source.

Using a blockchain, or, indeed, any digital system, is not sufficient to achieve this.  The company will tell you this.  The RFID can be removed or destroyed.  Actors can make mistakes or be suborned.  On the blockchain, false records look the same as correct records (and can never be removed).  Trust involves real-world protocols, including authentication of identities.

In this area, the blockchain may actually be a liability. The “trustless” data cannot be trusted.  Part of what the company is doing with the “smart contracts” is overlaying a network of trusted records on the trustless blockchain.

There are other potential drawbacks of using a blockchain in this use case.

Let’s talk about privacy.  Think about it. It’s not clear just how “moral” it is for anyone in the world to know where every bit of sushi came from and ended up.  Individual fishing captains don’t necessarily want any kid on the Internet snooping on their business, not to mention rival captains and possible criminal gangs.  And the caterer doesn’t necessarily want random people, competitors, or criminals tracking their business. And so on.

Second, there is no way to correct mistakes. Even if the software is always correct (which is unlikely), people make mistakes and are dishonest. If bad information gets onto the blockchain, it can’t be removed or corrected.

So, imagine that a bad actor somehow gets a bunch of bad fish entered as OK fish.  The blockchain shows that this is “moral tuna”, even though it isn’t.  Even if we find out about the fraud, the blockchain could still have the evil records forever.

One last point.  Viant is one of I don’t know how many companies trying to implement this kind of Provenance.  With all these variations out there, it will be extremely important to have interoperability standards, so you can combine tracking from a number of sources.  (See the W3C PROV working group.)

Using standards would seem to be both obvious and compatible with the philosophy of decentralization.  After all, if the only way to do tracking is to use Viant’s proprietary data model and software, then a key advantage of the decentralized blockchain is out the window.

Overall, Viant and others are doing the right thing.  It remains to be seen whether using a blockchain will be a net win or not.  And all of them should implement the standards we started developing back at the turn of the century.

  1. Alyssa Hertig (2018) Moral Food: A Fish’s Trek From ‘Bait to Plate’ on the Ethereum Blockchain. Coindesk, https://www.coindesk.com/moral-food-a-fishs-trek-from-bait-to-plate-on-the-ethereum-blockchain/
  2. Robert E. McGrath, Semantic Infrastructure for a Ubiquitous Computing Environment, in Computer Science. 2005, University of Illinois, Urbana-Champaign: Urbana. http://hdl.handle.net/2142/11057
  3. Robert E. McGrath and Joe Futrelle, Reasoning about Provenance with OWL and SWRL, in AAAI 2008 Spring Symposium “AI Meets Business Rules and Process Management”. 2008: Palo Alto.
  4. Robert E. McGrath, Anand Ranganathan, Roy H. Campbell, and M. Dennis Mickunas. Incorporating “Semantic Discovery” into Ubiquitous Computing Environments. In Ubisys 2003, 2003.
  5. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf


Cryptocurrency Thursday

Yet More Academic Warnings About Blockchains

One of the most important features of Nakamotoan blockchains is that they are “decentralized” [3].  Blockchains and consensus protocols are grievously inefficient, but the price is considered worth paying in order to eliminate the potential for a few privileged actors to control the network.

Nakamoto-style blockchains are theoretically decentralized. This means that the system is capable of, and intended to be, run by a non-hierarchical group of peers.  But real networks are never perfectly decentralized in practice. There are also many possible dimensions of “decentralization”.

One important, if not preeminent dimension is decision making: just how are decisions actually made, and by whom?

Researchers from University College London reported this spring that, in fact, decision making concerning Bitcoin and Ethereum is highly centralized [1].  This finding confirms the intuition of anyone who has dealt with these communities.  Regardless of philosophical intentions, a relative handful of people and organizations have out-sized influence on these cryptocurrencies.

The study examined the public discussion and code repositories, where the design and implementation of the software infrastructure is recorded. This infrastructure embodies many technical decisions that affect the behavior of the system, the outcomes of users, the security and trustworthiness of the information, and even how decisions are made.

The decision-making process is modelled after the Internet and open source software. Ideas are formulated as public proposals, which are posted for global discussion. Implementations are published in open repositories, and also subject to evaluation and discussion.  The principle is that anyone on the Internet can propose features or changes, and that implementations will have widespread understanding and support by the time they are deployed.

The study examines the number of individuals who contribute to comments and code for different cryptocurrencies, as well as comparison to other open source code projects.

The results are pretty simple.

While “anyone on the Internet” is theoretically able to contribute, only a relatively small number of people actually write the code. And most files have only a handful of authors.  (Programmers will not be surprised at this: coding is hard work, and collaborative coding is even harder.)

Similarly, the open-to-anyone comment process is, in practice, dominated by a handful of individuals, who are de facto “experts”. This distribution parallels the pattern of actual coding, though whether “coders are experts” or “experts are coders” or there are two separate populations is not clear.

This study confirms what we have seen in practice: cryptocurrency communities are complicated, with many individuals, organizations, and interest blocs that exercise outsized influence. The comparison to other code projects indicates that this is a natural pattern for “distributed” software projects.  The paper also includes references to other studies that show just how “centralized” cryptocurrencies are.

The study did not, and could not, compare to non-decentralized projects, such as proprietary or sponsored systems.  My own experience is that such projects have similar patterns of concentration in decision making (a relative few highly influential designers and coders), though in this case there is also a formal proprietor with decision-making authority who may override the contributors.

In other words, the pattern seen in this study is perfectly normal for software development.  The major difference is that there is no one “in charge”, so the de facto mavens rule.

It is important to note that, as the researchers discuss, there is a large ecosystem beyond the core software examined here.  These other projects, including exchanges, wallets, and services are organized in a variety of ways, some “decentralized”, and some very centralized (and opaque).  This means that the overall, end-to-end system is “patchy” and likely includes many islands of code, created and managed by different people.  It isn’t really reasonable to describe a cryptocurrency as purely “decentralized”.

This and many other studies show that the broad and often poorly defined notion that cryptocurrencies are “decentralized” is not realized in the actual, real-world implementation. Clearly, the Nakamotoan dream of a truly decentralized system has yet to be realized in practice.

This conclusion is important because this “decentralization” property underlies other important claims for the ultimate fairness and usefulness of the system.  For many people, the point of paying the high technical cost for decentralization is to achieve a system that is not, and cannot be, controlled by a powerful few.  If this goal is not really being accomplished, then the case for Nakamotoan blockchains is much weaker.

  1. Sarah Azouvi, Mary Maller, and Sarah Meiklejohn, Egalitarian Society or Benevolent Dictatorship: The State of Cryptocurrency Governance. 2018. https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final13.pdf
  2. Alyssa Hertig (2018) Major Blockchains Are Pretty Much Still Centralized, Research Finds. Coindesk, https://www.coindesk.com/major-blockchains-pretty-much-still-centralized-research-finds/
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf


Cryptocurrency Thursday


Dahlia Malkhi on Ethereum Casper Protocol

In addition to its jolly governance problems, Ethereum has been slouching towards doing something about scaling.  Specifically, Ethereum has been discussing a new consensus mechanism, based on ‘proof-of-stake’ (codename Casper).

Classical Nakamotoan consensus is based on proof-of-work, the crude, brute-force requirement of expending real-world resources to preclude cheating.  Proof-of-stake substitutes a different ‘hard problem’, basically placing a bet on the ultimate consensus.  This approach is vastly more efficient than proof-of-work, running faster and using fewer computing resources.
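At its crudest, the proof-of-stake idea looks something like this sketch (the core selection idea only, emphatically not Casper’s actual protocol): the right to propose the next block is assigned at random, weighted by bonded stake, instead of by burning electricity.

```python
# Crude sketch of stake-weighted proposer selection (the generic
# proof-of-stake idea, NOT Casper's actual protocol): each validator's
# chance of proposing a block is proportional to its bonded stake.
import random

def pick_proposer(stakes, seed):
    # stakes: {validator: tokens bonded}.  A validator holding 2/3 of
    # the stake wins roughly 2/3 of the slots, which is exactly why
    # concentration of stake is as worrying as concentration of hash
    # power.
    rng = random.Random(seed)  # stand-in for a shared randomness beacon
    total = sum(stakes.values())
    r = rng.uniform(0, total)
    for validator, stake in stakes.items():
        r -= stake
        if r <= 0:
            return validator
    return validator  # fallback for floating-point rounding
```

The hard part, and the part Malkhi is worried about, is everything this sketch omits: where the randomness comes from, how bets are locked in, and what stops a validator from endorsing multiple histories at no cost.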

Great, huh?

The question has to be, is this really a secure protocol?

This is not a question that can be solved by intuition or happy-talk.  It’s really, really hard to analyze this kind of protocol, and it would be wise to have some adult supervision before committing millions of dollars to a possibly flawed idea.

This winter Dahlia Malkhi (a very smart grown up) sloshed some icy water on the Casper protocol.  Alyssa Hertig reports that Malkhi was pretty clear that “proof-of-stake is fundamentally vulnerable” [1].  Speaking from decades of experience, she says there are trivial scenarios that break Casper.

The Coindesk report suggests that Casper’s advocate, Sensei Zamfir, replies to this criticism that the protocol is still useful, if it is ‘mostly’ OK.

All together now:    No, it isn’t.

I’m not enough of an expert to analyze this protocol in detail.  But I am smart enough to pay attention when Sensei Malkhi tells me it’s not secure.  She’s almost certainly correct.

The most troubling thing about this exchange is that Ethereum has a history of pushing on even in the face of expert warnings about security.  The DAO disaster was no surprise.  It was predicted by Cornell researchers days before the catastrophic loss and resulting hard forks.

Ethereum has been warned about Casper. It is another self-inflicted disaster waiting to happen.

This will be a test.  How many times will Ethereum walk off the cliff while we are yelling ‘stop!’?

  1. Alyssa Hertig (2018) Vulnerable? Ethereum’s Casper Tech Takes Criticism at Curacao Event. Coindesk, https://www.coindesk.com/fundamentally-vulnerable-ethereums-casper-tech-takes-criticism-curacao/


Cryptocurrency Thursday

Ethereum CryptoPets Are Proliferating

As I predicted earlier, CryptoKitties has led to copycats (!), including puppies and multispecies.

Obviously, one has to doubt that there is an infinite appetite for these utterly useless digital “collectables”, so we’ll have to see just how many such games succeed. Of course, I would never have predicted the phenomenal success of Pokémon or Minecraft, so I wouldn’t care to bet one way or another.

Alyssa Hertig reports in Coindesk that CryptoKitties is actually notable as the first implementation of “ERC721”, a standard for “Non-fungible Tokens” [1]. Most Ethereum projects have been using fungible tokens (which, I learn, are supported by the “ERC20” standard), but CryptoKitties are, by design, unique and not interchangeable, i.e., non-fungible.
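The fungible/non-fungible distinction is easy to see in a toy ledger (grossly simplified relative to the actual ERC-20 and ERC-721 standards, which also specify approvals, events, and so on):

```python
# Toy contrast between a fungible (ERC-20-style) balance ledger and a
# non-fungible (ERC-721-style) ownership ledger.  Grossly simplified:
# the real standards also define approvals, events, metadata, etc.

# Fungible: the ledger tracks only *amounts*; any 30 tokens are as
# good as any other 30 tokens.
balances = {"alice": 100, "bob": 50}

def transfer(frm, to, amount):
    assert balances.get(frm, 0) >= amount, "insufficient balance"
    balances[frm] -= amount
    balances[to] = balances.get(to, 0) + amount

# Non-fungible: the ledger tracks *which* token each address owns;
# every token id is unique, like an individual CryptoKitty.
owner_of = {"kitty-42": "alice", "kitty-7": "bob"}

def transfer_nft(frm, to, token_id):
    assert owner_of.get(token_id) == frm, "sender does not own token"
    owner_of[token_id] = to
```

In the fungible ledger a transfer just moves a quantity; in the non-fungible ledger it reassigns a specific, identifiable object, which is what makes the standard interesting for tracking ownership of unique things.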

As Hertig says, this technical accomplishment is interesting because it opens the way not only for clones of the Kittie game, but possibly other applications that track ownership of uniquely identifiable objects.  This might include tracking ownership of real world objects, as has been discussed for a long time.

It remains to be seen if Ethereum executable contracts are a good technology for these apps.  After all, there are already (several) provenance tracking systems, and even digital asset licensing.  These earlier systems use cryptographic signatures and publish records on a blockchain, but do not rely on Ethereum-style executable contracts.

At a very abstract level, the principal technical difference between CryptoKitties and, say, Ascribe, is that CK has pushed some of the transaction logic out into the Internet. But only some of the logic.  Key parts of the system run on conventional servers.

More important, both CryptoKitties and Ascribe require users to trust the company, and both organizations take steps to earn and keep that trust.

Using the “trustless” blockchain is supposed to make the system “more trusted” by eliminating the “centralized” services that are a point of failure.  In these hybrid architectures, that certainly is not 100% true.  Or even close to 100% true.  (I have yet to see any non-trivial system that is completely decentralized and also works.)

What, then, is the advantage to using the slow, balky blockchain?

I dunno.

Perhaps we shall see.

  1. Alyssa Hertig (2017) Crypto Collectables? Ethereum’s Next Killer App Is on Its Way. Coindesk, December 15, 2017, https://www.coindesk.com/crypto-collectables-ethereums-next-killer-app-is-on-its-way/
  2. Dieter Shirley, ERC: Non-fungible Token Standard #721. Ethereum Foundation, 2017. https://github.com/ethereum/EIPs/issues/721


Cryptocurrency Thursday