Category Archives: Sociotechnical

More on Blockchain for Supply Chains

I have written about the use of blockchain technology for provenance and supply chains. This is, indeed, a reasonable use case for blockchain technology, if not as compelling as some may think.

But in cryptoland, even the most reasonable ideas can inspire gob-smacking nonsense.

Case in point: Pindar Wong writes at Coindesk about “Blockchain’s Killer App? Making Trade Wars Obsolete” [1].  Huh, what?

This is the familiar supply chain use case.  But what does this have to do with trade wars?

Basically, I think there is a dramatic misunderstanding of what the term “Trade War” means. It means national policies that inhibit trade, especially in physical goods.  It has nothing at all to do with the technical operation of markets.

Wong wants “trade warriors” to use blockchain technology “to reduce trade friction and improve cross-border relations”.  But these frictions and relations are fundamentally political, not technical or economic.  And, tellingly, this article is in the context of strategists in Hong Kong exploring “how to fully digitize trade among the 65-plus countries involved in China’s ‘Belt and Road Initiative’.”  The B&RI is the very model of a twenty-first-century trade war, not to mention neo-colonialism.  (I understand why HK is anxious to find a pivotal role in this initiative.)

Anyway, what is Wong actually talking about?  It’s pretty confusing.

One thing he is talking about is simplifying and automating supply chains. This is a familiar use case, though it is usually pitched as assuring the provenance of goods. In this permutation, blockchains actually help wage trade wars, because smuggling is suppressed.

The “trustless” blockchain requires some form of trust.  In this case, Wong describes model systems deployed in China.  Characterized as “open, bottom-up, opt-in”, they are actually Chinese government-approved standards. Naturally, the HK group proposes extending these to the B&RI.  “Trust us, we’re from Hong Kong.”

Another innovation, indeed the biggest one he talks about, is moving supply to demand, i.e., shipping raw materials and IP to the consumer and manufacturing locally, on demand.  A blockchain would be one way to keep track of the IP, return royalties, and so on.  Basically, when I buy a Samsung mobile phone, it is fabricated in a local factory, and part of the sale is credited back to Samsung via the blockchain.

This is a highly imaginative scenario, but there are a whole lot of questions. Why would an enterprise want to operate this way?  Why would a government let this be done this way?  I don’t really know.

Wong makes a good point that current WTO rules would have trouble dealing with this approach, at least initially.  But I don’t see any overwhelming difficulties.

More to the point, a blockchain is a pretty minor part of the overall picture. This entire scenario depends on some kind of international legal framework, which is the entire point of the WTO. The WTO or some successor will define the legal framework that the blockchain implements.

The whole idea of a trade war is that nation states have their own policies, which discriminate in favor of local interests. Nothing in Wong’s scenario changes this political picture. Replacing the WTO with an opaque Chinese hegemony such as the B&RI, is scarcely a realistic solution, blockchain or no blockchain.

Taking Wong’s overall point, it is worth questioning whether using a blockchain really makes trade warriors “powerless”. In fact, to the degree that blockchains are transparent and trustworthy, they will make it far easier to implement discriminatory trade policies.  In short, nations will be able to use blockchain-based provenance to implement “smart trade wars”.

Blockchains will actually empower a new breed of highly efficient trade warriors.

  1. Pindar Wong (2018) Blockchain’s Killer App? Making Trade Wars Obsolete. Coindesk.


Cryptocurrency Thursday

Yet Another “Blockchain for Provenance” System

In the short decade since the Nakamoto paper [5], cryptocurrency enthusiasts have put forward a variety of use cases for blockchains and cryptocurrencies.  It is notable that most of the exciting use cases aren’t actually in the canonical paper itself, and most of them have yet to prove out in the real world. (And the most successful use cases are the ones not put forward as good examples: extortion, dark commerce, money laundering, etc.)

One of the perennial favorite use cases is Provenance:  tracing goods from source to consumer.  For companies, this is “logistics” or “supply chain”; for ordinary consumers, this is about quality control.  This is the same problem that scientists (and everyone else) face with data quality: where did this data come from, and what has been done to it?  In the latter form, this is called “provenance”, and we were struggling with solutions a long time ago (before Nakamoto, Ante Bitcoin) [3].

This month yet another company touted this use case at the Ethereal Summit in NYC [1].  The presentation by Viant traced a tuna from Fiji all the way to the conference sushi plates.  Tagged with RFID, records of the sales and transportation of the fish are on the Ethereum blockchain, so everyone can check that the fish they are eating is “moral”. (How it can be “moral” to harvest increasingly rare wild animals and fly them halfway around the world beats me.)

This is the yuppie version of Provenance (making sure that my luxury goods are authentic and “moral”), but the technology is the same as any supply chain.

Looking at Viant’s web site, they seem to have a reasonable grasp of the problem.  They have a logical model of provenance that includes “four pivotal aspects of an asset: Who, What, When, and Where”.  The model includes “Actors” and actions, and “Roles” that define permissions.  IMO, this is the right stuff (see [3]).
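A model along those lines could be sketched roughly as follows. The field names, roles, and permissions here are my own illustration of the Who/What/When/Where idea, not Viant’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    """One action on an asset: Who, What, When, and Where."""
    actor: str      # Who performed the action (e.g., a fishing vessel)
    role: str       # The actor's role, which defines permissions
    action: str     # What was done to the asset
    asset_id: str   # The tagged asset (e.g., an RFID serial number)
    when: datetime  # When the action occurred
    where: str      # Where it occurred (e.g., GPS coordinates)

# Roles define which actions an actor may record.
ROLE_PERMISSIONS = {
    "harvester": {"caught"},
    "distributor": {"bought", "shipped", "sold"},
    "caterer": {"bought", "served"},
}

def is_permitted(record: ProvenanceRecord) -> bool:
    """Check that the record's action is allowed for its role."""
    return record.action in ROLE_PERMISSIONS.get(record.role, set())

catch = ProvenanceRecord(
    actor="FV Fiji Star", role="harvester", action="caught",
    asset_id="RFID-0001", when=datetime.now(timezone.utc),
    where="18.1S, 178.4E",
)
assert is_permitted(catch)
```

The point of the “Roles” part of the model is exactly this kind of check: not every actor is allowed to assert every action.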

They also have RFIDs to tag and geo-track assets, and apps to implement operations (e.g., sales to distributors).  These are certainly the right technology, and they are lucky to have ubiquitous mobile devices and “the cloud” to implement these concepts we pioneered in the late twentieth century [4].

So what does blockchain technology bring to the table?

First of all, it is used as a shared database, essentially a bulletin board.  The cryptographically signed and immutable records provide an unfudgeable trace of the object’s life.  And the blockchain is available to anyone, so ordinary consumers can get the authenticated traces of the object. (More likely, third parties will create apps that deliver the information to consumers; no normal person monkeys around with the blockchain itself.)

The second feature is the use of Ethereum “smart contracts” to process the transactions. This technology lets the company post standard scripts for, say, transfer of an asset. The script is available anywhere, and executes the same way for everyone.
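In spirit, such a contract is just a deterministic state transition that every node can replay and verify. A toy sketch of the idea (real Ethereum contracts are written in Solidity and run on the EVM, so this Python is purely illustrative):

```python
# A toy "smart contract": a pure function from (ownership state,
# transaction) to a new state. Because it is deterministic, every
# node replaying the same transaction log reaches the same state.
def transfer(owners: dict, asset_id: str, seller: str, buyer: str) -> dict:
    """Return a new ownership table with asset_id moved to buyer."""
    if owners.get(asset_id) != seller:
        raise ValueError("seller does not own this asset")
    new_owners = dict(owners)
    new_owners[asset_id] = buyer
    return new_owners

# Replaying the log always yields the same final state.
state = {"fish-0001": "FijiCaptain"}
state = transfer(state, "fish-0001", "FijiCaptain", "Distributor")
state = transfer(state, "fish-0001", "Distributor", "NYCCaterer")
assert state["fish-0001"] == "NYCCaterer"
```

“Executes the same way for everyone” is exactly this property: the script is a pure function, so independent replays cannot disagree.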

These features are, of course, available from conventional databases and file systems as well.  But the Ethereum blockchain is available to everyone, and is maintained by the Ethereum network rather than dedicated servers.  This is the third advantage of the blockchain—deployment (no need for server farms), availability (no server access required) and maybe cost (TBD).

It is interesting to point out one feature of Nakamotoan blockchains that is not really used here:  trustlessness.  While the system boasts that it is decentralized and therefore “trustless”, this is misleading.

Provenance is literally all about trust. The point of tracing the object is to assure that it is what it is supposed to be, and that requires knowing who did what, etc.  Furthermore, it needs to establish a trusted trace, with each actor and action attested by a trusted source.

Using a blockchain, or, indeed, any digital system, is not sufficient to achieve this.  The company will tell you this.  The RFID can be removed or destroyed.  Actors can make mistakes or be suborned.  On the blockchain, false records look the same as correct records (and can never be removed).  Trust involves real-world protocols, including authentication of identities.

In this area, the blockchain may actually be a liability. The “trustless” data cannot be trusted.  Part of what the company is doing with the “smart contracts” is overlaying a network of trusted records on the trustless blockchain.

There are other potential drawbacks of using a blockchain in this use case.

Let’s talk about privacy.  Think about it. It’s not clear just how “moral” it is for anyone in the world to know where every bit of sushi came from and ended up.  Individual fishing captains don’t necessarily want any kid on the Internet snooping on their business, not to mention rival captains and possible criminal gangs.  And the caterer doesn’t necessarily want random people, competitors, or criminals tracking their business. And so on.

Second, there is no way to correct mistakes. Even if the software is always correct (which is unlikely), people make mistakes and are dishonest. If bad information gets onto the blockchain, it can’t be removed or corrected.

So, imagine that a bad actor somehow gets a bunch of bad fish entered as OK fish.  The blockchain shows that this is “moral tuna”, even though it isn’t.  Even if we find out about the fraud, the blockchain could still have the evil records forever.

One last point.  Viant is one of I don’t know how many companies trying to implement this kind of Provenance.  With all these variations out there, it will be extremely important to have interoperability standards, so you can combine tracking from a number of sources.  (See the W3C PROV working group.)

Using standards would seem to be both obvious and compatible with the philosophy of decentralization.  After all, if the only way to do tracking is to use Viant’s proprietary data model and software, then a key advantage of the decentralized blockchain is out the window.

Overall, Viant and others are doing the right thing.  It remains to be seen whether using a blockchain will be a net win or not.  And all of them should implement the standards we started developing back at the turn of the century.

  1. Alyssa Hertig (2018) Moral Food: A Fish’s Trek From ‘Bait to Plate’ on the Ethereum Blockchain. Coindesk.
  2. Robert E. McGrath, Semantic Infrastructure for a Ubiquitous Computing Environment. Thesis, Computer Science, University of Illinois at Urbana-Champaign, 2005.
  3. Robert E. McGrath and Joe Futrelle, Reasoning about Provenance with OWL and SWRL. AAAI 2008 Spring Symposium “AI Meets Business Rules and Process Management”, Palo Alto, 2008.
  4. Robert E. McGrath, Anand Ranganathan, Roy H. Campbell, and M. Dennis Mickunas, Incorporating “Semantic Discovery” into Ubiquitous Computing Environments. Ubisys 2003, 2003.
  5. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.


Cryptocurrency Thursday

Real Quantum Blockchain

More WTF-Science!

Nakamotoan blockchains have a certain mystical quality about them, but they are surely built on von Neumann or at least Turing machines, no?  Plain old physics.  Time runs one way. No spooky action at a distance.

At base, the general goal of Nakamoto is to create immutable data structures, permanent across time.  No action in the future can ever change the record of a past action. Another way of saying that is that the data today is necessarily tied to the data at the original moment of creation.

This is, in a way, a form of time travel, isn’t it?  When I access the data, I want to access it at the exact moment of creation (or at least, the moment when it was “preserved” or “frozen” or whatever).

From this perspective, cryptographic schemes are mathematically simulating this time travel, by attempting to tunnel through the future in a sealed time corridor, i.e., the cryptographically signed data.  All the rigmarole of Nakamotoan signatures and “consensus” is a mathematical dance designed to make an (almost) unbreakable virtual link between the data and all future incarnations of it.
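Concretely, that “sealed corridor” is a hash chain: each block commits to the hash of its predecessor, so any change to past data breaks every later link. A minimal sketch (not Bitcoin’s actual block format):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON encoding."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def make_block(data, prev=None):
    """Create a block that commits to its predecessor's hash."""
    return {"data": data,
            "prev_hash": block_hash(prev) if prev else "0" * 64}

def verify_chain(chain):
    """Re-derive every link; tampering with any past block breaks it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis")
b1 = make_block("payment A->B", genesis)
b2 = make_block("payment B->C", b1)
chain = [genesis, b1, b2]
assert verify_chain(chain)

# Rewriting history is detectable from the present alone.
genesis["data"] = "forged payment"
assert not verify_chain(chain)
```

The “consensus” machinery is then about getting everyone to agree on which chain of hashes is the real one.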

This dance is all necessary because we can’t have real time travel.

Or can we?

This month, researchers in New Zealand report a conceptual design for a blockchain using quantum time-entanglement [2].

“Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.“ ([2], p. 1)

A time machine?!?   Now this is what we were thinking of when we were first imagining the blockchain!

The concept involves “entanglement in time between photons that do not simultaneously coexist”, which is even spookier action at a distance.

The details are beyond my puny understanding of quantum physics, but the paper describes a system that encodes data in a way that is not just difficult to tamper with, but impossible to tamper with.  Furthermore, it isn’t even possible to try to tamper with any blocks except the latest, because the photons no longer exist!

“in our quantum blockchain, we can interpret our encoding procedure as linking the current records in a block, not to a record of the past, but linking it to the actual record in the past, which does not exist anymore.”

Or, as they say, “…measuring the last photon affects the physical description of the first photon in the past, before it has even been measured. Thus, the “spooky action” is steering the system’s past” (quoting reference 22)

Assuming this concept is valid, it not only solves the challenge that quantum computing poses for conventional blockchains, it is actually a direct implementation of the distributed “time machine” that classical blockchains only simulate.

Very cool.

And very, very spooky.

  1. Charles Q. Choi, Quantum Blockchains Could Act Like Time Machines. IEEE Spectrum – Tech Talk, 2018.
  2. Del Rajan and Matt Visser, Quantum Blockchain using entanglement in time. arXiv:1804.05979, 2018.



Cryptocurrency Thursday

2018 Coworking Survey: Not That Great News

The annual Deskmag survey [1] is presented at the Global Coworking Unconference Conference (GCUC) every year.  For the past few years, the full report has been proprietary, i.e., you have to buy it.  This is standard practice for corporate research, but it makes it impossible for me or any other independent researcher to critique or even comment on their results.   Personally, I think it would be a good idea to release as much of the data as possible.  Maybe release last year’s data when you publish this year’s, or something like that.

Anyway, the headlines are pretty much the same as usual [4]. The number of coworkers and the number of coworking operations has grown steadily.

In the past few years, the survey has focused more and more on coworking operators, and on projecting the future of the “industry”.  Much of the report is a collection of opinions about what insiders expect in the next year.  These are roughly as useful as any of the junk in the “business news”.

The big news this year, of course, is the aggressive expansion of the WeWork chain [2].  The survey documents the widespread complaints about the impact of WeWork’s anti-competitive behaviors. There isn’t any actual data, just opinions.

This annual survey is one of the most influential reports on coworking.  As noted, it has the privilege of an annual presentation at GCUC, and everyone cites it, including me [3].

This survey is becoming less and less useful over the years.

First of all, the methodology is not published, but seems very weak.

This year, as in the past, it is still conducted as a web survey.  There is no sampling strategy, and it is subject to all the shortcomings of any public poll. The only thing reported about the sample is “1980 people filled in the questionnaire.”  That’s a healthy sample size, but the respondents appear to be entirely self-selected.  It isn’t necessarily representative of all coworkers or coworking operations, and certainly doesn’t represent, say, workers who don’t cowork, or who have stopped coworking.

Even the headline numbers are less than they seem.  The reports emphasize a continued steady growth in workers and sites.  Even taking these data at face value (and there isn’t really much support for the specific numbers) the story isn’t all that rosy.

The reported growth in coworkers is something like 33% in 2018.  This is healthy growth, though hard to parse precisely. Does a worker who uses a coworking space one hour per year count the same as a full time, all year member?

But the main point is that this large year-to-year increase is coming off a pretty small base number. If the total number of coworkers really is 1.69 million people worldwide, then this is something like one out of every 1700 workers.  (The increase is up from something like 1 in 1900.)  This is a tiny fraction of all workers.
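The back-of-envelope arithmetic behind that ratio, with the global workforce figure being my own rough assumption rather than anything in the survey:

```python
coworkers = 1_690_000        # deskmag's 2018 worldwide estimate
workforce = 2_900_000_000    # rough global workforce (my assumption)

print(f"about 1 in {round(workforce / coworkers)} workers")
# prints: about 1 in 1716 workers
```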

This actually makes sense.  Vast numbers of workers produce physical products and/or deal directly with customers and users (e.g., farmers, doctors, firefighters). Coworking isn’t really a meaningful option for these workers, even if they are independent contractors.

Of course, for some categories of work and workers, coworking is much more prevalent.  I’m sure that a relatively high proportion of freelance digital workers choose to cowork at least some of the time. This workforce has been growing in recent decades, perhaps as much as 26% in 2017.   Similarly, the number of “freelancers” is growing, perhaps by 5%. The reported growth in coworking is roughly consistent with the growth in digital workers, and faster than the general group of “freelancers”.

The 2018 survey also finds “18,900 shared workspaces around the world, compared to 8,900 in 2015.”  Doubling every 3 years is a pretty good pace, though again it is a small base number.  20,000 sites is not really a big number. For comparison, there are something like 25,000 Starbucks sites world-wide (and most coworkers probably also work in one or more coffee houses.)

Overall, coworking is still a tiny, tiny sliver of all workers and workplaces.

With this in mind, the rather gloomy predictions from many of the respondents stand out as serious red flags.  Many operators report difficulties attracting new members, too much competition (e.g., from other coworking spaces), and other signs that growth will be limited.

It may be telling that, even in the deskmag survey, the growth in workers is about the same as, or slower than, the growth in sites.

Altogether, it is easy to believe that coworking is already overbuilt.

I personally take this entire survey with a grain of salt. Not only is it self-reports from a self-selected sample, but many of the headline questions are actually asking for guesses about the future.   Sigh.

However, taking the data to be at least somewhat accurate, the rhetoric about growth looks like corporate cheerleading to me. The capacity is growing as fast as, and possibly faster than, the user base. There is plenty of reason to wonder just how large the pool of potential coworkers actually is, and will be in the future.

This intuition is reflected in the anxieties expressed by the respondents about finding new members and competition.

The signs are that coworking may be “overbuilt”, and may experience a major crash.

  1. Carsten Foertsch (2018) 1.7 Million Members Will Work in Coworking Spaces by the End of 2018. deskmag.
  2. Carsten Foertsch (2018) WeWork harms 40% of all coworking spaces in its close vicinity, however…. deskmag.
  3. Robert E. McGrath, What is Coworking? A look at the multifaceted places where the gig economy happens and workers are happy to find community. Urbana: Robert E. McGrath, 2018.
  4. Ruby Irene Pratka (2018) Deskmag survey: More than 1.5 million people to use coworking spaces this year. Shareable.

For more information about coworking and coworkers, see my new book:

What is Coworking?

A look at the multifaceted places where the gig economy happens and workers are happy to find community.




Ethereum Contracts Are Buggy!

CryptoTulip of the Year for 2017, Ethereum is still thrashing around.  It seems like there is another great idea for totally remaking the system every week or so.  Indeed, sometimes there are so many ideas flying around it is hard to see how it can all stick together in a single system.

Nevertheless, confidence and enthusiasm remain high, even though they still haven’t figured out how to deal with last year’s big “oopsie” that left millions of dollars worth of Ethereum unreachable.

Personally, I don’t really think that a gang of unelected philosopher kings is really going to solve the problem.  (Plato advocated this back in the day, but it has never worked as advertised.  “Wise dictators” are usually just dictators.)


Meanwhile, out in the real world….

Several exchanges reportedly have “paused” Ethereum contracts in response to reports of bugs. In fact, they basically stopped support for the problematic ERC-20 protocol completely.

Wow!  Crypto exchanges acting almost like real, grown up businesses!  What a concept!

Of course, I have to wonder, “why now?”

The particular bugs in question are just the latest of a long line of such bugs. So why were they allowing ERC-20 in the first place?

All snarking aside, this development actually raises some very important points.

First of all, the bugs in question aren’t necessarily a flaw in the protocol; they are mainly just bad programs.  There will always be bad programs.  No programming language can guarantee bug-free programs, and none ever will.  If using Ethereum contracts depends on all contracts being correct, then it will never work; it can never work. Never.

Second, despite the decentralized protocol, and the fact that “no one” is in charge, in the real world the end-to-end system does have people in charge, and can respond to problems. In this case, the operators of the exchanges have intervened to protect their customers and their business.

Unfortunately for some users, the response is a draconian ban on the whole ERC-20 protocol. In this case, I don’t see much alternative.  It’s impossible to really tell if some ERC-20 contract is a problem or not.

Third, note that just because the blockchain is decentralized and immutable doesn’t mean that everyone has to agree on what to do with it.  The ERC-20 protocol and code are still there; indeed, they will be there until the heat death of the universe. But a lot of people can’t use them because their exchange does not honor the protocol.  Ironically, the “decentralization” that assures no one can “censor” the blockchain also assures that no one can “censor the censors” of the blockchain.

This kind of behavior could be problematic in the long run. If part of the network accepts some contracts and not others, then how can anyone really use the system?  This is sort of a really soft ‘fork’ that effectively splits the network. Even though there is a single technical system, it is used differently by different sub-networks.

Ethereum is certainly pushing hard to repeat the CryptoTulip of the Year in 2018!

  1. Nikhilesh De (2018) Crypto Exchanges Pause Services Over Contract Bugs. Coindesk.
  2. Rachel Rose O’Leary (2018) Ethereum Infighting Spurs Blockchain Split Concerns. Coindesk.
  3. Rachel Rose O’Leary (2018) Ethereum Is Throwing Out the Crypto Governance Playbook. Coindesk.
  4. Rachel Rose O’Leary (2018) Ethereum’s Dialogue Divide Is Slowing Answers to Its Toughest Questions. Coindesk.


Cryptocurrency Thursday

Cerf on the ‘Turing Test 2’

Ole Vince Cerf has been around a long time, longer even than me.  He is famous for one of the most successful, if hacky, computing innovations of all time: TCP/IP, the foundation of the Internet in the 1970s and 80s.  Since that time, he has punditized for many decades.

Every once in a while Cerf drops a gem that is actually beautiful and enlightening.

This spring he nailed it with his short essay, “Turing Test 2” [1].

The original Turing Test is, of course, Alan Turing’s 1950 “Imitation Game” (no citation really necessary).  This Turing Test 1 challenges a human to exchange text messages with an adversary, and to try to detect whether the other party is human or a computer.  Whether or not this is a test of “intelligence” or even a measure of good software design, it certainly is a valid challenge for human-computer (and human-human) interactions.

Cerf points out that a lot of recent development and hand wringing involves a second challenge:  a computer exchanges text messages with an adversary, and attempts to determine if the other party is a human or another computer.  He calls this Turing Test 2.


This is one of those ideas that is so obvious once it is pointed out, even though I had never made the connection between TT1 and what he describes as TT2.  Kudos.

What he is talking about is the growing, if chaotic, field of “bot detection”, which has achieved global prominence with the documentation of weaponized bot nets that are conducting information wars every day.  Of course, the TT2 is also important for everyday interactions, as is seen in the ubiquitous use of CAPTCHAs.  As Cerf points out, the TT2 is also important for maintaining many kinds of social interactions, including discussions, crowdsourcing, and, of course, news of the world.

While the TT1 has mainly engendered interesting psychological research and philosophical musing, the TT2 is a raging arms race, pitting algorithms against algorithms in a desperate battle to defend our information homelands from marauding hordes.

Anyway, thanks to Sensei Vinton, I now have a new term to use, and it really helps make sense of the world.

  1. Vinton G. Cerf, Turing test 2. Communications of the ACM, 61 (5):5, May 2018.


Tracking Bitcoins, Mitigating Evil

Bitcoin was designed to be difficult to regulate, in the same way that gold is difficult to regulate. Possession (of a private key) is ten-tenths of the law as far as Bitcoin is concerned, and it can be very difficult to tell exactly how a particular Bitcoin came to be possessed by a particular individual.

This relative opacity is one of the properties that makes Bitcoin and other cryptocurrencies so attractive for criminals, extortionists, tax evaders, and dark markets.

From the point of view of believing Nakamotoans,  untraceability is a feature.

From the point of view of the law and society in general,  opacity is often considered a bug. Civil society in general has little appetite for unregulated financial systems, so Bitcoin will never succeed unless it can be brought into civil society and the rule of law.

This month researchers at Cambridge University describe how an old legal principle might be applied to Nakamotoan cryptocurrency to rein in abuses and “make Bitcoin legal” [1].

The researchers point out that many Internet technologies have been put forward as “outside the law”, but this is an assertion not a fact.  The fact is that “the law” decides what the law is and how it is applied.  No one gets to simply secede from the legal system, at least not without resort to pure power politics.

“we have repeatedly seen a pattern whereby the promoter of an online platform claims that old laws will not apply.”

“The key is making online challengers obey the law – and the laws may not need to change much, or even at all.”

In the case of Bitcoin, the researchers explore how conventional financial controls, especially anti-money-laundering rules, could be applied to Nakamotoan cryptocurrency.  They conclude that it is surprisingly straightforward and does not require changes to the network protocols.  I.e., the legal system can adapt to cryptocurrencies as they stand now, without any cooperation or consent from programmers or users.

There is a common legal principle that one may not profit from the fruits of crime.  Similarly, you cannot receive goods from someone who does not legitimately own them.  If someone gives you a stolen coin, it must be returned to the original owner (and you may well be out of luck).  Thus, it is very important not to trade in ill-gotten goods.

It is often the case that the monetary fruits of crime are passed along mixed in with other money.  In the case of Bitcoins, this kind of mixing occurs rapidly and across the whole Internet.  This presents a dilemma for the law.  The funds are “partly” stolen, but which part can be confiscated?

The Cambridge team discusses the history of this problem.

Theft and misuse of Bitcoins are a significant issue, to the point that even most Bitcoin users are concerned.  If there is a significant risk that your assets may be stolen (or misplaced), with no possible recourse, then cryptocurrency is unattractive for many uses.

Philosophically, Nakamotoans generally do not want government guarantees (e.g., registration of ownership) or other conventional mechanisms for protecting assets.  An alternative would be for courts to enforce rules, e.g., to allow recovery of stolen or extorted Bitcoins.  But how would courts adjudicate such a case?

In the past, the general legal approach has been to consider the funds “poisoned” by the presence of illegal money.  Someone who holds the funds will have to pay a penalty proportional to the illegal funds.  This stands as a deterrent to dealing in potentially “toxic” assets.

One way to do this is to consider all the money to be N% illegitimate, i.e., to confiscate part of the value of the whole batch.  This approach can be used with Bitcoin, though it is a blunt instrument.  Anderson et al. indicate that a very large proportion of Bitcoins would be touched by such “pollution” (5% in one sample, one in every twenty!).

They propose an alternative mechanism that echoes an approach used in nineteenth century English law:  First-in-first-out.   The idea is to trace the flow of coins and to assign an order to each transaction.  The first coin taken out of an account is equated to the first coin put in, and so on.  When a stolen coin is spent, that transaction is identified and the payment is illegal.  This is a sort of “reverse lottery” – an unlucky user ends up losing.

This approach is a much more precise way to identify and deter accepting ill-gotten money.  The paper argues that this is quite possible with Bitcoin, using the public blockchain and crime reports.  Furthermore, the FIFO principle works even when “mixers” are used to conceal the origins of the Bitcoins.  In the end, when this legal doctrine is applied, accepting Bitcoins from a mixer risks losing the entire payment in the unpredictable event that you receive coins designated “poison”.
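The FIFO rule itself is mechanical enough to sketch in a few lines. This is my own toy illustration of the principle, not the Anderson et al. “taintchain” implementation:

```python
from collections import deque

def trace_fifo(deposits, withdrawals):
    """Match coins out of an address first-in-first-out.

    deposits:    (amount, is_tainted) pairs, in order received
    withdrawals: amounts, in order spent (assumed <= total deposits)
    Returns the tainted portion of each withdrawal.
    """
    queue = deque(deposits)
    tainted_out = []
    for amount in withdrawals:
        tainted = 0
        while amount > 0:
            dep_amount, dep_tainted = queue.popleft()
            used = min(amount, dep_amount)
            if dep_tainted:
                tainted += used
            if dep_amount > used:  # put the unspent remainder back
                queue.appendleft((dep_amount - used, dep_tainted))
            amount -= used
        tainted_out.append(tainted)
    return tainted_out

# 5 clean coins arrive, then 3 stolen, then 2 clean; payments of 6
# and 4 go out. FIFO pins the stolen coins to specific payments:
# 1 tainted coin in the first payment, 2 in the second.
assert trace_fifo([(5, False), (3, True), (2, False)], [6, 4]) == [1, 2]
```

The pro-rata alternative would instead smear 30% “pollution” across both payments; FIFO’s precision is exactly what makes the “reverse lottery” possible.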

This approach isn’t “centralized”, and it doesn’t break Bitcoin.  It doesn’t even change Bitcoin. It just wraps Bitcoin in a legal framework.  Honest users would have a way to behave honestly (use honest exchanges), crime could be punished, and the system functions as efficiently or inefficiently as now.

“In short, we might be able to turn a rather dangerous system into a much safer one – simply by taking some information that is already public (the blockchain) and publishing it in a more accessible format (the taintchain). Is that not remarkable?”

It is difficult to overstate how important it is for Bitcoin and other cryptocurrencies to get “legal”.  Whatever the technical merits of Nakamotoan technology, it cannot succeed outside the law.

  1. Ross Anderson, Ilia Shumailov, and Mansoor Ahmed, Making Bitcoin Legal. Cambridge University, Cambridge, 2018.
  2. Andy Greenberg (2018) A 200-Year-Old Idea Offers a New Way to Trace Stolen Bitcoins. Wired.



Cryptocurrency Thursday