Category Archives: Blockchain Technology

Government Blockchains Coming This Year

Around the world, various governments are experimenting with Blockchain technology. The classic use case is for public records, such as property titles (e.g., the Swedish Lantmäteriet), where the blockchain serves as a cryptographically secured bulletin board.

The general use case is to make these records easy (and cheap) to access via the Internet, while maintaining the integrity of the information. In the classic case of the land registry, the government agency performs its traditional role as authenticator, certifying the record, date, and identities of the parties and assets. Blockchain replaces (more likely duplicates) other forms of records, including databases. In principle, this could be really cheap and really reliable (assuming the records are correct to begin with).
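The "cryptographically secured bulletin board" boils down to hash-chaining: each published record commits to everything before it, so later tampering is detectable by anyone. A minimal sketch in Python (the record fields are hypothetical, and a real system would add signatures and distributed consensus on top):

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Each entry's hash covers the record plus the previous entry's hash,
    # so every entry commits to the entire history before it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    # Recompute every hash; altering any record invalidates the chain.
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

registry = []
append(registry, {"parcel": "12-34-100", "transfer": "Seller to Buyer"})
append(registry, {"parcel": "12-34-101", "transfer": "Estate to Heir"})
assert verify(registry)
registry[0]["record"]["transfer"] = "Seller to Mallory"   # tamper with history
assert not verify(registry)
```

Note that the chain only proves the records haven't changed since publication; it says nothing about whether they were correct to begin with, which is exactly why the agency's role as authenticator remains.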

Many governments are trying similar ideas, including my local government in Illinois. (Heaven protect us from these clowns! If anyone can mess up blockchain technology, it’s the Illinois state government.)

Amy Nordrum reports in IEEE Spectrum about the different approaches in Dubai and Illinois [1]. Both jurisdictions are looking at a variety of uses, generally involving public record keeping. One big hope is that a blockchain can be a really fast and cheap way to publish these records, reducing both public expenditure and friction on commerce.

Nordrum calls attention to the different approaches. Dubai is building a single system (using Ethereum and Fabric from Hyperledger). Illinois is floating multiple pilots, and letting the projects select what technology to use. Illinois is in a “try anything” stage, and explicitly assumes that integration can be done later with no particular cost or problems. (Does Illinois have the remotest clue what it is doing?)

What impact are these innovations likely to have?

Robert Charette, an expert in IT risk management, doubts blockchains will prove to be more effective than a simple cloud database in most cases. “It’s kind of like solving a problem that’s already been solved,” he argues.

First of all, the imagined benefits are pretty unambitious. They are tackling easy problems (for example, land registries have been around since Babylonia, and the Lantmäteriet itself is 390 years old), and the main goal is to reduce the overhead of existing systems while maintaining or improving “transparency”. Thus, as long as a blockchain-based system at least matches the performance of the conventional system, and costs less for all parties, it will be called a success.

On the other hand, the problems are not only already solved, they are scarcely a choke point in the economy or everyday life. Having a property deed appear online in 30 minutes instead of 30 days matters little to most transactions. Sure, this will make property flipping a bit easier, but why would we want to encourage that?

Much will depend on how the cost accounting is done. Most governments, and Illinois for sure, will be interested in the reduction of expenses for IT infrastructure. If a blockchain based system eliminates the need for leasing servers and IT support, that would be an important advantage.

Just how much will blockchain technology reduce IT requirements?

It’s hard to predict precisely. The blockchain itself replaces a networked database, e.g., running in a cloud. That’s a good thing, because public facing databases are a significant security risk and also quite costly. Blockchain technology also uses cryptographic signatures, which is a very good thing. Of course, you could use cryptography the same way in any system, but blockchain is a quick and easy way to get this technology deployed more widely.

On the other hand, the rest of the infrastructure will still need to exist. The blockchain records themselves would be used by lots of other software—that’s the whole point. There will have to be online forms and APIs for getting data in and out of the system, and these run on conventional infrastructure with the concomitant risks and costs. In fact, if the blockchain is working well, users will not know that the blockchain is there—everything else will look the same as before.

It seems to me that the blockchain replaces one cloud database and concomitant APIs. This might actually be one part of a larger centralized system. Replacing the database will mean that at least some software will have to be replaced to use the blockchain.

Note that the agency still needs to do its non-digital work, such as certifying identities, verifying records, and so on. Publishing the results is only one part of their work, and frankly, it’s the easy part.

If, as seems likely, the organization needs to keep the database (e.g., for auditing and other internal activities, or simply out of caution), then the blockchain software is actually duplicating code, not replacing it. Worse, the parallel systems have to be kept in sync, which is extra code.
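That "extra code" is easy to picture. A hypothetical sketch of the dual-write-plus-reconciliation pattern such an agency would end up maintaining (all names invented for illustration):

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

database = {}     # stand-in for the agency's conventional database
chain = []        # stand-in for records published to a blockchain

def publish(record_id: str, record: dict) -> None:
    # Dual write: the duplicated code that parallel systems require.
    database[record_id] = record
    chain.append({"id": record_id, "digest": record_digest(record)})

def reconcile() -> list:
    # Reconciliation: flag any database record that no longer matches
    # what was published on the chain.
    published = {entry["id"]: entry["digest"] for entry in chain}
    return [rid for rid, rec in database.items()
            if published.get(rid) != record_digest(rec)]

publish("deed-001", {"parcel": "12-34-100", "owner": "B. Buyer"})
assert reconcile() == []
database["deed-001"]["owner"] = "Someone Else"   # the two systems drift apart
assert reconcile() == ["deed-001"]
```

Every one of these lines is code the agency would not need if it kept only one system, and the reconciliation logic is exactly the kind of thing that quietly accumulates bugs.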

However cheap blockchain may be, the cost savings could be quite complicated to assess. I’m sure that politics will simplify the accounting, producing rosy assessments.

My own guess is that the blockchain solutions will be no worse than what they replace. They may be better (e.g., because they have newer technology), though they could be worse (e.g., if quality control suffers).

But I guarantee you that the governor of Illinois will declare it a success no matter what.

  1. Amy Nordrum, Illinois vs. Dubai: Two Experiments Bring Blockchains to Government, in IEEE Spectrum – Features. 2017.


Cryptocurrency Thursday

Yet Another Blockchain Use Case: Sharia Compliant Transactions

Blockchain technology, like classical bookkeeping, is generally culturally and morally neutral. Smart contracts, a la Ethereum, are technical expressions of contract conditions, which can refer to pretty much any body of law or custom.

A new initiative is setting out to develop Sharia compliant contracts on top of Ethereum. The general idea appears to be to encode Islamic principles in the logic of the programs, to ensure that proper rules are followed. These rules are supposed to prohibit charging interest, gambling, and speculation, among other behavior.

The compliant contracts will presumably structure transactions and trades in ways that do not cross the line. Furthermore, the public nature of the contracts and the distributed ledger will make the compliance (or any slippage) visible to anyone—a significant motive for good behavior.

I’m no expert on these topics, but I gather that there are centuries of practice that define ways to get business done without straying from Sharia. This framework will encode these practices in formal logic and executable code.

That’s pretty neat.

One advantage of using this kind of executable contract is that there are likely to be cases where a transaction must be very carefully structured to achieve, without violating Islamic principles, a goal that might otherwise have been achieved by, say, an interest-bearing loan. The digital technology will make it possible to create, validate, and execute even complicated transactions easily and quickly. There should be no performance penalty for complying with Islamic principles, even if there are extra hoops to jump through behind the scenes.
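To make this concrete, here is a toy sketch of what such structuring might look like, loosely modeled on a disclosed cost-plus installment sale (murabaha) used in place of an interest-bearing loan. To be clear, this is my own invented illustration, not the initiative's actual framework, and real compliance is a question for scholars, not a few lines of code:

```python
# Toy illustration only: a sale at a disclosed cost plus a fixed, agreed
# markup, paid in installments, with the price fixed up front and no
# interest rate anywhere in the contract.

def structure_financing(cost: float, markup: float, months: int) -> dict:
    """Return a cost-plus installment sale with the total fixed up front."""
    if markup < 0 or months <= 0:
        raise ValueError("markup and term must be disclosed and positive")
    total = cost + markup                     # no rate, no compounding
    return {
        "type": "cost-plus-sale",
        "disclosed_cost": cost,
        "disclosed_markup": markup,
        "months": months,
        "installment": round(total / months, 2),
    }

def check_compliance(contract: dict) -> bool:
    # A (very) naive automated check: no interest-bearing structure and
    # no rate term anywhere in the contract.
    return contract.get("type") == "cost-plus-sale" and "rate" not in contract

deal = structure_financing(cost=10_000.0, markup=1_200.0, months=12)
assert check_compliance(deal)
assert deal["installment"] == 933.33
assert not check_compliance({"type": "loan", "rate": 0.08})
```

Even this trivial check hints at the real difficulty: deciding what must be proven about a contract's logic, and proving it, is far harder than writing the contract itself.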

Of course, there are some interesting challenges.

It’s one thing for programmers to create a logical framework, but it’s quite another thing to show that it truly, accurately, and completely complies with any given legal principles, Islamic or otherwise. A significant part of this work will surely be careful review and documentation of the logical framework’s compliance. Just what needs to be proven about the logic of the contract, and just what kind of proofs would be adequate? That will be an interesting body of literature, indeed.

Overall, this could be a ground-breaking effort. To date, much of the work on smart contracts has been from a non-Islamic perspective (and sometimes without any legal framework at all). It will be interesting to see how the deep historical principles of Islam are expressed in this a-cultural medium, and it may inspire other religious and ethical frameworks. I am not aware of any other similar efforts.

(For one example, how about encoding the various Creative Commons licenses into standard smart contracts? Perhaps that has already been done.)

This project also makes me think.

I wonder if it will be possible to automatically translate between different executable contracts. Can I have a button to “make this ‘smart contract’ be Sharia compliant”?  Perhaps tools could have a high level specification of what is intended, and then options for creating concrete contracts within one or more legal frameworks.  That would be kind of cool.

One huge caution I would have for this project is to look carefully at the blockchain software and protocol. While any given executable contract might be Sharia compliant, if the transactions are executed and recorded on an open system, the other data there is almost certainly not Sharia compliant. The ethical records will be in the same data blocks with everything else: on-line gambling, speculative bets, interest payments, and so on. And the transactions will be processed by software that also processes all these other activities.

The question will be whether this approach is acceptable or not. Is it OK to handle, at least indirectly, all these other transactions?  Or should the software only be used for compliant transactions?

This concern could be mitigated by a private blockchain that only handles Sharia compliant transactions. (Perhaps Ripple might be a better match than Ethereum, since it is designed after a hawala network and lets you control whom you trust.)

I would also urge that the consensus mechanism be examined carefully. Nakamotoan consensus depends on mining that has an incentive system that may or may not be consistent with Sharia. The Nakamoto block reward strongly resembles a lottery or slot machine, which seems problematic to me.

Ethereum may be moving to a proof-of-stake method, and there are other possibilities. These alternative ‘math problems’ might have significantly different ethical implications.

This project is quite interesting, and will bear watching as it develops. I’d like to see blockchain technology put to socially positive use.

  1. SettleMint, SettleMint to create Sharia compliant financial products for the Islamic Development Bank member countries. 2017.
  2. Sujha Sundararajan, Islamic Development Bank to Research Sharia-Compliant Blockchain Products. CoinDesk, October 20, 2017.
  3. Bernardo Vizcaino, Saudi Arabia’s IDB plans blockchain-based financial inclusion product, in Reuters – Fintech. 2017.


Cryptocurrency Thursday

Bitcoin Is Designed To Be Wasteful

…and that won’t work for long.

One of the great curiosities of Nakamotoan cryptocurrencies is that the key innovation in the protocol is the use of “proof of work” to implement a truly decentralized timestamp [2]. At the core of this innovation there is a scratch off lottery, in which computers spin and spin, looking for a winning number. This computation is deliberately designed to be inefficient, so that it cannot be cheated or repeated. In fact, there is a “knob” that resets the difficulty to keep it inefficient in the face of technical improvements.
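The scratch-off lottery is simple enough to sketch. A minimal Python illustration (real Bitcoin uses double SHA-256 against a full 256-bit target rather than counting leading zero digits, but the principle is the same):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Scratch-off lottery: try nonces until the block hash starts with
    `difficulty` zero hex digits. There is no shortcut; turning the
    difficulty knob up makes every ticket proportionally more expensive."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block 42", difficulty=4)
# Verification, by contrast, is a single hash: cheap for everyone else.
assert hashlib.sha256(f"block 42{nonce}".encode()).hexdigest().startswith("0000")
```

Each additional zero digit multiplies the expected work by 16, which is exactly the mechanism the protocol uses to keep mining inefficient no matter how fast the hardware gets.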

For me, this feature is just plain weird. My whole career—in fact, everybody’s career—has been about making software go faster. Bitcoin not only doesn’t want to go faster, it keeps adjusting its parameters to prevent software from going faster. This is so backwards and so wrong to conventional software engineers.

The underlying reason for this approach is to force real world costs into the protocol, in order to make the system “fair”. There is no back door or magic key for privileged users to game the system.  Only real (computing) work counts.

As a side-effect, these costs create a form of “value” for Bitcoins, which logically must be worth at least as much as the cost of the computing work needed to obtain them. This is a sort of computational labor theory of value, which is no doubt amusing to twenty-first-century Marxists.

Unfortunately, the “work” that is used to mine and handle Bitcoin is a crude, brute force algorithm. It is simple and effective, but it sucks down computing cycles like mad, which use up large amounts of electricity.

Peter Fairley writes in IEEE Spectrum about “The Ridiculous Amount of Energy It Takes to Run Bitcoin” [1]. In all, the Bitcoin network computes 5 quintillion (5,000,000,000,000,000,000) 256-bit cryptographic hashes every second, which he estimates consumes about 500 MW of power. In addition, there are other cryptocurrencies and blockchain networks (including multiple versions of Bitcoin itself), with substantial, if lesser, power consumption.
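Taking Fairley's figures at face value, the back-of-the-envelope arithmetic (mine, not his) looks like this:

```python
# 5e18 hashes per second network-wide, at roughly 500 MW of power.
hash_rate = 5e18               # hashes per second
power_watts = 500e6            # ~500 MW

joules_per_hash = power_watts / hash_rate
assert abs(joules_per_hash - 1e-10) < 1e-15   # about 0.1 nanojoule per hash

# Annualized, that power draw is a few terawatt-hours per year,
# on the order of a small city's electricity consumption.
seconds_per_year = 365.25 * 24 * 3600
twh_per_year = power_watts * seconds_per_year / 3.6e15
assert 4.0 < twh_per_year < 5.0               # roughly 4.4 TWh per year
```

A tenth of a nanojoule per hash sounds tiny until you multiply it by five quintillion per second, forever.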

This is quite a bit of power, something along the lines of a small city. Of course, it’s only a small slice of the power consumed by the whole Internet, not to mention the rest of modern life. But the engineer in me hates to see so much power burned off for so little meaningful work.

Fairley argues that a bigger problem is that if Bitcoin or some form of Nakamotoan blockchain succeeds and grows to become truly ubiquitous, then the power consumption is likely to grow to the point that it is unsustainable. Even if we are OK with expending cycles for this purpose, at some point there will not be enough power to run and cool all the computers.

Predicting the future is difficult, of course. Computers in general are becoming more efficient, so growth in cryptocurrency networks will not lead to a linear growth in their power use. Nevertheless, it seems likely that the crude proof of work algorithm designed by Nakamoto will be difficult to sustain over the long haul.

As Fairley discusses, there are alternative methods to achieve the same goal. Many alternatives, in fact.

For one, there is substantial interest in various “proprietary” blockchains, which may work much the same way as Bitcoin but do not rely on the open Internet. These networks trade off the “trustless” and “decentralized” nature of Nakamotoan-style protocols in various ways, gaining much more efficient performance as well as other potential benefits, such as legally documented authentication.

There are also alternative “math problems” that may be used instead of Nakamoto’s brute force hashing algorithm (e.g., Proof of Stake, or Algorand). It is also possible to utilize special purpose hardware, or even Quantum Computing.

In short, there are alternative technologies that would make a cryptocurrency far more scalable. If Bitcoin were normal software, there would be a strong case for reengineering it.

But Bitcoin isn’t “normal”. Not even close to normal.

Another cunning innovation from Nakamoto is its “decentralized” governance model. Changes to the code are published and users vote on them by adopting or ignoring them. There is no central planning, or any planning at all. Furthermore, changes that are not backward compatible essentially create a “new currency”, which may or may not eliminate the “old” code. These fork events can and do create parallel, competing versions of a cryptocurrency.

The point of Bitcoin’s decentralized decision making is to protect against “the man”. At the core of Nakamotoan ideology is the desire to make sure that no government or corporate cabal can fiddle with the currency, block access, or rewrite history. Changes require “consensus”, and “everyone” has a vote.

Unfortunately, this design also precludes centralized engineering. Technological progress requires decisions, and sometimes the decisions are complicated. Furthermore, good engineering is proactive, not reactive: it is a bad idea to wait until a problem is catastrophic or evident to everyone. And rational engineering cannot always make everyone happy.

This is a formula for disaster. Ethereum has already split into two currencies, and one of the forks actually rewrote history. Bitcoin itself has been stuck in a rut, unable to deal with the most basic engineering problem (data structures), and is heading for a catastrophic split into multiple versions. For that matter, dozens of other cryptocurrencies have floated, competing with Bitcoin (and sucking down yet more power).

If recent history is a guide, no improvement to Bitcoin is likely to be accepted by the current Bitcoin network. However, it is possible to boot up a technology that successfully competes with Bitcoin (as, say, Ethereum has done), and which might one day overshadow it. But Bitcoin probably cannot change.

At some point, Bitcoin qua Bitcoin will surely crash. Perhaps it will be replaced by other cryptocurrencies. Perhaps politics will keep it marginalized. For example, access to vast amounts of electricity is clearly a potential choke point for such a profligate algorithm. Or perhaps technical changes will break it. For example, quantum computing will eventually be able to crack the encryption, and likely also to overwhelm the protocol with replay and other attacks. At that point, the blockchain will be corrupted and Bitcoins will have little value.

One of “Bob’s Rules” is that “All software becomes obsolete, sometimes much sooner than you expect”.

The problem is, Bitcoin is not supposed to be software; it is supposed to be money. The ramifications of Bitcoin’s inevitable crash are staggering.

  1. Peter Fairley, The Ridiculous Amount of Energy It Takes to Run Bitcoin. IEEE Spectrum, 54 (10):36-59, October 2017.
  2. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.


Cryptocurrency Thursday


Yet Another Local Currency

This year the city of Liverpool launched its own local digital currency [2].   This particular project uses Blockchain technology from Colu. Colu appears to be using the Bitcoin blockchain, though users, developers, and businesses probably never need to know about the blockchain at all.

The idea, of course, is to improve local economies by capturing as much spending as possible within the local community. Setting aside the thorny issue of how to define “local”, the digital payment system is essentially a scrip system, honored by participating businesses and ultimately tradable for fiat currency.

The digital system makes it easy to automatically implement rules to make it attractive to keep and use the tokens rather than immediately convert to pounds. If this works as intended, a Colu Pound will be “spent” several times, presumably in local transactions, before entering the wider economy. (Ironically, every transaction leaks fees to Colu—a non-local, private company.)
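Those nudge rules are easy to implement with or without a blockchain. A toy sketch of one plausible mechanism, a fee charged only when tokens exit to fiat (the class, the fee, and the amounts are all invented for illustration; I don't know Colu's actual rules):

```python
class LocalToken:
    """Toy model of a local scrip, with balances in pence (integers):
    spending with participating merchants is free, but cashing out to
    pounds pays a fee. All parameters are invented for illustration."""
    CASH_OUT_FEE_PCT = 5   # hypothetical 5% haircut on conversion to fiat

    def __init__(self, balance: int):
        self.balance = balance

    def spend_local(self, amount: int) -> int:
        # Local spending moves full value to the merchant: no friction.
        self.balance -= amount
        return amount

    def cash_out(self, amount: int) -> int:
        # Converting out leaks value, nudging users to keep tokens local.
        self.balance -= amount
        return amount * (100 - self.CASH_OUT_FEE_PCT) // 100

wallet = LocalToken(10_000)                 # 100 pounds of local tokens
assert wallet.spend_local(4_000) == 4_000   # merchant receives full value
assert wallet.cash_out(6_000) == 5_700      # only 57 pounds out, after the fee
assert wallet.balance == 0
```

Nothing here requires a distributed ledger; the same asymmetry between local spending and cashing out could run on an ordinary database, which is rather the point.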

It is important to note that a local digital currency can be implemented in many ways, with or without a blockchain (and the same idea has been implemented without computers at all).

The success of the project depends on three factors:

  • A good user interface and user experience
  • Participation by enough businesses and people to be useful
  • Trust in the system

Colu is well aware of these requirements, and is working hard to provide all of them.

Using a blockchain may be convenient and cheap, but Blockchain qua Blockchain (colored coins, lightning, Bitcoin, or any other) is pretty much irrelevant to all but the last point.

It is clear that trust in the system comes not from the software or the protocol, but from the face-to-face interactions of local people and local merchants. You don’t need a digital currency for that interaction; this is basically neighborliness.

The main contribution Colu makes to this neighborliness is to nudge you to use the digital currency with participating merchants, workers, and suppliers rather than trading it for UK Pounds. These nudges don’t require a blockchain, and most users couldn’t tell how the transactions are implemented.

Local digital currencies are an intriguing idea, breathing new life into very old ideas about local economies.

Of course, digital local currencies cannot overcome the historic limitations of local currencies. Historically, local currencies have had difficulty competing with national currencies which are easier to use and offer access to large, cheap markets. Digital currencies can certainly operate cheaply, though the same technology is available for the UK Pound, so there isn’t much, if any, economic advantage.

It is an open question whether using a “trustless” blockchain-based system helps foster “trust” in the local currency. Some people find “decentralized” blockchain protocols more trustworthy than “fiat currencies” managed by banks and backed by governments. This judgment depends a lot on the local circumstances and history. We’ll see how Liverpudlians parse this question.

For that matter, most of the Colu technology is provided by a private company, which is making a profit and may or may not be trustworthy or even exist in the long run. Users are trading the devil they know (conventional regulated banking) for the devil they don’t know (Colu).

  1. Colu. Colu – Local Digital Wallet. 2017.
  2. Dougal Shaw, The Liverpool app that sidesteps the banks, in BBC News – Magazine. 2017.


Cryptocurrency Thursday

Fixing Journalism? Two Approaches

Everybody knows that journalism is in crisis. It turns out that the Internet has lowered the cost of delivering information to the point that anyone can play the role of journalist. Anyone. For any reason.

Worse, as the information economy has been increasingly captured by the advertising industry, all other interests have been obliterated. Everything is subordinated to the need to command a large enough audience to generate revenue for advertisers. We now have a word for this, “click bait”.

At the same time, the idea of “mass” media has been replaced with individually filtered channels. It isn’t necessary to serve a least-common-denominator, each person receives a custom stream, potentially different from any other. This has shattered cultural consensus that, for better or worse, was a side-effect of mass media.

These developments have had pernicious effects everywhere, but the destruction of quality (or even mediocre) journalism is particularly damaging to civil society and democratic government.

Scarcely a week goes by without hearing about some new effort to “save” or “reboot” journalism. Shorn of marketing hype, these ideas are basically about money. How can you sustain the activities of journalists or equivalent content creators?

There aren’t many candidate solutions, and they are pretty much the same ideas that sustained print-based journalism:

  1. be a captive propaganda organ
  2. advertising
  3. subscription

Setting aside the “ministry of truth” approach favored by political groups, let’s look at two recent examples of the other approaches.

Civil: Self-Sustaining Journalism

One diagnosis of journalism’s malaise is that they need to adapt to the new world of on-line advertising and the accompanying need to “attract eyeballs”. Conventional journalistic organizations must be rebooted for this new world.

There are many versions of this, but one interesting concept comes from “Civil”, which not only aims to fix journalism, but uses trendy blockchain technology to do so.

The goal is “a self-sustaining global marketplace for journalism that is free from ads, fake news, and outside influence”. Wow!

One of the key insights in this approach is to view the goal as a global marketplace for journalism, which eschews notions of a special fourth estate with a critical role in democratic self-governance. From this point of view, journalism is one kind of content, and it has to compete in a global marketplace filled with lots of other content.

In one sense, this is essentially conceding defeat. Journalism is over, so we’ll reuse the term for journalism-like content.

Their promised solution sounds too good to be true. Somehow this global, unregulated market will be free of influences, and “self-sustaining” without ads. How will this work?  Magic.

The magic is blockchain based “autonomous” organizations. This technology replaces a conventional organization with code, and, most important, aims to replace the critical functions of journalism with “autonomous” processes—protocols that are not controlled by any person.

So, Civil proposes a suite of processes that they believe replace everything important from conventional journalism, while avoiding costly overheads and intrusive outside interests.

Who are the stakeholders in the journalism game? At the heart, there are journalists (“sellers”) and citizens (“buyers”). Around them are funders, owners, advertisers, and sponsors.

But the critical piece that makes it journalism rather than entertainment is quality control, selection of topics, honest investigation, and careful fact checking. In a conventional organization, this role is performed by editorial staff and other managers, who exercise power with judgment.

The ‘Civil’ project eliminates all of these players except the producers and consumers.

Civil aims to create a marketplace model for journalism where citizens and journalists connect around shared interests and standards.

This is both technologically and organizationally identical to many other Internet markets.

The Civil project diagnoses the weakness of this “Amazon” model as the ease with which “anonymous black hats” can “cheaply produce and spread fake, malicious content in pursuit of clicks-for-cash ad dollars or nefarious propagandist aims.”

Their solution is inspired by Wikipedia, and seeks to “incentivize journalism” while defeating non-journalistic behavior. In their analysis, the way to do this is to create a cryptocurrency and use it to implement micropayments. It’s a bit more complicated than this, because they want to encourage more than just personal payments. They want stable channels of information with strong quality or at least reputation for quality.

Their design has three pieces:

“Newsrooms” – “Newsrooms allow citizens to pool funding to support coverage for a specific topic. The more citizens, the more funding, the more journalists will be drawn to cover it.”

“Stations” – “Stations allow journalists to productize and price their work to their own dedicated audience however they want.”

“Fact-checking-as-a-service” – this is crowd sourcing of the editorial role.

These ideas are to be implemented with Ethereum-style “smart contracts”, creating protocols for buying and selling content, as well as voting, penalizing ‘inaccuracy’ and other activities.
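As a rough illustration of the "newsroom" pooling idea, the underlying logic is a simple threshold escrow. This is my own invented sketch in Python, not Civil's actual Ethereum contracts, which I have not seen:

```python
class Newsroom:
    """Toy sketch of the 'newsroom' pooling idea: citizens pledge funds
    toward coverage of a topic, and the pool pays out to a journalist
    only once the funding goal is met. Invented for illustration."""

    def __init__(self, topic: str, goal: int):
        self.topic = topic
        self.goal = goal
        self.pledges = {}

    def pledge(self, citizen: str, amount: int) -> None:
        self.pledges[citizen] = self.pledges.get(citizen, 0) + amount

    def funded(self) -> bool:
        return sum(self.pledges.values()) >= self.goal

    def payout(self, journalist: str) -> int:
        # Funds are released only when the pool reaches its goal;
        # refunds for unmet goals are not modeled here.
        if not self.funded():
            raise RuntimeError("funding goal not met")
        total = sum(self.pledges.values())
        self.pledges.clear()
        return total

room = Newsroom("city council coverage", goal=500)
room.pledge("alice", 300)
assert not room.funded()
room.pledge("bob", 250)
assert room.funded()
assert room.payout("carol") == 550
```

Notice what the escrow does not contain: any notion of editorial judgment, quality, or accuracy. The hard parts of journalism are precisely the parts this mechanism leaves out.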

The two “innovations” here would have to be the “newsroom” and the “fact-checking-as-a-service”. (“Stations” are indistinguishable from many other digital channels, including this blog.)

The Newsroom concept is an interesting take on how journalism is supposed to work. The idea that journalists should cover what “people” want them to cover is, well, problematic. There are lots of things I don’t want to know about (e.g., wars), but I need journalists to tell me about them. Journalistic coverage driven by customer demand makes for pretty poor journalism.

The “Fact-Checking-As-A-Service” is even more problematic. This concept replaces the efforts of editors and quality control staff with an unspecified crowd sourcing. They don’t explain how this might work or even what it does.

First of all, “fact checking” is only the first level of journalistic quality controls. A report can be 100% “accurate” and still mislead by omission or bias. For that matter, much of the “fake news” is based on interpretation and even “alternative facts”. If there are multiple “fact checkers” who give different rulings, how does that help?

Second, actual quality control is far more than just double checking names and dates. Tracking down alleged events and sources isn’t trivial. More important, judging the weight to give various sources is hard. In this, journalists act as trusted sources of information, and we implicitly trust their sources because we trust them. Replacing this chain of trust with a “trustless” system is dubious.

As an aside, I’ll point out that the best journalists are not “incentivized” by money. They are motivated by a desire to be a trusted source of information. And the best of them report on things that no one wants to know about—and they make us care whether we want to or not.  Thus, the incentives of this system are probably misguided from the start.

The bottom line is that “Civil” is almost a caricature of the cryptocurrency culture. They aim to “fix” journalism, but they seem to misunderstand what it is, and misdiagnose its ills. Not surprisingly, the proposed “fix” is problematic, and unlikely to work.

The Conversation

“The Conversation” offers a rather different “fix” for at least part of the same problem. The Conversation is a not-for-profit enterprise, dedicated to promulgating reliable, fact-based information.

“Provide a fact-based and editorially independent forum, free of commercial or political bias.”

The Conversation is responding to the challenges described by Civil. They also perceive a disconnect between universities and the public. Universities are repositories of knowledge, but that knowledge is poorly represented in journalism.

The Conversation sees itself as a source of trusted information dedicated to the public good.

In contrast to Civil, The Conversation does not rely on a “market” to “incentivize” its producers. For one thing, their writers are already highly motivated. What they do focus on is careful editing, which is not just “fact checking” but also helps create clear, understandable information for non-specialists.

Above all, The Conversation is aiming to create trusted and trustworthy information. They enforce strong rules on transparency, including disclosure of financial interests. The authors are not paid in cryptocurrency or anything else, and the content is open for anyone to reuse under a Creative Commons Attribution-NoDerivs (CC BY-ND) license. This license preserves attribution and precludes modification of what the author said, both of which are necessary to maintain the trust of the readers and the reputation of the writers and editors.

In short,  “We aim to help rebuild trust in journalism.”

The content is not driven by user demand, it is curated by The Conversation. They are looking for people who know a lot about a topic of public interest, who want to inform the public about it.

Authors must agree to “Community Standards”, which amount to straightforward rules of civil discourse: mutual respect, staying on topic, being constructive, and being responsible. It is interesting that one of the rules is “Be You”. No anonymous or pseudonymous posts allowed: you must take personal responsibility for what you say.

Articles are “pitched” to the staff, and if one is selected, an editor is assigned to help create the article. The editor is not a “fact checker”; she or he is a co-creator, charged with helping design the article to be valuable for a general audience.

The published article will include the name, qualifications, affiliations, and funding sources of the author. In this, they are taking practices from academic publishing out to general readers.

The content is free for readers, and available for republishing. No one is writing to make money, but there is plenty of reputation on the line.

One reason this works is that the contributors must be affiliated with an academic institution. Aside from filtering out complete fakes and robots, this means that the authors have their own funding, and generally have a mission to publish. The Conversation doesn’t need to “incentivize” with a starvation wage.


These two (of many) efforts to “fix journalism” offer an interesting comparison.

Both Civil and The Conversation say that there is a crisis in journalism, and describe the illness in similar terms. But these two projects diagnose the underlying disease rather differently, and therefore prescribe different treatments.

Civil is concerned with the financial underpinnings of journalism, and seems to be mainly interested in coverage of current events, especially local events. They seek to use digital technology to create a more efficient, decentralized funding model. Specifically, they use trendy blockchain technology to design “markets” that replace the processes of journalism.

While Civil deploys “disruptive” technology, its processes aren’t especially novel, nor even that different from conventional practice. The main novelty is the replacement of editorial decision-making and quality control with market incentives and rather hazy notions of “fact checking as a service”.

The Conversation is concerned with creating better content in ways that are distributed as widely as possible. They are particularly interested in disseminating the deep knowledge accumulated at Universities to the general public.

The Conversation is focused on trusted information. As such, quality control is at the center of the solution, and incentives are aimed to support public interest, not market share.

The Conversation uses digital technology (of course), but musters motivated people from the existing pool of academic researchers who have a desire to support the public good. Authors are not paid, and the content is given away for free. Editors, on the other hand, are paid. If there is a market, it is a reputation economy.

It is notable that The Conversation has been operating for a number of years. No one is getting rich, but there is a lot of solid journalism being made. In that sense, it is a proof by existence.

Civil, on the other hand, is untried as yet. The blockchain technology it aims to use is not only new, it is extremely shaky.

My own view is that Civil’s approach to journalism exhibits fundamental misunderstandings and even a repudiation of what journalism actually used to be. Editors have always been aware of market forces, but are supposed to act as a buffer between producers and raw demand. That is, editors want to foster solid reporting, even if there is no immediate “demand” for it, and they want to report accurately regardless of what the customers want to hear.

Editorial staff does fact checking, but fact checking per se is only the most trivial aspect of quality control. In any case, fact checking is neither an optional aftermarket service nor something that you choose to match your own prejudices.

I think that The Conversation’s focus on trust is a great idea, and I’m glad to see it working. On the other hand, The Conversation is focused on a small part of the problem with journalism, which is the poor use of expert knowledge. This problem has been around for decades in the form of anxiety over the challenges of disseminating scientific understandings.

The Conversation works because it uses already existing social mechanisms, specifically, the credentialing and public mission of Universities. These institutions are designed to create trusted information and conduct civil discourse. The Conversation extends the reach of these processes.

However, the entire enterprise of public universities is increasingly threatened by both cultural attack and politically motivated defunding. The Conversation only works if you think that University affiliated experts are trusted sources, and that belief is far from universal. A lot of “fake news” is simply nihilistic denial of expert opinion, and no amount of editing can overcome the will to deny.

The bottom line is that neither of these projects is much of a cure for journalism. The Conversation does a good job, but depends on the fate of academia and rational debate in general. Civil misunderstands journalism, and attempts to fix the problem of trusted information via “trustless” technology and market forces. Whatever Civil is doing, it isn’t good journalism.

  1. Civil. Civil: Self-Sustaining Journalism. June 20, 2017.
  2. The Conversation. The Conversation: In-depth analysis, research, news and ideas from leading academics and researchers. 2017.


Cryptocurrency Thursday

IOTA’s Cart Is Way, Way Before the Horse

Earlier I commented on SatoshiPay switching its microtransactions from Bitcoin to IOTA. Contrary to early hopes, Bitcoin has not been successful as a medium for microtransactions because transaction fees are too high and latency may be too long.

IOTA is designed for the Internet of Things, so it uses a different design from Nakamoto’s, one that is said to be capable of much lower latency and fees. SatoshiPay and other companies are looking at adopting IOTA for payment systems.

The big story is that IOTA is reinventing Bitcoin from the ground up, with its own home-grown software and protocols. I described IOTA as “funky” in my earlier post.

It is now clear that this funkiness extended to the implementation, including the cryptographic hashes used [1,2]. This is not a good idea, because you generally want to use really well-tested crypto algorithms.

As Narula wrote, “So when we noticed that the IOTA developers had written their own hash function, it was a huge red flag.”

Unsurprisingly, Neha Narula reports that this home-grown hash function is vulnerable to a very basic attack, with potentially very serious consequences.

The specific problems have been patched, but the fact remains that IOTA seems to be a home-made mess of a system.
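To see why rolling your own hash function is so risky, here is a toy sketch (my own illustration in Python; this is not IOTA’s actual Curl function). An ad hoc hash built by XORing words admits trivial collisions of exactly the kind attackers exploit for signature forgery, while a vetted primitive like SHA-256 does not:

```python
import hashlib

def toy_hash(data: bytes) -> int:
    """An ad hoc 'home-grown' hash: XOR of 4-byte words.
    Illustrative only -- NOT IOTA's Curl function."""
    h = 0
    for i in range(0, len(data), 4):
        h ^= int.from_bytes(data[i:i + 4].ljust(4, b"\0"), "big")
    return h

# XOR is order-independent, so swapping aligned 4-byte blocks
# yields a different message with the same hash -- a collision.
a = b"AAAABBBB"
b = b"BBBBAAAA"
assert a != b
assert toy_hash(a) == toy_hash(b)  # trivial collision

# A well-tested hash does not collapse this way:
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
```

Real attacks on real hash functions are far subtler than this, but the lesson is the same: collision resistance is very hard to get right, which is exactly why the standard, heavily analyzed primitives exist.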

Narula also notes other funkiness. For some reason IOTA uses super-funky trinary code which, last time I checked, isn’t used by very many computers. Everything has to be interpreted by their custom software, which is slow and bulky. More important, this means that their code is completely incompatible with any other system, precluding the use of standard libraries and tools, such as well-tried crypto libraries and software analysis tools.
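For reference, “trinary” here means balanced ternary, where each digit (a “trit”) is −1, 0, or 1. A minimal Python sketch (my own illustration, not IOTA’s actual code) shows the kind of conversion that binary hardware must perform in software for every value:

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Encode an integer as balanced-ternary trits (-1, 0, 1),
    least significant trit first. Binary hardware has no native
    support for this, so it all happens in software."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:          # balanced ternary uses -1 in place of 2
            r = -1
        n = (n - r) // 3
        trits.append(r)
    return trits

def from_balanced_ternary(trits: list[int]) -> int:
    """Decode trits back to an integer."""
    return sum(t * 3**i for i, t in enumerate(trits))

assert to_balanced_ternary(8) == [-1, 0, 1]      # -1 + 0*3 + 1*9 = 8
assert from_balanced_ternary(to_balanced_ternary(-5)) == -5
```

Every value crossing the system boundary needs this sort of translation, which is part of why standard binary-oriented libraries and tools can’t be used directly.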

I have no idea why you would do this, especially in a system that you want to be secure and trusted.

The amazing thing is not the funkiness of the software. There is plenty of funky software out there. The amazing thing is that lots of supposedly competent companies have invested money and adopted the software. As Narula says, “It should probably have been a huge red flag for anyone involved with IOTA.”

How could they get so much funding when people are only now noticing these really basic problems?

It is possible that these critiques are finally having some effect. Daniel Palmer reports that the exchange rate of IOTA’s tokens (naturally, they have their own cryptocurrency, too) has been dropping like a woozy pigeon [3]. Perhaps some of their partners have finally noticed the red flags.

The part I find really hard to understand is how people could toss millions of dollars at this technology without noticing that it has so many problems. Aren’t there any grown ups supervising this playground?

I assume IOTA has a heck of a sales pitch.

Judging from what I’ve seen, they are selling IOTA as “the same thing as Bitcoin, only better”. IOTA certainly isn’t the same design as Bitcoin, and it also does not use the same well-tested code. I note that a key selling point is “free” transactions, which sounds suspiciously like a free lunch. Which there ain’t no.

IOTA’s claims are so amazingly good, I fear that they are too good to be true.

Which is the biggest red flag of all.

  1. Neha Narula, Cryptographic vulnerabilities in IOTA, in Medium. 2017.
  2. Neha Narula, IOTA Vulnerability Report: Cryptanalysis of the Curl Hash Function Enabling Practical Signature Forgery Attacks on the IOTA Cryptocurrency. 2017.
  3. Daniel Palmer, Broken Hash Crash? IOTA’s Price Keeps Dropping on Tech Critique. Coindesk, September 8, 2017.
  4. Dominik Schiener, A Primer on IOTA (with Presentation), in IOTA Blog. 2017.


Cryptocurrency Thursday

Study of Trust in Fact-Checking Services

Petter Bae Brandtzaeg and Asbjørn Følstad write this month about Trust and distrust in online fact-checking services [1].

Everyone knows that the Internet is a perfect medium for disseminating information of all kinds, including rumors, errors, propaganda, and malicious lies. Social media have proved to be just as susceptible to misinformation, despite their filtering mechanisms (which are problematic in other ways).

One response to this flood of junk information is a proliferation of “fact-checking” services, which attempt to verify claims in public statements using primary and secondary sources.

The very fact that there are 100 or more such services would seem to be significant, though I’m not sure what it means exactly. This must be the ‘golden age of fact-checking’.

Bae Brandtzaeg and Følstad point out that a fact-checking service depends on establishing a reputation and the trust of users. In particular, what matters is how the user (consumer) perceives the service. There isn’t much point to a “fact checker” that you don’t believe is accurate and honest.

Their study analyzed social media discussions of selected, widely used fact-checking services. This data is unstructured (to say the least!), but it does represent unfiltered, publicly stated opinions about the fact-checking services by actual users. These sentiments were coded for statements about “usefulness” and “trust”.

One of their findings is that negative comments were often about “trust”, while positive comments were about “usefulness”.

Many negative comments complained about perceived bias in the service, which is certainly consistent with the vast research that indicates that people do not readily change strong opinions in the light of facts. In this case, they dispute the motives of the messenger, rather than their own opinions.

Positive comments about the “usefulness” indicate that the service may have achieved enough trust (or congruence with preconceptions) that the information influences the user’s opinion. This is consistent with the idea that someone who is both skeptical of a claim and trusts a fact checker will find the check useful.

The authors note that there may be a great need and desire for fact checking, but most people don’t use it. (For example, me.) If nothing else, the perceptions of these systems might well evolve if they are more widely used.

The authors point out that for many users, distrust in the fact-checking service isn’t really specific to the behavior of the service itself; it is distrust of everything. Such generalized disbelief is often highly emotional, so services should take care to present themselves in ways that work hard to engender trust:

“lack of trust extends beyond a particular service to encompass the entire social and political system” (p. 70)

The long and the short of it is that fact checking needs to be highly transparent. Trust is created by knowing who is “checking” and how they do it. The authors also suggest that reliance on “expert” opinion should be minimized, and that “crowd sourced” verification may be especially useful.

Reading about this study, I am struck by the contrast with the widely held dogma of “trustlessness” in the cryptocurrency and blockchain world. Nakamotoan blockchains are a cure for everything, including fake news, some say.

It is thought that these “trustless” systems “can’t be evil”. Furthermore, in a medal-winning rhetorical judo throw, the anonymous (or at least unaccountable) blockchain is considered “transparent”. Some even explicitly imagine that such “trustless” systems can fix journalism (generally through “market based” processes).

The Brandtzaeg and Følstad study makes pretty clear that the key to trustworthy information is transparent and accountable processes. I don’t see how you can hope to build trusted information on a foundation of “trustless” technology. Frankly, I think blockchain and other technologies are largely irrelevant to the problem of “fake news”.

Finally, I note that “trust” is an end-to-end property. People trust people. The technology in between the two people is relevant only to the degree that it obscures or enhances the ability of people to trust each other.

The challenge is that digital technology is naturally opaque and it is easy to be deliberately deceptive. In order to be trusted, a digital service must work hard to make clear to the human users who the human and other sources really are, and what their motives really are.

This is surprisingly difficult, and I think that “trustless”, peer-to-peer systems make it even harder to establish this trust.

  1. Petter Bae Brandtzaeg and Asbjørn Følstad, Trust and distrust in online fact-checking services. Commun. ACM, 60 (9):65-71, 2017.