Tag Archives: Andy Greenberg

Tracking Bitcoins, Mitigating Evil

Bitcoin was designed to be difficult to regulate, in the same way that gold is difficult to regulate. Possession (of a private key) is ten-tenths of the law as far as Bitcoin is concerned, and it can be very difficult to tell exactly how a particular Bitcoin came to be possessed by a particular individual.

This relative opacity is one of the properties that makes Bitcoin and other cryptocurrencies so attractive for criminals, extortionists, tax evaders, and dark markets.

From the point of view of believing Nakamotoans, untraceability is a feature.

From the point of view of the law and society in general, opacity is often considered a bug. Civil society has little appetite for unregulated financial systems, so Bitcoin will never succeed unless it can be brought into civil society and the rule of law.


This month researchers at Cambridge University describe how an old legal principle might be applied to Nakamotoan cryptocurrency to rein in abuses and “make Bitcoin legal” [1].

The researchers point out that many Internet technologies have been put forward as “outside the law”, but this is an assertion not a fact.  The fact is that “the law” decides what the law is and how it is applied.  No one gets to simply secede from the legal system, at least not without resort to pure power politics.

“we have repeatedly seen a pattern whereby the promoter of an online platform claims that old laws will not apply.”

“The key is making online challengers obey the law – and the laws may not need to change much, or even at all.”

In the case of Bitcoin, the researchers explore how conventional financial controls, especially anti-money-laundering rules, could be applied to Nakamotoan cryptocurrency.  They conclude that it is surprisingly straightforward and does not require changes to the network protocols.  I.e., the legal system can adapt to cryptocurrencies as they stand now, without any cooperation or consent from programmers or users.


There is a common legal principle that one may not profit from the fruits of crime.  Similarly, you cannot receive goods from someone who does not legitimately own them.  If someone gives you a stolen coin, it must be returned to the original owner (and you may well be out of luck).  Thus, it is very important not to trade in ill-gotten goods.

It is often the case that the monetary fruits of crime are passed along mixed in with other money.  In the case of Bitcoins, this kind of mixing occurs rapidly and across the whole Internet.  This presents a dilemma for the law.  The funds are “partly” stolen, but which part can be confiscated?

The Cambridge team discusses the history of this problem.

Theft and misuse of Bitcoins are a significant issue, to the point that even most Bitcoin users are concerned.  If there is a significant risk that your assets may be stolen (or misplaced), with no possible recourse, then cryptocurrency is unattractive for many uses.

Philosophically, Nakamotoans generally do not want government guarantees (e.g., registration of ownership) or other conventional mechanisms for protecting assets.  An alternative would be for courts to enforce rules, e.g., to allow recovery of stolen or extorted Bitcoins.  But how would courts adjudicate such a case?


In the past, the general legal approach has been to consider the funds “poisoned” by the presence of illegal money.  Someone who holds the funds will have to pay a penalty proportional to the illegal funds.  This stands as a deterrent to dealing in potentially “toxic” assets.

One way to do this is to consider all the money to be N% illegitimate, i.e., to confiscate part of the value of the whole batch.  This approach can be used with Bitcoin, though it is a blunt instrument.  Anderson et al. indicate that a very large proportion of Bitcoins would be touched by such “pollution” (5% in one sample, i.e., one in every twenty!).
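To make the bluntness concrete, here is a toy calculation of the pro-rata (“haircut”) rule. This is my own illustrative sketch, not code from the paper:

```python
def haircut_fraction(tainted_in: float, clean_in: float) -> float:
    """Under the pro-rata ('haircut') rule, every coin paid out of a
    mixed pot carries the same fraction of taint."""
    return tainted_in / (tainted_in + clean_in)

# One stolen BTC mixed into a pot with 19 clean BTC taints
# every output by 5% -- one coin in every twenty.
fraction = haircut_fraction(1.0, 19.0)
print(fraction)  # 0.05
```

Each further round of mixing dilutes the taint but spreads it across more coins, which is why such a large share of circulating Bitcoins ends up touched.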

They propose an alternative mechanism that echoes an approach used in nineteenth century English law:  First-in-first-out.   The idea is to trace the flow of coins and to assign an order to each transaction.  The first coin taken out of an account is equated to the first coin put in, and so on.  When a stolen coin is spent, that transaction is identified and the payment is illegal.  This is a sort of “reverse lottery” – an unlucky user ends up losing.

This approach is a much more precise way to identify and deter accepting ill-gotten money.  The paper argues that this is quite possible with Bitcoin, using the public blockchain and crime reports.  Furthermore, the FIFO principle works even when “mixers” are used to conceal the origins of the Bitcoins.  In the end, when this legal doctrine is applied, accepting Bitcoins from a mixer risks losing the entire payment in the unpredictable event that you receive coins designated “poison”.
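As a minimal sketch of how FIFO tracing works, consider a single account modeled as a queue of deposits, where the oldest deposit is matched to the next payment out. The class and names here are my own illustration, not the paper’s “taintchain” implementation:

```python
from collections import deque

class FifoAccount:
    def __init__(self):
        self.lots = deque()  # each lot: [amount, is_tainted]

    def deposit(self, amount, tainted=False):
        self.lots.append([amount, tainted])

    def withdraw(self, amount):
        """Pay out `amount`, consuming the oldest deposits first (FIFO).
        Returns how much of this payment is tainted."""
        tainted_out = 0.0
        while amount > 0:
            lot = self.lots[0]
            take = min(amount, lot[0])
            if lot[1]:
                tainted_out += take
            lot[0] -= take
            amount -= take
            if lot[0] == 0:
                self.lots.popleft()
        return tainted_out

acct = FifoAccount()
acct.deposit(1.0, tainted=True)   # a stolen coin arrives first...
acct.deposit(2.0)                 # ...then clean funds are mixed in
first = acct.withdraw(1.5)        # first payee receives the stolen coin
second = acct.withdraw(1.5)       # second payee is entirely clean
print(first, second)  # 1.0 0.0
```

This is the “reverse lottery”: the first payee loses the whole stolen coin, whereas under the pro-rata rule each payee would instead lose half a coin.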

This approach isn’t “centralized”, and it doesn’t break Bitcoin.  It doesn’t even change Bitcoin. It just wraps Bitcoin in a legal framework.  Honest users would have a way to behave honestly (use honest exchanges), crime could be punished, and the system functions as efficiently or inefficiently as now.

“In short, we might be able to turn a rather dangerous system into a much safer one – simply by taking some information that is already public (the blockchain) and publishing it in a more accessible format (the taintchain). Is that not remarkable?”

It is difficult to overstate how important it is for Bitcoin and other cryptocurrencies to get “legal”.  Whatever the technical merits of Nakamotoan technology, it cannot succeed outside the law.


  1. Ross Anderson, Ilia Shumailov, and Mansoor Ahmed, Making Bitcoin Legal. Cambridge University, Cambridge, 2018. http://www.cl.cam.ac.uk/~rja14/Papers/making-bitcoin-legal.pdf
  2. Andy Greenberg (2018) A 200-Year-Old Idea Offers a New Way to Trace Stolen Bitcoins. Wired.com, https://www.wired.com/story/bitcoin-blockchain-fifo-dirty-coins/

Cryptocurrency Thursday

Cats + Wearable Computing (+ Spyware?)

It is a historical fact that the World Wide Web was created so people could post pictures of their cats. Sure, we dressed it up with stories about collaborative science and e-commerce, but that was to keep the grown-ups off our case.

Let’s combine our infatuation with cats with our other interests: animal computer interfaces, specifically wearable animal computer interfaces.

Andy Greenberg reports in wired.com about “How to Use Your Cat to Hack Your Neighbor’s Wi-Fi”, a project built by Gene Bransfield originally reported at DefCon.

Despite the militaristic rhetoric (Bransfield calls it “WarKitteh”), this is actually a simple wearable computer for a cat. Much of the development involved the familiar problems of weight, power, and comfort.

The WarKitteh collar with its components and wiring removed, with a dollar bill for scale. Photo: Gene Bransfield

The device is basically a small wi-fi sniffer worn on a collar by the feline. As the cat explores his or her neighborhood, the device maps local wi-fi networks, finding unprotected access points. This is not really different from just walking around with your phone or other device, though a cat can get into places humans can’t easily access.
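As a rough sketch of the classification step such a sniffer performs: the actual packet capture (e.g., scapy on a monitor-mode interface) is omitted here; this only shows how the 802.11 beacon’s Capability Information field can be checked, using its Privacy bit to separate open from encrypted networks. The SSIDs and capability values below are made up for illustration:

```python
PRIVACY_BIT = 0x0010  # bit 4 of the 802.11 Capability Information field

def is_open_ap(capability_info: int) -> bool:
    """Privacy bit clear means the access point advertises no encryption."""
    return (capability_info & PRIVACY_BIT) == 0

# Hypothetical beacons observed on a stroll around the neighborhood:
observed = {"CoffeeShopFreeWifi": 0x0401, "Linksys-Home": 0x0411}
open_nets = [ssid for ssid, caps in observed.items() if is_open_ap(caps)]
print(open_nets)  # ['CoffeeShopFreeWifi']
```

That one-bit check is all the “war” in WarKitteh amounts to; the rest is logging where the cat wandered.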

Obviously, this concept would work for other species or, indeed, might better be applied to a UAV (as I’m sure it has). Success clearly depends on the cooperation of the particular cat. Older, more sedentary cats who don’t roam far will scarcely provide any data. And, as reported, some cats simply won’t wear the wearable, so there. (Good luck even finding the rejected gear.)

Applying my standard analysis for Animal-Computer Interfaces, I have to question this product. The design does consider the comfort and safety of the animal, but the device itself produces no benefit to the cat. He or she is basically an unwitting dupe of its human. The fact that the information is essentially useless to the human doesn’t really improve that risk/benefit analysis for the feline.

Worse, I’m concerned about the implications of mounting spy gear on innocent pets. If paranoid people come to believe that any cat they see may be carrying spyware, and decide to shoot on sight, this is really bad for the health and safety of cats everywhere. The annoying “jokes” about “hacking” and “war” only make this risk greater.

Thumbs down. Don’t do this.

Articles on High End Hacking

A couple of useful articles at wired.com this week, putting some technical flesh on the bones of the reports about spyware.

Kim Zetter gives a nice explanation of the “firmware hack” as reported recently by Kaspersky Labs.  As she comments, the popular media didn’t really explain how devastatingly effective this technique can be.

Basically, a disk drive has a computer on it that does all the data management. The controller runs a tiny, specialized bit of software that is installed and updated by the manufacturer and not intended to be seen by anyone else. This is termed “firmware”, a double pun indicating that it is logically between “hardware” and “software” (it rarely changes—it is “firm”), and also that it is tightly associated with the specific “firm” that makes the device.

In a classic instance of “security by obscurity”, firmware is unprotected and assumed to be trustworthy. Since the operating system and user cannot see or mess with it, it is assumed to be safe. But if you can get into it, the system is indefensible.

Zetter points out that once firmware is infected, even wiping the disk and reloading the operating system have no effect. The only way to wipe it is to buy a new disk drive and throw out the old. (And don’t reload any data from the old disk—it could well be infected.) Ouch.

Infecting firmware is not trivial, because every disk is different and firmware is obscure. As Kaspersky Labs suggest, this requires a substantial organization with access to a lot of experts.

Why is this useful? For one thing, it works even for computers not attached to a network. And it keeps working even if the disk is scanned or the OS reinstalled. It potentially can reinfect the system after such cleaning.

It can also capture documents or data on the way to the disk, before they are encrypted, thus defeating disk encryption. And it may capture and save passwords along the way.

What is this for? Zetter concludes that this is probably mostly for systems not on a network, with encrypted disks. In other words, very specific, very high value targets.

How is the infection installed? This is not clear, and is certainly a highly guarded secret.

This is such an elegant and beautiful hack, in its powerful simplicity.


A second article by Andy Greenberg discusses a recent paper, “Surreptitiously Weakening Cryptographic Systems” [1], by Bruce Schneier and colleagues. This is a somewhat technical, but readable, treatment that includes some well known historical cases. But most of all, it is worth reading for its view of the methodical thinking of code crackers. What kinds of mischief can be done, and what do they accomplish?

They give a framework for thinking about these tasks. Different approaches have different plusses and minuses. Some methods are devastating, but also not very secret, and with a lot of collateral damage. Other methods may give a subtle edge, or open only certain systems. This is less powerful, but may be undetected and cause less unintended damage.

One point that is implicit in their taxonomy is that, despite recent publicity, large scale “drag nets” are generally not preferred, at least for information gathering. (In fact, the “collect the whole haystack” approach is probably a desperate measure, not a first choice.) The best attacks are precise and hidden, obtaining secrets without detection.

In essence, this paper outlines what the NSA and its peers around the world have been doing for decades. But in this case, the authors are trying to understand how to strengthen cryptography by understanding how it can be attacked.

I’ll make two obvious points. First, many of the “attacks” described are due to or made worse by poor engineering, especially software engineering. Software is buggy, period. And security software can’t afford to be buggy. That’s a problem.

Second, many, if not most, of the breaches we hear about are due to much more mundane human errors. Careless users, administrator errors, and “human engineering” (e.g., phishing) are the easiest and most common ways to get through and around security. The subtle and complex techniques in this paper are needed mainly for hard targets that are well secured.

For this reason, it’s probably good for all of us to have a general idea about these things, so we grok that you need to follow proper safety practices, keep systems up to date, and be reasonably paranoid at all times.


 

  1. Schneier, Bruce, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart, Surreptitiously Weakening Cryptographic Systems. Cryptology ePrint Archive 2015/097, 2015. http://eprint.iacr.org/2015/097

“Internet of Things”: Probably Say “No” For Now

There is much excitement these days about the coming of the “Internet of Things” (a term coined by Kevin Ashton in 1999, but now coming true).

I’ve been interested in this stuff since before Ashton coined the phrase.

There is so much cool stuff that can be done with smart environments, how could we not be excited?

However, along the way, things languished in unimaginative and short-sighted apps (honestly, I don’t need to automate the light switches; they work just fine) and a lack of contact with the real world.

My favorite rant in this area is about the plethora of “intelligent” rooms that sense the wishes of the inhabitant.  Note the singular.  One person per domicile.

So, it knows your schedule and anticipates when to start the coffee, etc.  It adjusts lighting and music to the tastes of the (one) user. And so on.  (Also, these systems tend to be given the personality of a servant, a psychologically and sociologically troubling fantasy.)

In real life, for most real people, there is more than one person in the family.  So you can’t talk about “optimizing” for the one user; that makes no sense.  And optimizing for multiple people is hard, and in any case is what relationships are about.  Not really what your thermostat should be doing.

Example of why this is hard:  many services offer to take my collection of music recordings, crunch on them, and then make recommendations for me.

Here are some real world problems with that concept.

First, my collection extends decades back.  My tastes have changed over the years.  (Essentially, I’m not “one person” as far as the algorithm would have to assume.)

Second, the collection was merged with my spouse’s decades ago.  She has her own tastes, which also have evolved.

So how would machine learning deal with this?  I have no idea, but I’m sure I don’t care.

Where was I?  Oh, right.  Internet of Things.

What’s coming out this summer is a bunch of things from Internet/mobile app people in alliance with appliance makers and marketers.  The technology is based on home-scale internet things: moderate bandwidth, light security, almost no local storage or cycles.

For some reason, people seem to think that hooking all this up to the Internet, kind of like a phone,  will be OK.  Are you nuts?

Aside from the sheer insanity of letting Google anywhere near my thermostat (to pick one example), there are so many reasons I don’t want my house full of low grade computing.

For example, we hear about cases such as a hack attack that dropped malware on home storage devices that secretly mined cryptocoins.  This went on for months because (wait for it) random homeowners in Taiwan do not spend time monitoring CERT bulletins and emergency patches for their home appliances.

When I read this story I realized that, even though I am experienced and well informed, I have no idea how many of the devices in my house may have bluetooth or wifi, nor if they could be hacked.  How would I know?  Why would I want to know?

Glancing at the web documentation from Google Nest, I found that I was supposed to be reassured by the fact that they use openSSL (!) and will ask before sending data to an app on your phone.  Problem solved!

If you want an even more thorough walk-through, I refer you to recent books on this topic.

Until there is an application I really need or want, and a system that is really isolated from the Internet, I’m saying “no” to the Internet of Things.

Bitcoin Brand: So Many Narratives

This week we see some tensions in the world of Bitcoin.  When you are a public brand, everyone can use your name for their own purposes.

The Bitcoin Foundation is voting to replace two deposed members.  The first round failed to produce a result. I can’t pretend to have a horse in this race, though commentators note the preponderance of financially interested parties.  Clearly, the Bitcoin Foundation is of the capitalists, by the capitalists, and for the capitalists.

Meanwhile, the narrative is getting away from the suits….

Apparently Senator Rand Paul is as clueless about cryptocurrencies as he is about everything else related to economics.  He imagines “backing” Bitcoin with commodities or stocks.  This is, of course, completely the wrong idea, demonstrating that he does not get Bitcoin at all.  (I could be snarky, and note that the Bitcoin blogosphere was quick to criticize his misunderstanding, though not with the personal vitriol served up to people (such as Paul Krugman or Warren Buffett) who slam Bitcoin. I didn’t see any headlines about “Rand Paul is an idiot.”)

This kind of attention from poorly educated Presidential Timbers doesn’t help Bitcoin’s cred much.

I note that Stephen Colbert snuck in a technically savvy jab at Bitcoin volatility–his writers apparently do have a clue.

Finally, everybody is abuzz with the splashy announcement of DarkWallet.  As Andy Greenberg makes clear in Wired, this is about nothing other than money laundering. The narrative is really, really clear: this is about drugs and guns and everything else illegitimate.  The video features pictures of a home-made weapon and racist-tinged images of Obama.  These guys give quotes like: “Liberty is a dangerous thing.”

I might note that the technical point of DarkWallet is to subvert aspects of the Bitcoin protocol, which is designed to provide complete transparency as one of its goals.  The protocol itself is an interesting attempt to combine the interests of anonymity and transparent accountability.  DarkWallet seeks to make it anonymous and unaccountable.

It will be interesting to see how this is treated by the growing number of reputable people who want to use Bitcoin for legitimate business.  Greenberg is sympathetic to the concept of anonymous transactions, though he doesn’t make clear why it is important enough to balance the incredibly bad effects it will surely have.

It’s not just that people will do bad things with Bitcoin.  This crew, with its open defiance of law enforcement, is certain to sour the attitude of regulators with whom the grown-ups have been trying to negotiate.  It can’t make potential users such as MIT feel very happy.  Any bank doing business in the US would have trouble using Bitcoin if it is connected with these people, which it certainly is now.

What will the Bitcoin Foundation do? What will big investors like Andreessen do?