Category Archives: science and technology

Does Quantum Computing Kill Bitcoin?

Quantum Crypto Is Upon Us

We know it is coming. Probably.

For the last 25 years and more, we’ve known that quantum computing is coming, and that one of its first uses will be code breaking.

Much of the cryptographic infrastructure of the Internet is based on methods that are proven to be so hard to compute that a brute force or guessing attack is “infeasible”. Generally, this means that with current and projected technology, it would take a long time, years or centuries, to work it out.
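To make “infeasible” concrete, here is a back-of-the-envelope sketch. The attacker’s rig size (a billion machines, each making a billion guesses per second) is my own illustrative assumption, not a figure from any real agency:

```python
# How long "infeasible" really is: expected time to brute-force a key.

def years_to_brute_force(key_bits: int, guesses_per_second: float) -> float:
    """Expected years to find a key by trying half the keyspace."""
    seconds = (2 ** (key_bits - 1)) / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

# Hypothetical rig: a billion machines at a billion guesses/sec each.
rig = 1e9 * 1e9

print(f"{years_to_brute_force(128, rig):.2e} years for a 128-bit key")
print(f"{(2 ** 63) / rig:.1f} seconds for a 64-bit key")
```

Even that absurd rig needs trillions of years against a 128-bit key; that is the margin quantum algorithms threaten to erase.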

But quantum computers should be zillions of times faster at certain kinds of computations, including the beating heart of key crypto algorithms. Uh, oh!

This cuts both ways. Quantum encryption might well be unbreakable by conventional computers (good for the defense, bad for the offense). But much of conventional computing and networks will be effectively clear text (bad for defense, good for offense).

I assume the NSA and all the other technically advanced powers are on the case, though I certainly don’t know exactly what is going on. We do know, for example, that there is a public effort in China to deploy quantum cryptography on a backbone network. Google has announced it has the technology. It is likely that high security nets have already got such technology, long before any public announcements. The future is already here.

Mark Kim writes this month in Quanta Magazine about these developments [3]. In particular, he discusses a paper by Daniel J. Bernstein and colleagues, which looks at “post-quantum RSA”, i.e., what happens to RSA encryption in a quantum computing world [1].

The thrust of this paper is a proposal: “RSA parameters can be adjusted so that all known quantum attack algorithms are infeasible while encryption and decryption remain feasible” ([1], p. 1). As they say, their ideas are “not what one would call lightweight cryptography”. The case they analyze involves a 1-terabyte key! This is expensive and awkward, but the point is that for cases that demand extreme measures (e.g., guarding root keys, critical backbones, and other vital secrets) there may be ways to protect against quantum decryption attacks, even with conventional computing.
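The arithmetic behind the terabyte-key idea can be sketched like this. The qubit and gate counts are rough textbook-style estimates for Shor’s algorithm (a few times n qubits, on the order of n cubed gates for an n-bit modulus), not figures taken from the paper:

```python
# Rough cost of running Shor's algorithm against an n-bit RSA modulus.
# Both formulas are order-of-magnitude estimates for illustration only.

def shor_cost(modulus_bits: int):
    qubits = 2 * modulus_bits       # roughly linear in the modulus size
    gates = modulus_bits ** 3       # naive cubic gate-count estimate
    return qubits, gates

terabyte_key_bits = 8 * 2 ** 40     # a 1-terabyte RSA modulus, in bits

qubits, gates = shor_cost(terabyte_key_bits)
print(f"~{qubits:.2e} qubits, ~{gates:.2e} gate operations")
```

Tens of trillions of qubits and ~10^38 gate operations: the key is not unbreakable in principle, just priced out of reach of any foreseeable quantum machine.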

This is a cool idea, assuming it bears out. If nothing else, it dilutes the aura of magical invincibility that surrounds quantum cryptography.

But these measures, and other possible approaches, don’t really solve the problem for the bulk of the Internet. It may soon be true that well-endowed actors, nation states and Googles, can crack any crypto they need to.

What Happens to Bitcoin, blockchains, and other Cryptocurrency?

These developments potentially have serious implications for cryptocurrencies and blockchains, all of which depend on cryptography and, equally important, cryptographically-secured systems.

I’m not sure exactly which parts of the Nakamotoan mechanisms might be affected by quantum computing; some might even be improved. But the big two to worry about are the hashing scheme (the basis of ‘mining’) and the ‘addresses’, which are cryptographic public keys. These mechanisms are secured by algorithms that depend on the speed and cost of computing, so a major disruption of speed could breach the entire basis for Bitcoin.

I don’t know if there are ways to subvert the hashing scheme with quantum computing, and I certainly don’t know what the cost/benefit analysis might be for any such scheme. Quantum computing is likely to be more expensive, so who knows when it is cost effective? (Note that the argument that “it’s too expensive to be reasonable” simply does not apply to state actors.)
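For the hashing side, the relevant quantum tool is Grover’s search, which gives a quadratic speedup on brute-force search. A quick sketch of what that does to effective security bits (this is generic arithmetic, not an analysis of Bitcoin’s actual mining function):

```python
# Grover's algorithm searches N possibilities in roughly sqrt(N) steps,
# so against a brute-force search it halves the effective security bits:
# sqrt(2**bits) == 2**(bits / 2) candidate evaluations.

def effective_bits_grover(bits: int) -> int:
    return bits // 2

for bits in (128, 160, 256):
    print(f"{bits}-bit search -> ~2^{effective_bits_grover(bits)} quantum steps")
```

A 256-bit hash still leaves ~2^128 quantum work, which is enormous; but mining targets are far easier than full preimages, so the margin for mining (and the cost/benefit question above) is much thinner.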

One potential problem: if it becomes feasible for some wealthy miners to field systems that are much, much faster, and thereby accumulate a large fraction of the total hashing power, that would be a very serious problem.

An even bigger problem is that governments and large companies will soon be able to crack public keys, and therefore probably will be able to mess with Bitcoin addresses. Yoiks! Unfriendlies not only reading your mail, but manipulating your Bitcoins and your “smart contracts”, too. Again, arguments about supposed economic and cost barriers don’t apply to state actors.

Worst of all, anyone actually using Bitcoin or a blockchain for any normal purpose (i.e., other than mining or currency exchange) relies on the general security of the network and nodes. Even if the blockchain, servers, and wallets aren’t cracked (which they will be), the network itself is likely to be insecure.

It’s hard to know what might happen, but if unfriendlies can insert man-in-the-middle attacks between nodes, then all bets are off. Anyone trying to actually use Bitcoin with a wallet and local connection would be vulnerable in any number of possible ways.

Game over.

Time’s Up For Cryptocurrencies?

The official Bitcoin wiki pages have a short note on “Quantum computing and Bitcoin”, which whistles past the graveyard. They suggest that there is a decade or more to do something, which is probably optimistic. But even this Pollyanna-ish page notes that there aren’t any solid solutions known at this time.

This isn’t great news, especially given Bitcoin’s dysfunctional governance system, which has been spinning its wheels for two years over much simpler technical issues. How in the world will the crypto community cope with the existential threat of QC?

Obviously, I’m far more concerned about the collapse of the whole Internet.

Perhaps Bitcoin and other cryptocurrencies might turn out to be canaries in the coal mine, keeling over just before the big explosion.

  1. Daniel J. Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta, Post-quantum RSA. Cryptology ePrint Archive: Report 2017/351, 2017.
  2. Bitcoin Foundation. Quantum computing and Bitcoin. 2016.
  3. Mark H. Kim, Why Quantum Computers Might Not Break Cryptography. Quanta Magazine, May 15, 2017.


Cryptocurrency Thursday

Freelancers Union: The App

In the early twenty-first century, there’s an app for everything. Indeed, some people seem to think that if you don’t have an app, you aren’t for real.

This week the Freelancers Union (I’m a proud member since 2015) released a new ‘app’. As their web page puts it, “Solidarity? There’s An App For That.” This isn’t my grandfather’s union, that’s for sure!

OK, I’m game. Let’s do some more close reading here.

First, let me be very clear. The Freelancers Union is doing important stuff, and I strongly support them. You can’t talk about the future of work without talking about the future of workers.

But that does not mean that I will not do a close reading of their narrative or their recent forays into digital products.

Looking At The App

Just what exactly does this ‘Solidarity Forever: The App’ actually do? Does it connect us to our brothers and sisters in the Union? Does it help recruit more members? Does it host digital rallies? Does it ping our elected representatives about legislation? Could there possibly be a playlist of inspiring songs? Dare I hope for live sing alongs with our comrades around the world?

Maybe in version 2.0.

The current version does only one thing: it connects you to legal advice. Sigh. Useful, I suppose, but not nearly as exciting as one could hope.

Your App Reveals Your Psyche

While I think this app misses an opportunity to show off FU as truly the new way of work (see below), it does reveal some facts about the FU and our members.

First of all, the fact that there is an app at all indicates the desire for conventional branding, especially the desire to be current. The Union isn’t real unless it’s got an app. Box checked.

Second, we find confirmation that the backbone of the union is in the ‘digital creatives’, especially in NYC. The release is accompanied by a social promotion campaign (standard fare for digital advertising), and the instructions simply say,

Post a photo of yourself holding up the app, with the caption “I stand with freelancers because [write your reason!].” #FreelancersUnionApp

It is obviously assumed that we know what “post” means, and think that posting selfies is a meaningful political act.

We also see clearly what is at the top of the worries for the union and the membership. The app does only one thing: it refers you to a lawyer. Glancing at the app, we see a list of the common categories of problem, and the number one suggested topic is  “nonpayment”.

The FU has been pushing its #FreelancingIsntFree campaign for more than a year, so we get the picture. The same bastards who hire temps instead of permanent employees also find it cost effective to not pay the temps.

Another glaring point is that, like much of the union’s activities, this offer is initially available only in NYC. The Union is open to everyone, even schlunks like me out in some cornfield, but they are effective on the ground only in a few cities, and mostly in NYC, where they are headquartered. I’m pretty sure that the union would like to spread the goodness everywhere, but it tends to be a perennial disappointment out here in the cornfields, where we can read about, but not really get, much real union action.

Anyway–see how much we can learn from close reading an app!

Let me try to be clear. There isn’t anything really wrong with this app, and I certainly support the FU and the purpose of this app.  The point is to see what the app really is, and think about what it could be.

Please let me go one more step and make some suggestions for version 2.

First of all, there could be a specialized social network, with union themed features. The network should be totally flat, because everyone is in one union. PMs should be limited to pings that say, “I got your back” (forget about “like”—we don’t have to “like” each other, just fight for each other :-)). The union might circulate petitions and calls to contact politicians.

Second, there could be solidarity themed ‘togetherness’ activities. Simple ways for the Union to organize flash crowds, marches, or picnics, where feasible.  Other activities might include walkabouts that alert you when union members are near (a la Look Up or even AR Pokemon).

In cases where we can’t meet in person, let’s have digital solidarity. Digital sing-alongs. Digital dance-alongs. Casual games.

One game I can think of is a simple trivia game to learn about the union and its members. Flash cards with simple (non-invasive) information, like where you are, what you do, and a tag. Remember the most Union members and be famous! High multipliers for locations outside NYC, and for statistically unusual tags (rare occupation, older worker, etc.)

If we want to go Augmented Reality, then we could make union badges that are AR markers. When you encounter someone with their badge on, point the app at her or him. Poof, they are surrounded by halos and unicorns! Or some other magic, magic that only happens when two union members are together in physical space.

The point is, if you make the app cool enough, people will want to join the union, just to get the app!  Let’s put the union in the lead of social technology.

Join the union.

Orchestrating Internet of Things Services

Zhenyu Wen and colleagues write in IEEE Internet Computing about “Fog Orchestration for Internet of Things Services” [1].

Don’t you think “Fog Orchestra” is a great name for a band?

After laughing at the unintentionally funny title, I felt obliged to read the article.

The basic topic is the “Internet of Things”, which comprises “sensors, devices, and compute resources within fog computing infrastructures” ([1], p. 16). As Arieff quipped, this might be called “The Internet of Too Many Things”.

Whether this is a distinct or new technology or architecture is debatable, but the current term of art, “fog computing”, is, for once, apt. It’s kind of like Cloud Computing, only more dispersed and less organized.

Wen and colleagues are interested in how to coordinate this decentralized fog, especially, how to get things done by combining lots of these little pieces of mist. Their approach is to create a virtual (i.e., imaginary) centralized control, and use it to indirectly control pieces of the fog. Basically, the fog and its challenges is hidden by their system, giving people and applications a simpler view and straight forward ways to make things happen. Ideally, this gives the best of both worlds, the flexibility and adaptability of fog, and the pragmatic usability of a monolithic application.
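As a toy illustration of that “simpler view”, here is a minimal facade that places service components onto fog nodes while hiding the scattered infrastructure from the caller. Every name here (FogNode, Orchestrator, deploy) is hypothetical, my own sketch rather than anything from the paper:

```python
# A toy "virtual centralized control": one object that hides the fog.

from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    capacity: int       # abstract resource units
    used: int = 0

class Orchestrator:
    def __init__(self, nodes):
        self.nodes = nodes
        self.placement = {}             # component name -> node name

    def deploy(self, component: str, demand: int) -> str:
        # Greedy placement: pick the node with the most free capacity.
        node = max(self.nodes, key=lambda n: n.capacity - n.used)
        if node.capacity - node.used < demand:
            raise RuntimeError("no node can host " + component)
        node.used += demand
        self.placement[component] = node.name
        return node.name

orch = Orchestrator([FogNode("gateway", 4), FogNode("edge-1", 10)])
print(orch.deploy("sensor-aggregator", 3))   # lands on the roomiest node
```

The caller asks for a deployment and never sees the individual nodes; that, in miniature, is the “best of both worlds” the authors are after.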

(Pedantic aside: almost anything that is called “virtual” something, such as “virtual memory” or a “virtual machine” or a “virtual private network”, is usually solving this general problem. The “virtual” something is creating a simpler, apparently centralized, view for programmers and people, a view that hides the messy complexity of the underlying system.

Pedantic aside aside: An exception to this rule is “Virtual Reality”, which is “virtual” in a totally different way.)

The authors summarize the key challenges, which include:

  1. scale and complexity
  2. security
  3. dynamicity
  4. fault detection and handling

This list is pretty much the list of engineering challenges for all computing systems, but solving them in “the fog” is especially challenging because it is loosely connected and decentralized. I.e., it’s so darn foggy.

On the other hand, the fog has some interesting properties. The components of the system can be sprinkled around wherever you want them, and interconnected in many ways. In fact, the configuration can change and adapt, to optimize or recover from problems. The trick, of course, is to be able to effectively use this flexibility.

The researchers refer to this process as “orchestration”, which uses feedback on performance to optimize placement and communication of components. They envision various forms of machine learning to automatically optimize the huge numbers of variables and to advise human operators. This isn’t trivial, because the system is running and the world is changing even as the optimization is computed.
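A minimal sketch of one such feedback step, assuming we can measure per-component latency. A real orchestrator would feed these measurements into a learned model rather than a fixed threshold, and would migrate without stopping the system:

```python
# One feedback step: move any component whose measured latency is too high.

def rebalance(placement, latencies, threshold_ms, spare_nodes):
    """Return a new placement, migrating over-threshold components to spares."""
    new_placement = dict(placement)
    spares = list(spare_nodes)
    for component, node in placement.items():
        if latencies.get(component, 0) > threshold_ms and spares:
            new_placement[component] = spares.pop(0)
    return new_placement

placement = {"aggregator": "edge-1", "filter": "gateway"}
latencies = {"aggregator": 250, "filter": 40}   # ms, pretend measurements
print(rebalance(placement, latencies, threshold_ms=100, spare_nodes=["edge-2"]))
```

The hard part the paper worries about is exactly what this sketch ignores: the measurements are stale by the time the new placement is computed.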

I note that this general approach has been applied to optimizing large scale systems for a long time. Designing networks and chips, optimizing large databases, and scheduling multiprocessors use these kinds of optimization. The “fog” brings the additional challenges of a leap in scale, and a need for continuous optimization of a running system.

This is a useful article, and has a great title!

  1. Zhenyu Wen, Renyu Yang, Peter Garraghan, Tao Lin, Jie Xu, and Michael Rovatsos, Fog Orchestration for Internet of Things Services. IEEE Internet Computing, 21 (2):16-24, 2017.

CuddleBits: Much More Than Meets The Eye

Paul Bucci and colleagues from the University of British Columbia report this month on CuddleBits, “simple 1-DOF robots” that “can express affect” [1]. As Evan Ackerman says, “build your own tribble!” (Why haven’t there been a zillion Tribble analogs on the market???)

This caught my eye just because they are cute. Then I looked at the paper presented this month at CHI [1]. Whoa! There’s a lot of interesting stuff here.

First of all, this is a minimalist, “how low can we go” challenge. Many social robots have focused on adding many, many degrees of freedom, for example, to simulate human facial expressions as faithfully as possible. This project goes the other way, trying to create social bonds with only one DOF.

“This seems plausible: humans have a powerful ability to anthropomorphize, easily constructing narratives and ascribing complex emotions to non-human entities.” (p. 3681)

In this case, the robot has programmable “breathing” motions (highly salient in emotional relationships among humans and other species). The challenge is, of course, that emotion is a multidimensional phenomenon, so how can different emotions be expressed with just breathing? And, assuming they can be created, will these patterns be “read” correctly by a human?
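As a sketch of what a 1-DOF “breathing” signal might look like: a sine wave with two knobs, rate and depth. Mapping fast/shallow to agitation and slow/deep to calm is my own illustrative reading of the general idea, not the paper’s actual parameterization:

```python
# A 1-DOF "breathing" waveform: one actuator position over time.

import math

def breathing(t: float, rate_hz: float, depth: float) -> float:
    """Actuator position in [0, depth] at time t (seconds)."""
    return depth * 0.5 * (1 + math.sin(2 * math.pi * rate_hz * t))

# Hypothetical emotion settings: slow and deep vs. fast and shallow.
calm = [breathing(t / 10, rate_hz=0.2, depth=1.0) for t in range(10)]
agitated = [breathing(t / 10, rate_hz=1.5, depth=0.3) for t in range(10)]
print(max(calm), max(agitated))
```

With only these two parameters, the design question in the paper becomes: which regions of (rate, depth) space do humans reliably “read” as which emotions?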

This is a great piece of work. They developed theoretical understanding of “relationships between robot behaviour control parameters, and robot-expressed emotion”, which makes possible a DIY “kit” for creating the robots – a theory of Tribbleology, and a factory for fabbing Tribbles!

I mark their grade card with the comment, “Shows mastery of subject”.

As already noted, the design is “naturalistic”, but not patterned after any specific animal. That said, the results are, of course, Tribbleoids, a fictional life form (with notorious psychological attraction).

The paper discusses their design methods and design patterns. They make it all sound so simple, “We iterated on mechanical form until satisfied with the prototypes’ tactility and expressive possibilities of movement.” This statement understates the immense skill of the designers to be able to quickly “iterate” these physical designs.

The team fiddled with design tools that were not originally intended for programming robots. The goal was to be able to generate patterns of “breathing”, basically sine waves, that could drive the robots. This isn’t the kind of motion needed for most robots, but it is what haptics and vocal mapping tools do.

Several studies were done to investigate the expressiveness of the robots, and how people perceived them. The results are complicated, and did not yield any completely clear cut design principles. This isn’t terribly surprising, considering the limited repertoire of the robots. Clearly, the ability to iterate is the key to creating satisfying robots. I don’t think there is going to be a general theory of emotion.

I have to say that the authors are extremely hung up on trying to represent human emotions in these simple robots. I guess that might be useful, but I’m not interested in that per se. I just want to create attractive robots that people like.

One of the interesting things to think about is the psychological process that assigns emotion to these inanimate objects at all. As they say, humans anthropomorphize, and create their own implicit story. It’s no wonder that limited and ambiguous behavior of the robots isn’t clearly read by the humans: they each have their own imaginary story, and there are lots of other factors.

For example, they noted that variables other than the mechanics and motion mattered. While people recognized the same general emotions, “we were much more inclined to baby a small FlexiBit over the larger one.” That is, the size of the robot elicited different behaviors from the humans, even with the same design and behavior from the robot.

The researchers are tempted to add more DOF, or perhaps “layer” several 1-DOF systems. This might be an interesting experiment to do, and it might lead to some kind of additive “behavior blocks”. Who knows?

Also, if you are adding one more “DOF”, I would suggest adding simple vocalizations, purring and squealing. This is not original; it is what was done in “The Trouble With Tribbles” (1967) [2].

  1. Paul Bucci, Xi Laura Cang, Anasazi Valair, David Marino, Lucia Tseng, Merel Jung, Jussi Rantala, Oliver S. Schneider, and Karon E. MacLean, Sketching CuddleBits: Coupled Prototyping of Body and Behaviour for an Affective Robot Pet, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 3681-3692.
  2. Joseph Pevney, The Trouble With Tribbles, in Star Trek. 1967.


Robot Wednesday

Close Reading Apps: Brilliantly Executed BS

One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not many millions—that do nothing at all, but look great. Some of them are flat out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are just flat out nonsense, in a pretty package. (I blame my own profession for creating such excellent software development environments.)

The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings.  Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?

Today’s examples are two apps that pretend to do social psychology: Crystal (“Become a better communicator”) and Knack (“for unlocking the world’s potential”).

[Read Whole Article]

Astronomy Leads The Way In Big Data

Jan Kremer and colleagues at the University of Copenhagen write in IEEE Intelligent Systems about “Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy” [1].

This article is a nice survey of the kinds of data that astronomers collect, and the challenges of analyzing, and, indeed, simply handling it all.

I have worked with Astronomers in the past, and one of the coolest things is that when they have a dataset that covers “everything”, they really mean everything—the entire Universe, at least as much as we can see from where we are. And it is so romantic. Every study deals with space and time, matter and energy, theory and observation. Astronomical data makes you feel tiny and insignificant. Yet we are part of this gigantic picture, and our brains are capable of learning so much about it.


Kremer and colleagues walk through many aspects of contemporary astronomical data. They describe the data (visible light and spectrographic measures), which are captured in detailed images of the sky. Billions of pixels recorded from signals that have travelled incomprehensible distances over inconceivable time spans, to intersect with us here and now.

No human could view all this information, nor make sense of it. The data is run through pipelines that use algorithms to clean up the data and look for “interesting” stuff. These days, the processing also automatically generates catalogs of objects in the image, i.e., tries to find everything interesting in the image. Of course, the details depend on the data source and what you are looking for—stars, galaxies, planets, asteroids, or many other possible targets.
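As a minimal illustration of the “find interesting stuff” step, here is the simplest possible source detector: threshold a toy image a few sigma above the background and report the bright pixels as candidates. Real pipelines add calibration, deblending, PSF fitting, and classification; this only shows the shape of the idea:

```python
# Toy source detection: flag pixels well above the background noise.

import statistics

def detect_sources(image, n_sigma=3.0):
    """Return (row, col) of pixels brighter than mean + n_sigma * stddev."""
    pixels = [p for row in image for p in row]
    mu = statistics.mean(pixels)
    sigma = statistics.pstdev(pixels)
    threshold = mu + n_sigma * sigma
    return [(r, c) for r, row in enumerate(image)
            for c, p in enumerate(row) if p > threshold]

sky = [[10, 11, 9, 10],
       [10, 95, 10, 11],   # one bright "star"
       [9, 10, 10, 10]]
print(detect_sources(sky))   # -> [(1, 1)]
```

Scale this from a 3x4 toy to billions of pixels per night, and the engineering problems the article describes start to appear.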

Over the years, astronomers have employed all kinds of image analysis, including machine learning techniques to automate these processes. In fact, many techniques pioneered by astronomers have been adopted for other uses. Astronomers have also pioneered the use of crowdsourced “citizen science” to aid the development and validation of these algorithms. Galaxy Zoo was one of the first and most successful such citizen science project, and has spawned dozens of clones.

In order to understand and answer questions about these massive datasets, astronomers have also pioneered statistical methods and search techniques. Kremer also discusses the difficult challenges of creating models that connect theory to the observational data. Much of astronomy is about trying to go from theoretical physics to “pixels in the image”, and vice versa.

Finally, they note that most of the data is openly available (though you really can’t download a copy, because it’s too freaking big). Most of the software is available, too. (This openness is possible largely because no one knows how to make money off astronomy, not even astronomers.) This means that there is opportunity for anyone to get into the game, to create new analyses, or to discover new science. Much of the data has hardly been studied at all, so who knows what you might be able to do?

In one sense, this article is nothing new. For centuries, Astronomy has led the development of instruments, data analysis, and theory. Looking out at the universe is both the hardest, and the most informative, scientific observations of all, and Astronomers are always working at the edge of what is technically possible.

In the past few years, there has been an accelerating trend to cut public funding for scientific research. The remaining funds are ever more tightly rationed, forcing hard choices, and difficult arguments about the relative benefits of different activities. Inevitably, there are strong pressures to reduce activities that have little obvious and direct benefit for people or important political interest groups.

One of the prime targets has been large-scale astrophysics, which requires expensive equipment and is, by definition, not about current life on Earth. It doesn’t even employ large numbers of people, at least once construction has finished. What good is it, except to satisfy the curiosity of a few eggheads?

This political picture is important to keep in mind when reading this article. The authors are responding to the question, “Why should we pay for these large investigations?”

In short, one reason to support Astronomy research is that this work can drive many data technologies that are increasingly important in many fields closer to home (and more profitable).

This is not the most romantic reason to do Astronomy, but it is a valid and important point.

  1. Jan Kremer, Kristoffer Stensbo-Smidt, Fabian Gieseke, Kim Steenstrup Pedersen, and Christian Igel, Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy. IEEE Intelligent Systems, 32 (2):16-22, 2017.


Space Saturday

MistForm Display

Reported this week at CHI, MistForm is “a shape changing fog display that can support one or two users interacting with either 2D or 3D content” ([1], p. 4383). Cool!

The basic idea of this kind of display is to generate a “fog” of water droplets in front of the person, and project information from the back. With clever geometry, the projection is seen by the eye as 3D objects hanging in mid air. The cool thing is that the user can reach into the fog to touch the objects hanging there.

This version from  Yutaka Tokuda and colleagues at University of Sussex, adds the wrinkle that the shape of the fog can be manipulated, to create a curved “screen” [1]. This calls for clever squared geometric computations, to account not only for the fog and the eye, but also for the curvature of the fog. The latter is computed from the position of the pipes that generate the mist.

The projection is, in principle, “mere geometry”. Working from the eye position (via head tracking), the color and brightness of each pixel is computed. Working backwards, the pixel is mapped to a region of the fog, and then back to the projector. Voila.
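Stripped down to a single flat fog sheet (MistForm’s actual surface is curved and its shape is tracked, which is where their harder math lives), the per-pixel geometry is just a ray-plane intersection: to make a virtual point appear at P, light the spot where the line from the viewer’s eye through P crosses the fog. The coordinates below are made-up illustrative values:

```python
# Where should the projector light the fog so a virtual point appears at
# `target` from the viewer's eye position?  Fog modeled as the plane z = z_fog.

def fog_intersection(eye, target, z_fog):
    """Point where the eye->target ray crosses the plane z = z_fog."""
    ex, ey, ez = eye
    tx, ty, tz = target
    s = (z_fog - ez) / (tz - ez)    # parametric distance along the ray
    return (ex + s * (tx - ex), ey + s * (ty - ey), z_fog)

eye = (0.0, 1.6, 0.0)               # viewer's tracked eye, ~1.6 m high
virtual_point = (0.5, 1.2, 2.0)     # where the object should appear to float
print(fog_intersection(eye, virtual_point, z_fog=1.0))
```

Run this per pixel, per tracked eye, per frame, and then substitute a curved, moving fog surface for the flat plane, and you have the computation the paper describes.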

Interacting with the display uses hand tracking with a Kinect. The fog is segmented into regions that can be touched (“actuators”). This is coordinated with the projected objects, so the user can reach into the fog and “touch” an object in a natural motion.


This is a very nice piece of work indeed. The paper [1] gives lots of details.

This is a great example of the potential of projective interfaces, which will replace the ubiquitous screen in the coming decade or two. (If you have any doubts, take a gander at this wizardry from some Illinois alums.)

Of course, the mountain we have to climb is to make one big enough and clever enough that we can walk into it. This will also combine with haptics so the objects ‘push back’ when you touch them. Now that will be cool.

  1. Yutaka Tokuda, Mohd Adili Norasikin, Sriram Subramanian, and Diego Martinez Plasencia, MistForm: Adaptive Shape Changing Fog Screens, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 4383-4395.
  2. University of Sussex. MistForm: adaptive shape changing fog screens 2017,

PS. Wouldn’t “Shape Changing Fog Screen” be a great name for a band?
Or how about,  “The Fog and the Eye“.