Earlier I commented on Brunton and Nissenbaum’s valuable book, Obfuscation. Their book gives a very sensible and pragmatic explanation of why this topic has become interesting, and of current technologies that may deliver some kinds of “protection” in some circumstances. Recent research is developing additional technologies that, if successful, might be used for some of the purposes described by B&N.
Caveat: this stuff makes my head spin. I can say for sure that my understanding of this research is quite shallow. But it is really important to try to keep on top of developments, which are crucial to the survival of the Internet.
Speaking of obfuscation, Koch, Golling, and Rodosek consider “How Anonymous Is the Tor Network?”. The Tor network seeks to obscure the link between a client’s IP address and the server it contacts. This is done by “onion routing”, which applies layers of encryption and passes the message through intermediate nodes, each of which can do no more than peel off one layer and relay the message to the next hop. The idea is that when the message emerges from the network, it is difficult to know where it started from. This allows the user to “anonymously” connect to Internet services, i.e., without revealing their IP address.
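The layering idea can be sketched in a few lines. This is a toy illustration only: a throwaway XOR “cipher” stands in for the real per-hop encryption Tor uses, and the function names are my own.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function "encrypts" and "decrypts".
    # (NOT real cryptography -- it only illustrates the layering.)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, hop_keys: list[bytes]) -> bytes:
    # The client wraps the message once per hop, innermost layer first,
    # so the first relay peels the outermost layer.
    onion = message
    for key in reversed(hop_keys):
        onion = xor_cipher(onion, key)
    return onion

def route(onion: bytes, hop_keys: list[bytes]) -> bytes:
    # Each relay peels exactly one layer with its own key and passes the
    # result along; no single relay sees both the source and the plaintext.
    for key in hop_keys:
        onion = xor_cipher(onion, key)
    return onion

keys = [b"guard", b"middle", b"exit"]
wrapped = build_onion(b"GET example.com", keys)
assert route(wrapped, keys) == b"GET example.com"
```

The point of the structure is that the first relay sees who sent the message but only ciphertext, while the exit relay sees the plaintext but not who sent it.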
Koch et al. examine how the Tor network actually works. Specifically, they examine the way that exit nodes are selected and used in real life, which is far from random. Worse, in practice, Tor may well use only a handful of exit nodes for a session. If I understand correctly, this means that, contrary to the intention of the protocol, it is possible to monitor the session at these exit nodes.
Moving beyond Tor, Feigenbaum and Ford discuss “Seeking anonymity in an internet panopticon”, describing their “Dissent” project (http://dedis.cs.yale.edu/dissent/), which takes a collective approach to anonymity. The idea is similar to Tor, adding a “shuffling” process that mixes and randomizes messages from several sources. The goal is to preserve the anonymity of the sender, who is effectively lost in the group of senders.
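The batching-and-shuffling idea, stripped of all the cryptography that makes Dissent’s version verifiable, amounts to something like this sketch (my own toy code, not the project’s protocol):

```python
import random

def mix(messages: list[str], rng: random.Random) -> list[str]:
    # Collect a batch of messages (ideally one per sender), then emit
    # them in random order. An observer of the output sees the full set
    # of messages but cannot link any one of them back to its sender.
    batch = list(messages)
    rng.shuffle(batch)
    return batch

senders = {"alice": "msg-A", "bob": "msg-B", "carol": "msg-C"}
print(mix(list(senders.values()), random.Random()))
```

The hard part, which Dissent actually addresses, is making the shuffle trustworthy when the shufflers themselves might be malicious; a plain `random.shuffle` obviously does not give that.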
They also explore “Dining Cryptographers nets”, which use the classic Dining Cryptographers algorithm to create proofs that do not reveal who said what. I admit that I don’t really grok all of the technical details, at least not without a lot more work than I’m willing to put in right now.
It is important to note that this concept is not a universal solution. For one thing, it is hardly surprising that these processes add latency to messaging: minutes or hours! There Still Ain’t Any Such Thing As A Free Lunch. (By the way, the reason the Internet is so insecure in general is not that we didn’t know how to do security twenty-five years ago, but that we had to throw away security to get it to work fast enough to be useful.)
In another recent issue of CACM, Barak wrote a rather comprehensive (and difficult) survey of obfuscation research, not to be mistaken for the broader, pragmatic notion of “obfuscation” discussed by Brunton and Nissenbaum.
What these researchers are talking about are possible techniques for creating “obscured” versions of programs that produce the same result as the original, but reveal no information about the code, or reveal only the information desired and nothing else. If this can be done, it would be useful for encryption, but also for uses such as smart contracts (which don’t reveal your intentions) or distributing software patches (without revealing the bugs being patched).
But this concept is mind-bendingly difficult to actually implement. Indeed, in 2001 Barak and colleagues “proved” that it is theoretically impossible. The current article is fascinating because it has been recognized that a different definition of the goal (“indistinguishability obfuscation”) may be possible and would certainly be useful. The other interesting point is that, at the time of writing, no one knows whether the scheme is actually secure, under any assumptions. While this technology might be a “promised land” for applications, it could be a “house of cards”, even if “the ideas behind these constructions seem too beautiful and profound for that to be the case” (p. 96).
(The same issue of CACM includes a retrospective by Andy Tanenbaum (“we are not worthy!”), about “Lessons Learned from 30 Years of MINIX” . Giants still walk among us! )
- Boaz Barak, Hopes, fears, and software obfuscation. Communications of the ACM, 59 (3):88-96, 2016.
- Finn Brunton, and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, Cambridge, The MIT Press, 2015.
- Joan Feigenbaum and Bryan Ford, Seeking anonymity in an internet panopticon. Communications of the ACM, 58 (10):58-69, 2015.
- R. Koch, M. Golling, and G. D. Rodosek, How Anonymous Is the Tor Network? A Long-Term Black-Box Investigation. Computer, 49 (3):42-49, 2016.
- Andrew S. Tanenbaum, Lessons learned from 30 years of MINIX. Communications of the ACM, 59 (3):70-78, 2016.