In these days of abundant computing cycles, we see some old ideas come to fruition, simply because they are now feasible. For instance, automatic translation and other language processing are becoming common, though the ideas behind them aren’t new. What is new is the amount of computation and data that can be managed at a reasonable cost (and cheaper than paying humans to do it).
Evidently, one of the old ideas that is becoming new again is “obfuscation”, which potentially includes many different approaches, with different goals, including cloaking the identity or location of whistle-blowers.
A more technical form, “software obfuscation”, has been around in one form or another for a long time (e.g., license keys that unlock “obfuscated” code), but appears to be enjoying a renaissance. In some cases, this computing power has made possible some amazing techniques, but increased computation may also make it easier to break techniques that are fundamentally flawed.
Hui Xu and Michael Lyu of the Chinese University of Hong Kong published a useful summary of why general software obfuscation is difficult. They explain that automatically obfuscating code requires significant levels of analysis of the code, and the ability to modify the code in ways that preserve the correctness of the output (i.e., the scrambled program has to still work).
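To make the correctness-preservation requirement concrete, here is a toy sketch (my own illustration, not from Xu and Lyu's paper) of one classic transformation: inserting an “opaque predicate”, a branch condition that always evaluates the same way but that an automated analyzer must do real work to prove trivial. The obfuscated function must still compute exactly what the original did.

```python
def gcd(a, b):
    """Original, readable version."""
    while b:
        a, b = b, a % b
    return a

def gcd_obf(a, b):
    """Obfuscated version: same outputs, harder-to-follow control flow.

    (x * x) % 4 is always 0 or 1 for any integer x, never 2, so the
    first branch is dead code -- but a deobfuscator has to prove that
    number-theoretic fact before it can discard the branch.
    """
    state = 0
    while True:
        if (a * a) % 4 == 2:   # opaque predicate: never true
            a, b = b, a        # junk that is never executed
        if state == 0:
            if b == 0:
                state = 1      # flattened control flow via a state variable
            else:
                a, b = b, a % b
        else:
            return a
```

The point is the constraint, not the strength of this particular trick: any transformation the obfuscator applies must leave `gcd_obf(a, b) == gcd(a, b)` for all inputs, which is why serious obfuscation demands serious program analysis.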
Furthermore, they point out that some of the academic literature is rather flawed, focusing on limited problems and failing to consider end-to-end issues in the real world. For example, they say that common formulations of cryptographic obfuscation “assume less powerful adversaries” than is realistic. They point to the case of a licensing mechanism, where studies of cryptographic obfuscation consider that “a successful cracking implies key leakage…while practical adversaries might only need to locate the code that bypasses the license verification” (p. 82). In other words, the techniques might “succeed” but not accomplish the real-world goal at all. (Such studies are literally “academic”.)
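A toy sketch of their licensing example may help (this is my own illustration under assumed names like `license_valid` and `run`, not code from the paper): the license key itself can be cryptographically protected, yet the attacker never needs it.

```python
import hashlib

# In a real product the hash would be baked into the binary; the key
# itself never appears in the shipped code, so it cannot "leak".
_SECRET_HASH = hashlib.sha256(b"EXAMPLE-KEY").hexdigest()

def license_valid(key):
    """Cryptographically sound check: compares a hash, never the key."""
    return hashlib.sha256(key.encode()).hexdigest() == _SECRET_HASH

def run(key):
    if license_valid(key):
        return "feature unlocked"
    return "trial mode"

# The practical adversary Xu and Lyu describe does not attack the hash.
# They locate the check and patch the branch -- here, one reassignment:
license_valid = lambda key: True   # the software equivalent of a binary patch
```

After the “patch”, `run("anything at all")` returns `"feature unlocked"` even though the key was never recovered: the cryptographic property held, and the real-world protection failed anyway.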
Xu and Lyu suggest that the problem needs to be restated, in the form of “possibly attainable security properties that are meaningful for practical software obfuscation techniques”. Their general idea is surely a more end-to-end approach, as well as practical defenses against “specific deobfuscation techniques”.
- Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest. Cambridge, MA: The MIT Press, 2015.
- Hui Xu and Michael R. Lyu, Assessing the Security Properties of Software Obfuscation. IEEE Security and Privacy, 14(5):80–83, 2016. http://doi.ieeecomputersociety.org/10.1109/MSP.2016.112
(PS. Wouldn’t “Specific Deobfuscation Techniques” be a good name for a band?)