Yet More Stealthwear

In past posts I have commented on misguided efforts to create “stealthwear”: garments and accessories intended to defeat ubiquitous surveillance. These concepts use Faraday cages, bright lights, and camouflage to block radio detection and to prevent cameras from visually identifying the wearer (here, here, here). My own view is that these adornments are more of a fashion statement than any useful technical solution. (Unfortunately, the “statement” is often, “I have no idea how computer surveillance works.”)

This fall, researchers from Carnegie Mellon demonstrated yet another concept: glasses that subvert face recognition software [2]. The basic idea is that relatively minor perturbations of the input can confuse the neural nets trained to identify faces. The researchers work out a way to create patterns, printed on the glasses frames, that are, essentially, misleading pixels for the algorithm.

In their demo, they not only confuse the system, they can specify what the output should be; i.e., they enable the wearer to impersonate someone else.
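To see the flavor of the attack, here is a minimal sketch (my own, not the authors' code) of a targeted, mask-restricted adversarial perturbation in PyTorch. The paper's actual optimization also enforces smoothness and printability so the pattern survives being printed on physical frames; this stripped-down version just shows the core gradient trick. The model, mask, and target label are all placeholder assumptions.

```python
# A minimal sketch of a targeted adversarial perturbation confined to a
# glasses-frame mask. Not the authors' method: their optimization adds
# smoothness and printability terms. Model and inputs are placeholders.
import torch
import torch.nn.functional as F

def targeted_glasses_attack(model, image, mask, target_class,
                            steps=100, lr=0.05):
    """Gradient descent on the target-class loss, only inside the mask.

    image: (1, 3, H, W) face photo with values in [0, 1]
    mask:  (1, 1, H, W) binary mask covering the glasses frames
    """
    perturb = torch.zeros_like(image, requires_grad=True)
    target = torch.tensor([target_class])
    for _ in range(steps):
        adv = torch.clamp(image + perturb * mask, 0.0, 1.0)
        loss = F.cross_entropy(model(adv), target)  # low loss = "I am the target"
        loss.backward()
        with torch.no_grad():
            perturb -= lr * perturb.grad.sign()  # signed-gradient step
            perturb.grad.zero_()
    return torch.clamp(image + perturb.detach() * mask, 0.0, 1.0)
```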

The technique is realized by printing the pattern onto glasses frames that look more or less ordinary. The researchers imagine that this will fool human observers, who will not suspect that those stupid glasses are messing with their security software.

The spitting image of Milla… (image: Mahmood Sharif)

Reading the paper [2], I can see that this is pretty clever work. The fact that it works at all is amazing, actually!

But I have to doubt the practical value of this concept.

First, it is far from clear to me how this works against different flavors of neural nets. They have defeated “commercial” systems, which are cheap and not designed to deal with this attack (at least not yet). I suspect that fancier systems might be less easy to fool. (In fact, I’d bet on it.)

Second, defeating the algorithm only gets you so far. In low-end commercial systems, the output of the algorithm might be used by itself, but in many cases there is more than one source of data. For instance, you might have to give a fingerprint, or even show an ID. Or they might be tracking your phone. Successful evasion will require fooling whatever array of checks is in use, and it can be hard even to know what is out there.
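To make that concrete, here is a hedged sketch of such a layered check. Every function in it is a hypothetical placeholder, not a real API; the point is only that passing the face check alone gains you nothing.

```python
# Sketch of layered verification: the face recognizer (the thing the glasses
# attack targets) is just one gate among several. All checker functions here
# are hypothetical placeholders.
def admit_visitor(photo, fingerprint, badge):
    face_ok = face_match(photo)          # the check the adversarial glasses fool
    second_ok = fingerprint_match(fingerprint) or id_verify(badge)
    return face_ok and second_ok         # both layers must agree
```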

Third, the “physically realizable” glasses are supposed to be “unobtrusive”, i.e., humans won’t notice them. Well, obviously that depends; the examples shown are certainly very noticeable. In fact, they probably attract attention, which isn’t necessarily what you want while trying to slip under the radar. In any case, the whole thing will crash to the ground as soon as the guard makes you take off your glasses to be scanned. Oops.

If I wanted to defeat this defense I would probably deploy more than one neural net, and correlate the results. For that matter, I might rotate new neural nets in at random times. It will be hard to create glasses that fool every program, all the time.
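A back-of-the-envelope sketch of that correlation defense, with hypothetical models (and a hypothetical predict_identity method) standing in for real recognizers:

```python
# Sketch of the ensemble countermeasure: accept an identification only when
# several independently trained recognizers agree. Models and their
# predict_identity() method are hypothetical stand-ins.
def identify_with_ensemble(models, image, min_agreement=0.8):
    votes = [m.predict_identity(image) for m in models]  # one label per net
    top = max(set(votes), key=votes.count)               # plurality winner
    if votes.count(top) / len(votes) >= min_agreement:
        return top
    return None  # disagreement: escalate to a human rather than trust one net
```

Glasses optimized against one network are unlikely to transfer to every member of the ensemble, and rotating fresh networks in at random times makes the target a moving one.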

I might also make a very simple “glasses detector”, and use it to do “recognition with your glasses off”. Worst case, I’ll record, track, and learn the “you wearing funny glasses” image, which eventually will be correctly tagged as you. After that, your special eyewear only makes it easier to ID you.
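In pseudocode, that countermeasure is about this simple; detect_glasses() and both recognizer models are hypothetical placeholders:

```python
# Sketch of the glasses-detector countermeasure: route eyewear images to a
# recognizer trained on occluded (or previously flagged) faces. The detector
# and both models are hypothetical placeholders.
def identify(image):
    if detect_glasses(image):
        # Ironically, a distinctive adversarial pattern becomes a strong
        # identifying feature once it has been tagged to a person.
        return occluded_face_model.predict_identity(image)
    return bare_face_model.predict_identity(image)
```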

And, finally, human guards will be trained to look out for funny glasses (or any glasses; they might be spywear anyway). We may not be sure who you are, but we know you are probably up to no good, so we’ll flag you just for wearing the glasses.

The long and the short of it is that this technique works against a single, static (i.e., unchanging) algorithm, and only if that algorithm is naively unaware of this type of attack. It is easily defeated by several obvious and inexpensive countermeasures.

Nevertheless, this is a very nice piece of work, and it illustrates just how fragile neural nets can be if they are naively used and trusted. As an exercise, we might think about this kind of obfuscating “attack” on other pattern recognition algorithms.


  1. Revell, Timothy, “Glasses make face recognition tech think you’re Milla Jovovich,” New Scientist, 2016. https://www.newscientist.com/article/2111041-glasses-make-face-recognition-tech-think-youre-milla-jovovich/
  2. Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter, “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, Vienna, Austria, 2016, pp. 1528–1540. http://dl.acm.org/citation.cfm?doid=2976749.2978392
