This month Kevin Fu and Wenyuan Xu give me yet more sphincter-tightening reasons to worry about these systems: transduction attacks.
Much of the IoT is built of small devices equipped with sensors of many kinds. These clever little devices are transducers, converting physical phenomena (light, vibration, heat, motion, etc.) into electrical signals and ultimately digital numbers. Anyone who has explored sensor technology knows how awesome this magic can be.
Any input device can be attacked with bogus input, and any module's output can be subverted by interception or tampering. Conventional security measures are designed to protect the integrity of these inputs and outputs (e.g., with cryptography and error detection).
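To make the contrast concrete, here is a minimal Python sketch of that conventional layer (the key and reading format are invented for illustration): an HMAC can vouch for the bits on the wire, but not for the physics behind them.

```python
# Sketch of the conventional defense: protect a sensor reading in transit
# with a message authentication code. This guards the digital path only;
# it says nothing about whether the physical measurement itself was honest.
import hmac
import hashlib

SECRET_KEY = b"shared-device-key"  # hypothetical pre-shared key

def sign_reading(reading: bytes) -> bytes:
    """Attach an HMAC tag so tampering in transit is detectable."""
    return hmac.new(SECRET_KEY, reading, hashlib.sha256).digest()

def verify_reading(reading: bytes, tag: bytes) -> bool:
    """Constant-time check that the reading was not altered after signing."""
    expected = hmac.new(SECRET_KEY, reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b"accel_z=9.81"
tag = sign_reading(reading)
assert verify_reading(reading, tag)               # intact reading passes
assert not verify_reading(b"accel_z=0.00", tag)   # tampered reading fails
```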
But physical sensors also face a third kind of attack: a transduction attack, which “exploits a vulnerability in the physics of a sensor to manipulate its output or induce intentional errors” (p. 20). Yoiks! If I can’t trust my sensors, the whole system is in peril.
F&X say this is a huge problem because most commercial systems to date are not designed with this threat in mind.
“Billions of deployed sensors lack designed-in protections against intentional physical manipulation” (p. 20)
For instance, most consumer-grade voice recognition is vulnerable to DolphinAttack: ultrasound signals that fool the system into executing commands inaudible to humans. This is possible because the microphone can detect ultrasound (which it arguably doesn’t need to) and the recognition algorithms can be fooled.
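To see the trick, consider a toy numpy sketch of the modulation at the heart of DolphinAttack (the frequencies and the square-law microphone model are simplifications of mine, not numbers from the paper): the attacker amplitude-modulates a command onto an ultrasonic carrier, and the microphone's own nonlinearity demodulates it back into the audible band.

```python
# Illustrative sketch of the DolphinAttack idea: amplitude-modulate a "voice
# command" onto an ultrasonic carrier. Humans can't hear the carrier, but a
# nonlinearity in the microphone recreates the audible envelope.
import numpy as np

fs = 192_000                      # sample rate high enough to model ultrasound
t = np.arange(0, 1.0, 1 / fs)     # one second of signal

command = np.sin(2 * np.pi * 400 * t)        # stand-in for a spoken command
carrier = np.sin(2 * np.pi * 30_000 * t)     # 30 kHz: above human hearing

# Classic AM: shift the baseband "voice" up around the ultrasonic carrier.
transmitted = (1 + 0.8 * command) * carrier

# A nonlinear microphone response (modeled here as a simple square-law term)
# produces a component back at the original 400 Hz, which the recognizer "hears".
received = transmitted + 0.1 * transmitted ** 2
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
print(f"energy near 400 Hz: {spectrum[np.argmin(np.abs(freqs - 400))]:.1f}")
```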
Hacking my phone is annoying enough, but it is far more dangerous when an airliner, an autonomous car, or a hospital is hacked this way. Yoiks!
“Protecting against transduction attacks is difficult because the consequences arise as software symptoms, but the risks begin in the physics of hardware.” (p. 21)
What can be done?
Obviously, relying on software testing isn’t sufficient, nor is testing individual components in isolation. The overall system needs to be designed to check the outputs of its sensors, on the assumption that they may be untrustworthy. (This will be painful and add cost and complexity.)
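A minimal sketch of what that system-level skepticism might look like, with thresholds invented purely for illustration: never act on a single reading; demand physical plausibility and agreement among redundant sensors.

```python
# Sketch of a system-level defense: require redundant sensors to agree and
# the fused value to be physically plausible. Thresholds are hypothetical.
from statistics import median

MAX_PLAUSIBLE_ACCEL = 40.0   # m/s^2; beyond this, suspect injection or fault
MAX_DISAGREEMENT = 2.0       # m/s^2; redundant sensors should roughly agree

def fused_reading(readings: list[float]) -> float | None:
    """Return a trusted value, or None if the sensors can't be trusted."""
    if any(abs(r) > MAX_PLAUSIBLE_ACCEL for r in readings):
        return None                  # physically implausible: reject
    if max(readings) - min(readings) > MAX_DISAGREEMENT:
        return None                  # sensors disagree: possible attack
    return median(readings)          # robust to a single bad sensor

print(fused_reading([9.7, 9.8, 9.9]))    # ~9.8: plausible and consistent
print(fused_reading([9.8, 9.8, 55.0]))   # None: one sensor is out of bounds
```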
F&X note that many of these attacks exploit the resonance of physically connected components. E.g., attacks on motion sensors via sounds played through the speaker work because the components are mechanically coupled, so the vibration of the speaker can fiddle the accelerometer.
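One plausible software-side check this observation suggests, sketched below with a hypothetical resonant frequency and threshold: watch the accelerometer's spectrum for outsized energy near the part's known resonance, since genuine motion lives at much lower frequencies.

```python
# Sketch of a resonance monitor: flag sample windows where energy at the
# MEMS part's resonant frequency dwarfs the normal low-frequency motion band.
# The resonant frequency, band width, and ratio threshold are hypothetical.
import numpy as np

RESONANT_HZ = 5_000       # assumed resonance of the MEMS part (datasheet value)
BAND_HZ = 100             # width of the band to watch around resonance
RATIO_THRESHOLD = 10.0    # resonant energy vs. typical motion-band energy

def acoustic_injection_suspected(samples: np.ndarray, fs: float) -> bool:
    """Flag windows where resonant-band energy dwarfs the motion band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    resonant = spectrum[np.abs(freqs - RESONANT_HZ) < BAND_HZ].sum()
    motion = spectrum[freqs < 50].sum() + 1e-12   # real motion is low-frequency
    return resonant / motion > RATIO_THRESHOLD
```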
They note a recent CERT advisory that recommended ways to mount the component in order to minimize physical coupling and control vibrations in critical ranges (i.e., the resonant frequencies of nearby components).
F&X also note that this problem calls for broader, interdisciplinary design expertise. Computer Scientists don’t necessarily know the physics and mechanical engineering needed, and vice versa. This teamwork is particularly important for systems such as autonomous vehicles, which need to be trustworthy.
F&X refer to this as “back to basics” for the Computer Science curriculum. It is so easy to focus on all the power of software, and simply assume that the hardware black-box just “works”.
“students may seek comfort hiding behind a beautiful Java facade rather than facing the ugly limitations of computing machinery.” (p. 23)
- Kevin Fu and Wenyuan Xu. 2018. “Risks of trusting the physics of sensors.” Communications of the ACM 61(2): 20–23. https://dl.acm.org/citation.cfm?id=3176402