Category Archives: Internet of Things

The Social Psychology of IoT: Totally Not Implemented Yet

Murray Goulden and colleagues write some interesting thoughts about the Internet of Things combined with ubiquitous mobile devices, specifically, “smart home” applications which can observe the user’s own behavior in great detail. In particular, they point out that these technologies generate vast amounts of interpersonal data, data about groups of people. Current systems do not manage and protect individual personal data especially well, but they don’t have any provisions at all for dealing with interpersonal data.

smart home technologies excel at creating data that doesn’t fit into the neat, personalised boxes offered by consumer technologies. This interpersonal data concerns groups, not individuals, and smart technologies are currently very stupid when it comes to managing it.

The researchers discuss social psychological theory that examines the way that groups have social boundaries and ways to deal with breaching the boundaries. For example, a family in their home may have conversations that they would never have anywhere else, nor when any outsider is present.

This isn’t a matter of each individual managing his own data (even if the data is available to manage), but understanding that there is a social situation that has different rules than other social situations, rules which apply to all the individuals.

In-home systems have no understanding of such rules or what to do about them, nor are there any means for humans to manage what is observed.

Their paper makes the interesting point that this stems from the basic architecture of these in-home systems:

The logic of this project – directing information, and so agency, from the outer edges of the network towards the core – is one of centralisation. The algorithms may run locally, but the agency invested in them originates elsewhere in the efforts of the software engineers who designed them. ([3], p.2)

In short, the arrogant engineers and business managers don’t even understand the magnitude of their ignorance.

I have remarked that many products of Silicon Valley are designed to solve the problems that app developers understand and care about. The first apps were pizza ordering services, music downloads, and dating services. There are endless variations on these themes, and they are all set in the social world of a young, single, worker (with disposable income).

For more than two decades, “smart home” systems have been designed as robot butlers that adjust the “settings” to the “user’s preferences”. I have frequently questioned how these systems work when there is more than one user, i.e., when two or more people live together. The lights can’t be perfectly adjusted to everyone, only one “soundtrack” can play at a time, etc. No one has an answer; the question isn’t even considered.

I will say again that no one with any experience or common sense would ever put a voice activated, internet connected device in a house with children, let alone a system that is happy to just buy things if you tell it to. Setting aside the mischief kids will do with such capabilities, what sort of moral lesson are you teaching a young child when the house seems to respond instantly to whatever they command?

Goulden doesn’t seem to have any solutions in mind. He does suggest that there need to be ways for groups of people to “negotiate” the rules of what should be observed and revealed. This requires that the systems be transparent enough that we know what is being observed, and that there be ways to control the behavior.

These issues have been known and studied for many years (for instance, take a gander at research from the old Georgia Tech “Aware Home” project from the 1990s, e.g., [1]), but the startup crowd doesn’t know or care about academic research. Who has time to check out the relevant literature?

Goulden points out that if these technologies are really obnoxious, then people will reject them. And, given that many of the “features” are hardly needed, people won’t find it hard to turn them off.

Their current approach – to ride roughshod over the social terrain of the home – is not a sustainable approach. Unless and until the day we have AI systems capable of comprehending human social worlds, it may be that the smart home promised to us ends up being a lot more limited than its backers imagine.

  1. Anind K. Dey and Gregory D. Abowd, Toward a Better Understanding of Context and Context-Awareness. GIT GVU Technical Report GIT-GVU-99-22, 1999.
  2. Murray Goulden, Your smart home is trying to reprogram you in The Conversation. 2017.
  3. Murray Goulden, Peter Tolmie, Richard Mortier, Tom Lodge, Anna-Kaisa Pietilainen, and Renata Teixeira, Living with interpersonal data: Observability and accountability in the age of pervasive ICT. New Media & Society: 1461444817700154, 2017.


Orchestrating Internet of Things Services

Zhenyu Wen and colleagues write in IEEE Internet Computing about “Fog Orchestration for Internet of Things Services” [1].

Don’t you think “Fog Orchestra” is a great name for a band?

After laughing at the unintentionally funny title, I felt obliged to read the article.

The basic topic is the “Internet of Things”, which comprises “sensors, devices, and compute resources within fog computing infrastructures” ([1], p. 16). As Arieff quipped, this might be called “The Internet of Too Many Things”.

Whether this is a distinct or new technology or architecture is debatable, but the current term of art, “fog computing”, is, for once, apt. It’s kind of like Cloud Computing, only more dispersed and less organized.

Wen and colleagues are interested in how to coordinate this decentralized fog, especially how to get things done by combining lots of these little pieces of mist. Their approach is to create a virtual (i.e., imaginary) centralized control, and use it to indirectly control pieces of the fog. Basically, the fog and its challenges are hidden by their system, giving people and applications a simpler view and straightforward ways to make things happen. Ideally, this gives the best of both worlds: the flexibility and adaptability of fog, and the pragmatic usability of a monolithic application.
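The general idea can be sketched in a few lines. This is purely illustrative, not the paper’s system: the node names, capacities, and the least-loaded placement policy below are all invented. The point is only that callers see one centralized deploy() call, while the services actually land on whatever fog node the policy picks.

```python
class FogNode:
    """One of many small, dispersed compute resources in the fog."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.services = []

    def load(self):
        return len(self.services) / self.capacity


class Orchestrator:
    """A 'virtual' centralized control point: placement decisions are
    made here, even though the services run out on the fog nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def deploy(self, service):
        # simplest possible policy: the least-loaded node wins
        target = min(self.nodes, key=FogNode.load)
        target.services.append(service)
        return target.name


fog = Orchestrator([FogNode("gateway-1", 2), FogNode("gateway-2", 4)])
placements = [fog.deploy(s) for s in ["sense", "filter", "aggregate"]]
print(placements)  # ['gateway-1', 'gateway-2', 'gateway-2']
```

A real orchestrator would of course weigh latency, security, and fault tolerance rather than a single load number, but the shape is the same: one simple interface hiding a messy, distributed placement problem.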

(Pedantic aside: almost anything that is called “virtual” something, such as “virtual memory” or a “virtual machine” or a “virtual private network”, is usually solving this general problem. The “virtual” something is creating a simpler, apparently centralized, view for programmers and people, a view that hides the messy complexity of the underlying system.

Pedantic aside aside: An exception to this rule is “Virtual Reality”, which is “virtual” in a totally different way.)

The authors summarize the key challenges, which include:

  1. scale and complexity
  2. security
  3. dynamicity
  4. fault detection and handling

This list is pretty much the list of engineering challenges for all computing systems, but solving them in “the fog” is especially challenging because it is loosely connected and decentralized. I.e., it’s so darn foggy.

On the other hand, the fog has some interesting properties. The components of the system can be sprinkled around wherever you want them, and interconnected in many ways. In fact, the configuration can change and adapt, to optimize or recover from problems. The trick, of course, is to be able to effectively use this flexibility.

The researchers refer to this process as “orchestration”, which uses feedback on performance to optimize placement and communication of components. They envision various forms of machine learning to automatically optimize the huge numbers of variables and to advise human operators. This isn’t trivial, because the system is running and the world is changing even as the optimization is computed.

I note that this general approach has been applied to optimizing large scale systems for a long time. Designing networks and chips, optimizing large databases, and scheduling multiprocessors use these kinds of optimization. The “fog” brings the additional challenges of a leap in scale, and a need for continuous optimization of a running system.

This is a useful article, and has a great title!

  1. Zhenyu Wen, Renyu Yang, Peter Garraghan, Tao Lin, Jie Xu, and Michael Rovatsos, Fog Orchestration for Internet of Things Services. IEEE Internet Computing, 21 (2):16-24, 2017.

Acoustic Attack On Motion Sensors

And yet another jaw dropping security attack on mobile devices (and also on IoT and lots of other technology).

Timothy Trippel and colleagues at Michigan and South Carolina describe “acoustic injection attacks” on accelerometers, that can mess with apps on your phone, drone, or other sensor-equipped device. Yoiks!

The researchers give the gory technical details in a paper to be presented in April [1]. But the general idea is simple enough.

Sound waves can be used to create false output from the accelerometer. Basically, playing a sound at the accelerometer’s resonant frequency can push it to respond just as it does to movement. This not only messes up the motion sensing, it can be used to fool the device, i.e., to spoof whatever signal you want. An attacker that can play sounds near the device may be able to take it over and make it do whatever he wants.

This attack is significant because there are many applications that are using accelerometer data for critical system controls, including input to AI algorithms that detect human and machine behavior, and vehicle navigation. Accelerometers are also used in gesture based interfaces.

So, evil doers might fiddle a fitness tracker, registering extra steps and false positions. More alarming, an attack might mislead the navigation of a drone or self-driving car. A gesture based controller might be taken over to send false instructions. And so on.

This is certainly an alarming flaw for the development of whole-body interfaces and other wearable devices. It’s hard enough to get motion and tracking to work; it’s not great to hear that it might be hacked.

This is yet another grievous potential flaw in self-driving cars and other sensor heavy machinery. Even if you secure the wireless and other networks (which is pretty iffy, if you want my opinion), the system might still be hacked by a drive-by boom box or a covert squawk from the satellite music stream.


The research indicates that the majority of current mass-produced MEMS accelerometers are vulnerable to this attack, at least to some degree. The paper recommends design improvements for future sensors, and some software defenses that protect systems with vulnerable sensors. In addition, sensors might be swathed in soundproofing.

The software countermeasures are interesting: they avoid simple periodic sampling of the signal, using randomized or 180-degree out-of-phase sampling. These yield the same result as regular sampling of the analog signal, but in many cases cannot be fooled by the acoustic injection.
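The paper has the real algorithms; here is a toy numerical sketch of the idea, with an assumed resonant frequency and sample rate (both invented for the example). Periodic sampling aliases the injected tone down into the sensor’s passband as a fake low-frequency motion signal, while averaging each sample with a second one taken half an attack period later cancels the tone.

```python
import numpy as np

# Illustrative numbers only; a real resonant frequency depends on the part.
f_attack = 2003.0   # assumed MEMS resonant frequency (Hz)
f_sample = 100.0    # assumed ADC output rate (Hz)

def analog(t):
    # the "true" slow motion plus the acoustically injected tone
    true_motion = 0.5 * np.sin(2 * np.pi * 1.0 * t)    # 1 Hz real movement
    injected = 1.0 * np.sin(2 * np.pi * f_attack * t)  # tone at resonance
    return true_motion + injected

t = np.arange(0, 1, 1 / f_sample)
truth = 0.5 * np.sin(2 * np.pi * 1.0 * t)

# naive periodic sampling: the 2003 Hz tone aliases down to a fake 3 Hz signal
naive = analog(t)

# 180-degree out-of-phase sampling: average each sample with one taken half
# an attack period later, so the injected sinusoid cancels exactly
secure = 0.5 * (analog(t) + analog(t + 0.5 / f_attack))

print("naive RMS error :", float(np.sqrt(np.mean((naive - truth) ** 2))))
print("secure RMS error:", float(np.sqrt(np.mean((secure - truth) ** 2))))
```

The naive samples carry a full-amplitude phantom signal, while the out-of-phase pair leaves the slow true motion almost untouched, since it barely changes over half an attack period.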

This is a cool paper! Nice work.

  1. Timothy Trippel, Ofir Weisse, Wenyuan Xu, Peter Honeyman, and Kevin Fu, WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks, in IEEE European Symposium on Security and Privacy. 2017: Paris.
  2. WALNUT: Acoustic Attacks on MEMS Sensors. 2017.


Health Apps Are Potentially Dangerous

The “Inappropriate Touch Screen Files” has documented many cases of poor design of mobile and wearable apps, and I have pointed out more than once the bogosity of unvalidated cargo cult environment sensing.

This month Eliza Strickland writes in IEEE Spectrum about an even more troubling ramification of these bad designs and pseudoscientific claims: “How Mobile Health Apps and Wearables Could Actually Make People Sicker” [2].

Strickland comments that the “quantified self” craze has produced hundreds of thousands of mobile apps to track exercise, sleep, and personal health. These apps collect and report data, with the goal of detecting problems early and optimizing exercise, diet, and other behaviors. Other apps monitor the environment, providing data on pollution and micro climate. (And yet others track data such as hair brushing techniques.)

These products are supposed to “provide useful streams of health data that will empower consumers to make better decisions and live healthier lives”.

But, Strickland says, “the flood of information can have the opposite effect by overwhelming consumers with information that may not be accurate or useful.”

She quotes David Jamison of the ECRI Institute, who comments that many of these apps are not regulated as medical devices, so they have not been tested to show that they are safe and effective.

Jamison is one of the authors of an opinion piece in JAMA, “The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors” [1]. In this article, the authors strongly criticize the sales of monitoring systems aimed at infants, on two grounds.

First, the devices have not been proven accurate, safe, or effective for any purpose, let alone the advertised aid to parents. Second, even if the devices do work, there is considerable danger of overdiagnosis. If a transient and harmless event is detected, it may trigger serious actions such as an emergency room visit. If nothing else, this will cause needless anxiety for parents.
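A back-of-the-envelope calculation shows why overdiagnosis dominates for rare conditions. The prevalence, sensitivity, and specificity below are assumed for illustration, not taken from the article; the arithmetic is just Bayes’ rule.

```python
# All numbers are hypothetical, chosen only to illustrate the base-rate effect.
prevalence  = 0.001   # 1 in 1000 infants actually has the condition
sensitivity = 0.95    # the monitor flags 95% of true cases
specificity = 0.95    # and stays quiet for 95% of healthy infants

true_alarms  = prevalence * sensitivity
false_alarms = (1 - prevalence) * (1 - specificity)
p_real_given_alarm = true_alarms / (true_alarms + false_alarms)

print(f"{p_real_given_alarm:.1%} of alarms indicate a real problem")  # 1.9%
```

Even with a monitor that is 95% accurate in both directions, roughly 98 out of 100 alarms would be false, each one a candidate for a panicked emergency room visit.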

I have pointed out the same kind of danger from DIY environmental sensing: if misinterpreted, a flood of data may produce either misplaced anxiety about harmless background level events or misplaced confidence that there is no danger if the particular sensor does not detect any threat.

An important design question in these cases is, “Is this product good for the patient (or user)?” More data is not better if you don’t know how to interpret it.

This is becoming even more important than the “inappropriateness” of touchscreen interfaces:  the flood of cargo cult sensing in the guise of “quantified self” is not only junk, it is potentially dangerous.

  1. Christopher P. Bonafide, David T. Jamison, and Elizabeth E. Foglia, The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors. JAMA: Journal of the American Medical Association, 317 (4):353-354, 2017.
  2. Eliza Strickland, How Mobile Health Apps and Wearables Could Actually Make People Sicker, in The Human OS. 2017, IEEE Spectrum.


Authentication for Voice Assistants

Voice interfaces have been around for quite a while, but in the last few years they have become widely used in consumer products including mobile devices and home assistants.

For anyone who remembers the first generations of this technology, it is truly stunning to see the quality of the speech recognition, and how well the systems perform even in noisy conditions with multiple speakers present.

Still, there is one problem that remains troubling: these systems are vulnerable to spoofing and counterfeiting of the master’s voice. Even simple attacks such as a recording might fool the system. For that matter, it isn’t even clear what sort of authentication should be used.

Huan Feng and colleagues at the University of Michigan discuss a method for “Continuous Authentication for Voice Assistants” [1].

They point out that “voice as an input mechanism is inherently insecure as it is prone to replay, sensitive to noise, and easy to impersonate” ([1], p. 1). Some systems use characteristics of speech as a form of biometric authentication (i.e., the system learns to identify individuals by their voice), but this is subject to replay, and commands can even be hidden in noise in ways that a human cannot easily detect.

The authentication problem has a couple of aspects. The system needs to know not only the purported identity of the speaker and whether the speaker is actually present and commanding, but also that the detected message is what the speaker actually said. Furthermore, it is important to authenticate all of the speech, not just an initial connection.

(If you think about it, these challenges stem from the very advantages that make voice commanding attractive. The system is hands (and everything else) free, the interaction is fluid and natural, without obvious “log in” or “log out”, and the messages are similar to natural language, without metadata or “packets” that might carry authentication information.)

Feng’s wolverines prototyped a system that uses an accelerometer to sense the movement of the speaker, and to continuously match that against the voice signal received. This approach is a sort of two factor authentication, and also assures that the signal is authentically from a specific speaker.

One of the tricky parts is the matching algorithm, mapping the movements of the speaker to the sound picked up by the remote microphone. Their paper explains their methods and results.
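As a rough sketch of the general idea (this is not Feng’s algorithm; the frame size, threshold, and test signals below are all invented), one could compare short-time energy envelopes of the microphone and wearable accelerometer streams and require them to co-vary before accepting a command:

```python
import numpy as np

def envelope(signal, frame):
    # short-time energy envelope: RMS over consecutive frames
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def matches(mic, accel, frame=160, threshold=0.8):
    """Toy continuous-authentication check: speech heard by the microphone
    should co-vary with body vibration sensed by the wearable accelerometer.
    Frame size and threshold are illustrative, not from the paper."""
    e_mic, e_acc = envelope(mic, frame), envelope(accel, frame)
    m = min(len(e_mic), len(e_acc))
    e_mic, e_acc = e_mic[:m], e_acc[:m]
    e_mic = (e_mic - e_mic.mean()) / (e_mic.std() + 1e-9)
    e_acc = (e_acc - e_acc.mean()) / (e_acc.std() + 1e-9)
    return float(np.mean(e_mic * e_acc)) >= threshold

# A burst of "speech" sensed by both devices should authenticate; a replayed
# recording with no matching body motion should not.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
burst = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)  # on/off speech bursts
voice = burst * rng.normal(size=t.size)                # what the mic hears
body = burst * rng.normal(size=t.size)                 # same bursts, new noise
silence = 0.05 * rng.normal(size=t.size)               # wearer not speaking

print(matches(voice, body))     # True
print(matches(voice, silence))  # False
```

The real matching problem is much harder (propagation delays, sensor placement, overlapping speakers), which is presumably why the algorithm merits a paper of its own.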

This approach has a number of advantages. Obviously, a wearable sensor is a simple and cheap device that is closely associated with a specific person, and the motion will not be easy to fake. The method works with any language without specific adaptation, and it automatically adjusts to any changes in the user’s behavior, such as fatigue or illness that might distort their voice. They also point out that the system detects when the user is not speaking, which should lock out any commands.

This is pretty cool!

Of course, one could question the advantage of the voice interface, if one has to wear a device in order to safely use it. Why not just put the microphone in the wearable itself? In fact, I can see that you might want to put a version of this authentication into any wearable microphone, including phone headsets.

This would have the additional benefit of eliminating the creepy “always listening” behavior of assistants. If they only listen to someone who is wearing the right sensors and is properly authenticated, then there is no need for continuous listening.

  1. Huan Feng, Kassem Fawaz, and Kang G. Shin, Continuous Authentication for Voice Assistants. CoRR, abs/1701.04507 2017.

Toward an Internet of Agricultural Things

The “Internet of Things” is definitely the flavor of the month, though it isn’t clear what it is or why anyone wants it in their home. I’m frequently critical of half-baked IoT, but I’m certainly not against the basic idea, where it makes sense.

Case in point: a local startup made a splash at CES with a classic IoT for agriculture. Amber Agriculture is a bunch of low cost sensors deployed in grain storage which continuously sense conditions, optimize aeration, and alert to problems. The web site indicates that the system implements optimizing algorithms (“rooted in grain science principles”, AKA actual science) to automatically control fans.


This is a nice example of IoT: the sensor net not only replaces human oversight, the small sensors can give data that is difficult to obtain otherwise. The economic benefit of this fine grain optimization is apparently enough to pay for the sensors. (I would be interested to see actual peer reviewed evidence of this cost analysis.)

I can’t find a lot of technical details, so I wonder how the sensors are deployed (do you just mix them into the grain?), how they are separated from the grain when it is removed, or exactly what it is measuring. Are the sensors reusable? Does it work for different kinds of grain?

It is interesting to think about extensions of this technology.

What other features could be added?

I wonder what could be done with microphones to listen to the stored grain. Are there sonic signatures for, say, unexpected movement (indicating a leak or malfunction?), or perhaps sounds indicating the presence of pests?

Similarly, the sensors might have optical, IR, or even radio beacons, which might detect color, texture, or other surface properties. Could this detect disease or contamination early?

Anyway, well done all.

(And I learned that there is an internet domain name, ‘.ag’)

  1. Amber Agriculture. Amber Agriculture. 2017.
  2. Nicole Lee, Presenting the Best of CES 2017 winners!, January 7, 2017.


“Hair Coach”–with App

In recent years, CES has become an undisputed epicenter of gadgets, so I can’t let the occasion pass without at least one addition to the Inappropriate Touch Screen Files.

I’ll skip the boneheaded “Catspad”, which isn’t particularly new, and certainly makes you wonder who would want this.

I think the winner for today is the “Hair Coach”, which uses a “Smart Hair Brush” to offer you “coaching” on your hair care.

The brush itself has a microphone to listen to the hair as it is brushed (which I think is slightly cool—some kind of machine learning using the crackle of your hair), accelerometers in the brush to detect your technique (and, for the mathematically challenged, count your strokes). It also has a vibrator to provide haptic feedback (to train you to brush your hair more optimally?).

Of course, no product would be complete without a mobile app: “the simple act of brushing begins the data collection process.” The app is supposed to give you “personalized tips and real-time product recommendations”. The latter are basically advertisements.

I will note that the materials on the web offer absolutely no indication that any of this “optimization” actually does anything at all, other than increase profits (they hope).

This product caught my eye as particularly egregious “inappropriate touch screen”, because this is clearly a case of a non-solution chasing a non-problem. (Of course, most of the “hair care” industry is non-solutions to non-problems.)

My own view is that the simple and millennia old technology of a hairbrush was not actually broken, or in need of digital augmentation. Worse, this technology actually threatens one of the small pleasures of life. The soothing, sensual brushing of your own hair can be a simple and comforting personal ritual, a respite from the cares of the day.

Adding a digital app (and advertising) breaks the calm of brushing, digitally snooping and “optimizing”, and pulling your attention away from the experience and toward the screen—with all its distractions. How is this good for you?

Add this to the Inappropriate Touch Screen Files.

