Category Archives: Internet of Things

Inaudible Speech Commands Hack Your Home

I’m not a huge fan of speech interfaces, or of Internet connected home assistants a la Alexa in general.

I have already complained that these devices are potentially nasty invaders of privacy, and likely to have complicated failure modes, not least due to a plethora of security issues. (Possibly a plethora of plethoras.) I’ve also complained about the psychology of surveillance and instant gratification inherent in the design of these things, especially for children.

Pretty much exactly what you don’t want in your home.

This fall a group at Zhejiang University reported on yet another potential issue: Inaudible Voice Commands [1].

Contemporary mobile devices have pretty good speakers and microphones, good enough to be dangerous. Advertising agencies and other attackers have begun using inaudible sound beacons to detect the location of otherwise cloaked devices. It is also possible to monkey with the motion sensors on a mobile device, via inaudible sounds.

Basically, these devices are sensitive to sound frequencies that the human user can’t hear, which can be used to secretly communicate with and subvert the device.

Guoming Zhang and colleagues turn this idea on speech activated assistants, such as Alexa or Siri [1]. They describe a method to encode voice commands into innocent sounds. Humans can’t hear the words, but the computer decodes them and takes them as voice commands.
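The basic trick (as I understand it) is amplitude modulation: ride the voice signal on an ultrasonic carrier, and let the microphone’s own nonlinearity demodulate it. Here is a minimal Python sketch of that idea; the sample rate, carrier, tone, and square-law mic model are my own illustrative assumptions, not the paper’s actual signal chain.

```python
import math

FS = 192_000        # assumed sample rate (Hz), high enough to represent ultrasound
CARRIER = 30_000    # ultrasonic carrier (Hz), above human hearing
BASEBAND = 400      # a pure tone standing in for the "voice" signal (Hz)

def modulate(n_samples):
    """AM: s(t) = (1 + m(t)) * cos(2*pi*fc*t), where m(t) is the baseband."""
    out = []
    for i in range(n_samples):
        t = i / FS
        m = math.cos(2 * math.pi * BASEBAND * t)          # baseband "voice"
        out.append((1.0 + m) * math.cos(2 * math.pi * CARRIER * t))
    return out

def square_law_demodulate(signal):
    """Model the mic's nonlinearity as squaring, then a crude low-pass average."""
    squared = [s * s for s in signal]
    win = 128                                             # ~0.7 ms moving average
    return [sum(squared[i:i + win]) / win for i in range(len(squared) - win)]

tx = modulate(4096)
rx = square_law_demodulate(tx)
# The smoothed output swings at the baseband rate: the "voice" reappears
# after the microphone, even though only ultrasound was transmitted.
print(max(rx) - min(rx))
```

The transmitted signal contains no audible frequencies at all, yet the demodulated envelope varies at the voice rate, which is exactly what the speech recognizer then hears.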

These devices are capable of almost any operation on the Internet. Sending messages, transferring money, downloading software. The works.

In other words, if this attack succeeds, the hacker can secretly load malware or steal information, unbeknownst to the user.


Combine this with ultrasound beacons, and the world becomes a dangerous place for speech commanded devices.

The researchers argue that

The root cause of inaudible voice commands is that microphones can sense acoustic sounds with a frequency higher than 20 kHz while an ideal microphone should not.

This could be dealt with by deploying better microphones, or by software that filters out ultrasound or detects the difference between genuine voiced commands and the injected commands.
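The software filter option is straightforward to sketch. This toy Python example uses a deliberately crude moving-average low-pass (a real product would use a proper filter design); the sample rate and test frequencies are my own illustrative assumptions.

```python
import math

FS = 96_000          # assumed sample rate (Hz)

def moving_average_lowpass(signal, win=16):
    """Crude FIR low-pass: with win=16 at 96 kHz, frequencies at multiples
    of 6 kHz are nulled and ultrasound is strongly attenuated, while
    low-frequency speech energy passes nearly untouched."""
    return [sum(signal[i:i + win]) / win for i in range(len(signal) - win)]

def tone(freq, n):
    return [math.sin(2 * math.pi * freq * i / FS) for i in range(n)]

speech_band = tone(300, 2048)      # in-band test tone
ultrasound  = tone(30_000, 2048)   # inaudible injection (a multiple of 6 kHz)

passed  = moving_average_lowpass(speech_band)
blocked = moving_average_lowpass(ultrasound)
print(max(passed), max(blocked))   # the in-band tone survives; the ultrasound does not
```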

I would add that a second root cause is the sheer number of functions of these devices, and the essentially monolithic design of the system. Recent voice activated assistants are installed as programs on general purpose computers with a complete operating system and multiple input and output channels, including connections to the Internet. In general, any program may access any channel and perform any computation.

This is a cheap and convenient architecture, but is arguably overpowered for most individual applications. The general purpose monolithic device requires that the software implement complicated security checks, in an attempt to limit privileges. Worse, it requires ordinary users to manage complex configurations, usually without adequate understanding or even awareness.

One approach would be to create smaller, specialized hardware modules, and require explicit communication between modules. I’m thinking of little hardware modules, essentially one to one with apps. Shared resources such as I/O channels would have monitors to mediate access. (This is kind of “the IoT inside every device”.)

This kind of architecture is difficult to build, and painful in the extreme to program. (I’m describing what is generally called a secure operating system.) It might well reduce the number of apps in the world (which is probably a good thing) and increase the cost of devices and apps (which isn’t so good). But it would make personal devices much harder to hack, and a whole lot easier to trust.

  1. Guoming Zhang, Chen Yan, Xiaoyu Ji, Taimin Zhang, Tianchen Zhang, and Wenyuan Xu, DolphinAttack: Inaudible Voice Commands. arXiv, 2017.


IOTA’s Cart Is Way, Way Before the Horse

Earlier I commented on SatoshiPay switching its microtransactions from Bitcoin to IOTA. Contrary to early hopes, Bitcoin has not been successful as a medium for microtransactions, because transaction fees are too high and latency may be too long.

IOTA is designed for the Internet of Things, so it uses a different design than Nakamoto’s, one that is said to be capable of much lower latency and fees. SatoshiPay and other companies are looking at adopting IOTA for payment systems.

The big story is that IOTA is reinventing Bitcoin from the ground up, with its own home grown software and protocols. I described IOTA as “funky” in my earlier post.

It is now clear that this funkiness extends to the implementation, including the cryptographic hashes used [1,2]. This is not a good idea: you generally want to use really well tested crypto algorithms.

So when we noticed that the IOTA developers had written their own hash function, it was a huge red flag.

Unsurprisingly, Neha Narula reports that their home grown hash function is vulnerable to a very basic attack, with potentially very serious consequences.
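To see why home grown hashes are such a red flag, here is a toy Python comparison. The additive hash below is purely hypothetical (it is not IOTA’s Curl function), but it shows the kind of trivial collision that vetted primitives, like the standard library’s SHA-3, are designed to make infeasible.

```python
import hashlib

def toy_hash(data: bytes) -> int:
    """Naive 32-bit additive hash: order-insensitive, so any permutation
    of the same bytes collides."""
    return sum(data) % (2 ** 32)

a = b"pay=100;to=alice"
b = b"pay=100;to=alice"[::-1]      # same bytes, reordered

print(toy_hash(a) == toy_hash(b))                   # trivial collision
print(hashlib.sha3_256(a).digest() ==
      hashlib.sha3_256(b).digest())                 # vetted hash: no collision
```

Narula’s actual attack on Curl is far more sophisticated, but the moral is the same: finding collisions in an untested function is usually much easier than its authors expect.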

The specific problems have been patched, but the fact remains that IOTA seems to be a home made mess of a system.

Narula also notes other funkiness. For some reason they use super-funky trinary code which, last time I checked, isn’t used by very many computers. Everything has to be interpreted by their custom software, which is slow and bulky. More important, this means that their code is completely incompatible with any other system, precluding the use of standard libraries and tools, such as well tried crypto libraries and software analysis tools.
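For the curious, the “trinary” in question is balanced ternary, with digits -1, 0, 1 (IOTA calls them “trits”). A quick Python sketch of the conversion shows why every value has to pass through custom encode/decode layers on ordinary binary hardware.

```python
def to_balanced_ternary(n):
    """Integer -> list of trits {-1, 0, 1}, least significant first."""
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:          # digit 2 is rewritten as -1 plus a carry
            r = -1
            n += 3
        trits.append(r)
        n //= 3
    return trits or [0]

def from_balanced_ternary(trits):
    return sum(t * 3 ** i for i, t in enumerate(trits))

for v in (0, 1, 7, -14, 12345):
    assert from_balanced_ternary(to_balanced_ternary(v)) == v

print(to_balanced_ternary(7))   # [1, -1, 1], i.e. 7 = 1 - 3 + 9
```

Every address, signature, and transaction has to round-trip through conversions like this, which is where the slow, bulky custom software comes in.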

I have no idea why you would do this, especially in a system that you want to be secure and trusted.

The amazing thing is not the funkiness of the software. There is plenty of funky software out there. The amazing thing is that lots of supposedly competent companies have invested money and adopted the software. As Narula says, “It should probably have been a huge red flag for anyone involved with IOTA.”

How could they get so much funding, yet only now people are noticing these really basic questions?

It is possible that these critiques are finally having some effect. Daniel Palmer reports that the exchange rate of IOTA’s tokens (naturally, they have their own cryptocurrency, too) has been dropping like a woozy pigeon [3]. Perhaps some of their partners have finally noticed the red flags.

The part I find really hard to understand is how people could toss millions of dollars at this technology without noticing that it has so many problems. Aren’t there any grown ups supervising this playground?

I assume IOTA have a heck of a sales pitch.

Judging from what I’ve seen, they are selling IOTA as “the same thing as Bitcoin, only better”. IOTA certainly isn’t the same design as Bitcoin, and it also does not use the same well-tested code.  I note that a key selling point is “free” transactions, which sounds suspiciously like a free lunch. Which there ain’t no.

IOTA’s claims are so amazingly good, I fear that they are too good to be true.

Which is the biggest red flag of all.

  1. Neha Narula, Cryptographic vulnerabilities in IOTA, in Medium. 2017.
  2. Neha Narula, IOTA Vulnerability Report: Cryptanalysis of the Curl Hash Function Enabling Practical Signature Forgery Attacks on the IOTA Cryptocurrency. 2017.
  3. Daniel Palmer, Broken Hash Crash? IOTA’s Price Keeps Dropping on Tech Critique, in CoinDesk. September 8, 2017.
  4. Dominik Schiener, A Primer on IOTA (with Presentation), in IOTA Blog. 2017.


Cryptocurrency Thursday

Yet Another IOT Security Problem

One of the hottest trends these days is the Internet of Things, which aims to install zillions of network connected devices everywhere, including your home. Unsupervised microphones, cameras, and sensors, connected to the Internet, listening to and watching you at all times. What could possibly go wrong?

This summer a group from the University of Washington reported on yet another jaw dropping technology: motion detection to track what you are doing, which they call CovertBand [2].

This technique uses active sonar, broadcasting sound and listening for the echoes. Any device with a speaker and microphone could do this, in principle. “Smart” TVs and assistants such as Alexa, for instance.
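The active sonar idea is simple enough to sketch. This toy Python version is my own illustration, not CovertBand’s actual pipeline: emit a chirp, receive a delayed echo, and recover the delay with a matched filter; the delay maps to distance via the speed of sound. The sample rate and chirp band are illustrative assumptions.

```python
import math

FS = 48_000                      # assumed sample rate (Hz)

def chirp(n, f0=18_000, f1=20_000):
    """Linear chirp sweeping f0 -> f1 over n samples (a near-inaudible band)."""
    out = []
    for i in range(n):
        t = i / FS
        # phase ramp: sin(2*pi*(f0 + k*t/2)*t) has instantaneous frequency f0 + k*t
        f = f0 + (f1 - f0) * i / (2 * n)
        out.append(math.sin(2 * math.pi * f * t))
    return out

def simulate_echo(tx, delay, total):
    """A single attenuated, delayed copy of the transmitted chirp."""
    rx = [0.0] * total
    for i, s in enumerate(tx):
        if i + delay < total:
            rx[i + delay] += 0.5 * s
    return rx

def best_lag(tx, rx):
    """Matched filter: the lag with maximum cross-correlation."""
    scores = [sum(t * rx[lag + i] for i, t in enumerate(tx))
              for lag in range(len(rx) - len(tx))]
    return scores.index(max(scores))

tx = chirp(256)
rx = simulate_echo(tx, delay=300, total=1024)
lag = best_lag(tx, rx)
distance_m = lag / FS * 343 / 2   # round trip at ~343 m/s
print(lag, round(distance_m, 2))  # recovers the 300-sample delay
```

Repeat this many times a second and track how the echoes change, and you can follow a moving body around a room. That is the essence of the attack.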

The technology is sneaky, because it uses the idea of steganography (which we knew was going to be important way back when [1]). The sonar chirps are concealed in other sounds, such as music. That pop music ear worm you downloaded is not only rotting your brain, it might also be snooping on you!


The paper reports detailed studies which demonstrate considerable abilities to covertly monitor activities, even through barriers. It’s quite impressive.

I’ll note that the researchers suggest three motivating scenarios for this technology. (This kind of list is conventionally required in academic papers about security.) The use cases they identify are:

  • National intelligence
  • Vigilante Justice
  • Remote Hacking of Phones and Smart TVs

The important point is that there are no socially positive use cases, at least for normal, law abiding civilians. This is purely wicked.

The researchers identify counter measures, which include sound proofing and jamming the sonar signals. The latter can be done with a mobile phone app, so we may soon see people setting up their phone to check and block such snooping!

Surprisingly, the researchers do not consider other defensive measures, such as not installing such devices in private areas, not connecting them to the Internet, or engineering the speakers and microphones so that they cannot be used in this way.

In a sense, this attack is made possible by the fact that these devices have vastly more capability than is needed most of the time. It might be better to engineer the devices to have “just enough” resolution to do their work.

In another sense, this attack is made possible by the fact that these devices wrap a whole bunch of functions in one device, with common memory and so on. Including speech generation, speech detection, and multichannel music playing in one device might be convenient, but it isn’t necessary. It could be three simpler devices communicating by simple, easier to secure channels. This would be harder to build, but much, much harder to hack.

And, following Bob’s Rule for home devices, every IOT device should have a prominent “Off” switch that really works.

This is a really nice piece of work. Well done, all.

  1. Adam D. Cain, Text Steganography, in Electrical Engineering. 1996, University of Illinois at Urbana-Champaign: Urbana.
  2. Rajalakshmi Nandakumar, Alex Takakuwa, Tadayoshi Kohno, and Shyamnath Gollakota, CovertBand: Activity Information Leakage using Music (to appear). Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2017.
  3. James Urton, Computer scientists use music to covertly track body movements, activity, in UW News. 2017.


Is “Cute” Enough for a Robot?

In the great rush to create home robots, it seems that 1,000 flowers are blooming. Many different robots are being tried, combining the basic core of features with different appearances and artificial personalities.

One of this year’s models is ‘Kuri’, which is designed to be simple and cute. It understands speech commands, but “speaks robot”—not synthesized speech, but “cute” beeps and buzzes.

As far as I can tell, it does nothing that a computer or tablet or Alexa can’t do, except in a “friendly”, autonomously mobile package.

It seems that Kuri wanders around your house with its cute face and twin HD cameras. These can live stream over the Internet, to “be your eyes when you’re” away. Kuri also has microphones, of course, to capture sounds and conversations. Kuri will “investigate” unusual sounds. It has speakers, so you can play music, and yell at your baby sister.

This little guy is supposed to “bring joy to your house”. As far as I can tell, the main feature of Kuri is “cuteness”. Is this enough?

Well maybe.

Unfortunately, Kuri has gone way off the rails with a new feature, “autonomous video”.

Basically, as Kuri wanders around mapping your house, listening to you, and generally being cute, it will record videos.

The results of this snooping are sent to you (or at least to whoever controls Kuri), where you can select the ones that you like. Supposedly, Kuri uses this feedback to learn what you like, and thereby to construct a pleasing selfie video of your house.

Who doesn’t want that?

Well, me, obviously.  But, who asked for this feature, anyway???

I have no idea why I would ever want a “daily dose of short ‘life at home’ videos”. I mean, if there is any place I don’t need to visit virtually, it’s the place that I live physically.

But if I did want it, I don’t want an Internet connected device streaming video out of my house to the Internet. And I really don’t want an “autonomous” camera wandering around unpredictably recording my private life.

It’s Alexa on wheels. Eeek.

“Turn it off” doesn’t even begin to cover it.

I’ll add a couple of other points that Kuri brings to mind.

Like many contemporary robots, Kuri does some simple psychological tricks to indicate that he (apparently Kuri is male) is listening. It looks up, it looks ‘happy’, it makes ‘eye contact’ (more or less). This is “cute” in the same way as a pet may be “cute”, and for the same reason: you are projecting human personality onto a non-human actor.

This is probably good marketing, but there is some weird psychology going on here, especially if kids are involved.

First of all:  No, Kuri doesn’t actually like you. It isn’t capable of feelings of any kind.

The head and eye gestures raise the interesting question of whether people will tend to mirror these inhuman movements in the same way that they tend to mirror other people as they interact. And will children develop weird behavioral patterns from living with a household robot?  Who knows.

Then there is Kuri’s gaze.

It is becoming common to put cameras behind features that look like human eyes. Kuri has a very abstract but unmistakable analog of a human head and face, and the eyes are where the cameras are. This is a physical analogy to human senses, but it has a sort of perverse twist. While a person or a dog sees you with their eyes, a robot is usually recording and streaming with its eyes. This mismatch means that you may unconsciously overlook the invasiveness of those robot eyes (which are really web cams), or perhaps edge toward paranoia about other people’s eyes (which are not web cams).

These “uncanny” confusions are hardly unique to Kuri, though the “cuter” the robot the more powerful the psychological illusions.

Is “cute” a good thing for a robot to be? I’m not so sure.

  1. Alyssa Pagano, Kuri Robot Brings Autonomous Video to a Home Near You, in IEEE Spectrum - Automation. 2017.


Robot Wednesday Friday

Hacking the Grid via Solar Panels

It seems there is a continuous stream of computer security vulnerabilities (from your USB hub to synthetic DNA and all modalities in between), and the still unresolved challenges of the Internet of Things (IoT) promise to enlarge the (just barely working) Internet by orders of magnitude.

This month there is discussion that these issues affect the Solar Power industry as well.

In particular, small scale PV systems that are connected to the grid may be vulnerable to hacking. In a student project, Willem Westerhof discovered security flaws in a consumer market PV inverter, the device that connects the home system to the power grid. He then sketched a scenario in which determined hackers could take over large numbers of these systems and orchestrate power fluctuations that would crash wide areas of the power grid.

I have not found many details of the vulnerabilities, though it would be a remarkable system indeed that did not have any security weaknesses. And, like the rest of the IoT, these systems are deployed in the hands and homes of ordinary people, who are in no position to investigate or fix the software. In addition, it appears that these systems are, for whatever reason, connected to the Internet, and therefore vulnerable to network hacking.

In short, it is extremely plausible that home PV systems are hackable.

Westerhof works out what he calls “The Horus Scenario”, a worst case episode. Assuming that all the installed PV systems have similar vulnerabilities, a determined hacker could penetrate and gain control over large numbers of the systems. This would enable the hacker to turn the flow to the grid off and on.

The devastating attack involves simultaneous shutdown of large numbers of PV systems, resulting in a dramatic and near instantaneous drop in available power. This would unbalance the grid and likely force shutdowns—sudden, widespread blackouts.

One reason this attack is possible is that, at least in Europe, a significant fraction of the total generating power is from PV. Knocking out one or a few homes would have minimal effects, but knocking out 10 or 20% of the generating power in a few minutes without warning is a fatal problem.
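A back-of-the-envelope calculation shows why. The standard swing equation relates a sudden generation shortfall to the rate of frequency decline; with illustrative numbers (the aggregate inertia constant here is my own assumption, not a measured grid parameter), losing 20% of generation at once drives the frequency below typical load-shedding thresholds within a couple of seconds.

```python
F0 = 50.0   # nominal frequency (Hz), European grid
H = 4.0     # assumed aggregate inertia constant (s) -- illustrative only

def freq_after(loss_fraction, seconds):
    """Swing equation sketch: df/dt = -loss * F0 / (2H), held constant here,
    i.e. assuming no governor or reserve response (the worst case)."""
    rocof = loss_fraction * F0 / (2 * H)   # rate of change of frequency, Hz/s
    return F0 - rocof * seconds

print(freq_after(0.02, 2.0))   # a small dip, within normal tolerance
print(freq_after(0.20, 2.0))   # far below typical ~49 Hz load-shedding thresholds
```

Real grids have governors, reserves, and interconnections that soften this, but the scale of the problem is clear: a coordinated, instantaneous loss of a large PV share leaves very little time to react.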

This is clearly a possibility, and a very serious potential threat. Even if only a fraction of PV inverters were successfully attacked in this way, it would probably be a serious catastrophe.

It is important to note that this problem has little to do with solar energy per se. The problems stem from connecting a device to both critical infrastructure and the Internet at the same time. This is a concern for the IoT overall. Connecting lots of Internet capable devices to each other and to utilities is surely a bad idea, especially in the wild and unsupervised environment of ordinary homes.

Glancing at the vulnerabilities that have been reported, they are mostly garden variety Internet break ins. (I mean, one of the vulnerabilities is a data overrun via the TELNET port, for goodness sake.) Which leads to the question, why are these things connected to the Internet? I assume there are reasons, but maybe this should be reconsidered.

I get rather nervous that this is reported as “Hackers ‘could target electricity grid’ via solar panel tech”, which seems likely to play into the hands of the power monopolies and the fossil fuel industry, as yet more misleading propaganda to roll back local generation initiatives.

As I said, this is more about Internet security than solar energy.

That said, I would strongly encourage PV equipment makers to step up their game. If you want to be part of vital infrastructure, then you have to design the systems to be as fail safe as possible.

  1. Chris Baraniuk, Hackers ‘could target electricity grid’ via solar panel tech, in BBC News – Technology. 2017.
  2. Willem Westerhof, The Horus Scenario - Exploiting a weak spot in the power grid. 2017.

Blockchain+IoT Standards??

As Bitcoin sinks slowly into the sands of history (this week saw a disastrous fork, creating a duplicate-but-incompatible blockchain, with yet more forks on the way), blockchain technology continues to cook along.

One sign that blockchain was last year’s hot technology is the new IEEE Standards Working Group, P2418 – Standard for the Framework of Blockchain Use in Internet of Things (IoT)  [1, 2].

The purpose of this project is to develop definitions and a protocol for blockchain implementations within an IoT architectural framework. This standard provides a common framework for blockchain usage, implementation, and interaction in Internet of Things (IoT) applications. The framework addresses scalability, security and privacy challenges with regard to blockchain in IoT. Blockchain tokens, smart contracts, transaction, asset, credentialed network, permissioned IoT blockchain, and permission-less IoT blockchain are included in the framework.


That’s right, folks. With no existing implementation, and not even a clear definition of what this use case would actually mean, this group has begun to discuss official standards.

As the old joke goes, “We love standards. That’s why we have so many of them.”

Let me be clear. Interoperability standards are critical, and the IoT will fall down with a thud if there aren’t good standards.

But the trickiest part of developing standards is doing it at the right time. There has to be enough experience and understanding of the problem that we know what needs to be standardized and how the standards will work. But you want to standardize early enough that bad ideas don’t become so entrenched that they can’t be dislodged. (A historical example is the ASCII standard, which was a smidgeon too late; IBM had to go to market with EBCDIC, so we all had to live with two standards.)

So one question is, is it time to start standards work?


In general, IEEE “Framework” standards are, well, pretty general. A Framework will usually focus on definitions of concepts and terminology, aiming to help technology progress by at least clarifying what people are talking about.

In the case of IoT, I’d say there is a desperate need for clearer language. I’ve been following IoT forever (and I could argue that I was doing it a decade before the term was coined). But I have no idea at all what most people mean by “IoT”, because so many different things are being called IoT.

Blockchain technology has a clear conceptual origin (Nakamoto (2009) [3]), though in recent years there have been a variety of permutations (private blockchains, side chains, alternative consensus mechanisms) and absurd amounts of propaganda (such as Tapscott & Tapscott (2016) [4]). Everyone is talking about “blockchain”, but it’s getting difficult to know what that means.

This working group aspires to clarify the concepts in the intersection of IoT and Blockchain. It’s kind of the intersection of two undefined sets, which has to be really, really, undefined, no?

I’d love to see some conceptual clarity. But I don’t know if this working group can succeed. If they have to figure out both IoT and blockchains, then it’s going to be an impossibly large task.

Another question mark is whether the standard can be created fast enough to matter. These technologies are being built and deployed at breakneck speed. Large players are already deploying their own products, which implicitly or explicitly define their own frameworks.

If the working group gets mired in problems of how to incorporate existing practices—multiple incompatible practices—then it will be doomed. Even if it avoids becoming a battleground (Microsoft versus IBM versus whoever), it could easily become redundant by actual practice.

(For a historic example: twenty years ago there was a huge amount of work done to standardize “URNs” and other identifiers, which got tangled in crosscurrents among different interests, many of which you would laugh at today. In the end, people simply worked around the limitations of URLs to achieve the same goals. URNs are conceptually better than URLs, but no one uses them.)

I will follow this effort with interest, though I’m rather pessimistic that it will succeed.

  1. IEEE Standards Association. blockchain wg – Blockchain working group. 2017.
  2. IEEE Standards Association. P2418 – Standard for the Framework of Blockchain Use in Internet of Things (IoT). 2017.
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009.
  4. Don Tapscott and Alex Tapscott, Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World, New York, Portfolio/Penguin, 2016.


Cryptocurrency Thursday

The Social Psychology of IOT: Totally Not Implemented Yet

Murray Goulden and colleagues have written some interesting thoughts about the Internet of Things combined with ubiquitous mobile devices: specifically, “smart home” applications that can observe the users’ own behavior in great detail. In particular, they point out that these technologies generate vast amounts of interpersonal data, that is, data about groups of people. Current systems do not manage and protect individual personal data especially well, but they have no provisions at all for dealing with interpersonal data.

smart home technologies excel at creating data that doesn’t fit into the neat, personalised boxes offered by consumer technologies. This interpersonal data concerns groups, not individuals, and smart technologies are currently very stupid when it comes to managing it.

The researchers discuss social psychological theory that examines the way that groups have social boundaries and ways to deal with breaching the boundaries. For example, a family in their home may have conversations that they would never have anywhere else, nor when any outsider is present.

This isn’t a matter of each individual managing his own data (even if the data is available to manage), but understanding that there is a social situation that has different rules than other social situations, rules which apply to all the individuals.

In-home systems have no understanding of such rules or what to do about them, nor are there any means for humans to manage what is observed.

Their paper makes the interesting point that this stems from the basic architecture of these in-home systems:

The logic of this project – directing information, and so agency, from the outer edges of the network towards the core – is one of centralisation. The algorithms may run locally, but the agency invested in them originates elsewhere in the efforts of the software engineers who designed them. ([3], p.2)

In short, the arrogant engineers and business managers don’t even understand the magnitude of their ignorance.

I have remarked that many products of Silicon Valley are designed to solve the problems that app developers understand and care about. The first apps were pizza ordering services, music downloads, and dating services. There are endless variations on these themes, and they are all set in the social world of a young, single worker (with disposable income).

For more than two decades, “smart home” systems have been designing robot butlers that will adjust the “settings” to the “user’s preferences”. I have frequently questioned how these systems work when there is more than one user, i.e., two or more people live together. The lights can’t be perfectly adjusted to everyone, only one “soundtrack” can play at a time, etc. No one has an answer; the question isn’t even considered.

I will say again that no one with any experience or common sense would ever put a voice activated, Internet connected device in a house with children, let alone a system that is happy to just buy things if you tell it to. Setting aside the mischief kids will do with such capabilities, what sort of moral lesson are you teaching a young child when the house seems to respond instantly to whatever they command?

Goulden doesn’t seem to have any solutions in mind. He does suggest that there need to be ways for groups of people to “negotiate” the rules of what should be observed and revealed. This requires that the systems be transparent enough that we know what is being observed, and that there be ways to control the behavior.

These issues have been known and studied for many years (just as a for instance, take a gander at research from the old Georgia Tech “Aware Home” project from the 1990s, e.g., [1]), but the start up crowd doesn’t know or care about academic research; who has time to check out the relevant literature?

Goulden points out that if these technologies are really obnoxious, then people will reject them. And, given that many of the “features” are hardly needed, people won’t find it hard to turn them off.

Their current approach – to ride roughshod over the social terrain of the home – is not a sustainable approach. Unless and until the day we have AI systems capable of comprehending human social worlds, it may be that the smart home promised to us ends up being a lot more limited than its backers imagine.

  1. Anind K. Dey and Gregory D. Abowd, Toward a Better Understanding of Context and Context-Awareness. GIT GVU Technical Report GIT-GVU-99-22, 1999.
  2. Murray Goulden, Your smart home is trying to reprogram you in The Conversation. 2017.
  3. Murray Goulden, Peter Tolmie, Richard Mortier, Tom Lodge, Anna-Kaisa Pietilainen, and Renata Teixeira, Living with interpersonal data: Observability and accountability in the age of pervasive ICT. New Media & Society: 1461444817700154, 2017.