Category Archives: Internet of Things

BeePi: Open Source Hardware

OK, I have my reservations about the Internet of Things (AKA the Internet of Way Too Many Things, or the Internet of Things That Don’t Work Right).  And I have also expressed concerns about DIY environmental sensing, which is usually long on sensing and short on validity.

But let’s combine IoT concepts with useful environmental monitoring, and validate the measurements, and I’m all for it.

Plus, I’m really worried about the bees.

So I am very interested in Vladimir Kulyukin’s BeePi, a Raspberry Pi-based bee hive monitor. Over the past decade, his team has developed low cost sensors and in situ data analysis that measure the sound, sight, and temperature of a bee hive. The sensors are minimally invasive, and collect data more or less continuously.

Vladimir Kulyukin downloads data from a BeePi system at a honey bee hive in Logan on Monday afternoon. The USU computer science professor started a Kickstarter campaign for the device and surpassed his goal within the first two weeks. John Zsiray/Herald Journal

Unlike bogus “Pigeon backpack” projects, this group has actually developed, validated, and published analytics that turn the sensor traces into potentially useful data about the behavior of bees. (E.g. see [1].)

The sound recordings can, in principle, give clues about the number and activity of the bees. At the coarsest level, they have easily documented the daily cycle of activity. I.e., they have confirmed the difference between day and night.

The visual imagery is used to detect bees entering and leaving the hive. This is an important indicator of foraging activity and overall health of the colony, and might give early warning of trouble in the hive.
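For a rough sense of how entrance imagery can become bee counts, here is a toy motion-based counter. To be clear, this is not Kulyukin’s published algorithm (which is far more sophisticated; see [1]); it is just the generic idea of differencing frames and counting changed blobs.

```python
# Toy sketch of motion-based object counting at a hive entrance.
# NOT the published BeePi algorithm -- just the generic principle:
# difference two frames, threshold, and count connected blobs.

def diff_mask(prev, curr, thresh=30):
    """Binary mask of pixels that changed by more than `thresh`."""
    return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def count_blobs(mask):
    """Count 4-connected components (candidate bees) in a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
    return blobs

prev = [[0] * 8 for _ in range(6)]
curr = [row[:] for row in prev]
curr[1][1] = curr[1][2] = 200   # one moving "bee"
curr[4][6] = 200                # another
print(count_blobs(diff_mask(prev, curr)))  # 2
```

A real counter has to cope with lighting, shadows, and overlapping bees, which is exactly the hard validation work this group has done.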

The temperature measures correlate with overall activity, and aberrant readings would indicate serious problems inside the colony.

The researchers aim to publish their hardware and software designs, so others can build and improve the idea. (It isn’t immediately clear what kind of licensing is intended, other than it is open source.)

In a sad sign of the times, they are doing a kickstarter to raise money ($1,000 !?) to build some more prototypes. In a sane world, funding agencies and companies would be beating down their doors trying to give them research support. And it would be many tens of thousands.

Another sign of the times is that the kickstarter is the most complete information about the project. Get a web page, guys!!

This project is pretty cool, and made me think.

As a distributed systems guy, I find the need for manual downloads just too crude. A future version should have some kind of low power networking that, ideally, will automatically upload data to archives, e.g., in a cloud. A concomitant upgrade would be to beef up the data formats (they need to be documented, and would be better with standard metadata). It would be nice to have standard APIs for pushing and grabbing the data.
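To make that concrete, here is a hypothetical sketch of what a documented record format plus a push API could look like. Everything here is invented for illustration: the field names, the schema, and the endpoint URL; the project has not published any such format.

```python
# Hypothetical BeePi record format and upload call. The schema and the
# endpoint URL below are made up -- the point is what "documented data
# format + standard API" could look like, not what BeePi actually does.
import json
import urllib.request

record = {
    "hive_id": "hive-007",                 # hypothetical identifier
    "timestamp": "2017-09-14T13:05:00Z",   # ISO 8601, UTC
    "sensor": "temperature",
    "value": 34.6,
    "units": "degC",
}

def push(record, url="https://example.org/api/v1/readings"):
    """POST one reading as JSON to a (placeholder) archive endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
    return urllib.request.urlopen(req)

print(json.dumps(record, sort_keys=True))
```

Even a one-page schema like this, published alongside the hardware plans, would make the data vastly more reusable by other researchers.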

Bee hives tend to be scattered and far from networks, though. But perhaps a small UAV “data harvester” might fly around, hover a couple of meters away to suck out the data through a short range link, and return to base after its rounds. Sort of a “sneaker net” in the age of ubiquitous drones. Such a drone might be useful for many environmental sensing tasks.

On the sensor front, I would think that humidity sensors would be a simple and important addition to the system. I think (but I’m not sure) that humidity is linked to some possible colony problems.

And what about lidar or sonar? The cost of lidar and sonar is crashing, so you might be able to add these to the sensors. Combined with the imagery, this would give even better bee counts (and in all weather, assuming the bees are active in all weather, which I’m not sure about).

Finally, I would suggest that the creators define how they want to share their system and data from it. Creative commons would be a place to look for ideas. <<link>> I would think that the plans and software might be shared through some existing maker community archive. E.g., Instructables, SparkFun, or AdaFruit would be plausible possibilities.  (Call me.)

This is a good example of low cost environmental sensing.  They are doing the hard work of validating the measurements.

There is a lot of work that could be done to make this a slicker and easier to use open source project. Documentation, publishing the design, and setting up a data archive are pretty straightforward, but would make a huge difference.  (Call me.)

  1. Vladimir Kulyukin and Sai Kiran Reka, A Computer Vision Algorithm for Omnidirectional Bee Counting at Langstroth Beehive Entrance, in International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV’16). 2016: Las Vegas. pp. 229-235.
  2. John Zsiray, USU professor hopes ‘BeePi’ hive sensors will help honeybees, in The Herald Journal. 2017.

Airgap, Smairgap

For many years, computers have been secured by keeping them off the Internet. This is called an “air gap”, which refers to the physical separation of the device from the outside world. (With wireless networks, an air gap isn’t a literal open space, obviously.)

For as long as I can remember (since the 1980’s at least), we’ve been taught not to rely on air gaps. They are necessary but hardly sufficient to protect your system.

The most common way to bridge an air gap is probably through human actions, either human error or espionage.

But there are more and more cool ways to covertly connect to devices (scanners, microphones, and so on) and the potential effects are massive.

This month saw reports of two more interesting attacks on so-called Internet of Things devices.

A research group at Ben Gurion University has demonstrated that you can hack into a network via security cameras. Guri, Bykhovsky, and Elovici showed that a camera can be infected with malware that lets attackers send commands and receive data via IR [1].

The signals hidden in the video stream are then intercepted and decoded by the malware residing in the network.

This attack takes advantage of two essential features of the security camera.

First, the attack can work because, by design, these cameras are connected to the internal network, and look outside the network. Like a network firewall and other network border guards, they are especially vulnerable because they must operate across the “airgap”.

Second, the attack works via the infrared (IR) capabilities that the security camera uses for night vision. IR signals are invisible to humans, which makes the hidden channel stealthy and difficult to block: defenders cannot simply filter out all IR signals, because that would eliminate the night vision essential to the purpose of the camera.

(By the way, the Guri et al. paper [1] reviews the extensive literature of methods to covertly bridge an air gap.)
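The underlying principle of an optical covert channel like this is just on-off keying: each bit becomes an LED state held for a fixed time slot. A toy round-trip sketch (the actual aIR-Jumper modulation and timing are described in [1]; this shows only the general idea):

```python
# Toy on-off keying, the basic principle behind an IR covert channel:
# each bit of the payload becomes an LED on/off state for one time slot.
# Real attacks add framing, error correction, and careful timing.

def encode(data: bytes):
    """Return a list of 0/1 LED states, one per time slot, MSB first."""
    slots = []
    for byte in data:
        slots += [(byte >> i) & 1 for i in range(7, -1, -1)]
    return slots

def decode(slots):
    """Rebuild bytes from sampled LED states."""
    out = bytearray()
    for i in range(0, len(slots) - 7, 8):
        byte = 0
        for b in slots[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

print(decode(encode(b"exfil")))  # b'exfil'
```

At a few bits per second this is slow, but for exfiltrating passwords or keys, slow is plenty.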

While we’re on the topic of hacking into your network, a research group at the Weizmann Institute of Science has demonstrated yet another way to hack into network enabled LED light bulbs [2]. Actually, there have been many reported ways to take over IoT capable LED lights and similar devices.

However, the Ronen et al. paper demonstrates how to install a worm which takes over all the light bulbs, creating a captive network of thousands of devices.

The attack can start by plugging in a single infected bulb anywhere in the city, and then catastrophically spread everywhere within minutes.

This can allow the intruder to disable all the devices, or possibly to employ them as a bot net for a DDOS attack.


The particular attack exploited a bug in the standard ZigBee protocols, and a method to snatch passwords. They were able to take over and reprogram the firmware of the light bulb. That’s right, your new light bulb has network protocols and passwords that need to be protected. Sigh.

Once compromised, an LED was able to infect nearby LEDs via its onboard network software (i.e., the suborned firmware). Depending on the number and density of lights, this could easily spread to a whole city within minutes.

Aside from the one-of-a-kind bug and side channel attack (of which there will surely be many more in the future), the attack exploits the fact that these devices are controlled via a dedicated low power wireless link with a standard protocol. This network is separate from the Internet or TCP/IP local networks, but it also is not protected by the network security on those networks. The network of LEDs is essentially unmonitored, and difficult to monitor if you wanted to try. This makes it possible for a worm to spread without detection or counteraction.
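The flavor of the epidemic math can be captured in a toy simulation: lamps on a grid, each infected lamp infecting its radio neighbors every tick. All the parameters here are invented for illustration; Ronen et al. model real lamp densities for an actual city.

```python
# Toy epidemic model of a ZigBee worm: one infected bulb is plugged in
# at a corner of an n x n grid of lamps, and each tick every infected
# lamp infects all lamps within `radius` (Chebyshev distance). The grid
# and radius are invented parameters, purely for illustration.
def ticks_to_full_infection(n=20, radius=1):
    lamps = {(x, y) for x in range(n) for y in range(n)}
    infected = {(0, 0)}                  # the single planted bulb
    ticks = 0
    while len(infected) < len(lamps):
        frontier = set()
        for (x, y) in infected:
            for (a, b) in lamps - infected:
                if abs(a - x) <= radius and abs(b - y) <= radius:
                    frontier.add((a, b))
        infected |= frontier
        ticks += 1
    return ticks

print(ticks_to_full_infection())  # 19 for a 20x20 grid, radius 1
```

The takeaway is the scaling: infection time grows only with the diameter of the deployment, not its size, which is why “minutes to a whole city” is plausible.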

The attack also takes advantage of the capability to update the firmware via this network. This feature is almost certainly necessary, given that there are zillions of light bulbs, deployed in inaccessible places, and the devices don’t have human sysadmins to supervise updates. There is little choice but to push updates through the network. But, as Ronen and colleagues show, this process is vulnerable to attack.

I should note in passing that these devices are designed to be reasonably secure. The network communications are designed to only talk to nearby, presumably trusted, neighbors. And the only way to reprogram them is supposed to be a properly signed firmware update. The reported attack subverted both these protections, which goes to show you that it isn’t wise to be overly confident in such confidence building features.

This all would be funny if it weren’t so serious.

I continue to be astonished at the pace at which so-called Internet of Things (IoT) devices are deployed, despite the utter lack of compelling use cases, and the totally-not-ready-yet-ness of the technology.

Why do I need an internet connected light bulb?  You got me.

I would like to “just say no” to these things, but that is becoming impossible. Can I buy a new car that isn’t a rolling computer network, complete with Internet and satellite uplink?  I don’t think I can.

And, as these and many other studies show, these systems are like any other distributed computing system.  No matter how well designed and implemented, they are still going to be hacked.  And the consequences will be gigantic.

  1. Mordechai Guri, Dima Bykhovsky, and Yuval Elovici, aIR-Jumper: Covert Air-Gap Exfiltration/Infiltration via Security Cameras & Infrared (IR). arXiv preprint arXiv:1709.05742 [cs.CR], 2017.
  2. Eyal Ronen, Adi Shamir, Achi-Or Weingarten, and Colin O’Flynn, IoT Goes Nuclear: Creating a ZigBee Chain Reaction, in IEEE Symposium on Security and Privacy (SP), San Jose, CA, 2017. pp. 195-212.


Inaudible Speech Commands Hack Your Home

I’m not a huge fan of speech interfaces, or of Internet connected home assistants a la Alexa in general.

I have already complained that these devices are potentially nasty invaders of privacy and likely to have complicated failure modes, not least due to a plethora of security issues. (Possibly a plethora of plethoras.) I’ve also complained about the psychology of surveillance and instant gratification inherent in the design of these things. Especially for children.

Pretty much exactly what you don’t want in your home.

This fall a group at Zhejiang University reported on yet another potential issue: inaudible voice commands [1].

Contemporary mobile devices have pretty good speakers and microphones, good enough to be dangerous. Advertising agencies and other attackers have begun using inaudible sound beacons to detect the location of otherwise cloaked devices. It is also possible to monkey with the motion sensors on a mobile device, via inaudible sounds.

Basically, these devices are sensitive to sound frequencies that the human user can’t hear, which can be used to secretly communicate with and subvert the device.

Guoming Zhang and colleagues turn this idea onto speech activated assistants, such as Alexa or Siri [1]. They describe a method to encode voice commands into innocent sounds. The humans can’t hear the words, but the computer decodes them and takes them as a voice command.

These devices are capable of almost any operation on the Internet. Sending messages, transferring money, downloading software. The works.

In other words, if this attack succeeds, the hacker can secretly load malware or steal information, unbeknownst to the user.


Combine this with ultrasound beacons, and the world becomes a dangerous place for speech commanded devices.

The researchers argue that

The root cause of inaudible voice commands is that microphones can sense acoustic sounds with a frequency higher than 20 kHz while an ideal microphone should not.

This could be dealt with by deploying better microphones or by software that filters out ultrasound, or detects the difference between voiced commands and the injected commands.
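As a minimal illustration of the software-filtering option: even a crude two-tap moving average completely nulls a tone at the Nyquist frequency (say, a 24 kHz ultrasound carrier sampled at 48 kHz) while passing audible content nearly untouched. A real defense would use a proper low-pass filter; this toy just shows the principle.

```python
# Toy illustration of filtering out ultrasound in software. At a 48 kHz
# sample rate, a 24 kHz (Nyquist) carrier is the alternating sequence
# +1, -1, ... ; averaging adjacent samples cancels it exactly, while an
# audible 1 kHz tone passes through almost unattenuated.
import math

rate = 48_000
n = 480
low   = [math.sin(2 * math.pi * 1_000 * t / rate) for t in range(n)]  # audible tone
high  = [(-1) ** t for t in range(n)]                                  # 24 kHz "carrier"
mixed = [l + h for l, h in zip(low, high)]

# Two-tap moving average: the crudest possible low-pass filter.
filtered = [(a + b) / 2 for a, b in zip(mixed, mixed[1:])]

def power(x):
    """Mean squared amplitude of a signal."""
    return sum(v * v for v in x) / len(x)

print(power(mixed) > 2 * power(filtered))  # True: the carrier energy is gone
```

The harder research problem, as the authors note, is distinguishing a genuine human voice from commands demodulated by the microphone itself, which is why better microphone hardware may be the more robust fix.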

I would add that a second root cause is the number of functions of these devices, and the essentially monolithic design of the system. Recent voice activated assistants are installed as programs on general purpose computers with a complete operating system, and multiple input and output channels, including connections to the Internet. In general, any program may access any channel and perform any computation.

This is a cheap and convenient architecture, but is arguably overpowered for most individual applications. The general purpose monolithic device requires that the software implement complicated security checks, in an attempt to limit privileges. Worse, it requires ordinary users to manage complex configurations, usually without adequate understanding or even awareness.

One approach would be to create smaller, specialized hardware modules, and require explicit communication between modules. I’m thinking of little hardware modules, essentially one to one with apps. Shared resources such as I/O channels will have monitors to mediate access. (This is kind of “the IoT inside every device”.)

This kind of architecture is difficult to build, and painful in the extreme to program. (I’m describing what is generally called a secure operating system.) It might well reduce the number of apps in the world (which is probably a good thing) and increase the cost of devices and apps (which isn’t so good). But it would make personal devices much harder to hack, and a whole lot easier to trust.

  1. Guoming Zhang, Chen Yan, Xiaoyu Ji, Taimin Zhang, Tianchen Zhang, and Wenyuan Xu, DolphinAttack: Inaudible Voice Commands. arXiv, 2017.


IOTA’s Cart Is Way, Way Before the Horse

Earlier I commented on SatoshiPay’s microtransactions switching from Bitcoin to IOTA. Contrary to early hopes, Bitcoin has not been successful as a medium for microtransactions because transaction fees are too high and latency may be too long.

IOTA is designed for the Internet of Things, so it uses a different design than Nakamoto’s, one that is said to be capable of much lower latency and fees. SatoshiPay and other companies are looking at adopting IOTA for payment systems.

The big story is that IOTA is reinventing Bitcoin from the ground up, with its own home grown software and protocols. I described it (IOTA) as “funky” in my earlier post.

It is now clear that this funkiness extended to the implementation, including the cryptographic hashes used [1,2]. This is not a good idea, because you generally want to use really well tested crypto algorithms.

As Narula’s team put it, “So when we noticed that the IOTA developers had written their own hash function, it was a huge red flag.”

Unsurprisingly, Neha Narula reports that their home grown hash function is vulnerable to a very basic attack, with potentially very serious consequences.

The specific problems have been patched, but the fact remains that IOTA seems to be a home made mess of a system.
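To see why “we rolled our own hash” is such a red flag, consider a deliberately silly toy hash. This is emphatically not IOTA’s Curl function (Narula’s team analyze the real one in [2]); it just shows how easily a naive design admits collisions, i.e., two different messages with the same digest.

```python
# Why home-grown hash functions are a red flag, in miniature. This toy
# additive hash (NOT IOTA's Curl) ignores the order of the input bytes,
# so producing collisions is trivial. Real cryptographic hashes are
# designed -- and publicly battle-tested -- to make this infeasible.
def toy_hash(msg: bytes) -> int:
    h = 0
    for b in msg:
        h = (h + b * 31) % 2**32   # order-insensitive: a fatal design flaw
    return h

m1 = b"pay Alice 100"
m2 = b"pay Alice 001"              # same bytes, different order
print(toy_hash(m1) == toy_hash(m2))  # True: a collision
```

In a signature scheme, a collision like this can let an attacker substitute one message for another, which is exactly the class of forgery Narula’s report describes for Curl.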

Narula also notes other funkiness.  For some reason they use super-funky trinary code which, last time I checked, isn’t used by very many computers. Everything has to be interpreted by their custom software which is slow and bulky. More important, this means that their code is completely incompatible with any other system, precluding the use of standard libraries and tools. Such as well tried crypto libraries and software analysis tools.
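For the curious: balanced ternary represents numbers as “trits” with values -1, 0, and +1. A minimal round-trip sketch of the numeral system (this illustrates the representation itself, not IOTA’s actual encoding code):

```python
# Balanced ternary ("trits"), the numeral system IOTA builds on.
# Least-significant trit first. This is a sketch of the representation,
# not IOTA's implementation.
def to_trits(n: int):
    trits = []
    while n:
        r = n % 3
        if r == 2:          # digit 2 becomes -1 with a carry to the next trit
            r = -1
            n += 1
        trits.append(r)
        n //= 3
    return trits or [0]

def from_trits(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

print(to_trits(8), from_trits(to_trits(8)))  # [-1, 0, 1] 8
```

Elegant, perhaps, but every value has to be marshalled between this and the binary world, which is part of why the custom tooling is slow, bulky, and incompatible with standard libraries.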

I have no idea why you would do this, especially in a system that you want to be secure and trusted.

The amazing thing is not the funkiness of the software. There is plenty of funky software out there. The amazing thing is that lots of supposedly competent companies have invested money and adopted the software. As Narula says, it “should probably have been a huge red flag for anyone involved with IOTA.”

How could they get so much funding, yet only now people are noticing these really basic questions?

It is possible that these critiques are finally having some effect. Daniel Palmer reports that the exchange rate of IOTA’s tokens (naturally, they have their own cryptocurrency, too) has been dropping like a woozy pigeon [3]. Perhaps some of their partners have finally noticed the red flags.

The part I find really hard to understand is how people could toss millions of dollars at this technology without noticing that it has so many problems. Aren’t there any grown ups supervising this playground?

I assume IOTA have a heck of a sales pitch.

Judging from what I’ve seen, they are selling IOTA as “the same thing as Bitcoin, only better”. IOTA certainly isn’t the same design as Bitcoin, and it also does not use the same well-tested code.  I note that a key selling point is “free” transactions, which sounds suspiciously like a free lunch. Which there ain’t no.

IOTA’s claims are so amazingly good, I fear that they are too good to be true.

Which is the biggest red flag of all.

  1. Neha Narula, Cryptographic vulnerabilities in IOTA, in Medium. 2017.
  2. Neha Narula, IOTA Vulnerability Report: Cryptanalysis of the Curl Hash Function Enabling Practical Signature Forgery Attacks on the IOTA Cryptocurrency. 2017.
  3. Daniel Palmer, Broken Hash Crash? IOTA’s Price Keeps Dropping on Tech Critique, in CoinDesk. September 8, 2017.
  4. Dominik Schiener, A Primer on IOTA (with Presentation), in IOTA Blog. 2017.


Cryptocurrency Thursday

Yet Another IOT Security Problem

One of the hottest trends these days is the Internet of Things, which aims to install zillions of network connected devices everywhere, including your home. Unsupervised microphones, cameras, and sensors, connected to the Internet, listening and watching you at all times. What could possibly go wrong?

This summer a group from the University of Washington reported on yet another jaw dropping technology: motion detection to track what you are doing, which they call CovertBand [2].

This technique uses active sonar, broadcasting sound and listening for the echoes. Any device with a speaker and microphone could do this, in principle. “Smart” TVs and assistants such as Alexa, for instance.

The technology is sneaky, because it uses the idea of steganography (which we knew was going to be important way back when [1]). The sonar chirps are concealed in other sounds, such as music. That pop music ear worm you downloaded is not only rotting your brain, it might also be snooping on you!
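To get a feel for the principle, consider this toy (it is not CovertBand’s actual FMCW sonar processing, and every parameter is invented): a quiet high-frequency chirp mixed under music adds almost nothing audible, yet a correlator that knows the chirp picks it out trivially.

```python
# Toy sketch of hiding a sonar chirp under music. A real system would
# transmit the chirp and process its echoes (FMCW sonar); here we only
# show that a -26 dB chirp at 18-20 kHz is easy to detect by matched
# correlation even though it is buried under a much louder "song".
import math

rate, dur = 48_000, 0.05
n = int(rate * dur)
music = [math.sin(2 * math.pi * 440 * t / rate) for t in range(n)]  # a 440 Hz "song"

def chirp(t):
    f = 18_000 + (20_000 - 18_000) * t / n      # 18 -> 20 kHz sweep
    return math.sin(2 * math.pi * f * t / rate)

probe = [chirp(t) for t in range(n)]
mixed = [m + 0.05 * p for m, p in zip(music, probe)]  # chirp 26 dB below the music

# Matched-filter correlation against the known chirp.
corr_mixed = sum(x * p for x, p in zip(mixed, probe))
corr_music = sum(x * p for x, p in zip(music, probe))
print(corr_mixed - corr_music > 50)  # True: the hidden chirp stands out
```

The chirp energy is far above the hearing range of most adults, so the “song” sounds unchanged to the victim.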


The paper reports detailed studies which demonstrate considerable abilities to covertly monitor activities, even through barriers. It’s quite impressive.

I’ll note that the researchers suggest three motivating scenarios for using this technology. (This kind of list is conventionally required in academic papers about security.) The use cases they identify are:

  • National intelligence
  • Vigilante Justice
  • Remote Hacking of Phones and Smart TVs

The important point is that there are no socially positive use cases, at least for normal, law abiding civilians. This is purely wicked.

The researchers identify counter measures, which include sound proofing and jamming the sonar signals. The latter can be done with a mobile phone app, so we may soon see people setting up their phone to check and block such snooping!

Surprisingly, the researchers do not consider other defensive measures, such as not installing such devices in private areas, not connecting them to the Internet, or engineering the speakers and microphones so they cannot be used in this way.

In a sense, this attack is made possible by the fact that these devices have vastly more capability than is needed most of the time. It might be better to engineer the devices to have “just enough” resolution to do their work.

In another sense, this attack is made possible by the fact that these devices wrap a whole bunch of functions in one device, with common memory and so on. Including speech generation, speech detection, and multichannel music playing in one device might be convenient, but it isn’t necessary. It could be three simpler devices communicating by simple, easier to secure, channels. This would be harder to build, but much, much harder to hack.

And, following Bob’s Rule for home devices, every IOT device should have a prominent “Off” switch that really works.

This is a really nice piece of work. Well done, all.

  1. Adam D. Cain, Text Steganography, in Electrical Engineering. 1996, University of Illinois at Urbana-Champaign: Urbana.
  2. Rajalakshmi Nandakumar, Alex Takakuwa, Tadayoshi Kohno, and Shyamnath Gollakota, CovertBand: Activity Information Leakage using Music (to appear). Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2017.
  3. James Urton, Computer scientists use music to covertly track body movements, activity, in UW News. 2017.


Is “Cute” Enough for a Robot?

In the great rush to create home robots, it seems that 1,000 flowers are blooming. Many different robots are being tried, combining the basic core of features with different appearances and artificial personalities.

One of this year’s models is ‘Kuri’, which is designed to be simple and cute. It understands speech commands, but “speaks robot”—not synthesized speech, but “cute” beeps and buzzes.

As far as I can tell, it does nothing that a computer or tablet or Alexa can’t do, except in a “friendly”, autonomously mobile package.

It seems that Kuri wanders around your house with its cute face and twin HD cameras. These can live stream over the Internet, to “be your eyes” when you’re away. Kuri also has microphones, of course, to capture sounds and conversations. Kuri will “investigate” unusual sounds. It has speakers, so you can play music, and yell at your baby sister.

This little guy is supposed to “bring joy to your house”. As far as I can tell, the main feature of Kuri is “cuteness”. Is this enough?

Well maybe.

Unfortunately, Kuri has gone way off the rails with a new feature, “autonomous video”.

Basically, as Kuri wanders around mapping your house, listening to you, and generally being cute, it will record videos.

The results of this snooping are sent to you (or at least to whoever controls Kuri), where you can select ones that you like. Supposedly, Kuri uses this feedback to learn what you like, and thereby to construct a pleasing selfie video of your house.

Who doesn’t want that?

Well, me, obviously.  But, who asked for this feature, anyway???

I have no idea why I would ever want a “daily dose of short ‘life at home’ videos”. I mean, if there is any place I don’t need to visit virtually, it’s the place that I live physically.

But if I did want it, I don’t want an Internet connected device streaming video out of my house to the Internet. And I really don’t want an “autonomous” camera wandering around unpredictably recording my private life.

It’s Alexa on wheels. Eeek.

“Turn it off” doesn’t even begin to cover it.

I’ll add a couple of other points that Kuri brings to mind.

Like many contemporary robots, Kuri does some simple psychological tricks to indicate that he (apparently Kuri is male) is listening. It looks up, it looks ‘happy’, it makes ‘eye contact’ (more or less). This is “cute” in the same way as a pet may be “cute”, and for the same reason—you are projecting human personality onto a non-human actor.

This is probably good marketing, but there is some weird psychology going on here, especially if kids are involved.

First of all:  No, Kuri doesn’t actually like you. It isn’t capable of feelings of any kind.

The head and eye gestures raise the interesting question of whether people will tend to mirror these inhuman movements in the same way that they tend to mirror other people as they interact. And will children develop weird behavioral patterns from living with a household robot?  Who knows.

Then there is Kuri’s gaze.

It is becoming common to put cameras behind features that look like human eyes. Kuri has a very abstract but unmistakable analog to a human head and face, and the eyes are where the cameras are. This is a physical analogy to human senses, but it has a sort of perverse twist to it. While a person or a dog sees you with their eyes, a robot is usually recording and streaming with its eyes. This mismatch means that you may unconsciously overlook the invasiveness of those robot eyes (which are really web cams), or perhaps edge toward paranoia about other people’s eyes (which are not web cams).

These “uncanny” confusions are hardly unique to Kuri, though the “cuter” the robot the more powerful the psychological illusions.

Is “cute” a good thing for a robot to be? I’m not so sure.

  1. Alyssa Pagano, Kuri Robot Brings Autonomous Video to a Home Near You, in IEEE Spectrum -Automation. 2017.


Robot Wednesday Friday

Hacking the Grid via Solar Panels

It seems there is a continuous stream of computer security vulnerabilities (from your USB hub to synthetic DNA and all modalities in between), and the still unresolved challenges of the Internet of Things (IoT), which promises to enlarge the (just barely working) Internet by orders of magnitude.

This month there is discussion that these issues affect the Solar Power industry as well.

In particular, small scale PV systems that are connected to the Grid may be vulnerable to hacking. In a student project, Willem Westerhof discovered security flaws in a consumer market PV inverter, which connects the home system to the power grid. He then sketched a scenario in which determined hackers could take over large numbers of these systems, and then orchestrate power fluctuations that would crash wide areas of the power grid.

I have not found many details of the vulnerabilities, though it would be a remarkable system indeed to not have any security weaknesses. And, like the rest of the IoT, these systems are deployed in the hands and homes of ordinary people, who are in no position to investigate or fix the software. In addition, it appears that these systems are, for whatever reasons, connected to the Internet, and therefore vulnerable to network hacking.

In short, it is extremely plausible that home PV systems are hackable.

Westerhof works out what he calls “The Horus Scenario”, which is a worst case episode. Assuming that all the installed PV systems have similar vulnerabilities, a determined hacker could penetrate and gain control over large numbers of the systems. This would enable the hacker to turn the flow to the grid off and on.

The devastating attack involves simultaneous shutdown of large numbers of PV systems, resulting in a dramatic and near instantaneous drop in available power. This would unbalance the grid and likely force shutdowns—sudden, widespread blackouts.

One reason this attack is possible is that, at least in Europe, a significant fraction of the total generating power is from PV. Knocking out one or a few homes would have minimal effects, but knocking out 10 or 20% of the generating power in a few minutes without warning is a fatal problem.
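Back-of-the-envelope arithmetic shows why the percentages matter. Every number below is purely illustrative (they are not Westerhof’s figures), but the shape of the calculation is the point: the lost generation dwarfs the reserve that grids hold for sudden outages.

```python
# Back-of-the-envelope sketch of the Horus scenario. All figures are
# invented for illustration -- they are NOT Westerhof's numbers.
total_load_mw   = 300_000   # assumed continental midday load
pv_share        = 0.20      # assumed PV fraction of generation at midday
hacked_fraction = 0.50      # assumed fraction of inverters the attacker trips

lost_mw = total_load_mw * pv_share * hacked_fraction

# Grids typically size primary reserve for the loss of roughly one
# large plant -- a few thousand MW, not tens of thousands.
reserve_mw = 3_000          # assumed reserve

print(lost_mw, lost_mw > reserve_mw)  # 30000.0 True
```

A simultaneous loss an order of magnitude beyond the reserve is exactly the kind of imbalance that cascades into wide blackouts.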

This is clearly a possibility, and a very serious potential threat. Even if only a fraction of PV inverters were successfully attacked in this way, it would probably be a serious catastrophe.

It is important to note that this problem has little to do with solar energy per se. The problems stem from connecting a device to both critical infrastructure and the Internet at the same time. This is a concern for the IoT overall. Connecting lots of Internet capable devices to each other and to utilities is surely a bad idea, especially in the wild and unsupervised environment of ordinary homes.

Glancing at the vulnerabilities that have been reported, they are mostly garden variety Internet break ins. (I mean, one of the vulnerabilities is a data overrun via the TELNET port, for goodness sake.) Which leads to the question, why are these things connected to the Internet? I assume there are reasons, but maybe this should be reconsidered.

I get rather nervous that this is reported as “Hackers ‘could target electricity grid’ via solar panel tech,” which seems likely to play into the hands of the power monopolies and the fossil fuel industry. It will provide yet more misleading propaganda to be used to roll back local generation initiatives.

As I said, this is more about Internet security than solar energy.

That said, I would strongly encourage PV equipment makers to step up their game. If you want to be part of vital infrastructure, then you have to design the systems to be as fail safe as possible.

  1. Chris Baraniuk, Hackers ‘could target electricity grid’ via solar panel tech, in BBC News – Technology. 2017.
  2. The Horus Scenario – Exploiting a weak spot in the power grid. 2017.