Category Archives: “Inappropriate Touch Screen”

Shock Report: IoT Home Devices Are Not Secure

I’ve complained about the bad design of consumer “IoT”—“let’s slap a touchscreen on ‘X’ and connect it to the Internet”—and noted grievous security and privacy issues.

As far as I’m concerned, these beasts are a Bad Idea ™ even when they are working as intended.

But, of course, the odds of them working correctly are vanishingly small.

This winter, a team of researchers at Ben-Gurion University of the Negev reported on studies of consumer IoT devices including baby monitors, doorbells, and a thermostat [1]. The paper sketches their cookbook of methods to analyze, break into, and take over these and similar devices.  This is a great tutorial on just how vulnerable your device really is!

They start by opening the cover, which is usually possible without disabling the device.  Visual inspection reveals the components, which, of course, have easy-to-read identification printed on them.  The hackers can easily identify the juicy memory modules and gaping access ports to go after.

The crucial step is, as they say, “Extraction of Firmware and Data”.  The paper makes clear just how naked a computer is when the cover is off and it is in the enemy’s hands. Simple techniques sufficed to capture passwords and take control of the system.  Ultimately, one way or another, they were able to extract the firmware for analysis.  The game is pretty much over at this point.

It is interesting that the widespread use of Linux versions in IoT means that a captured firmware image can be analyzed with standard Unix utilities which have been around since I was a young programmer.  For example, IoT devices which use Linux password protection can be conveniently analyzed on any Linux system, because the libraries are the same.  Of course, brute force cracking can be slowed by good passwords, but they report relatively easy success with these IoT devices. Besides breaking passwords, analysis of the firmware can also reveal any number of other vulnerabilities.
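The cracking step itself is just a dictionary attack: hash each candidate password and compare against the hash recovered from the firmware. A minimal sketch of the idea (using SHA-256 purely for illustration; real /etc/shadow entries use crypt-style formats, and the researchers used standard cracking tools, not this toy):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Try each candidate password; return the one that hashes to the target."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # no candidate matched

# A weak password hash as it might be recovered from a firmware image
# (hypothetical value, hashed here just to set up the demo).
target = hashlib.sha256(b"admin123").hexdigest()
print(dictionary_attack(target, ["password", "123456", "admin123", "letmein"]))
# → admin123
```

With the short default passwords these devices ship with, a modest wordlist is all it takes.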

Overall, they were able to break passwords in a number of devices (and found one null password—oops!).  In some cases, they found open network access ports (telnet and FTP), or were able to set them up (because they had root access).  Yoiks.  They found WiFi credentials, they found hard coded private keys. Double Yoiks!
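Finding those open telnet and FTP ports requires nothing fancier than a TCP connect scan. A sketch (the device address is hypothetical; point it at your own gadget's LAN address):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Check for the classic unprotected services: FTP (21), SSH (22), telnet (23).
# Substitute your device's address for localhost.
print(scan_ports("127.0.0.1", [21, 22, 23, 80]))
```

If 23 comes back open on a consumer gadget, it is very likely a debug telnet service with a default (or null) password.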

They also found strong indications that some of the devices were rebranded, i.e., built on the same components as other devices.  In this case, the shared hardware and software also shared passwords and vulnerabilities.  In any case, there are many replicates of each type of IoT device, so breaking into one potentially compromises whole swathes of the network.

The authors make some completely straightforward recommendations for producers:

  • Disable the ports
  • Use good passwords
  • Encrypt as much data as feasible!

Users can do little except not use these things!

The authors note that economic forces press manufacturers to create cheap products, cutting corners on security. It is also true that many designers have inadequate understanding of security issues.

I’ll add that these problems don’t stem from devices that are too dumb.  Quite the contrary. The problem is that very capable processors with a full Linux operating system are cheap and ubiquitous.  The success of these technologies has a perverse effect on design, lifting constraints and allowing complex and insecure systems to be built.  These devices are far too capable for their intended use.

It is also interesting to note that these ubiquitous and mostly open source technologies are so widely used they are becoming a sort of technical monoculture. The same vulnerabilities are found in many different products because they are all built from a very limited “gene pool” of cheap technology.

Open source development is often put forward as a good approach to creating trustworthy and secure software.  The more eyes, the better the software.  And open software can’t have secret back doors or other shenanigans in it.

But, as we see in IoT, relying on open source software also means that everyone is using the same software, with the risk that serious bugs will be ubiquitous throughout a software monoculture.

Overall, it’s not a comforting picture.  And there isn’t much a consumer can do, other than Turn It Off.

  1. Omer Shwartz, Yael Mathov, Michael Bohadana, Yuval Elovici, and Yossi Oren. Opening Pandora’s Box: Effective Techniques for Reverse Engineering IoT Devices. In Smart Card Research and Advanced Applications, 2018, 1-21.


IoT Isn’t Even Close To Trustworthy

I have already beefed about the current version of the Internet of Things, which is of dubious value and badly engineered, to boot.

The most visible face of these developments is the network-connected home “Assistant”, such as Alexa, Siri, Google Home, and so on. Aside from the extremely questionable rationale (Why do I need a voice interface to my refrigerator? Why do I need my refrigerator to connect to the entire freaking Internet?), there are famous cases that illustrate that these beasts are deeply invasive.

Last fall, Hyunji Chung and colleagues at the US National Institute of Standards and Technology (NIST) wrote about the trustworthiness of these systems [1].

“[S]uch interactions should be solely between you and the device assisting you. But are they? How do you know for sure?” (p. 100)

These are complicated, network connected systems which are not trivial to understand and evaluate. But they are in our homes, so everyone needs to know just how far to trust them.

The researchers sketch the “ecosystem” of network connected components and services. The very fact that they are complex enough to warrant the term, “ecosystem”, is the fundamental problem.

“[W]e performed cloud-native artifact analysis, packet analysis, voice-command tests, application analysis, and firmware analysis” (p. 101)

Uh, oh. Does anyone besides me see a problem with deploying such a system unsupervised in private homes?

The threat envelope is huge. The basic logic of the assistant is implemented mainly in “the cloud”, with components on local devices that communicate with the cloud. Many assistants have third-party apps as well. They report that the Alexa “Skill Store” has 10,000 such voice-actuated apps.

The point of the analysis is, of course, risk assessment. They identify many, many risks—basically, everything that might threaten the Internet.

  • Wiretapping
  • Compromised devices
  • Malicious voice commands
  • Eavesdropping

Wireless communication is, of course, a weakness. The researchers report the appalling fact that not all the communications are encrypted. Even when encrypted, traffic sniffing can still reveal considerable information about the devices and users.
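Why does encrypted traffic still leak information? Because packet sizes and timing form a fingerprint all by themselves. Here is a toy illustration of the idea (the signatures and sizes are entirely made up, not from the paper): matching an observed burst of encrypted-packet sizes against known device signatures.

```python
# Toy traffic-analysis sketch: identify a device purely from the sizes of
# its (encrypted) packets. All signatures here are invented for illustration.
SIGNATURES = {
    "baby_monitor": [60, 1514, 1514, 60],
    "doorbell":     [60, 120, 60],
    "thermostat":   [60, 98, 98, 60],
}

def identify(observed):
    """Return the names of devices whose size signature appears in the capture."""
    matches = []
    for name, sig in SIGNATURES.items():
        for i in range(len(observed) - len(sig) + 1):
            if observed[i:i + len(sig)] == sig:
                matches.append(name)
                break
    return matches

print(identify([60, 98, 98, 60, 120]))  # → ['thermostat']
```

No decryption required: the eavesdropper learns what devices you own and, from the timing of the bursts, when you are home and active.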

Obviously, devices may be hacked. In this case, there is no expert IT department to defend the network, detect intrusions, or patch bugs. One has to think that home devices are relatively defenseless, and certain to be cracked over time.

One reason I don’t like voice commands is that they are hard to secure. Even the best voice recognition systems are vulnerable to mistakes, and low-cost, consumer-maintained systems probably aren’t top of the line. (And who wants your Alexa to reject commands because it isn’t certain that you are really you?)

And, of course, every link is a potential channel for someone to listen in on your life.

This article makes clear that these systems have a lot of potential issues, even if they are configured correctly and work as designed. Unfortunately, personal and home devices are not likely to be carefully configured or monitored. I have a PhD in computer science and have done my share of sysadmin work, and I have not the remotest clue how to set up and secure one of these systems.

These researchers carefully don’t answer the question, “can I trust you?” But it is very clear that the answer is “no”.

I’m afraid that people are taking these devices on faith. They are sold as appliances, and they look like appliances, so they must be as safe as a consumer appliance, right?

Well, no.

This is a really great article, and everyone should read it before turning on any cloud service, let alone installing an “assistant” in their home.

And if you don’t understand what this article says, then you definitely shouldn’t install one of these assistants in your home.

  1. Hyunji Chung, Michaela Iorga, Jeffrey Voas, and Sangjin Lee, “Alexa, Can I Trust You?”. Computer, 50 (9):100-104, 2017.

Close Reading Apps: Brilliantly Executed BS

One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not many millions—that do nothing at all, but look great. Some of them are flat out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are just flat out nonsense, in a pretty package. (I blame my own profession for creating such excellent software development environments.)

The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings.  Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?

Today’s examples are two apps that pretend to do social psychology: Crystal (“Become a better communicator”) and Knack (“for unlocking the world’s potential”).


“Hair Coach”–with App

In recent years, CES has become an undisputed epicenter of gadgets, so I can’t let the occasion pass without at least one addition to the Inappropriate Touch Screen Files.

I’ll skip the boneheaded “Catspad”, which isn’t particularly new, and certainly makes you wonder who would want this.

I think the winner for today is the “Hair Coach”, which uses a “Smart Hair Brush” to offer you “coaching” on your hair care.

The brush itself has a microphone to listen to the hair as it is brushed (which I think is slightly cool—some kind of machine learning using the crackle of your hair), accelerometers in the brush to detect your technique (and, for the mathematically challenged, count your strokes). It also has a vibrator to provide haptic feedback (to train you to brush your hair more optimally?).
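Counting strokes from accelerometer data is a simple threshold-crossing exercise. A sketch of how such a brush might do it (this is my guess at the approach, with synthetic data; nothing here comes from the product's actual firmware):

```python
def count_strokes(accel, threshold=1.5):
    """Count brush strokes as upward crossings of an acceleration threshold."""
    strokes = 0
    above = False
    for a in accel:
        if a > threshold and not above:
            strokes += 1   # a new burst of motion begins: one stroke
            above = True
        elif a <= threshold:
            above = False  # motion subsided; ready for the next stroke
    return strokes

# Synthetic accelerometer magnitudes: three bursts of motion = three strokes.
samples = [0.1, 0.2, 2.1, 2.4, 0.3, 0.1, 1.9, 2.2, 0.2, 0.1, 2.5, 0.3]
print(count_strokes(samples))  # → 3
```

In other words, a few lines of signal processing, dressed up as a “coach”.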

Of course, no product would be complete without a mobile app: “the simple act of brushing begins the data collection process.” The app is supposed to give you “personalized tips and real-time product recommendations”. The latter are basically advertisements.

I will note that the materials on the web offer absolutely no indication that any of this “optimization” actually does anything at all, other than increase profits (they hope).

This product caught my eye as particularly egregious “inappropriate touch screen”, because this is clearly a case of a non-solution chasing a non-problem. (Of course, most of the “hair care” industry is non-solutions to non-problems.)

My own view is that the simple and millennia old technology of a hairbrush was not actually broken, or in need of digital augmentation. Worse, this technology actually threatens one of the small pleasures of life. The soothing, sensual brushing of your own hair can be a simple and comforting personal ritual, a respite from the cares of the day.

Adding a digital app (and advertising) breaks the calm of brushing, digitally snooping and “optimizing”, and pulling your attention away from the experience and toward the screen—with all its distractions. How is this good for you?

Add this to the Inappropriate Touch Screen Files.


Inappropriate Touch Screen

CES: Lots of Voice Recognition

The annual Consumer Electronics Show (CES) is always a rich source of blog-fodder. It is, after all, densely packed with Inappropriate Touch Screen Interfaces and hundreds of “why would anyone want this” gadgets.

This year everyone is remarking on the plethora of IoT devices, including the canonical toasters and refrigerators. Sigh.

Another trend is the explosion of voice recognition. As Amy Nordrum says, it is “The Year of Voice Recognition”. Driven by improved accuracy, voice recognition is already out there (I’ve been seeing ads for Amazon and Google home assistants on TV). But it is sure to show up in lots of products, no doubt including toasters and refrigerators. Sigh.

In one sense, this is a reasonable response to the plague of Inappropriate Touch Screens I have complained about. Talking to your toaster via your mobile device is dubious design, and, as Tekla S. Perry says, “For years now, the consumer electronics industry has been trying to sell slightly intelligent Internet-connected appliances that you can control from your smart phone—and not gotten very far.”

So, the thinking goes, let’s replace that stupid idea with a voice interface, which is hands free and possibly more natural for the in-home setting. And Perry has a point when she says that this approach moves the center of the home away from the TV and into the kitchen. “[A]s has been true since the beginnings of civilization, the heart of the home will be the hearth.”

By now, we have all seen the current generation of chatty, “friendly” digital assistant, so it is easy to imagine them infesting our appliances. If you are happy to search for restaurants or call up a play list by voice command to your phone, you’ll probably be content telling your toaster to make toast, or your refrigerator to order more sprouts.

You have probably noticed that I’m not a huge fan of voice interfaces.  If people could always speak clearly, honestly, and unambiguously, we wouldn’t need lawyers or psychotherapists.  If people always understood what is said to them, we wouldn’t have divorces or wars.

As far as I can tell, this technology seems to depend on being internet connected, so the assistant software can reside in “the cloud” somewhere. There are so many implications of this architecture that I won’t go into it now. Suffice it to say that I am not enthusiastic about having the Internet listening to my family conversations. As Evan Ackerman comments, CES is full of “appliances that spy on you in as many different ways as they possibly can”.

Finally, we might wonder just how such devices might affect family life. Innocent gadgets can have profound impacts on our attention, interpersonal relations, and family life. We need look no farther than the example of TV and the mobile phone to see how disruptive a chatty refrigerator might turn out to be.

Have these devices been field tested? Do we have any idea what the side effects might be? Just how benign is this technology? Is it safe for children? Is it good for children?

For example, I’m imagining that children will quickly learn that the world is supposed to respond to your commands, instantly and without argument, so long as it is prefixed by “OK Google”.

“OK, Mom. Give me my cereal now.”

“OK Dad, buy me a new Xbox.”

Is this a good lesson to teach your children?

Deltu Robot – Tres Cool!

Designer Alexia Léchot of Ecole cantonale d’art de Lausanne (ECAL) presents Deltu, a strange and useless little robot that wants to play with you.

First of all, I love the completely non-humanoid body for this robot. It’s not human, so why should it try to look human? And it is flat out fascinating to watch how it operates, elegant and simple.

But…even if he (this appears to be the preferred pronoun) doesn’t look human, he exhibits a very definite personality and intelligence, apparently desiring attention from humans and wanting to best us at his games. So cool!

I admit that I was astonished by the way the robot can manipulate the touch screens. Apparently his artificial finger is close enough to human to work.  I wonder how that is made.

It is also kind of cool that the robot takes selfies—very useful for documenting the project. (He can probably make phone calls and purchases, too.  And boast of his victories on Twitter. Why not?) I guess this would be “humanoid” but not “intelligent” behavior, no?

What a lovely project! Very nice work.  I look forward to great things from Sensei Léchot.

By the way, I rule that this robot definitely is not an inappropriate use of a touchscreen!



Robot Wednesday

Robot Furniture?

Inspired by MIT research, Ori (from “Origami”) “architectural robotics” reconfigures a small apartment into different configurations. The bed slides away, a desk slides out, the wall slides over to make more living room when the bed is not in use.

This is described as “modular and scalable mechatronic”, though it is triggered by pushing a button. The only “automation” I can find mentioned is presets, a la a thermostat.

Oh, and, of course, an “app to reconfigure the unit from anywhere in the world.”  Sigh.

I’m trying to find the innovation here.

Murphy beds and other fold-out/slide-away furniture have been around since, well, forever. (They work fine without a motor, if you design them well.) The motorized sliding wall looks pretty much like the compact shelving my local library installed decades ago. The interface even looks similar.

I have to wonder if this could possibly be worth the expense and complexity. I’m sure you could make it work without the motors. In fact, it had better work without the motors; otherwise your home would become unusable in a power failure.

I hate to think of the failure modes, jammed or failed motors, debris, junk, or toddlers in the way of the works. Spilled drinks. Real life is a lot messier than architectural renderings.

In the end, this is just barely robotic. And with the silly app, I’m going to have to consign this to the Inappropriate Touch Screen File.


Robot (?) Wednesday