Category Archives: “Inappropriate Touch Screen”

IoT Isn’t Even Close To Trustworthy

I have already beefed about the current version of the Internet of Things, which is of dubious value and badly engineered, to boot. (Here, here, here, here, here, here)

The most visible face of these developments is the network-connected home “assistant”, such as Alexa, Siri, or Google Home. Aside from the extremely questionable rationale (Why do I need a voice interface to my refrigerator? Why do I need my refrigerator to connect to the entire freaking Internet?) there are famous cases that illustrate that these beasts are deeply invasive.

Last fall, Hyunji Chung and colleagues at the US National Institute of Standards and Technology (NIST) wrote about the trustworthiness of these systems.

“[S]uch interactions should be solely between you and the device assisting you. But are they? How do you know for sure?” (p. 100)

These are complicated, network connected systems which are not trivial to understand and evaluate. But they are in our homes, so everyone needs to know just how far to trust them.

The researchers sketch the “ecosystem” of network connected components and services. The very fact that they are complex enough to warrant the term, “ecosystem”, is the fundamental problem.

“[W]e performed cloud-native artifact analysis, packet analysis, voice-command tests, application analysis, and firmware analysis” (p. 101)

Uh, oh. Does anyone besides me see a problem with deploying such a system unsupervised in private homes?

The threat envelope is huge. The basic logic of the assistant is implemented mainly in “the cloud”, with components on local devices that communicate with the cloud. Many assistants have third-party apps as well. They report that the Alexa “Skill Store” has 10,000 such voice-actuated apps.

The point of the analysis is, of course, risk assessment. They identify many, many risks—basically, everything that might threaten the Internet.

  • Wiretapping
  • Compromised devices
  • Malicious voice commands
  • Eavesdropping

Wireless communication is, of course, a weakness. The researchers report the appalling fact that not all the communications are encrypted. Even when encrypted, traffic sniffing can still reveal considerable information about the devices and users.
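
To see why encryption alone doesn’t help, consider that a passive sniffer still sees the sizes and timing of every packet. As a deliberately simplistic sketch (the device names and packet-size “signatures” here are invented for illustration, not taken from the article — real traffic analysis uses richer features like timing, direction, and DNS queries), matching observed packet lengths against known per-device patterns might look like:

```python
from collections import Counter

# Hypothetical per-device "signatures": typical encrypted payload sizes in bytes.
# The point: even with the content encrypted, metadata can identify the device.
SIGNATURES = {
    "voice-assistant": {172, 556, 1024},
    "smart-plug": {60, 84},
}

def guess_device(observed_sizes):
    """Return the signature with the most packet-size matches, or None."""
    scores = Counter()
    for name, sizes in SIGNATURES.items():
        scores[name] = sum(1 for s in observed_sizes if s in sizes)
    best, hits = scores.most_common(1)[0]
    return best if hits > 0 else None

# A sniffer on the home network sees only encrypted packet sizes -- yet:
print(guess_device([556, 1024, 172, 40]))  # -> voice-assistant
```

Nothing in that sketch required breaking the encryption; the leak is entirely in the metadata.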

Obviously, devices may be hacked. In this case, there is no expert IT department to defend the network, detect intrusions, or patch bugs. One has to think that home devices are relatively defenseless, and certain to be cracked over time.

One reason I don’t like voice commands is that they are hard to secure. Even the best voice recognition systems are vulnerable to mistakes, and low-cost, consumer-maintained systems probably aren’t top of the line. (And who wants their Alexa to reject commands because it isn’t certain that you are really you?)

And, of course, every link is a potential channel for someone to listen in on your life.

This article makes clear that these systems have a lot of potential issues, even if they are configured correctly and work as designed. Unfortunately, personal and home devices are not likely to be carefully configured or monitored. I have a PhD in computer science and have done my share of sysadmin, and I have not the remotest clue how to set up and maintain one of these systems.

These researchers carefully don’t answer the question, “can I trust you?” But it is very clear that the answer is “no”.

I’m afraid that people are taking these devices on faith. They are sold as appliances, and they look like appliances, so they must be as safe as a consumer appliance, right?

Well, no.

This is a really great article, and everyone should read it before turning on any cloud service, let alone installing an “assistant” in their home.

And if you don’t understand what this article says, then you definitely shouldn’t install one of these assistants in your home.


  1. Hyunji Chung, Michaela Iorga, Jeffrey Voas, and Sangjin Lee, “Alexa, Can I Trust You?” Computer, 50(9):100–104, 2017. https://www.computer.org/csdl/mags/co/2017/09/mco2017090100-abs.html

Close Reading Apps: Brilliantly Executed BS

One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not many millions—that do nothing at all, but look great. Some of them are flat out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are just flat out nonsense, in a pretty package. (I blame my own profession for creating such excellent software development environments.)

The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings.  Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?

Today’s examples are two apps that pretend to do social psychology: Crystal (“Become a better communicator”) and Knack (“for unlocking the world’s potential”).


“Hair Coach”–with App

In recent years, CES has become the undisputed epicenter of gadgets, so I can’t let the occasion pass without at least one addition to the Inappropriate Touch Screen Files.

I’ll skip the boneheaded “Catspad”, which isn’t particularly new, and certainly makes you wonder who would want this.

I think the winner for today is the “Hair Coach”, which uses a “Smart Hair Brush” to offer you “coaching” on your hair care.

The brush itself has a microphone to listen to the hair as it is brushed (which I think is slightly cool—some kind of machine learning using the crackle of your hair), accelerometers in the brush to detect your technique (and, for the mathematically challenged, count your strokes). It also has a vibrator to provide haptic feedback (to train you to brush your hair more optimally?).
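
For what it’s worth, the stroke-counting part is technically trivial. A minimal sketch (the threshold and the synthetic data below are invented for illustration and have nothing to do with the actual product) of counting brush strokes from accelerometer readings:

```python
import math

def count_strokes(samples, threshold=1.5):
    """Count brush strokes as upward crossings of an acceleration threshold.

    samples: list of (ax, ay, az) accelerometer readings in g.
    A stroke is counted each time the magnitude rises above the threshold
    after having been below it (a simple crossing detector).
    """
    strokes = 0
    below = True
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if below and mag > threshold:
            strokes += 1
            below = False
        elif mag <= threshold:
            below = True
    return strokes

# Synthetic trace: three bursts of motion separated by rest (~1 g at rest).
rest, burst = (0.0, 0.0, 1.0), (1.5, 1.0, 1.0)
trace = [rest, burst, rest, rest, burst, rest, burst, rest]
print(count_strokes(trace))  # -> 3
```

In other words, the “mathematically challenged” could get the stroke count from a twenty-line loop; the machine-learning-on-hair-crackle part is the only genuinely novel bit.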

Of course, no product would be complete without a mobile app: “the simple act of brushing begins the data collection process.” The app is supposed to give you “personalized tips and real-time product recommendations”. The latter are basically advertisements.

I will note that the materials on the web offer absolutely no indication that any of this “optimization” actually does anything at all, other than increase profits (they hope).

This product caught my eye as particularly egregious “inappropriate touch screen”, because this is clearly a case of a non-solution chasing a non-problem. (Of course, most of the “hair care” industry is non-solutions to non-problems.)

My own view is that the simple and millennia old technology of a hairbrush was not actually broken, or in need of digital augmentation. Worse, this technology actually threatens one of the small pleasures of life. The soothing, sensual brushing of your own hair can be a simple and comforting personal ritual, a respite from the cares of the day.

Adding a digital app (and advertising) breaks the calm of brushing, digitally snooping and “optimizing”, and pulling your attention away from the experience and toward the screen—with all its distractions. How is this good for you?

Add this to the Inappropriate Touch Screen Files.


CES: Lots of Voice Recognition

The annual Consumer Electronics Show (CES) is always a rich source of blog-fodder. It is, after all, densely packed with Inappropriate Touch Screen Interfaces and hundreds of “why would anyone want this” gadgets.

This year everyone is remarking on the plethora of IoT devices, including the canonical toasters and refrigerators. Sigh.

Another trend is the explosion of voice recognition. As Amy Nordrum says, it is “The Year of Voice Recognition.” Driven by improved accuracy, voice recognition is already out there (I’ve been seeing ads for Amazon and Google home assistants on TV). But it is sure to show up in lots of products, no doubt including toasters and refrigerators. Sigh.

In one sense, this is a reasonable response to the plague of Inappropriate Touch Screens I have complained about. Talking to your toaster via your mobile device is dubious design, and, as Tekla S. Perry says, “For years now, the consumer electronics industry has been trying to sell slightly intelligent Internet-connected appliances that you can control from your smart phone—and not gotten very far.”

So, the thinking goes, let’s replace that stupid idea with a voice interface, which is hands-free and possibly more natural for the in-home setting. And Perry has a point when she says that this approach moves the center of the home away from the TV and into the kitchen. “[A]s has been true since the beginnings of civilization, the heart of the home will be the hearth.”

By now, we have all seen the current generation of chatty, “friendly” digital assistant, so it is easy to imagine them infesting our appliances. If you are happy to search for restaurants or call up a play list by voice command to your phone, you’ll probably be content telling your toaster to make toast, or your refrigerator to order more sprouts.

You have probably noticed that I’m not a huge fan of voice interfaces. If people could always speak clearly, honestly, and unambiguously, we wouldn’t need lawyers or psychotherapists. If people always understood what is said to them, we wouldn’t have divorces or wars.

As far as I can tell, this technology seems to depend on being Internet connected, so the assistant software can reside in “the cloud” somewhere. There are so many implications of this architecture that I won’t go into them now. Suffice it to say that I am not enthusiastic about having the Internet listening to my family conversations. As Evan Ackerman comments, CES is full of “appliances that spy on you in as many different ways as they possibly can”.

Finally, we might wonder just how such devices might affect family life. Innocent gadgets can have profound impacts on our attention, interpersonal relations, and family life. We need look no farther than the example of TV and the mobile phone to see how disruptive a chatty refrigerator might turn out to be.

Have these devices been field tested? Do we have any idea what the side effects might be? Just how benign is this technology? Is it safe for children? Is it good for children?

For example, I’m imagining that children will quickly learn that the world is supposed to respond to your commands, instantly and without argument, so long as it is prefixed by “OK Google”.

“OK Mom, give me my cereal now.”

“OK Dad, buy me a new Xbox.”

Is this a good lesson to teach your children?

Deltu Robot – Très Cool!

Designer Alexia Léchot of Ecole cantonale d’art de Lausanne (ECAL) presents Deltu, a strange and useless little robot that wants to play with you.

First of all, I love the completely non-humanoid body for this robot. It’s not human, so why should it try to look human? And it is flat out fascinating to watch how it operates, elegant and simple.

But…even if he (this appears to be the preferred pronoun) doesn’t look human, he exhibits a very definite personality and intelligence, apparently desiring attention from humans and wanting to best us at his games. So cool!

I admit that I was astonished by the way the robot can manipulate the touch screens. Apparently his artificial finger is close enough to human to work.  I wonder how that is made.

It is also kind of cool that the robot takes selfies—very useful for documenting the project. (He can probably make phone calls and purchases, too.  And boast of his victories on Twitter. Why not?) I guess this would be “humanoid” but not “intelligent” behavior, no?

What a lovely project! Very nice work. I look forward to great things from Sensei Léchot.


By the way, I rule that this robot definitely is not an inappropriate use of a touchscreen!

Robot Wednesday

Robot Furniture?

Inspired by MIT research, Ori (from “Origami”) makes “architectural robotics” that reconfigure a small apartment: the bed slides away, a desk slides out, and the wall slides over to make more living room when the bed is not in use.

This is described as “modular and scalable mechatronic”, though it is triggered by pushing a button. The only “automation” I can find mentioned is presets, a la a thermostat.

Oh, and, of course, an “app to reconfigure the unit from anywhere in the world.”  Sigh.

I’m trying to find the innovation here.

Murphy beds and other fold-out/slide-away furniture have been around since, well, forever. (They work fine without a motor, if you design them well.) The motorized sliding wall looks pretty much like the compact shelving my local library installed decades ago. The interface even looks similar.

I have to wonder if this could possibly be worth the expense and complexity. I’m sure you could make it work without the motors. In fact, it had better work without the motors; otherwise your home would become unusable in a power failure.

I hate to think of the failure modes, jammed or failed motors, debris, junk, or toddlers in the way of the works. Spilled drinks. Real life is a lot messier than architectural renderings.

In the end, this is just barely robotic. And with the silly app, I’m going to have to consign it to the Inappropriate Touch Screen Files.

Robot (?) Wednesday

Allison Arieff on “Innovation”

Designer Allison Arieff comments in the NYT this week that contemporary design (i.e., “apps” and related “services”) is “Solving All the Wrong Problems”. She recites the usual litany of stupid and trivial apps, e.g., a “smart” button and zipper that alerts you if your fly is down.

I’ve been beefing about this myself for quite a while, and she is right on target.

These products are not solving Big Problems, nor are they solving problems that a lot of people actually need solved. She recounts the quip, “for most people working on such projects, the goal is basically to provide for themselves everything that their mothers no longer do.” Exactly.

Arieff notes that these same folks have little interest in (or even knowledge of) the problems of real, but unsexy people, such as working mothers, older workers, or poor people in their own city.

Worse, I would say that this same myopia has leaked into critical social arenas including dating and “the new way of work”. The hackers are not only not solving real problems, they are imposing their own life style on everybody else, and creating problems where there weren’t any.

“Make the world better?”  Not for most of us.

Part of Arieff’s diagnosis is that designers and especially funders (i.e., so-called venture capitalists) are interested in “disruption” more than in meeting needs, and this means that, as she quotes from Jessica Helfand, “innovation is now predicated less on creating and more on the undoing of the work of others.”

If the most fundamental definition of design is to solve problems, why are so many people devoting so much energy to solving problems that don’t really exist? How can we get more people to look beyond their own lived experience?

Arieff sees this as design (and a culture) that is morally adrift. “Can we reset that moral compass?” she asks.

Is there an app for this? Can we disrupt disruption?