Category Archives: Interface Design

Fribo: culturally specific social robotics?

This spring a research group from Korea reported on a home robot that seeks to address the social isolation of young adults [2].  Fribo is similar to many other home assistants, such as Alexa, but is specifically networked to other Fribos that reside with people in the same social network.  (The network of Fribos overlays the human social network.)

The special feature is that Fribo listens to the activity in the home and certain sounds are transmitted to all the other Fribos.  For example, the sound of the refrigerator door is played to other Fribos, offering a low key cue about the activity of the person.

Actually, it’s a little more elaborate: Fribo narrates the cue.  The sound of the refrigerator is accompanied by a message such as, “Oh, someone just opened the refrigerator door. I wonder which food your friend is going to have”.  ([2], p. 116)
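The basic pipeline is easy to imagine.  Here is a minimal sketch of the idea in Python; the event names, the peer identifiers, and the narration table are my own inventions for illustration, not the authors’ actual implementation:

```python
# Hypothetical sketch of Fribo's cue pipeline: a recognized living-noise
# event is mapped to a gentle narration and sent to every peer Fribo in
# the owner's social network. Unrecognized sounds are filtered out.

NARRATIONS = {
    "fridge_open": "Oh, someone just opened the refrigerator door. "
                   "I wonder which food your friend is going to have.",
    "door_open": "Your friend just came home.",
    "vacuum": "Sounds like someone is tidying up.",
}

def broadcast_cue(event, peers):
    """Turn a detected home sound into narrated cues for each peer Fribo."""
    message = NARRATIONS.get(event)
    if message is None:
        return []  # sounds not on the whitelist are never shared
    return [(peer, message) for peer in peers]

# Two friends' robots receive the refrigerator cue (names are made up):
cues = broadcast_cue("fridge_open", ["fribo_kim", "fribo_park"])
```

Of course, the hard part (and the privacy-critical part) is the acoustic classifier that decides which sounds count as shareable events in the first place.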

The idea is that the network of friends, who live alone, gains an awareness of the presence and activity of each other.  It may also encourage more social contact with others.

The “creepy” factor with this product seems obvious to me.  Yoiks. But I know that there is a very dramatic difference in attitudes about creepiness among younger people, so who knows?

There are also significant issues with privacy (how much do you trust the filtering?) and security (if one Fribo is hacked, the whole network is probably exposed).   I wouldn’t touch it with a barge pole, myself.

But the field study reported is very interesting for another reason.  First, the fact that people were even willing to try this device indicates an interest in this kind of social awareness.  In particular, there seems to be an implicit sense of belonging and trust in a group of peers.  Not only that, but the participants seem to share similar concerns about the isolation of living alone, and the idea that these kinds of cues are a way of feeling connected.  The study also suggests that being aware of others stimulates more contact, such as phone calls.

I have to say that the reports of the users’ experiences don’t resonate with my own experience.  Aside from the obvious digital nativism of the young users, there seems to be a definite cultural factor, i.e., young adults in Korea.  There is a level of mutual trust and solidarity among the users that I’m not sure is universal.  If so, then Fribo might be a hit in Korea but a flop in the US, for instance.

By the way, the users refer to how quiet their one-person apartments are.  My own experience is that even living alone there is plenty of noise from neighbors, for better or worse.  If anything, there is probably way too much awareness of strangers in most living spaces.  Deliberately adding awareness of your friends might or might not be an attractive feature, depending on just how much other “awareness” there is.

If my speculation is correct, then this is an interesting example of using ubiquitous digital technology in a culturally specific manner.   As the researchers suggest, it would be very interesting to test this hypothesis by replicating the study in other places in the world.

Finally, I have to point out that if what you want to do is achieve a sense of joint living, it is always possible to live together.

A group house or dormitory could provide awareness of others, as well as even easier opportunities to socialize.  Why not explore alternative living arrangements, rather than install intrusive digital systems in isolated units?  This would make another interesting comparison condition for future studies.

  1. Evan Ackerman, Fribo: A Robot for People Who Live Alone, in IEEE Spectrum – Home Robotics. 2018.
  2. Kwangmin Jeong, Jihyun Sung, Hae-Sung Lee, Aram Kim, Hyemi Kim, Chanmi Park, Yuin Jeong, JeeHang Lee, and Jinwoo Kim, Fribo: A Social Networking Robot for Increasing Social Connectedness through Sharing Daily Home Activities from Living Noise Data, in The 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago. p. 114-122.

Awesome 3D Display from BYU

For computer interfaces, one of the mountains we must climb is the free standing, 3D, interactive visual display.  Real 3D, holographic movies.  Hollywood aside, we’re still working on it.

This winter researchers from Brigham Young University reported on a new technique they call the Optical Trap Display.  This uses lasers to trap a tiny particle in the air and bounce light off it at selected wavelengths, i.e., in full color [1].  By ‘painting’ a volume of air with this laser-guided point, a three-dimensional image can be created, floating in air, visible from almost every angle.  Cool!

This isn’t the only open-air display, but it is a very, very impressive advance. If I understand correctly, this is sort of like mist displays, except the lasers are grabbing and manipulating the particles, rather than projecting on randomly drifting mist.  A simple but powerful advance.


The researchers indicate that this technique is vulnerable to air currents, which can push the particle out of control of the laser.  So it won’t be easy to use outdoors. And they report that “Higher beam power is correlated with better trapping until the particle begins to disintegrate.” (p. 487), which sounds like a cool failure mode.  (Everything was fine until my pixel exploded….)

Nice work, all.

  1. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, A photophoretic-trap volumetric display. Nature, 553:486, 2018.


Tail Therapy?

Social robots are the flavor of the year these days.  If robots are to live with humans (which is not a foregone conclusion, IMO), they need to mesh with human psychology.  This means they need to appear harmless and attractive, they need to understand and emit unconscious signals, and generally play nicely.  It doesn’t matter what they do, so much as how they do it.

This has led to a variety of interesting research.  Some pursue the goal of mimicking human behavior.  Other approaches use non-human forms with intelligible behavior.  There is a great range of possibilities, with more and less human-like appearance.

There are actually some really interesting questions here about the psychology of humans interacting with non-human machines, intelligent or not.  It seems pretty clear that trying to faithfully imitate human forms and nuances isn’t necessary, nor is speech.  (See perhaps the thoughtful work of Sensei Thecla Schiphorst [2,3].)

This principle is clear in a new product, “Qoobo: A Tailed Cushion That Heals Your Heart”.  While this has been described as a “robot”, it certainly lies at the edge of that term.  It has only one behavior: waving its tail.  No face.  No dialog.  Certainly no “useful” functions.

The Tokyo-based inventor, Prof. Nobuhiro Sakata (who apparently also created Necomimi in 2011), believes that this is comforting.  In fact, it is supposed to “heal your heart”, whatever that means exactly.

This is harmless, I guess, though vacuous.

But there are so many dubious aspects of this product, I can’t let it pass.

First, they have tried to carefully reproduce the motion of a cat’s tail.  It’s clear from the video that they haven’t succeeded in that effort, but in any case they seem to have no understanding of cats at all.  Swishing the tail means the cat is agitated, not happy or friendly.  A contented cat rubs and purrs, and does not swish the tail.  If you pet a cat and its tail starts moving, it is unhappy and probably going to fight and/or run.

Second, setting aside the complete misunderstanding of natural cat behavior, the project claims that the responsive behavior of the tail enhances the human’s feelings.  The crux of the case is that you “would project your emotions onto how the tail moves, and you could get a sense of healing from that”.   Well, maybe so, though there is no evidence that this is actually true.

Third, the claimed benefits are nebulous and new agey.  What exactly does “heals your heart” or a “sense of healing” mean?  How ever these benefits may be defined, has Qoobo been shown to actually work as advertised?  Furthermore, is it better than a placebo, such as a cushion without a tail, or a plush animal without animation?  And how does it compare to alternatives such as a real cat or even to a virtual conversation via social media?

You might hope that the product would be proved safe and effective before it is sold, but that’s not how we do things these days. In fact, they are doing a Kickstarter, and part of the work will be the unspecified pledge, “We will be conducting a proof of concept to ensure Qoobo is providing a sense of comfort to its users as intended.”


Qoobo is charming and cute and nice and all that. I really hate to criticize it.  But I really think you should not make claims about supposed psychological or other benefits without legitimate evidence.

  1. Qoobo. Qoobo: A pillow with a wagging tail. 2017.
  2. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA.
  3. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, in Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth.


The “Ethical Knob” Won’t Work

If the goal was to make a splash, they succeeded.

But if this is supposed to be a serious proposal, it’s positively idiotic.

This month Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor of the University of Bologna published a description of “the ethical knob”, which adjusts the behavior of an automated vehicle.

Specifically, the “knob” is supposed to set a one-dimensional preference whether to maximally protect the user (i.e., the person setting it) or others. In the event of a catastrophic situation where life is almost certain to be lost, which lives should the robot car sacrifice?

Their paper is published in Artificial Intelligence and Law, and they have a rather legalistic approach. In the case of a human driver, there are legal standards of liability that may apply to such a catastrophe. In general, in the law, choosing to harm someone incurs liability, while inadvertent harm is less culpable.

Extending the principles to AI cars raises the likelihood that whoever programs the vehicle bears responsibility for its behavior, and possibly liability for choices made by his or her software logic. Assuming that software can correctly implement a range of choices (which is a fact not in evidence), the question is what should designers do?

The Bologna team suggests that the solution is to push the burden of the decision onto the “user”, via a simple, one-dimensional preference for how the ethical dilemma should be solved. Someone (the driver? the owner? the boss?) can choose “altruist”, “impartial”, or “egoist” bias in the life and death decision.
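To make the proposal concrete, here is a minimal sketch of how such a one-dimensional knob might be wired into a harm-minimizing planner.  Everything in it (the weights, the harm estimates, the maneuver names) is my own invention to illustrate the idea, not the authors’ formalization:

```python
# Hypothetical "ethical knob": the setting selects a pair of weights that
# bias a planner's choice between expected harm to the vehicle's occupants
# and expected harm to everyone else. All numbers are invented.

KNOB_WEIGHTS = {
    "altruist":  (0.25, 0.75),   # (weight on occupants, weight on others)
    "impartial": (0.50, 0.50),
    "egoist":    (0.75, 0.25),
}

def choose_maneuver(setting, maneuvers):
    """Pick the maneuver with the lowest knob-weighted expected harm.

    maneuvers: list of (name, harm_to_occupants, harm_to_others) tuples,
    with harms as rough probabilities of serious injury."""
    w_self, w_others = KNOB_WEIGHTS[setting]
    return min(maneuvers, key=lambda m: w_self * m[1] + w_others * m[2])[0]

options = [("swerve", 0.9, 0.1), ("brake", 0.2, 0.8)]
egoist_pick = choose_maneuver("egoist", options)      # protects occupants
altruist_pick = choose_maneuver("altruist", options)  # protects bystanders
```

Even this toy version exposes the problem: the “knob” only has meaning relative to the harm estimates and the weighting scheme the programmer chose, neither of which the user can see.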

This idea has been met with considerable criticism, with good reason. It’s pretty obvious that most people would select egoist, creating both moral and safety issues.

I will add to the chorus.

For one thing, the semantics of this “knob” are hazy. They envision a simple, one-dimensional preference that is applied to a complex array of situations and behavior. Aside from the extremely likely prospect of imperfect implementation, it isn’t even clear what the preference means or how a programmer should implement the feature.

Even more important, it is impossible for the “user” to have any idea what the knob actually does, and therefore to understand what the choice actually means. It isn’t possible to make an informed decision, which renders the user’s choice morally empty and quite possibly legally moot.

If this feature is supposed to shield the user and programmer from liability, I doubt it will succeed. The implementation of the feature will surely be at issue. Pushing a pseudo-choice to the user will not insulate the implementer from liability for how the knob works, or any flaws in the implementation.  (“The car didn’t do what I told it to,” says the defendant.)

The intentions of the user will also be at issue. If he chooses ‘egoist’, did he mean to kill the bystanders? Did he know it might have that effect? Ignorance of the effects of a choice is not a strong defense.

I’m also not sure exactly who gets to set this knob. The authors use the term “user”, and clearly envision one person who is legally and morally responsible for operating the vehicle. This is analogous to the driver of a vehicle.

However, the “user” is actually more of a passenger, and may well be hiring a ride. So who says who gets to set the knob? The rider? The owner of the vehicle? Someone’s legal department? The terms and conditions from the manufacturer? The rules of the road (i.e., the T&C of the highway)? Some combination of all of the above?

I would imagine there would be all sorts of financial shenanigans arising from such a feature. Rental vehicles charging more for “egoist” settings, with the result that rich people are protected over poor people. Extra charges to enable the knob at all. Neighborhood safety rules that require the “altruist” setting (except for wealthy or powerful people). Insurance companies charging more for different settings (though I’m not sure how their actuaries will find the relative risks). And so on.

Finally, the entire notion that this choice can be expressed in a one-dimensional scale, set in advance, is questionable. Setting aside what the settings mean, and how they should be implemented, the notion that they can be set once, in the abstract, is problematic.

For one thing, I would want this to be context sensitive. If I have children in the car, that is a different case than if I am alone. If I am operating in a protected area near my home, that is a different case than riding through a wide open, “at your own risk” situation.

Second, game theory makes me want to have a setting to implement tit-for-tat strategy. If I am about to crash into someone set at ‘egoist’, then set me to ‘egoist’. If she is set to ‘altruist’, set me to ‘altruist’, too. And so on. (And, by the way, shouldn’t there be a visible indication on the outside so we know which vehicles are set to kill us and which ones aren’t?)
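The tit-for-tat idea is almost trivially simple to state in code.  This sketch assumes, purely hypothetically, that vehicles broadcast their knob position to each other, which is exactly the visible indicator I just asked for:

```python
# Tit-for-tat override for the (hypothetical) ethical knob: mirror whatever
# setting the oncoming vehicle broadcasts, falling back to my own default
# when no broadcast is received. Function and names are my own invention.

def tit_for_tat(my_default, other_setting=None):
    """Mirror the other party's knob setting; keep my default if unknown."""
    return other_setting if other_setting is not None else my_default

meet_egoist = tit_for_tat("impartial", "egoist")      # respond in kind
meet_altruist = tit_for_tat("impartial", "altruist")  # reciprocate altruism
no_signal = tit_for_tat("impartial")                  # no broadcast: default
```

Of course, two tit-for-tat vehicles meeting each other creates its own chicken-and-egg problem: each is waiting to mirror the other.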

This whole thing is such a conceptual mess. It can’t possibly work.

I really hope no one tries to implement it.

  1. Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor, The Ethical Knob: ethically-customisable automated vehicles and the law. Artificial Intelligence and Law, 25(3):365-378, 2017.


Robot Wednesday

Database of App UI Designs

This month Ranjitha Kumar and colleagues reported on ‘Rico’, a large dataset of UIs from published Android apps [1]. The dataset has tools to search the data for similar apps, and to use the data to autogenerate app code that follows ‘best practice’ as determined by the sample. Ideally, this can aid designers in finding examples to guide development.

The data itself was collected from apps in the Android app store (which has metadata, too). Screens and sequences of interactions were collected through a hybrid of crowdsourced (human) and automated interaction.

The data was processed to extract the UI elements underlying each screen, a sampled set of interaction paths, and animations of transitions. The visual appearance is encoded in a 75-dimensional vector, which is used for searching and generating screens.

This approach lets a designer search by example, to find other UIs that are similar. Or a designer can sketch a UI and find others that suggest ‘the rest’ of the elements for the screen, based on similar apps.
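Search-by-example over such an encoding is basically nearest-neighbor lookup in the vector space.  Here is a toy version; the vectors are random stand-ins, not real Rico data, and the similarity measure (cosine) is my assumption about how one might do it:

```python
# Toy search-by-example over screen embeddings: each screen is a
# 75-dimensional vector (matching Rico's encoding length), and "similar
# UIs" are nearest neighbors under cosine similarity. Corpus is random.

import numpy as np

rng = np.random.default_rng(0)
screens = rng.normal(size=(1000, 75))   # pretend corpus of encoded UI screens

def nearest_uis(query, corpus, k=5):
    """Return indices of the k screens most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q                        # cosine similarity to every screen
    return np.argsort(-sims)[:k]        # indices of the top-k matches

hits = nearest_uis(screens[42], screens)
# a query screen should be its own nearest neighbor
```

The interesting design work is in the encoder, of course; once screens live in a vector space, this kind of retrieval is cheap.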

The information encoded in this dataset is a large sample of current designs, encapsulating something about current practice. The paper calls this ‘best practice’, though it is actually just ‘common’ practice, not necessarily ‘best’.

It would be natural to link this dataset with empirical data about the quality of the product, e.g., user satisfaction, number of downloads, or revenue. Then, it would be possible to rank the instances and find the actual best practices.

The data is a snapshot of current practice, and it took a lot of effort to gather. The authors would like to improve the data-gathering process so they can continuously update the dataset with new and upgraded apps. If they can indeed collect data over time, they could create a dataset of historical trends in app design. This could reveal changes over time, both functional and esthetic. It might even be possible to observe UI ‘fads’ emerge and spread throughout the population of apps. That would be neat!

The project ultimately aims to develop tools that help designers, e.g., to autogenerate code based on sketches and the knowledge encoded in the tool and dataset.

I’m a little concerned that this tool might be basically just copying what other people have done—leading designers toward the average. This may be fast and cheap, but it is no way to create outstanding products.  In my view, apps are already too similar to each other, due to the use of ubiquitous infrastructure such as standard cloud services APIs and other toolkits.

But this kind of data might actually be used to search for novel solutions. For example, the encoded designs might be used in the fashion of a genetic algorithm. A design is encoded, then the encoding is mutated and new designs generated. Or the encodings might be mixed or crossed with each other, generating a ‘mash up’ of two designs.  Many such mutations would not be viable, but you could generate lots of them and select the best few.  (I.e., evolutionary design.)
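The evolutionary loop is straightforward to sketch.  In this toy version a design is just a 75-element vector (matching Rico’s encoding length), and the fitness function is a placeholder; in practice fitness would have to come from somewhere, which is the hard part:

```python
# Sketch of evolutionary design over encoded UIs: mutate and cross design
# vectors, then keep the best few under a fitness function. The encoding
# length and operators are illustrative assumptions, not Rico's tooling.

import random

def mutate(design, rate=0.1, scale=0.2):
    """Perturb a few dimensions of an encoded design at random."""
    return [x + random.gauss(0, scale) if random.random() < rate else x
            for x in design]

def crossover(a, b):
    """'Mash up' two designs by picking each dimension from one parent."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(population, fitness, generations=10, keep=4):
    """Generate many variants per generation, select the best few (lowest
    fitness score), exactly as described in the text."""
    for _ in range(generations):
        children = [mutate(crossover(random.choice(population),
                                     random.choice(population)))
                    for _ in range(20)]
        population = sorted(population + children, key=fitness)[:keep]
    return population
```

Because the parents survive each round, the best fitness score can only improve or stay put; the open question is whether any decoded offspring would still be a usable screen.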

I don’t know how well this would work in this case, but the idea would be to search through the gaps in current practice, and to optimize current designs. Now that would be kind of cool!

  1. Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar, Rico: A Mobile App Dataset for Building Data-Driven Design Applications (to appear), in Symposium on User Interface Software and Technology (UIST ’17). 2017: Quebec.

Telepresence Robot – At the zoo

These days we see a lot of exciting stories about telepresence—specifically, live, remote operation of robots. From the deadly factual reports from the battlefields of South Asia through science fiction novels to endless videos from drone racing gamers, we see people conquering the world from their living room.

One of the emerging technologies is telepresence via a remote robot that resembles ‘an ipad on a segway’. These are intended for remote meetings and things like that. There is two way video, but the screen is mobile and under the command of the person on the other end. So you can move around, talk to people, look at things.

On the face of it, this technology is both amazing (how does it balance like that?) and ridiculous (who would want to interact with an ipad on wheels?) And, of course, many of the more expansive claims are dubious. It isn’t, and is never going to be, “just like being there”.

But we are learning that these systems can be fun and useful. They may be a reasonable augmentation for remote workers, not as good as being there, but better than just telecons. And, as Emily Dreyfus comments, a non-representational body is sometimes an advantage.

Last year Sensei Evan Ackerman reported on an extensive field test of one of these telepresence sticks, called the Double 2. This was an interesting test because he deliberately took it out of its intended environment, which stressed the technology in many ways. The experience is a reminder of the limitations of telepresence, but also gives insights into when it might work well.

First of all, he played with it across the continental US (from Maryland to Oregon), thousands of kilometers apart. Second, he took it outdoors, which it isn’t designed for at all. And he necessarily relied on whatever networks were available, which varied, and often had weak signals.

As part of the test, he went to the zoo and to the beach!

Walking the dog was impossible.

Overall, the system worked amazingly well, considering that it wasn’t designed for outdoor terrain and needs networking. He found it pretty good for standing still and chatting with people, but moving was difficult and stressful at times. Network latency and dropouts meant a loss of control, with possibly harmful results.

Initially skeptical, Sensei Evan recognized that the remote control has advantages.

I’m starting to see how a remote controlled robot can be totally different [than a laptop running Skype] . . . You don’t have to rely on others, or be the focus of attention. It’s not like a phone call or a meeting: you can just exist, remotely, and interact with people when you or they choose.

Whether or not it is “just like being there”, when it works well, there is a sense of agency and ease of use, at least compared to conventional video conferencing.

This is an interesting observation. Not only does everybody need to get past the novelty, but it works best when you are cohabitating for considerable periods of time. Walking the dog, visiting the zoo—not so good. Hanging out with distant family—not so bad.

I note that the most advertised use case—a remote meeting—may be the weakest experience. A meeting has constrained movement, a relatively short time period, and often is tightly orchestrated.  This takes little advantage of the mobility and remote control capabilities. You may as well just do a video conference.

The better use is for extended collaboration and conversation. E.g., Dreyfus and others have used it for whole working days, with multiple meetings, conversations in the hall, and so on.  Once people get used to it, this might be the right use case.

I might note that this is also an interesting observation to apply to the growing interest in Virtual Reality, including shared and remote VR environments.  If a key benefit of the telepresence robot is moving naturally through the environment, then what is the VR experience going to be like?  It might be “natural” interactions, but it will be within a virtual environment.  And if everyone is coming in virtually, then there is no “natural” interaction at all (or rather, the digital is overlaid on the (to be ignored) physical environment).  There will be lots of control, but will there be “ease”?  We’ll have to see.

  1. Evan Ackerman, Double 2 Review: Trying Stuff You Maybe Shouldn’t With a Telepresence Robot, in IEEE Spectrum – Automation. 2016.


Robot Wednesday

Facebook’s AI Led Astray By Human Behavior

I don’t closely follow the roiling waters around online advertising giant Facebook. Having moved fast and broken things, they are now thrashing around trying to fix the stuff that they broke.

This month a team of researchers at Facebook released some findings from yet another study [3]. Specifically, the experiments (which don’t seem to have been reviewed by an Institutional Review Board) are trying to build simple AI’s that can “bargain” with humans. This task requires good-enough natural language to communicate with the carbon-based life form, and enough of a model of the situation to effectively reach a deal.

Their technical approach is to use machine learning so that bots can learn by example. Specifically, they use a collection of human-human negotiations, and tried to analyze the behavior to discover algorithms to replicate human-like interactions.

With preposterous amounts of computing power, who knows? It might work.

Unfortunately, the results were less than stunning.

Glancing at the conclusions in the paper, the good news is that the method was able to learn “goal maximizing” instead of “likelihood maximizing” behaviors. This is neat, though given the constrained context (we know that the parties are negotiating) it’s less than miraculous.
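The distinction is easy to illustrate with a toy example.  The candidate utterances and all the numbers below are invented; the paper’s actual method scores candidates by rolling out simulated dialogues with a learned model, which this sketch abbreviates to a single “expected deal value” per candidate:

```python
# Toy contrast between likelihood-maximizing and goal-maximizing choice of
# a candidate utterance in a negotiation. All candidates and scores are
# invented stand-ins for the paper's learned model and rollouts.

candidates = [
    # (utterance, model likelihood, expected deal value if spoken)
    ("I'll take the ball, you get the rest.", 0.50, 4.0),
    ("Give me everything.",                   0.10, 7.0),
    ("Deal.",                                 0.40, 2.0),
]

# Likelihood maximizing: say the most human-typical thing.
likelihood_choice = max(candidates, key=lambda c: c[1])[0]

# Goal maximizing: say whatever is expected to score the best deal.
goal_choice = max(candidates, key=lambda c: c[2])[0]
```

Even in this toy, the goal maximizer “negotiates harder”: it picks the aggressive low-likelihood utterance because that is what maximizes its expected payoff.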

The resulting bots aren’t completely satisfactory, though. For one thing, these machine intelligences are, well, pretty mechanical. Specifically, they are obsessive and aggressive, “negotiating harder” than other bots.  Also, the conversation generated by the bots made sense at the sentence level, but consecutive sentences did not necessarily make sense. (The examples sound rather “Presidential” to me.)

But the headline finding was that the silicon-based entities picked up some evil, deceptive tactics from their carbon-based role models. Sigh. It’s not necessarily “lying” (despite Wired magazine [1]), but, in line with “negotiating harder”, the bots learned questionable tactics that probably really are used by their human exemplars.   (Again, this rhetoric certainly sounds Presidential to me.)

The hazards of trying to model human behavior–you might succeed too well!

I’m not surprised that this turned out to be a difficult task.

People have been trying to make bots to negotiate since the dawn of computing. The fact that we are not up to our eyeballs in NegotiBots™ suggests that this ain’t easy to do. And the versions we have seen in online markets are, well, peculiar.

One question raised by this study is, what is a good dataset to learn from? This study used a reasonably sized sample, but it was a convenience sample: people* recruited from Amazon Mechanical Turk. Easy to get, but are they representative? And what is the target population that you’d like to emulate?

(* We assume they were all people, but how would you know?)

I don’t really know.

But at least some of the results (e.g., learning aggressive and borderline dishonest tactics) may reflect the natural behavior of Mechanical Turk workers more than that of normal humans. This is a critical question if this technology is ever to be deployed. It will be necessary to make sure that it is learning culturally correct behavior for the cultures in which it is to be deployed.

I will add a personal note. I really don’t want to have to ‘negotiate’ with bots (or humans), thank you very much. The deployment of fixed prices was a great advance in retail marketing [2], and it is a mistake to go backwards from this approach.

  1. Liat Clark, Facebook teaches bots how to negotiate. They learn to lie instead, in Wired. 2017.
  2. Steven Johnson, Wonderland: How Play Made the Modern World, New York, Riverhead Books, 2016.
  3. Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra, Deal or No Deal? End-to-End Learning for Negotiation Dialogues. eprint, 2017.