Speaking of whole-body interfaces, Google is demonstrating another variation on this theme. Project Soli is a radar-based sensor that tracks hand gestures. Their video shows how cool this can be, using your hands to manipulate virtual knobs and buttons.
The technology is in the hands of selected developers, so I can’t say what they might be building, other than the somewhat disappointing “SoliType” demo. (“Disappointing” to me because typing is such a boring, over-explored interaction.)
Like On-Skin Interfaces (and, for that matter, Microsoft Kinect and others), this is a cool way to move away from the infernal, satanic touch screen which has infested our lives this last decade. Combined with projection, we could get rid of screens and devices, and design much better, much more embodied interfaces.
Looking at Soli’s radar tech, we can see that it is really useful, at least for things that have been modeled. It appears that the initial device knows about one hand. That’s really cool, but scarcely everything you need. For example, none of the demos show even two hands, let alone full body. I have to wonder how badly it freaks out if someone sticks their hand into the field when you are trying to use it.
This isn’t meant to be a slam: I fully understand that there are reasons to want a really tiny, self-contained sensor that is just a hand detector. Indeed, the SoliType video makes clear what you do with that.
Radar is cool: it works in the dark and rain, through glass and fabric. On the other hand, it seems to be pretty short range (which is both good and bad), and, I bet, sucks power like mad. My point is not to complain, but to identify use cases: Soli is really an “almost touch” interface, at least in its initial form.
Comparing it to something like the On-Skin interfaces, we can see different cases that work for each. The radar is really sensitive and flexible: the same sensor hardware can work with zillions of different virtual knobs, and couldn’t care less if you are wearing winter gloves.
The On-Skin “tattoo” interface is really specific (literally written in ink), but it can be put anywhere on the skin (not just the hands). It also employs tactile and haptic feedback in cool ways. The radar system can use similar tactile gestures, but the radar itself does not interact with the skin. Of course, an On-Skin interface may be hampered by clothing or immobility: if you can’t touch it, nothing happens.
In any case, both of these technologies are cool, and they invite us to start imagining post-touchscreen interactions.