I don’t really know what to think about this wearable computing piece from Proforma Videodesign, DROMe. It does make me think, though. (“Makes me think” is a major compliment.)
DROMe combines realtime 3D projection (which I’ve seen before, dating back more than five years) with visualization (which is ancient) to create a “smart dress”.
The video illustrates how this could be interesting, blending your clothes into the visual and sonic scenery in an individualized and responsive way. That’s a cool idea, though the demo doesn’t look very controllable by the wearer. It’s good for saying “look at me”, but not really capable of any kind of subtle come-hither.
This particular project is an installation: the magic only works in one particular setting. (It’s not obvious how it works in “traffic”—would the presence of other dancers occlude the effect? If so, that’s a bummer, because it’s basically a single-user effect.)
From one point of view, this project is kind of silly. Who wants a not-that-special dress that does something special only in one very specific place? Nine tenths of the time, it’s just a dress.
On the other hand, it is kind of cool that you have to get out and actually go to the club, and be there now. Only then does the dress dance. I like that version of the story, and it could be reinforced by other electronic messages, e.g., text messages that ping onto a small area of the dress, inviting you to come dance.
Also, it would be very cool if each dress were recognized, so that if and only if you and your friend are both there, dancing at the same time, does the magic happen. Or the special “together” magic happen. That would be cool, even if it only works in one place.