Over the past decade, I’ve dinked around with a number of still-relatively-new digital technologies, including Augmented Reality and 3D printing. And I’ve had colleagues doing music and dance visualization.
So, when I saw “REIFY”, I was excited, if somewhat wary of hype. It seems to mash up all these cool technologies into a personal app.
At first, I was thrown off by the misleading headline at Wired.com, “What Songs Look Like as 3-D Printed Sculptures” (by Liz Stinson). This sure sounds cool, but I don’t think it is a very accurate description.
What the studio says:
“REIFY transforms music into something you can hear, see and hold.
“We make totems that visually represent an artist’s song, and encode them with music and interactive visual experiences that you can play on your mobile phone or tablet. The mobile app used to play totems is called Stylus.”
First of all, the 3D printed “totems” may “represent” a song, but what does that mean? They claim to have “an audio-to-physical design process”, which also includes “totem fabrication processes” to create viable 3D printed objects.
Apparently, this process involves generating “a range of visual interpretations of a specific song”, which are then refined by hand. This makes sense to me, though it’s difficult to say whether their “interpretation” is particularly valid. (My own experience suggests that randomly generated visualizations may be perceived as just as valid as any other “interpretations”.)
The resulting models are cleaned up (again, with a lot of human supervision) to be suitable for 3D printing. The result is printed and distributed.
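REIFY’s actual pipeline is proprietary, so purely as an illustration of what one “audio-to-physical” step *could* look like, here is a minimal sketch of my own devising (the window count, radius range, and shape are all my assumptions, not their method): map a per-window loudness envelope onto the radii of a surface of revolution, which gives you a vase-like profile a 3D printer could plausibly handle.

```python
import numpy as np

def loudness_profile(samples, sr, n_rings=60):
    """RMS loudness per time window -> one radius per 'ring' of the totem."""
    win = max(1, len(samples) // n_rings)
    rings = samples[: win * n_rings].reshape(n_rings, win)
    rms = np.sqrt((rings ** 2).mean(axis=1))
    # Normalize into a printable radius range (mm); the floor of 2 mm
    # avoids zero-width walls that would fail to print.
    return 2.0 + 8.0 * rms / (rms.max() + 1e-9)

def totem_vertices(radii, height_mm=80.0, n_sides=32):
    """Sweep the radius profile around the vertical axis -> mesh vertices."""
    zs = np.linspace(0.0, height_mm, len(radii))
    theta = np.linspace(0.0, 2 * np.pi, n_sides, endpoint=False)
    return np.array([[r * np.cos(t), r * np.sin(t), z]
                     for r, z in zip(radii, zs) for t in theta])

# Toy "song": a two-second tone with a swelling envelope.
sr = 8000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
song = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 1.0, t.size)
verts = totem_vertices(loudness_profile(song, sr))
print(verts.shape)  # (1920, 3): 60 rings x 32 sides
```

A real pipeline would also have to triangulate these vertices into a watertight mesh and check overhangs, which is exactly the kind of cleanup that needs the human supervision described above.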
The third part of the game is to add Augmented Reality using Vuforia. The “totem” is used as the tracking target for the AR. They work with the artist to create 3D animated content to project onto the target, designed to play in sync with the song. This is an iterative process with intense human intervention.
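Vuforia’s actual API lives in C#/Unity, so as a language-neutral sketch (function name and frame counts are my own invention), keeping the projected animation “in sync with the song” can be as simple as driving the animation frame off the audio clock rather than a separate timer:

```python
def synced_frame(audio_pos_s, fps=30.0, n_frames=1800):
    """Pick the AR overlay frame from the audio playback position.

    Deriving the frame from the audio clock (instead of a free-running
    render timer) means the visuals can never drift from the song; the
    modulo loops the animation if the song runs longer than it does.
    """
    return int(audio_pos_s * fps) % n_frames

print(synced_frame(0.0))   # 0
print(synced_frame(2.5))   # 75
print(synced_frame(61.0))  # 30 (looped past 1800 frames)
```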
The resulting AR stuff looks pretty cool, and I know it takes skill and time to make such slick AR visualizations.
Bundling all this together you get a totem you can hold, display, or, I guess, worship. Run the custom app on your mobile, point it at the totem and you hear the song and see the visualization—all tied to the totem. You can walk around it, or turn it, or toss it in the air, and see all sides.
All in all, this is pretty cool and I think I know how all the pieces are done.
But their rhetoric is rather hyperbolic and generally questionable.
For starters, while the product may be an impressive multimedia experience, there isn’t any reason to think that these visualizations are any better than all the other annoying lightshows out there. No worse, but no better, either. Definitely not “what songs look like”, as Liz Stinson inaccurately put it.
They claim to “transform music into something you can hear, see and hold.” That is an interesting way to put it. The “something” turns out to be an Augmented Reality application that delivers a digital version of a light show, mapped to a plastic token. You can hear the music replay, you can see the digital animation, and you can hold the totem (and your phone), but the “somethingness” is purely the simultaneity provided by the AR. This is a cool experience, but is that a “something”? I’m not sure, but I think their claims are misleading.
Finally, there is the word “transform”. They create this cool AR through conventional digital design processes, augmented with some signal processing algorithms. Whatever “transformation” is happening is quite complex, and includes multiple human inputs.
In that sense, putting on a play “transforms” the words into something you can see and hear. And performing music “transforms” it into something you can see and hear. And creating movies or 3D animations “transforms” data into something you can see and hear. And so on.
Taken this way, the word “transform” has almost no meaning.
So Much More Can Be Done
But still and all, this is way cool, though for reasons other than “transforming music into something you can hold”. This technology has a lot of possibilities that REIFY hasn’t really explored yet.
First of all, the totems make the experience extremely localized: you have to “be here now” for it to work. This can make it a very personal experience, the opposite of a giant arena; each of us gets a close-up performance, in our own hands. Artists should work with this feature.
Second, the visualizations can have really cool interactive behavior. There can be virtual buttons or slides that make it react to your touch. You can dance with it! It can dance with you! You can see different things from different sides. You can turn it over and look at the bottom. Again, artists should see what can be done with these capabilities.
The app can know about more than one totem, and can do different things if you and your sweetie place your totems together. Or require that all six of you are present in order to unlock the final, awesome verse. Or whatever. Think of the totem as a “key” for some of the content.
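This “totem as key” idea is simple set logic. A hypothetical sketch (the totem IDs and content names are invented for illustration, not anything REIFY ships): content unlocks only when the app is currently tracking every totem in a required set.

```python
# Hypothetical totem IDs required for the shared bonus content.
REQUIRED = {"totem-al", "totem-bea"}

def unlocked_content(visible_ids, required=REQUIRED):
    """Return the bonus content (if any) that the visible totems unlock.

    visible_ids: totem IDs the AR tracker currently sees in frame.
    Unlocks only when *every* required totem is present at once.
    """
    if required <= set(visible_ids):
        return "duet-verse"  # both sweethearts' totems are together
    return None

print(unlocked_content(["totem-al"]))               # None
print(unlocked_content(["totem-al", "totem-bea"]))  # duet-verse
```

The same check generalizes to “all six of you”: just put six IDs in the required set.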
Another cool thing: the “totems” could be made as jewelry or other wearables. Think of it. I buy my special someone an accessory that not only looks good, but comes with a song. A song that only plays when she or he wears the totem, and only we can hear it! And, more, it has a 3D aura that paints her or him up with glory, and only we can see it.
These experiences are straightforward extensions of the technologies REIFY is already using.
This Definitely Is A Possible Future
For this reason, I strongly disagree with Liz Stinson that “this isn’t the future, or even a future, of listening to music”. This could very well be a prototype for one way that people enjoy music—very personal, localized, and socially interactive. Very special delivery.
I would also note that the 3D objects could be distributed digitally (embedded in the app?). So you might purchase the song in a complex package that includes audio, the software app, the 3D model data, software to unlock and print out the totem, and who knows what.
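To make the “complex package” idea concrete, here is a purely speculative sketch of what such a bundle’s manifest might contain (every filename and key below is my invention, not an actual REIFY format):

```python
import json

# Hypothetical manifest for a digitally distributed "song package":
# audio, the 3D model for home printing, the AR target data, the synced
# animation, and a license gating how many totems you may print.
package = {
    "song": {"title": "Example Track", "audio": "track.flac"},
    "totem": {"mesh": "totem.stl", "ar_target": "target.dat"},
    "visuals": {"animation": "show.anim", "synced_to_audio": True},
    "print_license": {"copies": 1, "unlock_code_required": True},
}

print(json.dumps(package, indent=2))
```

The interesting design question is the license part: once the model data ships with the song, the totem stops being scarce unless something like an unlock code restores the scarcity.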