Category Archives: Future of Storytelling

Data Comics?

Benjamin Bach and colleagues wrote in IEEE Computer Graphics about “The Emerging Genre of Data Comics” [1]. I like data and I like comics, so I’ll love data comics, right?

Data comics combine data, story, and visualization. The authors say it is “a new genre, inspired by how comics function” ([1], p. 7).

The “how comics function” is largely about flow and multiple panels. As Scott McCloud says, the action happens in the gutter ([2], p. 66) (i.e., between the panels).

(By the way, Sensei McCloud teaches that this happens through the active engagement of the reader, who closes the gap with his or her imagination. If you haven’t read Understanding Comics [2], stop reading this blog right now and go read McCloud. I’ll wait here.)

The authors assert that data always has context, and “Context creates story, which wants to be narrated” ([1], p. 10). Well, maybe, though I think it is a mistake to read this as “you can tell whatever story you want” (the Hollywood approach). Part of the context is what kinds of stories it is OK to tell.

The authors give four advantages of the medium:

  • Combines text and pictures
  • Delivers one message at a time in a guided tour
  • Data visualization gives evidence for facts
  • Other types of visualization can tell the story clearly

This article itself is delivered in the form of a comic (though not a data comic), which highlights both the advantages and the limitations of this approach.

One really good thing about storyboards and comix is that they force you to boil down your story to a handful of panels, with only so much on each. This isn’t always easy, but it surely helps organize the story.

Compare this to written or spoken word, which can flow any way you want and can go on as long as you have strength, with no guarantee that any organized narrative is told.

I note that any good visualization (or demo) probably had a storyboard in the beginning, which is essentially a comic strip of the overall story to be told.

The medium isn’t without drawbacks.

For example, this article was very difficult for my ancient eyes to read. The text was rather too small and blurry for me, and white-on-black lettering is hard for me to make out. Many of the pictures were below my visual threshold. E.g., one panel, “Early examples led the way,” has tiny versions of other comics, which are illegible and may as well not be there.

Also, it was difficult to quote (i.e., remix) ideas from this article. E.g., I couldn’t easily quote the “Early examples” panel to make my point about it. I could probably have extracted the picture, fiddled with it in a drawing package, and saved a (blurry) image to include here. But how would that make my point about the illegibility of the original?

As a general rule, comix need to be pretty simple or they are impossible to read. This means that they can only deliver a very concise story. As Bach et al. suggest, this is a feature, not a bug.

On the other hand, telling “only one message at a time” is not just “concise”; it is a Procrustean bed. For complicated data there isn’t one message, there are many. A data comic runs the risk of trivializing or misleading by omission. This is a bug, not a feature.

The challenge is to make “concise” be deep rather than shallow.

This is why trying to express the story in a storyboard (comic) is an extremely good design practice, even if the story isn’t ultimately published in the form of a comic.

  1. Benjamin Bach, Nathalie Henry Riche, Sheelagh Carpendale, and Hanspeter Pfister, The Emerging Genre of Data Comics. IEEE Computer Graphics and Applications, 38 (3):6-13, 2017.
  2. Scott McCloud, Understanding Comics, HarperCollins, 1994.

“Games For Change” 2017 Student Challenge

And speaking of mobile apps with a social purpose….

The upcoming annual Games For Change (G4C) meeting has a lot of interesting stuff, on the theme “Catalyzing Social Impact Through Digital Games”. At the very least, this gang is coming out of the ivory tower and up off their futons, to try to do something, not just talk about it.

Part of this year’s activities is the Student Challenge, which is a competition that

“invites students to make digital games about issues impacting their communities, combining digital storytelling with civic engagement” [1].

This year’s winners were announced last month, from local schools and game jams in NYC, Dallas, and Pittsburgh. (Silicon Valley, where were you?) Students were asked to invent games on three topics:

  • Climate Change (with NOAA),
  • Future Communities (with Current by GE), and
  • Local Stories & Immigrant Voices (with National Endowment for the Humanities).

Eighteen winners were highlighted.

The “Future Communities” games are mostly lessons on the wonders of “smart cities” and admonitions to clean up trash. One of them has a rather compelling “heartbeat” of Carbon emissions, though the game mechanics are pretty obscure: doing anything, or doing nothing at all, increases Carbon. How do I win?

The “Climate Change” games also advocate picking up trash, as well as planting trees. There is also a quiz and an Antarctic Adventure (though nothing even close to “Never Alone”).

The “Local Stories & Immigrant Voices” games tell stories about immigrants, past and present. (These kids are from the US, a land of immigration.) There are two alarming “adventures” that sketch how to illegally enter the US, which is a dangerous undertaking with a lot of consequences. Not something I like to see “gamified”.

Overall, the games are very heavy on straight storytelling, with minimal game-like features. Very much like the “educational games” the kids no doubt have suffered through for years, and not much like the games everyone really likes to play. One suspects that there were teachers and other adults behind the scenes shaping what was appropriate.

The games themselves are pretty simple technically, which is inevitable given the short development time and low budgets. The games mostly made the best of what they had in the time available.

I worry that these rather limited experiences will give the students a false impression of both technology and story telling. The technology used is primitive, they did not have realistic market or user testing, and the general game designs are unoriginal. That’s fine for student projects, but not really a formula for real world success, and has little to do with real game or software development.

Worse, the entire enterprise is just talking about it. One game, or 10,000 games, telling you (again) to pick up trash doesn’t get the trash picked up. If you want to gamify neighborhood clean-up, you are going to need to tie it to the actual physical world, e.g., a “trashure hunt”, with points for cleaning up and preventing litter.

These kids did a super job on their projects, but I think the bar was set far too low. Let’s challenge kids to actually do something, not just make a digital story about it. How would you use game technology to do it? I don’t know. That’s what the challenge is.

  1. Games for Change, Announcing the winners of the 2017 G4C Student Challenge, in Games For Change Blog. 2017.


Blockchain Use Cases: Theme Parks?

Jegar Pitchforth writes in Coindesk about “5 Ways Theme Parks Could Embrace Blockchain” [1]. His basic idea is that theme parks are historically “early adopters” and pioneers of technology, and should pioneer the use of blockchain technology.

He specifically identifies five use cases:

  1. Ticketing
  2. “Fastpass tickets” (i.e., specific deals)
  3. Theme Park Currency (Branded)
  4. Audience Surveys
  5. Pay audience to advertise


These are scarcely new ideas; indeed, the entire article refers to existing programs. So the question is: what does blockchain technology bring to the table? How would a blockchain be better than current technology?

Let’s look at his use cases to see what value blockchain brings, if any.

In the case of ticketing, it seems that the main advantage is that a blockchain system can be securely accessed by any smartphone.   Current systems work fine, as far as I know, and wearable technology makes it even more convenient than a smartphone.

The “Fastpass” use case has the potentially interesting wrinkle of using “smart contracts” to implement markets for these ‘rights’. Guests could trade and bargain for seats on rides, and so on.  Or there could be various conditions attached (“You can ride if you and 3 of your friends show up in 15 minutes….”)

Assuming that this kind of activity is a desirable feature (and for some fantasy worlds, I’m not sure that you want people diverting attention to such matters), it isn’t clear that blockchain is any better or worse than any other technology. After all, so called “smart contracts” are really, really simple logic, which can easily be built into a conventional database.
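To make the point concrete, here is a minimal sketch of a conditional “Fastpass” rule implemented as plain application logic over an ordinary, centralized data store. All the names here are mine, not any park’s actual system; the point is that the conditions (“you and 3 friends within the time window”) and trading are a few lines of ordinary code, no blockchain required.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FastPass:
    holder: str
    ride: str
    expires_at: datetime
    min_group: int = 1          # e.g. "you and 3 friends" -> min_group = 4
    redeemed: bool = False

class ParkDB:
    """A conventional, centralized record of passes -- stands in for the
    private database a park would actually run."""
    def __init__(self):
        self.passes = []

    def issue(self, p: FastPass):
        self.passes.append(p)

    def redeem(self, holder: str, ride: str, group_size: int, now: datetime) -> bool:
        # The entire "smart contract": a handful of condition checks.
        for p in self.passes:
            if (p.holder == holder and p.ride == ride and not p.redeemed
                    and now <= p.expires_at and group_size >= p.min_group):
                p.redeemed = True
                return True
        return False

    def transfer(self, holder: str, ride: str, new_holder: str) -> bool:
        # Guests trading or bargaining for passes is just an ownership update.
        for p in self.passes:
            if p.holder == holder and p.ride == ride and not p.redeemed:
                p.holder = new_holder
                return True
        return False
```

A pass with a 15-minute window and a group condition is then `FastPass("alice", "coaster", now + timedelta(minutes=15), min_group=4)`, and redeeming it with too small a group simply fails the condition check.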

The idea of Theme Park Currency is nothing more or less than digital tokens or coupons, with a ton of general purpose overhead. Since these ‘coins’ are essentially private tokens issued by the park, they aren’t “decentralized” at all. In that sense, blockchain is a terrible choice, completely incongruent with the use case.

The last two hinge on using the cryptocurrency as loyalty points to incentivize the victims, er, guests. This may or may not be desirable thematically (and is certainly ethically problematic when children are involved), but you don’t need a blockchain or private cryptocurrency to make it work.

Overall, there is little technical or logical reason why blockchain technology is especially well suited for any of these use cases. Indeed, to the degree that blockchain is generic and invites attention to commerce, it interferes with the effort to create a magic world and to command total attention and immersion.

It is true that a blockchain-based solution might be cheap and easy compared to creating a secure private network. However, much of the cost and effort must go into the user experience, not the back-end details, so I’m not sure there would be much cost savings.

Most of the features of the blockchain are actually irrelevant to these use cases. The data systems of a theme park are extremely private and highly localized. What is the advantage of using an open, internet-wide data system?

Above all, the entire theme of a “theme park” is trust. We hand over part of our life to the designers, trusting them to give us a safe and enchanting experience. Ticketing, tokens, and whatever else must all be integrated to be part of this trusted experience. What is the advantage of using a “trustless” technology to implement this deeply trustful system?

Overall, it looks to me like you could use blockchain technology, but there is hardly a compelling case to do so. And if you do, it will be necessary to integrate it into the overall magic, which likely will mean that the blockchain should be invisible. If it is done right, you’ll never know it is there.

Actually, a successful deployment would be very good for blockchain technology in general, because it would have to create a safe and wonderful user experience. To date, the “user experience” with blockchains is very, very weak. A Disney-quality interface would lift all boats.

For example, a blockchain system requires guests (including children?) to manage cryptokeys. In a theme park this must be safe, intuitive, and generally invisible. Developing cool metaphors and UI to do this would be a great thing to see, and would advance the whole field.

  1. Jegar Pitchforth, 5 Ways Theme Parks Could Embrace Blockchain (And Why They Should). Coindesk, May 16, 2017.


Cryptocurrency Thursday

RoboThespian: Uncanny or Just Plain Unpleasant?

RoboThespian  is disturbing.

I think this particular humanoid robot has climbed out of the uncanny valley of discomfort, and ambled out onto the plain of the extremely annoying coworker. Disney animatronics gone walkabout.

“RoboThespian is a life sized humanoid robot designed for human interaction in a public environment. It is fully interactive, multilingual, and user-friendly, making it a perfect device with which to communicate and entertain.”

Clearly, these guys have done a ton of clever work, integrating human like locomotion, speech synthesis, projection, face tracking, and serious chat bot software.

“The standard RoboThespian design offers over 30 degrees of freedom, a plethora of sensors, and embedded software incorporating text-to-speech synthesis in 20 languages, facial tracking and expression recognition. The newly developed RoboThespian 4.0 will offer a substantial upgrade, adding additional motion range in the upper body and the option of highly adept manipulative hands.”

What can you do with all this? I think the key clue is that the programming is done via a GUI environment resembling Blender, which means that you basically create a computer-generated scene, which is “rendered” in physical robots.
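As a rough sketch of what “rendering a scene in a physical robot” might mean (this is my own guess at the general idea, not Engineered Arts’ actual software), an animation timeline of joint keyframes can be played back by interpolating joint angles over time and streaming the results to servos:

```python
# Hypothetical keyframe timeline: time (s) -> joint angles (degrees).
# Joint names and values are invented for illustration.
KEYFRAMES = {
    0.0: {"head_yaw": 0.0,  "jaw": 0.0},
    1.0: {"head_yaw": 30.0, "jaw": 10.0},   # turn head, open mouth
    2.0: {"head_yaw": 0.0,  "jaw": 0.0},    # return to rest
}

def joint_angles(t):
    """Linearly interpolate all joints at time t between bracketing keyframes,
    clamping to the ends of the timeline."""
    times = sorted(KEYFRAMES)
    t = max(times[0], min(t, times[-1]))
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            k0, k1 = KEYFRAMES[t0], KEYFRAMES[t1]
            # Each frame sent to the servo controller at this instant.
            return {j: k0[j] + f * (k1[j] - k0[j]) for j in k0}
```

The “rendering” loop would just call `joint_angles(t)` at the servo update rate, exactly as a 3D animation package samples its timeline per video frame.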

Much of the spectacular effect is due to well-coordinated facial expressions, head movement, and speech. The robot also has sensors to detect people, and especially faces, and to orient to them. It also has facial expression recognition, which lets it “reproduce” facial expressions. All these effects are “uncanny”, and make the beast appear to be talking to you (or singing at you). Ick!

All this is in the pursuit of…I’m not sure what.

I grant you that this is a great effect, at least on video. But what is it for?

The title and demos suggest that it replaces human thespians (live onstage), which seems far-fetched. If you want mechanized theater, you always have computer-generated movies. As far as I can tell, the main use case is advertising, e.g., trade show demos. It either replaces human presenters (demo babes) or it replaces video billboards.

They also suggest that this is a good device for telepresence: it “can inhabit an environment in a more human manner; it’s the next best thing to being there.” I’m not at all sure about that. Humanoid appearance is not really important for effective telepresence in most cases, and there is no reason to think this humanoid is well suited for any given telepresence situation.

Let me be clear: this product is really nicely done.  I do appreciate a well crafted system, integrating lots of good ideas.

But I really don’t see that RoboThespian is anything other than a flashy gimmick. (Human actors are way, way cheaper, and probably better.)

On the other hand, when I saw the first computer mouse on campus, I declared that it was a useless (and stupid) interface, and no one would ever use it.   I was wrong about mice (Boy was I wrong!), so my intuitions about humanoid chatter bots may be wildly off.

Update May 4 2017: Corrected to indicate that Engineered Arts does not use Blender, as the original post said. I must have seen some out-of-date information. They have their own environment which, if not built from Blender, is built to look just like it. Thanks to Joe Wollaston for the correction.


Robot Wednesday

Automated Story Synthesis From Disney

Mubbasir Kapadia and colleagues from Disney Research in Zurich have reported on CANVAS, a tool for composing “stories” [1]. The idea is to streamline the storyboarding process, with an interface that makes it easier to create a story. Being Disney, the system also generates a 3D animation that illustrates the story.

The result is a tool that enables “untrained users to create complex narrative-driven animations within minutes” (p. 200). Très cool!

Technically, the system tries to balance the role of the human author with the automated system, keeping the latter largely hidden. This work builds on earlier approaches, mapping author specified points in the plot into “a constraint satisfaction problem with multiple, possibly contradicting goal constraints”, which is not in any way trivial to solve.

The system is similar to many AI planning systems, except with a pretty clever interface, and automatic movie generation! Given a library of domain knowledge, actors, and relationships, the author constructs points in a story with a graphical interface. Then you can push the “complete” button to fill in gaps in the story, and “play” to see a 3D animation of the story. Wow!
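The flavor of “fill in the gaps so the constraints hold” can be illustrated with a toy constraint-satisfaction sketch. To be clear, this is not the paper’s actual algorithm (CANVAS handles far richer constraints); the actions and ordering rules below are invented for the example:

```python
from itertools import permutations

# A made-up caper vocabulary and ordering constraints: (a, b) means
# action a must happen somewhere before action b.
ACTIONS = ["case_bank", "enter_bank", "rob_vault", "escape"]
BEFORE = [("case_bank", "enter_bank"),
          ("enter_bank", "rob_vault"),
          ("rob_vault", "escape")]

def consistent(story):
    """A story is consistent if each action is used once and all
    ordering constraints are satisfied."""
    if len(set(story)) != len(story):
        return False
    pos = {a: i for i, a in enumerate(story)}
    return all(pos[a] < pos[b] for a, b in BEFORE)

def complete(partial):
    """Fill the None slots (the gaps the author left) with unused actions
    so that every constraint holds; return the first consistent story."""
    gaps = [i for i, a in enumerate(partial) if a is None]
    unused = [a for a in ACTIONS if a not in partial]
    for perm in permutations(unused, len(gaps)):  # brute force; fine at toy scale
        candidate = list(partial)
        for i, a in zip(gaps, perm):
            candidate[i] = a
        if consistent(candidate):
            return candidate
    return None  # the author's fixed plot points contradict the constraints
```

Pin the first and last plot points and press “complete”: `complete(["case_bank", None, None, "escape"])` fills the middle in a consistent order, and an impossible skeleton like `["escape", None, None, "case_bank"]` returns `None`, which is the toy analogue of the paper’s “possibly contradicting goal constraints.”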

Aside from the instant automation (which would make a great product just for home use), I was intrigued by their demo scenario: the story of a bank robbery. This made me wonder if this technology would actually be useful for planning a real caper.

Thinking about it I suspect that, with the right domain knowledge including accurate intelligence and simulation about defenses, this might be a useful planner for this kind of escapade. In fact, the ability to test out alternative narratives would be extremely useful for planning contingencies and back up plans.

Does this mean we might see black market planning software, with libraries of specific domain knowledge for sale? Maybe. “Make your own movie” software, with premium mods to output more than just the movie: it might output a “parts list”, scripts, and briefing materials, so you could act out the caper in real life.

With the instant planning and replanning, you might push out the “script” to the “actors” in real time using mobile devices, monitoring their progress in your 3D simulation. The actual behavior of the actors would be fed back in as new plot points, forcing recalculation of the “story” in real time. Would that really work? I dunno.

And, of course, experts could sell their knowledge over the dark net. Want to know how to break in to a specific bank? Maybe you can buy a mod on the net, filled with expert local knowledge and intelligence.

Pretty neat, huh? A push-button heist.

On the other hand…

I’d say this approach would be quite risky, and not just because the input data might be incomplete or wrong. The system would be a tempting target for hacking. Imagine the mischief generated by manipulating such a complex simulation, if the user was crazy enough to actually try it. You would need pretty solid security against hackers, and also against leaks, because possession of the simulation would be pretty damning evidence against you.

I can imagine white hats collecting intelligence in the form of these libraries of domain knowledge off the dark web, and possibly baiting traps with tempting honey: false or misleading input data, treacherous behavior modules, and sneaky telltale units that report back to the white hats and lead you into a trap.

I would note that the entire idea might be misguided. While complex, precisely planned crimes make great stories, there is much to be said for simplicity and fluid improvisation. Fantastically complex stories are fragile and tend to fail catastrophically, whereas a simple, straightforward raid might be much more robust in the face of real life events.

…Sorry, I seem to have wandered way off topic.

This technology looks really neat, and I’d love to be able to buy it.

I don’t think it would lead to clever crimes. But it might lift the quality of web animations and game stories. Your average teenager could make a pretty decent movie with something like this.

That would be neat.

  1. Kapadia, Mubbasir, Seth Frey, Alexander Shoulson, Robert W. Sumner, and Markus Gross, CANVAS: computer-assisted narrative animation synthesis, in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 2016, Eurographics Association: Zurich, Switzerland. p. 199-209.


FurFur: A Cool “Shared” Robot For Couples

In our digital age, many people sustain long distance relationships using digital media. For that matter, many digital experiences are inherently social, including social media and multiplayer games.

The emergence of haptic interfaces presents interesting new opportunities for this use case, which has only begun to be explored in personal relationships. There have been some suggestive notions of remote touching (e.g., [2]), remote interaction with pets (e.g., [3]), and the emerging technology of remote dildonics.

At the same time, there has been expanding interest in telepresence robots, which ultimately would combine with “affective” robots to project a remote emotional presence.

There are many challenges here, not least of which are how to create reasonable two-way remote presence, and what sorts of remote haptic interactions people would actually want.

Wei-Chi Chien and colleagues at the Folkwang University of the Arts in Essen (I had to look it up) have an interesting idea: Furfur.

Furfur is a shared robot pet, but one that has two incarnations, one for each person in a distant relationship. Furfur has one personality and one “life line”, but can interact with either of two people via the Internet. The simple rule that when I am playing with Furfur, you can’t interact with or even see him (her? it?), and vice versa, creates a clever illusion that Furfur is teleporting back and forth between us. Cool!
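The one-place-at-a-time rule is simple enough to sketch in a few lines. This is a toy of my own, not the authors’ implementation: the shared pet is “present” at exactly one endpoint, while its mood (the shared “life line”) is common state visible from either side.

```python
class SharedPet:
    """Toy model of a shared pet with two physical incarnations but one
    identity: it can be 'present' at only one endpoint at a time."""

    def __init__(self):
        self.location = None   # which endpoint currently "has" the pet
        self.mood = 0          # the single shared "life line"

    def grab(self, endpoint):
        """Endpoint requests the pet; succeeds only if it is free or already here."""
        if self.location in (None, endpoint):
            self.location = endpoint
            return True
        return False           # the pet is "with" the other partner

    def release(self, endpoint):
        if self.location == endpoint:
            self.location = None

    def pet(self, endpoint):
        """Petting raises the shared mood, which the other partner sees later."""
        if self.location != endpoint:
            raise RuntimeError("the pet is not here")
        self.mood += 1

    def visible_at(self, endpoint):
        return self.location == endpoint
```

The teleporting illusion falls out of `grab` failing while the pet is “away”, and the sense of a single pet falls out of the carried-over `mood`.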

The team also created some simple interactive patterns that sort of transmit comfortable touch and sounds. Either party can pet the Furfur, which enjoys the contact, and this joy is displayed to the other party. Furfur can also pass along what it hears from one to the other person.

The overall effect is quite striking: with just a few constraints, Furfur creates a sense of a single, shared pet across the distance.

This design utilizes a ‘joint action’ strategy to sustain the interpersonal relationship between the people. As the authors comment, this strategy is rarely used in the literature, but we can certainly see how effective it can be.

Very nice work!

Oh, and notice that there is no touch screen at all, inappropriate or otherwise!

  1. Chien, Wei-Chi, Marc Hassenzahl, and Julika Welge, Sharing a Robotic Pet as a Maintenance Strategy for Romantic Couples in Long-Distance Relationships: An Autobiographical Design Exploration, in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 2016, ACM: Santa Clara, California, USA. p. 1375-1382.
  2. Chung, Keywon, Carnaven Chiu, Xiao Xiao, and Pei-Yu Chi, Stress outsourced: a haptic social network via crowdsourcing, in CHI ’09 Extended Abstracts on Human Factors in Computing Systems. 2009, ACM: Boston, MA, USA. p. 2439-2448.
  3. Lee, Shang Ping, Adrian David Cheok, Teh Keng Soon James, Goh Pae Lyn Debra, Chio Wen Jie, Wang Chuang, and Garzam Farbiz, A Mobile Pet Wearable Computer and Mixed Reality System for Human-Poultry Interaction Through the Internet. Personal and Ubiquitous Computing, 10 (5):301-317, 2006.

VR Roller Coaster, Flavor of the Month

Kristen Clark reports at IEEE Spectrum that “virtual reality roller coasters are having a moment”.  The maturation of VR headsets and related tech has opened the door to adding VR 3D visuals to conventional Roller Coaster amusement rides.

This would seem to be combining two notorious nausea-inducing technologies, which might be much worse than either independently. In fact, the designers believe that combining the two actually reduces the problem. Specifically, the idea is that if you can very precisely match the motion and visuals, you eliminate the cue conflict between vestibular and ocular cues to the brain. This requires very precise synchronization, matching the visual projections to the rapidly and unpredictably moving body and head.

Roller coasters are a favorable environment for this, however. The path of the car is always the same, and the rider is strapped in, so much of the motion is predictable. The car also provides a harness for precise motion tracking of the person’s free-moving head, and for delivering video rapidly. For example, the car can carry significant batteries, and the headset can be tethered.
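The predictability of the path is the key simplification. As a rough sketch (my own simplification, not any vendor’s system), the virtual camera’s position can be looked up from distance along the track in a precomputed table, so only the rider’s head orientation needs live tracking:

```python
import bisect

# Precomputed keyframes: (distance along track in meters, (x, y, z) of the
# virtual camera). Values are invented for illustration.
TRACK = [(0.0,  (0.0,  0.0, 0.0)),
         (10.0, (10.0, 0.0, 2.0)),
         (20.0, (18.0, 5.0, 8.0))]

def car_position(s):
    """Linearly interpolate the car's virtual position at track distance s."""
    dists = [d for d, _ in TRACK]
    i = bisect.bisect_right(dists, s) - 1
    i = max(0, min(i, len(TRACK) - 2))          # clamp to valid segments
    (d0, p0), (d1, p1) = TRACK[i], TRACK[i + 1]
    t = (s - d0) / (d1 - d0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def camera_pose(s, head_yaw_deg):
    """Full pose = predictable car position + live-tracked head orientation.
    Only the yaw comes from the headset's sensors; everything else is a
    table lookup, which is what makes tight synchronization feasible."""
    return {"position": car_position(s), "yaw": head_yaw_deg}
```

The hard real-time problem is then reduced to measuring `s` (e.g., from track-side markers) and the head pose with low enough latency, rather than reconstructing arbitrary motion.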

I’m not totally convinced that this technology will entirely eliminate motion sickness; there is a lot of individual and even situational variability in these effects, so some people probably will still have trouble. The VR designers also consider the feeling of presence, the absorption in the visual world, to be important, and “presence”, too, is highly variable among different people. (If nothing else, what happens if I close my eyes during the ride?)

But when it works, how significant is VR for a Roller Coaster? Certainly it is great to be able to create the scenery digitally. This opens great possibilities for the designer, and is way cheaper to change than physical props.

The digital visuals can portray any storybook world that can be realized, divorced from constraints of time and space. And the graphics in conjunction with high G motion can trick the eye (and ear?), potentially creating illusions of weightlessness, great speed, and falling from heights. I would imagine that a compelling experience could elongate time perception as well.

On the other hand, my limited experience with Roller Coasters was mainly all about the motion, not the visuals. In fact, I tend to close my eyes to concentrate on the flying, ignoring the (known to be fake) visuals. So would the VR do anything for me? I don’t know.

In fact, Clark noticed the same thing, when she repeated the ride without the VR effects.

“Rather than a tunnel to the stars, I was staring facedown at a pavement littered with dead brown leaves, until…suddenly, I was flying—swooping down into close calls with the ground below, hurtling up into barrel rolls, darting through real trees, and feeling the wind in my hair. My body went limp in exhilarated, childlike joy.”

Perhaps, she says, “a coaster as physically thrilling as Galactica might have been more thrilling if it had just been left alone.” And here she homes in on a crucial point: “if it’s presence you’re after, it may turn out to be surprisingly difficult to beat the old-fashioned method. It’s called being present.”

Clearly, VR RCs (“VRoller Coasters”?) are the flavor of the month. We shall see what can be done with them, and how long they last.

Oculus is Evil

Speaking of the flavor of the month, the Oculus Rift is rolling out to huge fanfare. Oculus and similar headsets are reimplementations of standard old VR technology, which was cool in 1997, and is just as cool (and way lighter and cheaper) now.

One new feature that has been added, though, is the absurdly arrogant license agreement. I don’t have an Oculus and have not examined the Terms and Conditions in detail, but Andrew Liptak reports at Gizmodo that “There Are Some Super Shady Things in Oculus Rift’s Terms of Service”.

Actually, the “news” isn’t especially surprising or especially “shady”; the terms are similar to Facebook’s and others’ nonsense. This is pretty common stuff these days.

Specifically, Liptak reports that Oculus basically asserts complete rights to do anything they want with any “content” you produce: i.e., everything you do with your Oculus effectively belongs to Facebook. Your behavior is tracked and sold off, and anything you create can be stolen and sold off.

Such a deal!

Aside from the awe-inspiring arrogance of this approach, it is utterly bone-headedly stupid and short-sighted. How can I use this technology, explore this technology, invent new uses of this technology, under such a license? I can’t. Forget it.

Clearly, Oculus has been captured by Hollywood, and is utterly clueless about technology and how to make money.