Category Archives: Design

Data Comics?

Benjamin Bach and colleagues wrote in IEEE Computer Graphics about “The Emerging Genre of Data Comics” [1]. I like data and I like comics, so I’ll love data comics, right?

Data comics are a combination of data + story + visualization. The authors call them “a new genre, inspired by how comics function” ([1], p. 7).

The “how comics function” is largely about flow and multiple panels. As Scott McCloud says, the action happens in the gutter ([2], p. 66) (i.e., between the panels).

(By the way, Sensei McCloud teaches that this happens though the active engagement of the reader, who closes the gap with his or her imagination. If you haven’t read Understanding Comics [2], stop reading this blog right now and go read McCloud. I’ll wait here.)

The authors assert that data always has context, and “Context creates story, which wants to be narrated” ([1], p. 10). Well, maybe, though I think it is a mistake to read this as “you can tell whatever story you want” (the Hollywood approach). Part of the context is what kinds of stories it is OK to tell.

The authors give four advantages of the medium:

  • Combines text and pictures
  • Delivers one message at a time in a guided tour
  • Data visualization gives evidence for facts
  • Other types of visualization can tell the story clearly

This article itself is delivered in the form of a comic (though not a data comic), which highlights both the advantages and the limitations of this approach.

One really good thing about storyboards and comix is that they force you to boil down your story to a handful of panels, with only so much on each. This isn’t always easy, but it surely helps organize the story.

Compare this to written or spoken word, which can flow any way you want and can go on as long as you have strength, with no guarantee that any organized narrative is told.

I note that any good visualization (or demo) probably had a storyboard in the beginning, which is essentially a comic strip of the overall story to be told.

The medium isn’t without drawbacks.

For example, this article was very difficult for my ancient eyes to read. The text was rather too small and blurry for me, and white-on-black lettering is hard for me to make out. Many of the pictures were below my visual threshold. E.g., one panel, “Early examples led the way,” has tiny versions of other comics, which are illegible and may as well not be there.

Also, it was difficult to quote (i.e., remix) ideas from this article. E.g., I couldn’t easily quote the “Early examples” panel to make my point about it. I could probably have extracted the picture, fiddled with it in a drawing package, and saved a (blurry) image to include here. But how would that make my point about the illegibility of the original?

As a general rule, comix need to be pretty simple or they are impossible to read. This means that they can only deliver a very concise story. As Bach et al. suggest, this is a feature, not a bug.

On the other hand, telling “only one message at a time” is not just “concise”; it is a Procrustean bed. For complicated data there isn’t one message, there are many. A data comic runs the risk of trivializing, or of misleading by omission. This is a bug, not a feature.

The challenge is to make “concise” be deep rather than shallow.

This is why trying to express the story in a storyboard (comic) is an extremely good design practice, even if the story isn’t ultimately published in the form of a comic.

  1. Benjamin Bach, Nathalie Henry Riche, Sheelagh Carpendale, and Hanspeter Pfister, The Emerging Genre of Data Comics. IEEE Computer Graphics and Applications, 38 (3):6-13, 2017.
  2. Scott McCloud, Understanding Comics, HarperCollins, 1994.

Origamizer: Origami Anything


I think I’ve always believed that origami could, in principle, represent most any shape, if you were clever enough. Until now, that was just a hunch.

This summer Erik D. Demaine and Tomohiro Tachi (of MIT and U. Tokyo) published a complete algorithm for “Folding any Polyhedron” [1]. In short, you can make origami anything.

I don’t know what the limits of the algorithm are, but they say they can make an origami version of The Bunny, so it must be for real!

Most of the technical details are beyond my own puny understanding of computational geometry, but I know this is potentially very important.

The traditional craft of origami is a repository of knowledge for how to create complicated shapes out of a single sheet of paper. These techniques are now a very important source of design for foldable and flatpack designs for robots and objects.

For one thing, these designs are amenable to simple digital manufacturing with laser cutters and 3D printers. For another, just like flat pack furniture, it is interesting to deliver a compact package that folds into a complex device on location. At small scales, this might deliver medical robots inside a body. At larger scales, this might deliver a planetary rover or temporary shelter via air drop.

I’m sure there are many more cases I haven’t thought about.

My own view is that every engineering and design student should study origami.  It should be part of the mental (and manual) toolkit.

The “origamizer” is extremely significant because it means that it should be possible to realize any CAD design in one or more origamis. Combined with different manufacturing techniques, designers can deliver self-assembling and DIY designs of greater and greater complexity. Cool!

I’d love to see an ‘origamizer plugin’ for Blender!

  1. Erik D. Demaine and Tomohiro Tachi, Origamizer: A Practical Algorithm for Folding Any Polyhedron, in The 33rd International Symposium on Computational Geometry (SoCG 2017), B. Aronov and M. J. Katz, Editors. 2017: Brisbane. p. 34:1–34:15.
  2. Tomohiro Tachi, Software, 2017.

Smell Maps of Cities

Daniele Quercia and colleagues have published research aimed at mapping the smells of entire cities [2]. They want to analyze social media to detect recent descriptions of smells, to create city-wide maps of what people are smelling in different places. To do this, they needed to create a dictionary of terms for smells.

The authors are mainly concerned with aesthetics, not with chemical analysis of the air or the sources of smells. They are concerned with “the positive role that ‘smell’ as opposed to ‘air pollution’ can play in the environmental experience” ([2], p. 334). They comment that there is little work on this topic, so they hope to “enrich the urban smell toolkit” ([2], p. 327).

The study collected residents’ reports of what they smell, and clustered similar terms to form a dictionary of smells (i.e., of concepts about smells). This was also correlated with existing dictionaries of smell terms.

Smell terms from geotagged social media entries were used to create maps of smells across the city. The researchers suggest that there are different spatial scopes for the smells, from broad to very localized. They call these “base notes”, “mid-level notes”, and “high notes”, an analogy to perfume advertisements that is pretty shaky in this case.
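As a rough sketch of what the mapping step might look like (the smell dictionary, grid size, and data here are invented for illustration; the authors’ actual pipeline is far more elaborate), geotagged posts can be binned into grid cells and smell categories counted per cell:

```python
from collections import Counter, defaultdict

# Hypothetical smell dictionary: maps raw terms from posts to broad
# categories, loosely analogous to the clustered dictionary the authors built.
SMELL_DICTIONARY = {
    "exhaust": "emissions", "fumes": "emissions",
    "bread": "food", "coffee": "food", "fish": "food",
    "grass": "nature", "flowers": "nature",
}

def smell_map(posts, cell_size=0.01):
    """Bin geotagged posts into grid cells and count smell categories per cell.

    posts: iterable of (lat, lon, text) tuples.
    Returns {(cell_lat_index, cell_lon_index): Counter of smell categories}.
    """
    grid = defaultdict(Counter)
    for lat, lon, text in posts:
        cell = (int(lat // cell_size), int(lon // cell_size))
        for word in text.lower().split():
            category = SMELL_DICTIONARY.get(word.strip(".,!?"))
            if category:
                grid[cell][category] += 1
    return dict(grid)

# Two made-up posts near the same spot end up in the same cell.
posts = [
    (51.5074, -0.1278, "The fumes and exhaust near the bridge are awful"),
    (51.5080, -0.1280, "Fresh bread smell from the bakery!"),
]
for cell, counts in smell_map(posts).items():
    print(cell, counts.most_common())
```

Note that the grid cell size is exactly the “granularity” problem discussed below: a 0.01-degree cell is roughly a kilometer, far coarser than a smell that is only detectable within a meter.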

The resulting maps seem to capture coarse features of the city (e.g., industrial concentrations, large food market), and are slightly correlated with air quality measures.

The main implications are public awareness of the cityscape, and perhaps an increased attention from urban designers.


I found this study to be competently done, but not really useful. It borders on what my statistics teachers would call a “Type 3 Error”: they may be asking the wrong question.

Their concept of “smells” seems to be rather questionable.

First of all, as they generally acknowledge, the sense of smell is rather complex. There are large individual differences, not just the demographic variation they mention. Worse, smell is highly affected by both short-term and long-term experience. People habituate to smells rapidly, and learn over time. Our sense of smell changes as we age, as well as due to illness, exercise, and other activities. For that matter, we wear scents and scented clothing, which form a private smellscape right under our own noses.

The study worked hard to cluster the words people use to describe smell, but this exercise in linguistics brings in a slew of factors of culture and learning. These cultural and cognitive elements are definitely relevant to their interest in urban experiences, though describing smells isn’t the same as perceiving smells. As the paper notes, there are quite a few contextual factors that may go into how a smell is perceived and described.

For example, I suspect that future studies might find that people report more negative words for how the “bad parts of town” smell, compared to areas they prefer, regardless of the objective chemistry of the air.  (This might be called “the New Jersey effect”.)

The terms chosen for a given smell also reflect personal and cultural contexts. For example, the smell of human sweat can be attractive or awful, depending on the people involved and the situation.

This study treats smells as rather permanent, large-grained features, though they are ephemeral and subject to micro-weather, e.g., wind direction. The authors do enumerate very localized “high notes”, but these smells can be extremely localized, detectable only within a meter or less, which cannot be represented on their maps.

In other words, the maps are coarse-grained in both time and space. Perhaps this level of analysis is useful for urban design, but it is certainly not self-evident just how much this matters. The reported low correlations with other measures probably reflect this overly coarse granularity of the geotagged terms and the other measures.

This methodology is primarily about outdoor smells. But urban experiences are mainly indoors, and indoor smells are a totally different animal. Sure, the open-air farmer’s market smells wonderful for the 15 minutes I am passing it, but when I go inside, I can’t smell it any more, no matter where I am on the city map.

The researchers make an interesting point about wanting to create attractive urban smells, not just mitigate pollution and repellant odors. On the other hand, this work shows little reason to think that this kind of analysis is an accurate measure of harmful pollution, despite what the authors may sometimes claim in the popular media. I’m all in favor of my city smelling nice (at least to some people), but it is important to monitor and reduce dangerous pollution, much of which cannot be seen or smelled.

This method is not, I repeat, not a good way to monitor the air quality of a city.

  1. Matt McGrath, Can city ‘smellfies’ stop air pollution? BBC News, March 10, 2017.
  2. Daniele Quercia, Rossano Schifanella, Luca Maria Aiello, and Kate McLean, Smelly Maps: The Digital Life of Urban Smellscapes. Ninth International AAAI Conference on Web and Social Media, 327-336, 2017.

Health Apps Are Potentially Dangerous

The “Inappropriate Touch Screen Files” has documented many cases of poor design of mobile and wearable apps, and I have pointed out more than once the bogosity of unvalidated cargo cult environment sensing.

This month Eliza Strickland writes in IEEE Spectrum about an even more troubling ramification of these bad designs and pseudoscientific claims: “How Mobile Health Apps and Wearables Could Actually Make People Sicker” [2].

 Strickland comments that the “quantified self” craze has produced hundreds of thousands of mobile apps to track exercise, sleep, and personal health. These apps collect and report data, with the goal of detecting problems early and optimizing exercise, diet, and other behaviors. Other apps monitor the environment, providing data on pollution and micro climate. (And yet others track data such as hair brushing techniques.)

These products are supposed to “provide useful streams of health data that will empower consumers to make better decisions and live healthier lives”.

But, Strickland says, “the flood of information can have the opposite effect by overwhelming consumers with information that may not be accurate or useful.”

She quotes David Jamison of the ECRI Institute, who comments that many of these apps are not regulated as medical devices, so they have not been tested to show that they are safe and effective.

Jamison is one of the authors of an opinion piece in JAMA, “The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors” [1]. In this article, the authors strongly criticize the sale of monitoring systems aimed at infants, on two grounds.

First, the devices have not been proven accurate, safe, or effective for any purpose, let alone the advertised aid to parents. Second, even if the devices do work, there is considerable danger of overdiagnosis. If a transient and harmless event is detected, it may trigger serious actions such as an emergency room visit. If nothing else, this will cause needless anxiety for parents.

I have pointed out the same kind of danger from DIY environmental sensing: if misinterpreted, a flood of data may produce either misplaced anxiety about harmless background level events or misplaced confidence that there is no danger if the particular sensor does not detect any threat.

An important design question in these cases is, “Is this product good for the patient (or user)?” More data is not better if you don’t know how to interpret it.

This is becoming even more important than the “inappropriateness” of touchscreen interfaces:  the flood of cargo cult sensing in the guise of “quantified self” is not only junk, it is potentially dangerous.

  1. Christopher P. Bonafide, David T. Jamison, and Elizabeth E. Foglia, The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors. JAMA: Journal of the American Medical Association, 317 (4):353-354, 2017.
  2. Eliza Strickland, How Mobile Health Apps and Wearables Could Actually Make People Sicker, The Human OS, IEEE Spectrum, 2017.


Barcelona Fab Market for Open Source Design

Cat Johnson writes about the “Fab Market”, which is an initiative associated with the world-renowned Barcelona Fab Lab. The basic idea is an online shop that sells products to be made at a local Fab Lab. The designs are created by designers anywhere in the world, and are supposed to be open source. The Barcelona group curates the collection, conducting quality control and overseeing the system.

The business model appears to be that you will pay to obtain either the plans (which are supposedly “open source”), or the parts ready to assemble (DIY), or a fully assembled product. The fabrication and assembly are done at your local Fab Lab—supporting the local economy and reducing transport costs. Some of the revenue goes to the local Fab Lab, some to the workers, and some to the designer.

This effort is part of a larger vision of “Fab Cities,” which imagines more self-sufficient cities that fabricate a significant portion of their goods locally. Even before anything like that is achieved, this idea may be an opportunity for designers and for local workers.

Johnson summarizes the benefits of the Fab Market system:

  • Engaging and empowering people in the manufacturing process
  • Spreading the open-source ethos of sharing and collaboration
  • Reducing environmental impact of creating and transporting goods
  • Increasing transparency in the supply chain
  • Reducing the time and costs of production
  • Giving talented designers a platform for showcasing and sharing their products
  • Connecting a global community of makers

The big picture for Fab Market is to create a distributed economy based on good design and quality products that are made to last.

This effort joins existing “open source hardware” concepts, all of which are creating a global collection of artifacts for gardening, office furniture, clothing, plastic recycling and housing and homesteading.

In the same vein as Fab Market, Obrary is a global library of open source designs, available for free download (under Creative Commons).

Looking at Obrary back in 2014, I commented:

“Suggested Feature: One thing I would really like in a service like this would be some way to find local workers who will build. For example, suppose I need beehives, and I find a design I like at Obrary, and I want to buy one or more. It would be nice to have a way to find one or more people in my town with the skills and tools, and pay them to do the build. In this case, there might reasonably be a ‘suggested donation’ back to the designers, but most of the money would stay in my local economy, supporting families where I live.

“This can be done informally, and I’m sure it will.  But is there a role for something like Obrary in this process?  And if so, how should it be done?”  (Posted September 5, 2014)

Voila! Barcelona is trying to do exactly this with their Fab Market. How can I disagree with something that was my own idea! 🙂

The obvious next step is to integrate and cross-fertilize these “open source hardware” collections. For example, it should be easy to order up anything in Obrary, and the collection in Fab Market should be accessible via Obrary. Ditto for Aker, OpenDesk, the Global Village Construction Set, and so on.

I think this kind of interoperation should be doable, with a little bit of imagination to make Fab Market, Obrary, and so on part of an open network of catalogs. (Talk to your local librarian about open standards for catalogs….)
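To make the interoperation idea concrete, here is a toy sketch of what a shared catalog network might rest on: a minimal common record format, plus a merge-and-search step across catalogs. All field names and entries are invented for illustration; neither Fab Market nor Obrary publishes such a schema, as far as I know.

```python
import json

# Hypothetical minimal interchange record for an open-hardware design catalog.
# Field names are illustrative only, not any real Fab Market or Obrary schema.
ENTRY_FIELDS = {"id", "title", "license", "catalog", "files_url"}

def merge_catalogs(*catalogs):
    """Merge several catalogs (lists of entry dicts) into one searchable index,
    keeping only entries that carry the minimal shared fields."""
    index = {}
    for catalog in catalogs:
        for entry in catalog:
            if ENTRY_FIELDS <= entry.keys():
                # Key on (catalog, id) so different catalogs can't collide.
                index[(entry["catalog"], entry["id"])] = entry
    return index

def search(index, term):
    """Case-insensitive title search across all merged catalogs."""
    term = term.lower()
    return [e for e in index.values() if term in e["title"].lower()]

# Made-up example entries from two different curated collections.
fab_market = [{"id": "42", "title": "Beehive kit", "license": "CC-BY-SA",
               "catalog": "fabmarket", "files_url": "https://example.org/42"}]
obrary = [{"id": "beehive-01", "title": "Top-bar beehive", "license": "CC-BY",
           "catalog": "obrary", "files_url": "https://example.org/beehive"}]

index = merge_catalogs(fab_market, obrary)
print(json.dumps([e["title"] for e in search(index, "beehive")]))
```

The design point is that curation stays with each catalog (each keeps its own entries and business model), while the shared minimal record is what lets a consumer, or a rival curator, search across all of them.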

Such a development will also make it possible for others to join in with yet other curated collections of open source hardware, possibly with different business models. For example, garden equipment might be discounted for people who are certified participants in local food exchanges.

Note that Fab Market and the other sites are effectively offering their services as expert curators. This means that a consumer can have several options among curators, to get different perspectives. Opening up the curating process will make it possible for bottom up and peer-to-peer “curation”, so anyone can pull together an inventory of designs, and offer them to the global market of local makers.  It is also an opportunity for local makers and builders to advertise their expertise (by referring to the global catalog).

This is an interesting development. We’ll see what happens in the future.

  1. Cat Johnson, Here’s How Fab Market is Creating a Sustainable Marketplace. Shareable, January 17, 2017.


CES: Lots of Voice Recognition

The annual Consumer Electronics Show (CES) is always a rich source of blog-fodder. It is, after all, densely packed with Inappropriate Touch Screen Interfaces and hundreds of “why would anyone want this” gadgets.

This year everyone is remarking on the plethora of IoT devices, including the canonical toasters and refrigerators. Sigh.

Another trend is the explosion of voice recognition. As Amy Nordrum says, it is “The Year of Voice Recognition.” Driven by improved accuracy, voice recognition is already out there (I’ve been seeing ads for Amazon and Google home assistants on TV). But it is sure to show up in lots of products, no doubt including toasters and refrigerators. Sigh.

In one sense, this is a reasonable response to the plague of Inappropriate Touch Screens I have complained about. Talking to your toaster via your mobile device is dubious design, and, as Tekla S. Perry says, “For years now, the consumer electronics industry has been trying to sell slightly intelligent Internet-connected appliances that you can control from your smart phone—and not gotten very far.”

So, the thinking goes, let’s replace that stupid idea with a voice interface, which is hands-free and possibly more natural for the in-home setting. And Perry has a point when she says that this approach moves the center of the home away from the TV and into the kitchen. “[A]s has been true since the beginnings of civilization, the heart of the home will be the hearth.”

By now, we have all seen the current generation of chatty, “friendly” digital assistant, so it is easy to imagine them infesting our appliances. If you are happy to search for restaurants or call up a play list by voice command to your phone, you’ll probably be content telling your toaster to make toast, or your refrigerator to order more sprouts.

You have probably noticed that I’m not a huge fan of voice interfaces. If people could always speak clearly, honestly, and unambiguously, we wouldn’t need lawyers or psychotherapists. If people always understood what is said to them, we wouldn’t have divorces or wars.

As far as I can tell, this technology seems to depend on being internet connected, so the assistant software can reside in “the cloud” somewhere. There are so many implications of this architecture that I won’t go into it now. Suffice it to say that I am not enthusiastic about having the Internet listening to my family conversations. As Evan Ackerman comments, CES is full of “appliances that spy on you in as many different ways as they possibly can”.

Finally, we might wonder just how such devices might affect family life. Innocent gadgets can have profound impacts on our attention, interpersonal relations, and family life. We need look no further than the examples of TV and the mobile phone to see how disruptive a chatty refrigerator might turn out to be.

Have these devices been field tested? Do we have any idea what the side effects might be? Just how benign is this technology? Is it safe for children? Is it good for children?

For example, I’m imagining that children will quickly learn that the world is supposed to respond to your commands, instantly and without argument, so long as it is prefixed by “OK Google”.

“OK Mom, give me my cereal now.”

“OK Dad, buy me a new Xbox.”

Is this a good lesson to teach your children?

“The technology of touch”

I have frequently blogged about haptics (notably prematurely declaring 2014 “the year of remote haptics”), which is certainly a coming thing, though I don’t think anyone really knows what to do with it yet.

A recent BBC report, “From yoga pants to smart shoes: The technology of touch,” brought my attention to a new product from down under, “Nadi X”, “fitness tights designed to correct your form”. Evidently, these yoga pants are programmed to monitor your pose and offer subtle guidance toward the ideal position via vibrations in the “smart pants”.

(I can’t help but recall a very early study on activity tracking, with the enchanting title, “What shall we teach our pants?” [2]  Apparently, this year the answer is, “yoga”.)

Source: Wearable Experiments Inc.

It’s not totally clear how this works, but it is easy to imagine that the garment can detect your pose, compute corrections, and issue guidance in the form of vibrations from the garment. Given the static nature of yoga, detecting and training for the target pose will probably work, at least for beginners. I’d be surprised if even moderately experienced practitioners would find this much help, because I don’t know just how refined the sensing and feedback really will be.  (I’m prepared to be surprised, should they choose to publish solid evidence about how well this actually works.)
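To make the guess above concrete, here is a toy sketch of what such a feedback loop might look like: compare measured joint angles against a target pose and buzz the motor nearest to the largest out-of-tolerance deviation. All joint names, angles, and thresholds here are invented; this is not how Nadi X actually works.

```python
# Hypothetical target pose for a standing posture, in degrees per joint.
TARGET_POSE = {"hip": 90.0, "knee": 175.0, "ankle": 90.0}
THRESHOLD = 10.0  # deviations smaller than this are ignored

def feedback(measured, target=TARGET_POSE, threshold=THRESHOLD):
    """Return the joint to buzz (largest out-of-tolerance deviation), or None.

    measured: dict of joint name -> measured angle in degrees.
    """
    deviations = {joint: abs(measured[joint] - angle)
                  for joint, angle in target.items()}
    joint, dev = max(deviations.items(), key=lambda kv: kv[1])
    return joint if dev > threshold else None

# The hip is 20 degrees off, well past the threshold, so it gets the buzz.
print(feedback({"hip": 70.0, "knee": 172.0, "ankle": 92.0}))
```

Even this trivial sketch hints at why I am skeptical about experienced practitioners: everything depends on how finely the garment can sense the angles and how meaningful a fixed threshold is for a given body.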

Beyond the “surface” use as a tutor, the company suggests a deeper effect: it may be that this clothing not only guides posture but can create “a deeper connection with yourself”. I would interpret this idea to mean, at least in part, that the active garment can promote self-awareness, especially awareness of your body.

I wonder about this claim. For one thing, there will certainly be individual differences in perception and experience. Some people will get more out of a few tickles in their trousers than others do. Other people may be distracted or pulled away from sensing their body by the awareness of their garment groping them.

Inevitably, touch is sensual, and quickly leads to, well, sex. I’m too old not to be creeped out by the idea of my clothing actively touching me, especially under computer control. Even worse, when the computer (your phone) is connected to the Internet, so we can remotely touch each other via the Internet.

Indeed, the same company that created Nadi X created a product called “fundawear” which they say is, “the future of foreplay” (as of 2013).  Sigh. (This app is probably even more distracting than texting while driving….)

Connecting your underwear to the Internet—what could possibly go wrong? I mean, everything is private on your phone, right?  No one can see, or will ever know. Sure.

I’m pretty sure fundawear will “work”, though I’m less certain of the psychological effects of this kind of “remote intimacy”.  Clearly, this touching is to physical touching like video chat is to face to face. Better than nothing, perhaps, but most people will prefer to be in person.

Looking at the videos, it is apparent that the haptics have pretty limited variations. Only a few areas can buzz you, and the interface is pretty limited, so there are only so many “tunes” you can play. The stimulation will no doubt feel mechanical and repetitive, and probably won’t wear very well. Sex can be many things, but it shouldn’t become boring.

(As a historical note, I’ll point out that, despite their advertising claims, this is scarcely the first time this idea has ever been done. The same basic idea was demonstrated by MIT students no later than 2009 [1], and I’ll bet there have been many variations on this theme.  And the technology is improving rapidly.)

This is a very challenging and interesting area to explore. After following developments for the last decade and more, I remain skeptical about how well any sensor system can really communicate body movement beyond the most trivial aspects of posture.

My own observation is that an interesting source of ideas comes from the intersection of art and wearable technology. In this case, I argue that, if you want to learn about “embodied” computing, you really should work with trained dancers.

For example, you could do far worse than considering the works of Sensei Thecla Schiphorst, a trained computer scientist and dancer, whose experiments are extremely creative and very well documented [4].

One of the interesting points that I have learned from Sensei Thecla and other dancers and choreographers is how much of the experience of movement is “inside”, and not easily visible to the computer (or an observer). I.e., the “right” movement is defined by how it feels, not by the pose or path of the body. Designers of “embodied” systems need to think “from the inside out”, to quote Schiphorst.

In her work, Schiphorst has explored various “smart garments” which reveal and augment the body and movement of one person, or connect to the body of another person.

Since those early days, these concepts have appeared in many forms, some interesting, and many not as well thought out as Sensei Thecla’s work.

  1. Keywon Chung, Carnaven Chiu, Xiao Xiao, and Pei-Yu Chi, Stress outsourced: a haptic social network via crowdsourcing, in CHI ’09 Extended Abstracts on Human Factors in Computing Systems. 2009, ACM: Boston, MA, USA. p. 2439-2448.
  2. Kristof Van Laerhoven and Ozan Cakmakci. What shall we teach our pants? In Digest of Papers. Fourth International Symposium on Wearable Computers, 2000, 77-83.
  3. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA.
  4. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth.

Bonus video: Sensei Thecla’s ‘soft(n)’ [3].  Exceptionally cool!