Category Archives: Design

Yet More AI Fashion Advice

Computer geeks are just as interested in fashionable clothing as anyone else, though they tend to apply the mental hammers they possess to driving the nail.  There are any number of projects that are applying contemporary AI to the alleged problems of finding (and creating) fashionable garments and outfits.

Whatever the problem may really be, this research essentially treats it as a “recommender” system, a la online shopping and streaming services. This leads to two products, a “virtual stylist” to advise you on what to wear, and a “trend spotter” to advise producers about what to sell.

So who would want to use an AI recommender?

This kind of technology can probably detect group uniforms pretty easily, especially if social media and other metadata are included in the data.  This may be useful for market intelligence, but probably not terribly useful to individuals.  If you have to have a computer to tell you how to dress like the people you admire, you’re pretty lost.

On the other hand, some people might enjoy having a “virtual stylist” who helps them construct and maintain an individual look. For that matter, the ability to generate something that is in the style of X, but new, would be exactly what you might be looking for. What will Susie be wearing today?  She’s always ahead of the curve.  Etc.

Underlying these ideas are collections of data about what people are wearing, and the usual “people who liked this, also liked this other thing”.  Grist to this mill are images from the internet, social media posts, personal shopping history, and metadata about who’s who and what they do and want.  Many readers will recognize that this is also the data and technology used by advertising and intelligence services, who are looking to predict specific kinds of individual behaviors.

This fall, a team from UCSD and Adobe report on yet another permutation of this technique, which uses sophisticated image processing and machine learning to create images that “look plausible yet substantially different from” the examples [1].  (Note: their paper cites quite a bit of earlier work, which is worth looking over.)

The most interesting idea is to use the “user X image” preference data to generate new items that are predicted to be attractive to the user.

a richer form of recommendation might consist of guiding users and designers by helping them to explore the space of potential fashion images and styles.

This is pretty neat, and most machine learning approaches can’t do it nearly as well as this group’s method.

The technical details are non-trivial; see the paper.
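That said, the core move can be sketched in a few lines: learn a per-user preference score over image features, then search the generator’s latent space for designs that maximize that score. Everything below (the linear “generator”, the taste vector, the dimensions) is a toy stand-in of my own, not the paper’s actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (my own assumptions, not the paper's models): a linear
# "generator" mapping a 4-d latent code to 8-d image features, and a
# learned per-user taste vector in that same feature space.
G = rng.normal(size=(8, 4))
user_pref = rng.normal(size=8)

def generate(z):
    """Map a latent code to image-like features (stand-in for a GAN)."""
    return np.tanh(G @ z)

def preference(z):
    """Predicted preference of this user for the generated item."""
    return float(user_pref @ generate(z))

# Gradient ascent in latent space: nudge z toward designs the user is
# predicted to like (numerical gradients keep the sketch dependency-free).
z = rng.normal(size=4)
start_score = preference(z)
for _ in range(200):
    grad = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = 1e-4
        grad[i] = (preference(z + dz) - preference(z - dz)) / 2e-4
    z += 0.1 * grad

# preference(z) now exceeds start_score: a "new" design tuned to
# this user's predicted taste.
```

Numerical gradients here stand in for backpropagation through a real GAN; the point is only the shape of the loop: generate, score, nudge toward higher predicted preference.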

This work is interesting, but it raises a number of questions.

First of all, it’s far from clear that this is addressing a problem that anyone needs to solve. For those who go beyond pure utilitarianism, “fashion” is a signaling system. What you wear is supposed to send messages about yourself.

The two messages most commonly sent are either “look at me, desire me” or “I belong to this group”.  Note that these are somewhat contradictory messages, asserting either individuality or conformity, respectively.

How do these messages relate to “preferences”?

Second, the authors suggest that this technology could be used as an aide to designers:

“In the future, we believe this opens up a promising line of work in using recommender systems for design.”

“We believe that such frameworks can lead to richer forms of recommendation, where content recommendation and content generation are more closely linked.”

In other words, this is a feedback loop, from design to user’s reception, back to designer.

As I have said before, this would seem to be a mechanism that pushes designers to produce “more of the same”, scarcely a formula for creativity.  And because the AI is chasing users’ past reactions, which will eventually reject “more of the same”, it is inherently retrospective: not a formula for being a fashion leader.

On the other hand, this technology does generate new designs, though it is hard to judge just how creative it is. Also, the technique learns continuously, which seems critical to me, since preferences change.  Done right, the AI model might be a good way to represent a target user and to generate designs that home in on their (momentary) preferences.

However, there are lots of underlying issues.

To the degree that a person wants to make fashion statements, this process of digging out “preferences” and generating examples that exemplify them is only indirectly related to whatever the intended statement might be. The AI knows little, if anything, about the semantics of the clothing in the images, which are highly subjective in any case.

Many fashion preferences are based on social aspirations, such as the desire to emulate a celebrity and/or fit in with a clique.  These factors are not only invisible in the image, they are completely missing from the concept of a personalized design. Don’t you want a social design?

The entire process is based on (small) 2D images with standard poses. This is a very impoverished set of information. Users have been trained by the internet to deal with tiny 2D images, but this is not a full representation of “fashion” or anything else.  Clothing is 3D and bodies move, and both exist in physical context (e.g., a dance floor). These factors are missing from both the input and output of this AI.

The training uses ratings and other inputs from observers as proxies for “preferences”.  It isn’t clear exactly what these data actually represent, and there is a very real possibility that there are multiple “communities” with different preferences all using the same online services.  Averaging across multiple sub-cultures will produce a meaningless common denominator.  (The research project used tiny, probably homogeneous samples, so it doesn’t explore this problem.)
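A toy example shows why: imagine two hypothetical sub-cultures with strong, opposite tastes rating the same items.

```python
# Two hypothetical sub-cultures rating the same three items on a 1-5 scale.
goths  = {"black_lace": 5, "neon_track": 1, "plain_tee": 3}
ravers = {"black_lace": 1, "neon_track": 5, "plain_tee": 3}

# Pool both groups and average, as a naive recommender might:
pooled = {k: (goths[k] + ravers[k]) / 2 for k in goths}

# Every item now scores an identical 3.0: the strong, opposite signals
# cancel, leaving a meaningless common denominator.
```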

Building a tight feedback loop between user preferences and the AI opens the possibility of hacking or gaming the system.  Flooding the inputs with ratings and supposed positive comments could manipulate the recommendations. I could easily imagine a PR campaign that combined celebrity product placement with ‘AI placement’ that biases the results in favor of the product.

Worse, there could well be AIs gaming other AIs.  It could devolve to the point where everyone has a ‘virtual fashion advisor’, and all the AIs are tracking the behavior of all the other AIs. Kind of like professional fashionistas do. Sigh.

I have to ask, just who is the AI serving?  The researchers seem to believe that the designer’s interests are aligned with the user, but I don’t think it is that simple.  Designers usually are or work for producers, who aim to sell the product.  Is the recommender helping the consumer or promoting consumption?

We all know the answer to that question. Technology just like this is already used by advertising and intelligence to track and predict the behavior of individuals. This behavior modelling is not for the benefit of the subject, it is for the benefit of the wealthy and powerful.

If these types of systems come into wide use, they will probably change concepts of fashion recommendation.  An on-line suggestion that “people like you also liked these items” is not valued as much as a recommendation from a close friend.  Similarly, trends spotted (or created) by AI will be considered second rate compared to actual human trend spotting and creativity.  People will work hard to outguess the AI.

I predict that the more virtual assistants there are, the more valuable a human assistant will become!

  1. Wang-Cheng Kang, Chen Fang, Zhaowen Wang, and Julian McAuley, Visually-Aware Fashion Recommendation and Design with Generative Image Models. arXiv, 2017.
  2. Will Knight, Amazon Has Developed an AI Fashion Designer: The retail giant is taking a characteristically algorithmic approach to fashion. MIT Technology Review online, August 24, 2017.
  3. Jackie Snow, This AI Learns Your Fashion Sense and Invents Your Next Outfit: A new kind of AI system could create personalized clothing based on a shopper’s taste. MIT Technology Review online, November 16, 2017.


“Artificial Creatures” from Spoon

There are so many devices wanting to live with us, as well as a crop of “personal” robots. Everything wants to interact with us, but do we want to interact with them?

Too many products and not enough design to go around.

Then there is Spoon.

We design artificial creatures.

A partner to face the big challenges rising in front of us.

A new species, between the real and digital domains for humans, among humans.

OK, these look really cool!

I want one!

But what are they for?

This isn’t very clear at all. The only concrete application mentioned is “a totally new and enhanced experience while welcoming people in shops, hotels, institutions and events.” (I guess this is competing with RoboThespian.)

Anyway, it is slick and sexy design.

The list of company personnel has, like, one programmer and a whole bunch of designers and artisans. Heck, they have an art director, and a philosopher, for crying out loud.

Did I forget to say that they are French!

I have no idea exactly what they are going to build, but I will be looking forward to finding out.


Robot Wednesday

Data Comics?

Benjamin Bach and colleagues wrote in IEEE Computer Graphics about “The Emerging Genre of Data Comics” [1]. I like data and I like comics, so I’ll love data comics, right?

Data comics is a combination of data + story + visualization. They say that it is “a new genre, inspired by how comics function” ([1], p. 7).

The “how comics function” is largely about flow and multiple panels. As Scott McCloud says, the action happens in the gutter ([2], p. 66) (i.e., between the panels).

(By the way, Sensei McCloud teaches that this happens though the active engagement of the reader, who closes the gap with his or her imagination. If you haven’t read Understanding Comics [2], stop reading this blog right now and go read McCloud. I’ll wait here.)

The authors assert that data always has context, and “Context creates story, which wants to be narrated” ([1], p. 10). Well, maybe, though I think it is a mistake to read this as “you can tell whatever story you want” (the Hollywood approach). Part of the context is what kinds of stories it is OK to tell.

The authors give four advantages of the medium,

  • Combines text and pictures
  • Delivers one message at a time in a guided tour
  • Data visualization gives evidence for facts
  • Other types of visualization can tell the story clearly

This article itself is delivered in the form of a comic (though not a data comic), which highlights both the advantages and the limitations of this approach.

One really good thing about storyboards and comix is that they force you to boil down your story to a handful of panels, with only so much on each. This isn’t always easy, but it surely helps organize the story.

Compare this to written or spoken word, which can flow any way you want and can go on as long as you have strength, with no guarantee that any organized narrative is told.

I note that any good visualization (or demo) probably had a storyboard in the beginning, which is essentially a comic strip of the overall story to be told.

The medium isn’t without drawbacks.

For example, this article was very difficult for my ancient eyes to read. The text was rather too small and blurry for me, and white on black lettering is hard for me to make out. Many of the pictures were below my visual threshold. E.g., one panel, “Early examples led the way”, has tiny versions of other comics, which are illegible and might as well not be there.

Also, it was difficult to quote (i.e., remix) ideas from this article. E.g., I couldn’t easily quote the “Early examples” panel to make my point about it. I could probably have extracted the picture, fiddled with it in a drawing package, and saved a (blurry) image to include here. But how would that make my point about the illegibility of the original?

As a general rule, comix need to be pretty simple or they are impossible to read. This means that they can only deliver a very concise story. As Bach, et al. suggest, this is a feature, not a bug.

On the other hand, telling “only one message at a time” is not just “concise” it is a Procrustean bed. For complicated data there isn’t one message, there are many. A data comic runs the risk of trivializing or misleading by omission. This is a bug, not a feature.

The challenge is to make “concise” be deep rather than shallow.

This is why trying to express the story in a storyboard (comic) is an extremely good design practice, even if the story isn’t ultimately published in the form of a comic.

  1. Benjamin Bach, Nathalie Henry Riche, Sheelagh Carpendale, and Hanspeter Pfister, The Emerging Genre of Data Comics. IEEE Computer Graphics and Applications, 38 (3):6-13, 2017.
  2. Scott McCloud, Understanding Comics, HarperCollins, 1994.

Origamizer: Origami Anything


I think I’ve always believed that origami could, in principle, represent most any shape, if you were clever enough. Until now, that was just a hunch.

This summer Erik D. Demaine and Tomohiro Tachi (of MIT and U. Tokyo) have published a complete algorithm for “Folding any Polyhedron” [1].  In short, you can make origami anything.

I don’t know what the limits of the algorithm are, but they say they can make an origami version of The Bunny, so it must be for real!

Most of the technical details are beyond my own puny understanding of computational geometry, but I know this is potentially very important.

The traditional craft of origami is a repository of knowledge for how to create complicated shapes out of a single sheet of paper. These techniques are now a very important source of design for foldable and flatpack designs for robots and objects.

For one thing, these designs are amenable to simple digital manufacturing with laser cutters and 3D printers. For another, just like flat pack furniture, it is interesting to deliver a compact package that folds into a complex device on location. At small scales, this might deliver medical robots inside a body. At larger scales, this might deliver a planetary rover or temporary shelter via air drop.

I’m sure there are many more cases I haven’t thought of.

My own view is that every engineering and design student should study origami.  It should be part of the mental (and manual) toolkit.

The “origamizer” is extremely significant because it means that it should be possible to realize any CAD design in one or more origamis. Combined with different manufacturing techniques, designers can deliver self-assembling and DIY designs of greater and greater complexity. Cool!

I’d love to see an ‘origamizer plugin’ for Blender!

  1. Erik D. Demaine and Tomohiro Tachi, Origamizer: A Practical Algorithm for Folding Any Polyhedron, in The 33rd International Symposium on Computational Geometry (SoCG 2017), B. Aronov and M.J. Katz, Editors. 2017: Brisbane. p. 34:1–34:15.
  2. Tomohiro Tachi, Software, 2017.

Smell Maps of Cities

Daniele Quercia and colleagues have published research aimed at mapping the smells of entire cities [2]. They want to analyze social media to detect recent descriptions of smells to create city wide maps of what people are smelling in different places. To do this, they needed to create a dictionary of terms for smells.

The authors are mainly concerned with aesthetics, not with chemical analysis of the air or sources of smells.  They are concerned with “the positive role that ‘smell’ as opposed to ‘air pollution’ can play in the environmental experience” ([2], p. 334). They comment that there is little work on this topic, so they hope to “enrich the urban smell toolkit” ([2], p. 327).

The study collected residents’ reports of what they smell, and clustered similar terms to form a dictionary of smells (i.e., of concepts about smells). This was also correlated with existing dictionaries of smell terms.

Smell terms from geotagged social media entries were used to create maps of smells across the city. The researchers suggest that there are different spatial scopes for the smells, from broad to very localized. They call these “base notes”, “mid-level notes”, and “high notes”, an allusion to perfume advertisements that is pretty shaky in this case.
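Mechanically, such a map is simple to sketch: bin geotagged posts into grid cells and tally smell-dictionary matches per cell. The dictionary, categories, and coordinates below are invented stand-ins, not the paper’s actual lexicon.

```python
from collections import Counter, defaultdict

# A tiny, hypothetical smell dictionary: raw terms -> smell category.
SMELL_DICT = {
    "exhaust": "emissions", "fumes": "emissions", "smog": "emissions",
    "bread": "food", "coffee": "food", "fish": "food",
    "roses": "nature", "grass": "nature",
}

def smell_map(posts, cell=0.01):
    """Bin geotagged posts into lat/lon grid cells and count the
    smell categories mentioned in each cell."""
    grid = defaultdict(Counter)
    for lat, lon, text in posts:
        key = (round(lat / cell), round(lon / cell))
        for word in text.lower().split():
            if word in SMELL_DICT:
                grid[key][SMELL_DICT[word]] += 1
    return grid

# Made-up posts with made-up coordinates:
posts = [
    (41.38, 2.17, "coffee and fresh bread near the market"),
    (41.38, 2.17, "strong fish smell today"),
    (41.40, 2.15, "exhaust fumes on the ring road"),
]
m = smell_map(posts)
```

Note how much this bakes in: the cell size, the dictionary, and the assumption that a word in a post means a smell was actually perceived at that spot.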

The resulting maps seem to capture coarse features of the city (e.g., industrial concentrations, large food market), and are slightly correlated with air quality measures.

The main implications are public awareness of the cityscape, and perhaps an increased attention from urban designers.


I found this study to be competently done, but not really useful. It borders on what my statistics teachers would call a “Type 3 Error”: they may be asking the wrong question.

Their concept of “smells” seems to be rather questionable.

First of all, as they generally acknowledge, the sense of smell is rather complex. There are large individual differences, not just the demographic variation they mention. Worse, smell is highly affected by both short term and long term experience. People habituate to smells rapidly, and learn over time. Our sense of smell changes as we age, as well as due to illness, exercise, and other activities. For that matter, we wear scents and scented clothing that form a private smellscape right under our own nose.

The study worked hard to cluster the words people use to describe smell, but this exercise in linguistics brings in a slew of factors of culture and learning. These cultural and cognitive elements are definitely relevant to their interest in urban experiences, though describing smells isn’t simply perceiving smells. As the paper notes, there are quite a few contextual factors that may go into how a smell is perceived and described.

For example, I suspect that future studies might find that people report more negative words for how the “bad parts of town” smell, compared to areas they prefer, regardless of the objective chemistry of the air.  (This might be called “the New Jersey effect”.)

The terms chosen for a given smell also reflect personal and cultural contexts. For example, the smell of human sweat can be attractive or awful, depending on the people involved and the situation.

This study treats smells as rather permanent, large-grain features, though they are ephemeral and subject to micro weather, e.g., wind direction. They do enumerate “high notes”, but these smells can be extremely localized, detectable only within a meter or less, which cannot be represented on their maps.

In other words, the maps are coarse-grained in both time and space. Perhaps this level of analysis is useful for urban design, but it is certainly not self-evident just how much this matters. The reported low correlations with other measures probably reflect this overly coarse granularity of the geotagged terms and the other measures.

This methodology is primarily about outdoor smell. But urban experiences are mainly indoors, and indoor smells are a totally different animal. Sure, the open air farmer’s market smells wonderful for that 15 minutes I am passing it, but when I go inside, I can’t smell it any more, no matter where I am on the city map.

The researchers make an interesting point about wanting to create attractive urban smells, not just mitigate pollution and repellent odors. On the other hand, this work shows little reason to think that this kind of analysis is an accurate measure of harmful pollution, despite what the authors may sometimes claim in the popular media. I’m all in favor of my city smelling nice (at least to some people), but it is important to monitor and reduce dangerous pollution, much of which cannot be seen or smelled.

This method is not, I repeat, not a good way to monitor the air quality of a city.

  1. Matt McGrath, Can city ‘smellfies’ stop air pollution? BBC News, March 10, 2017.
  2. Daniele Quercia, Rossano Schifanella, Luca Maria Aiello, and Kate McLean, Smelly Maps: The Digital Life of Urban Smellscapes. Ninth International AAAI Conference on Web and Social Media: 327-336, 2017.

Health Apps Are Potentially Dangerous

The “Inappropriate Touch Screen Files” has documented many cases of poor design of mobile and wearable apps, and I have pointed out more than once the bogosity of unvalidated cargo cult environment sensing.

This month Eliza Strickland writes in IEEE Spectrum about an even more troubling ramification of these bad designs and pseudoscientific claims: “How Mobile Health Apps and Wearables Could Actually Make People Sicker” [2].

Strickland comments that the “quantified self” craze has produced hundreds of thousands of mobile apps to track exercise, sleep, and personal health. These apps collect and report data, with the goal of detecting problems early and optimizing exercise, diet, and other behaviors. Other apps monitor the environment, providing data on pollution and micro climate. (And yet others track data such as hair brushing techniques.)

These products are supposed to “provide useful streams of health data that will empower consumers to make better decisions and live healthier lives”.

But, Strickland says, “the flood of information can have the opposite effect by overwhelming consumers with information that may not be accurate or useful.”

She quotes David Jamison of the ECRI Institute, who comments that many of these apps are not regulated as medical devices, so they have not been tested to show that they are safe and effective.

Jamison is one of the authors of an opinion piece in JAMA, “The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors” [1]. In this article, the authors strongly criticize the sales of monitoring systems aimed at infants, on two grounds.

First, the devices have not been proven accurate, safe, or effective for any purpose, let alone the advertised aid to parents. Second, even if the devices do work, there is considerable danger of overdiagnosis. If a transient and harmless event is detected, it may trigger serious actions such as an emergency room visit. If nothing else, this will cause needless anxiety for parents.

I have pointed out the same kind of danger from DIY environmental sensing: if misinterpreted, a flood of data may produce either misplaced anxiety about harmless background level events or misplaced confidence that there is no danger if the particular sensor does not detect any threat.

An important design question in these cases is, “Is this product good for the patient (or user)?”  More data is not better, if you don’t know how to interpret it.

This is becoming even more important than the “inappropriateness” of touchscreen interfaces:  the flood of cargo cult sensing in the guise of “quantified self” is not only junk, it is potentially dangerous.

  1. Christopher P. Bonafide, David T. Jamison, and Elizabeth E. Foglia, The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors. JAMA: Journal of the American Medical Association, 317 (4):353-354, 2017.
  2. Eliza Strickland, How Mobile Health Apps and Wearables Could Actually Make People Sicker, in The Human OS. 2017, IEEE Spectrum.


Barcelona Fab Market for Open Source Design

Cat Johnson writes about the “Fab Market”, which is an initiative associated with the world-renowned Barcelona Fab Lab. The basic idea is an online shop that sells products to be made at a local Fab Lab. The designs are created by designers anywhere in the world, and are supposed to be open source. The Barcelona group curates the collection, conducting quality control and overseeing the system.

The business model appears to be that you will pay to obtain either the plans (which are supposedly “open source”), or the parts ready to assemble (DIY), or a fully assembled product. The fabrication and assembly are done at your local Fab Lab—supporting the local economy and reducing transport costs. Some of the revenue goes to the local Fab Lab, some to the workers, and some to the designer.

This effort is part of a larger vision of “Fab Cities”, which imagines more self-sufficient cities that fabricate a significant portion of their goods locally. Even before anything like that is achieved, this idea may be an opportunity for designers and for local workers.

Johnson summarizes the potential of the Fab Market:

Some of the benefits of the Fab Market system are:

  • Engaging and empowering people in the manufacturing process
  • Spreading the open-source ethos of sharing and collaboration
  • Reducing environmental impact of creating and transporting goods
  • Increasing transparency in the supply chain
  • Reducing the time and costs of production
  • Giving talented designers a platform for showcasing and sharing their products
  • Connecting a global community of makers

The big picture for Fab Market is to create a distributed economy based on good design and quality products that are made to last.

This effort joins existing “open source hardware” concepts, all of which are creating a global collection of artifacts for gardening, office furniture, clothing, plastic recycling, housing, and homesteading.

In the same vein as Fab Market, Obrary is a global library of open source designs, available for free download (under creative commons).

Looking at Obrary back in 2014, I commented:

“Suggested Feature:  One thing I would really like in a service like this would be some way to find local workers who will build. For example, if I need beehives, and I find a design I like at Obrary, and I want to buy one or more.  It would be nice to have a way to find one or more people in my town with the skills and tools, and pay them to do the build. In this case, there might reasonably be a “suggest donation” back to the designers, but most of the money would be in my local economy, supporting families where I live.

“This can be done informally, and I’m sure it will.  But is there a role for something like Obrary in this process?  And if so, how should it be done?”  (Posted September 5, 2014)

Voila! Barcelona is trying to do exactly this with their Fab Market. How can I disagree with something that was my own idea! 🙂

The obvious next step is to integrate and cross-fertilize these “open source hardware” collections. For example, it should be easy to order up anything in Obrary, and the collection in Fab Market should be accessible via Obrary. Ditto for Aker, OpenDesk, The Global Village Construction Kit, and so on.

I think this kind of interoperation should be doable, with a little bit of imagination to make Fab Market, Obrary, and so on part of an open network of catalogs. (Talk to your local librarian about open standards for catalogs….)
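To make the idea concrete, here is a minimal sketch of what such a catalog network might look like, assuming an invented shared record format; no such schema actually exists across these sites yet, as far as I know.

```python
# Hypothetical minimal "open catalog" records -- each site (Fab Market,
# Obrary, OpenDesk, ...) would publish its designs in a shared schema.
catalogs = {
    "obrary": [
        {"id": "beehive-01", "name": "Top-bar beehive", "license": "CC-BY-SA"},
    ],
    "fabmarket": [
        {"id": "stool-7", "name": "Plywood stool", "license": "CC-BY"},
    ],
}

def merged_index(catalogs):
    """Merge per-site catalogs into one index, namespacing ids by source
    so a design is findable regardless of which curator lists it."""
    index = {}
    for source, items in catalogs.items():
        for item in items:
            index[f"{source}:{item['id']}"] = item
    return index

index = merged_index(catalogs)

# Any front end (or curator) can now search across all sources:
hits = [key for key, item in index.items()
        if "beehive" in item["name"].lower()]
```

The interesting design question is who hosts the index: a central aggregator, or (better, in the spirit of the thing) each site simply publishing its records so anyone can build their own merged catalog.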

Such a development will also make it possible for others to join in with yet other curated collections of open source hardware, possibly with different business models. For example, garden equipment might be discounted for people who are certified participants in local food exchanges.

Note that Fab Market and the other sites are effectively offering their services as expert curators. This means that a consumer can have several options among curators, to get different perspectives. Opening up the curating process will make it possible for bottom up and peer-to-peer “curation”, so anyone can pull together an inventory of designs, and offer them to the global market of local makers.  It is also an opportunity for local makers and builders to advertise their expertise (by referring to the global catalog).

This is an interesting development. We’ll see what happens in the future.

  1. Cat Johnson, Here’s How Fab Market is Creating a Sustainable Marketplace. Shareable, January 17, 2017.