Category Archives: Design

Cool Dodecahedral Gripper

File this under “I want one”!

This summer researchers from Harvard report on a new design for an undersea gripper, inspired by origami [2]. The device is “a folding polyhedral enclosure”, essentially a dodecahedral box that unfolds and folds to gently capture delicate specimens within the box.

The goal is to be able to corral soft-bodied creatures—think jellyfish—for close examination without harming them.

The origami-inspired design is a 2D layout of articulated pieces (hexagons, pentagons, and the final triangular segments of a hexagon) that fold into a complete dodecahedron.  (For underwater use, the seams have flexible rubber seals, so the box is somewhat watertight.)  The design involves a clever arrangement of actuators that transforms “a 10-DOF system to a 1-DOF system”. (p.1)

Aside from the elegant dodecahedral geometry, what caught my eye is the simplicity of this folding mechanism.  The fact that it runs from a single rotary power source is a bonus.

However, this is not a one-off.  The research is based on a theory that makes it possible to create any 3D polyhedron from a foldable 1D array.  The actuator is a “plane symmetric Bricard linkage”, described in the nineteenth century.  This design replaces actuators at each fold with a single complex linkage. They note that this works at many scales, and the folding is completely reversible.

The linkage and the panels of the enclosure are all simple enough to be fabricated with simple tools: laser cutters and 3D printers.

This is really cool!

By the way, scaled up this design would make a cool novelty “cone of silence”, that folds up out of the floor to envelop part of a room within a dodecahedron.


  1. Lindsay Brownell, Studying aliens of the deep, in Wyss Institute for Biologically Inspired Engineering – News. 2018. https://wyss.harvard.edu/studying-aliens-of-the-deep/
  2. Zhi Ern Teoh, Brennan T. Phillips, Kaitlyn P. Becker, Griffin Whittredge, James C. Weaver, Chuck Hoberman, David F. Gruber, and Robert J. Wood, Rotary-actuated folding polyhedrons for midwater investigation of delicate marine organisms. Science Robotics, 3 (20) 2018. http://robotics.sciencemag.org/content/3/20/eaat5276.abstract

 

Robot Wednesday

IoT: The Internet of Trash

Stacey Higginbotham writes in IEEE Spectrum about yet another ramification of the Internet of Too Many Things—the Looming E-Waste Problem [1].

I have commented on the poor design (here, here, here,  here) and likely problems (here, here) with the IoT. One of the recurring themes is that no one is in charge of these devices, so no one is responsible for how they work or what they do.

Higginbotham adds yet another implication of this basic flaw: there is no one to recycle the device at the end of its life.

She points out that the plethora of new IoT devices are built on the same model as conventional consumer electronics, only more so. There is already a huge problem with e-waste from broken and obsolete devices. IoT technology basically turns everything into e-waste.

In addition, IoT technology also puts everything on the short life span of cheap electronics. Even if the battery in your “smart toaster” is replaceable, who will do it? And when it stops working (perhaps because the software is no longer compatible), it is now e-waste filled with exotic materials that need careful handling.

If your smart toaster can even be recycled, that is. A lot of IoT stuff cannot be recycled in any easy way.

“A lack of forethought will leave us with a mountain of obsolete devices and no way to dispose of them”

Making IoT that is both usable and recyclable takes serious engineering. And it would be nice to build longer-lasting products.

The Internet of Too Many Things: not just poorly designed and bad for users, but bad for the planet, too.


  1. Stacey Higginbotham, The Internet of Trash: IoT Has a Looming E-Waste Problem, in IEEE Spectrum – Internet. 2018. https://spectrum.ieee.org/telecom/internet/the-internet-of-trash-iot-has-a-looming-ewaste-problem

Cool Bioinspired Cooling

One of the big challenges for micro- and nanoengineering is heat.  As devices and components get closer together, thermal energy becomes significant, indeed overwhelming.  Essentially any useful activity generates waste heat (i.e., energy beyond what the activity itself uses), which must be dissipated.  We have all seen the absurd-looking cooling fins, not to mention electric fans, on contemporary computer components.  Sheesh!  It works, but it sure isn’t elegant engineering.

An example of heroic measures used to dissipate heat in contemporary electronic systems.

This spring a research team reports on an analysis of the design of porous membranes: materials that have columns of liquid which remain contained [1].  The end of each column is exposed and can evaporate.   (See the article for illustrations of this design.) In principle, these columns can dissipate heat efficiently, while protecting the interior from moisture and dust.
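For a rough sense of scale (my own back-of-the-envelope numbers, not values from the paper), evaporation carries heat away at the rate of the evaporation flux times the latent heat of vaporization:

```python
# Back-of-the-envelope estimate of evaporative heat flux.
# All numbers are illustrative assumptions, not from the paper.

h_fg = 2.26e6     # latent heat of vaporization of water, J/kg
evap_rate = 1e-3  # assumed evaporation rate, kg/(m^2 s)

q = evap_rate * h_fg  # heat flux carried away, W/m^2
print(f"evaporative heat flux ~ {q:.0f} W/m^2")  # ~2260 W/m^2 for these numbers
```

Even with these made-up numbers, the arithmetic shows why evaporation is attractive: kilowatts per square meter, with no moving parts.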

This is cool work, and it’s not surprising that there is considerable interest in deploying it as soon as possible.


The thing that caught my eye is the fact that this design is inspired by the skins of ancient insects, springtails (Collembola, Entognatha) [3]. These insects have skins with complicated textures, “bristles and a comb-like hexagonal or rhombic mesh of interconnected nanoscopic granules” [2].

The new work creates similar “sharp-edged” structures in silicon (easy to manufacture), which has led to the recent results.

As Jeremy Hsu puts it, “The Electronics Cooling System 400 Million Years in the Making” [3].

Cool! in more than one way.


  1. Damena D. Agonafer, Hyoungsoon Lee, Pablo A. Vasquez, Yoonjin Won, Ki Wook Jung, Srilakshmi Lingamneni, Binjian Ma, Li Shan, Shuai Shuai, Zichen Du, Tanmoy Maitra, James W. Palko, and Kenneth E. Goodson, Porous micropillar structures for retaining low surface tension liquids. Journal of Colloid and Interface Science, 514:316-327, 2018/03/15/ 2018. http://www.sciencedirect.com/science/article/pii/S0021979717314017
  2. Ralf Helbig, Julia Nickerl, Christoph Neinhuis, and Carsten Werner, Smart Skin Patterns Protect Springtails. PLOS ONE, 6 (9):e25105, 2011. https://doi.org/10.1371/journal.pone.0025105
  3. Jeremy Hsu, The Electronics Cooling System 400 Million Years in the Making, in IEEE Spectrum – Energywise. 2018. https://spectrum.ieee.org/energywise/computing/hardware/meet-the-electronics-cooling-system-400-million-years-in-the-making

 

Bioinspired “spring origami”

Our latter-day Prometheans (is that a word?) heartily boast of creating “programmable matter” and “4D printing”.  This would be crazy if it weren’t true that astonishing, near-magical designs are coming every day.

Many of these developments are inspired by nature and by origami.  As I have said, it is clear that all Engineering and Design students should learn origami as part of the twenty-first-century curriculum.

This spring researchers at ETH Zurich report on a cool development which is inspired by the wing of an earwig [1].  This is especially interesting because the biological system actually works better than conventional origami.

The wing of the Dermaptera has an extremely large range, from compactly folded to open in flight. It also deploys without muscular action (i.e., it unfolds itself), but snaps into a strong, rigid form for flight. Their analysis shows that “current origami models are not sufficient to describe its exceptional functionality” ([1], p. 1387).

They conclude that the key feature is that, unlike “strict” origami, the earwig wings are not folded on straight, rigid lines.  Instead, the folds are curved and consist of an elastic biopolymer, which is springy.  The biopolymer behaves as a system of extensional and rotational springs.

Not origami, but origami plus (biological) clockwork!

The researchers explain that this bioinspired analysis opens a broad design space for “spring origami”, which exceeds the capabilities of traditional origami. The paper has the technical details, which, among other things, involve complex energy surfaces in multiple springs that yield bistable regimes (i.e., snap-through).
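The snap-through behavior can be illustrated with a toy double-well energy function (my own sketch, not the paper’s model): two minima correspond to the folded and unfolded states, separated by an energy barrier.

```python
# Toy bistable energy landscape: E(x) = k * (x^2 - 1)^2.
# Minima at x = -1 ("folded") and x = +1 ("unfolded"); barrier of height k at x = 0.
# Purely illustrative -- the paper models coupled extensional and rotational springs.

def energy(x, k=1.0):
    return k * (x ** 2 - 1.0) ** 2

print(energy(-1.0), energy(0.0), energy(1.0))  # 0.0 1.0 0.0
```

The two zero-energy states are both stable; pushing the system over the barrier at x = 0 is the “snap”, and tuning k tunes how hard it is to trigger.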

This analysis makes possible the design and fabrication of many different low-energy folding systems.

“We transferred the biological design principles extracted from the earwig wing into a functional synthetic folding system that can be directly manufactured by 4D printing” ([1], p. 1390)

“Our ability to tune the energy barrier between bistable states using simple geometrical and material properties […] enables the design and fabrication of spring origami structures that can undergo fast morphing, triggered by an environmental stimulus.”

The researchers see potential for many applications, including antennas and solar arrays for space craft, architecture, robots, or packaging.

I’m seeing a fancy new version of an umbrella—lighter, stronger, and simpler design.


  1. Jakob A. Faber, Andres F. Arrieta, and André R. Studart, Bioinspired spring origami. Science, 359 (6382):1386, 2018. http://science.sciencemag.org/content/359/6382/1386.abstract
  2. Peter Rüegg, Earwigs and the art of origami, in ETH News. 2018. https://www.ethz.ch/en/news-and-events/eth-news/news/2018/03/earwigs-and-the-art-of-origami.html

 

 

Robot Origami Wednesday

Yet More AI Fashion Advice

Computer geeks are just as interested in fashionable clothing as anyone else, though they tend to apply the mental hammers they possess to driving the nail.  There are any number of projects that are applying contemporary AI to the alleged problems of finding (and creating) fashionable garments and outfits.

Whatever the problem may really be, this research essentially treats it as a “recommender” system, a la online shopping and streaming services. This leads to two products, a “virtual stylist” to advise you on what to wear, and a “trend spotter” to advise producers about what to sell.

So who would want to use an AI recommender?

This kind of technology can probably detect group uniforms pretty easily, especially if social media and other metadata are included in the data.  This may be useful for market intelligence, but probably not terribly useful to individuals.  If you have to have a computer to tell you how to dress like the people you admire, you’re pretty lost.

On the other hand, some people might enjoy having a “virtual stylist” who helps them construct and maintain an individual look. For that matter, the ability to generate something that is in the style of X, but new, would be exactly what you might be looking for. What will Susie be wearing today?  She’s always ahead of the curve.  Etc.

Underlying these ideas are collections of data about what people are wearing, and the usual “people who liked this, also liked this other thing”.  Grist for this mill includes images from the internet, social media posts, personal shopping history, and metadata about who’s who and what they do and want.  Many readers will recognize that this is also the data and technology used by advertising and intelligence services, who are looking to predict specific kinds of individual behaviors.
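At its core, the “also liked” machinery is just similarity over a user-item matrix. A minimal sketch (toy data, illustrative only; the paper actually works with learned image features):

```python
import numpy as np

# Toy user-item matrix: rows = users, columns = items, 1 = "liked".
ratings = np.array([
    [1, 1, 0, 0],   # user 0 liked items 0 and 1
    [1, 1, 1, 0],   # user 1 liked items 0, 1, and 2
    [0, 0, 1, 1],   # user 2 liked items 2 and 3
])

# Item-item cosine similarity: items liked by the same users score high.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

# Items 0 and 1 are always liked together (similarity ~1.0);
# items 0 and 3 share no users (similarity 0.0).
print(sim[0, 1], sim[0, 3])
```

A recommender then suggests the items most similar to what you already liked, which is precisely why it tends toward “more of the same”.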


This fall, a team from UCSD and Adobe reports on yet another permutation of this technique, which uses sophisticated image processing and machine learning to create images that “look plausible yet substantially different from” the examples [1].  (Note: their paper cites quite a bit of earlier work, which is worth looking over.)

The most interesting idea is to use the “user X image” preference data to generate new items that are predicted to be attractive to the user.

“a richer form of recommendation might consist of guiding users and designers by helping them to explore the space of potential fashion images and styles.”

This is pretty neat, and most machine learning approaches can’t do it nearly as well as what this group has done.

The technical details are non-trivial, see the paper.


This work is interesting, but it raises a number of questions.

First of all, it’s far from clear that this is addressing a problem that anyone needs to solve. For those who go beyond pure utilitarianism, “fashion” is a signaling system. What you wear is supposed to send messages about yourself.

The two messages most commonly sent are either “look at me, desire me” or “I belong to this group”.  Note that these are somewhat contradictory messages, asserting either individuality or conformity, respectively.

How do these messages relate to “preferences”?

Second, the authors suggest that this technology could be used as an aide to designers:

“In the future, we believe this opens up a promising line of work in using recommender systems for design.”

“We believe that such frameworks can lead to richer forms of recommendation, where content recommendation and content generation are more closely linked”

In other words, this is a feedback loop, from design to user’s reception, back to designer.

As I have said before, this would seem to be a mechanism that pushes designers to produce “more of the same”; scarcely a formula for creativity.  And since the AI is chasing users’ past reactions, it is inherently retrospective; users will eventually reject “more of the same”.  Not a formula for being a fashion leader.

On the other hand, this technology does generate new designs, though it is hard to judge just how creative it is. Also, the technique learns continuously, which seems critical to me: preferences change.  Done right, the AI model might be a good way to represent a target user and to generate designs that home in on their (momentary) preferences.


However, there are lots of underlying issues.

To the degree that a person wants to make fashion statements, this process of digging out “preferences” and generating examples that exemplify them is only indirectly related to whatever the intended statement might be. The AI knows little, if anything, about the semantics of the clothing in the images, which are highly subjective in any case.

Many fashion preferences are based on social aspirations, such as the desire to emulate a celebrity and / or fit in with a clique.  These factors are not only not visible in the image, they are completely missing from the concept of a personalized design. Don’t you want a social design?

The entire process is based on (small) 2D images with standard poses. This is a very impoverished set of information. The internet has trained users to deal with tiny 2D images, but this is not a full representation of “fashion” or anything else.  Clothing is 3D and bodies move, and both exist in physical context (e.g., a dance floor). These factors are missing from both the input and output of this AI.

The training uses ratings and other inputs from observers as proxies for “preferences”.  It isn’t clear exactly what these data actually represent, and there is a very real possibility that there are multiple “communities” with different preferences all using the same online services.  Averaging across multiple subcultures will produce a meaningless common denominator.  (The research project used tiny, probably homogeneous samples, so it doesn’t explore this problem.)
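A trivial bit of arithmetic makes the point (my own toy numbers): average two subcultures with opposite tastes and you get a “preference” that no one actually holds.

```python
# Two subcultures rate a style on an axis from -1 (hate) to +1 (love).
group_a = [-1.0] * 50   # one community strongly dislikes the style
group_b = [+1.0] * 50   # another community strongly likes it

pooled = group_a + group_b
pooled_mean = sum(pooled) / len(pooled)

print(pooled_mean)  # 0.0 -- "indifferent", a preference held by nobody in either group
```

Unless the model segments the communities first, the pooled average is exactly the meaningless common denominator described above.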


Building a tight feedback loop between user preferences and the AI opens the possibility of hacking or gaming the system.  Flooding the inputs with ratings and supposed positive comments could manipulate the recommendations. I could easily imagine a PR campaign that combined celebrity product placement with ‘AI placement’ that biases the results in favor of the product.

Worse, there could well be AIs gaming other AIs.  It could devolve to the point where everyone has a ‘virtual fashion advisor’, and all the AIs are tracking the behavior of all the other AIs. Kind of like professional fashionistas do. Sigh.


I have to ask, just who is the AI serving?  The researchers seem to believe that the designer’s interests are aligned with the user, but I don’t think it is that simple.  Designers usually are or work for producers, who aim to sell the product.  Is the recommender helping the consumer or promoting consumption?

We all know the answer to that question. Technology just like this is already used by advertising and intelligence to track and predict the behavior of individuals. This behavior modelling is not for the benefit of the subject, it is for the benefit of the wealthy and powerful.


If these types of systems come into wide use, it will probably change concepts of fashion recommendation.  An online suggestion that “people like you also liked these items” is not valued as much as a recommendation from a close friend.  Similarly, trends spotted (or created) by AI will be considered second-rate, compared to actual human trend spotting and creativity.  People will work hard to outguess the AI.

I predict that the more virtual assistants there are, the more valuable a human assistant will become!


  1. Wang-Cheng Kang, Chen Fang, Zhaowen Wang, and Julian McAuley, Visually-Aware Fashion Recommendation and Design with Generative Image Models. arXiv, 2017. https://arxiv.org/abs/1711.02231
  2. Will Knight, Amazon Has Developed an AI Fashion Designer: The retail giant is taking a characteristically algorithmic approach to fashion. MIT Technology Review online, August 24, 2017. https://www.technologyreview.com/s/608668/amazon-has-developed-an-ai-fashion-designer/
  3. Jackie Snow, This AI Learns Your Fashion Sense and Invents Your Next Outfit: A new kind of AI system could create personalized clothing based on a shopper’s taste. MIT Technology Review online, November 16, 2017. https://www.technologyreview.com/s/609469/this-ai-learns-your-fashion-sense-and-invents-your-next-outfit/

 

“Artificial Creatures” from Spoon

There are so many devices wanting to live with us, as well as a crop of “personal” robots. Everything wants to interact with us, but do we want to interact with them?

Too many products and not enough design to go around.

Then there is Spoon.

We design artificial creatures.

A partner to face the big challenges rising in front of us.

A new species, between the real and digital domains for humans, among humans.

OK, these look really cool!

I want one!

But what are they for?

This isn’t very clear at all. The only concrete application mentioned is “a totally new and enhanced experience while welcoming people in shops, hotels, institutions and events.” (I guess this is competing with RoboThespian.)

Anyway, it is slick and sexy design.

The list of company personnel has, like, one programmer and a whole bunch of designers and artisans. Heck, they have an art director, and a philosopher, for crying out loud.

Did I forget to say that they are French!

I have no idea exactly what they are going to build, but I will be looking forward to finding out.

 

Robot Wednesday

Data Comics?

Benjamin Bach and colleagues wrote in IEEE Computer Graphics about “The Emerging Genre of Data Comics” [1]. I like data and I like comics, so I’ll love data comics, right?

Data comics is a combination of data + story + visualization. They say that it is “a new genre, inspired by how comics function” ([1], p.7)

The “how comics function” is largely about flow and multiple panels. As Scott McCloud says, the action happens in the gutter ([2], p. 66) (i.e., between the panels).

(By the way, Sensei McCloud teaches that this happens though the active engagement of the reader, who closes the gap with his or her imagination. If you haven’t read Understanding Comics [2], stop reading this blog right now and go read McCloud. I’ll wait here.)

The authors assert that data always has context, and “Context creates story, which wants to be narrated” ([1], p. 10). Well, maybe, though I think it is a mistake to read this as “you can tell whatever story you want” (the Hollywood approach). Part of the context is what kinds of stories it is OK to tell.

The authors give four advantages of the medium,

  • Combines text and pictures
  • Delivers one message at a time in a guided tour
  • Data visualization gives evidence for facts
  • Other types of visualization can tell the story clearly

This article itself is delivered in the form of a comic (though not a data comic), which highlights both the advantages and the limitations of this approach.

One really good thing about storyboards and comix is that they force you to boil down your story to a handful of panels, with only so much on each. This isn’t always easy, but it surely helps organize the story.

Compare this to written or spoken word, which can flow any way you want and can go on as long as you have strength, with no guarantee that any organized narrative is told.

I note that any good visualization (or demo) probably had a storyboard in the beginning, which is essentially a comic strip of the overall story to be told.

The medium isn’t without drawbacks.

For example, this article was very difficult for my ancient eyes. The text was rather too small and blurry, and white-on-black lettering is hard for me to make out. Many of the pictures were below my visual threshold. E.g., one panel, “Early examples led the way”, has tiny versions of other comics, which are illegible and may as well not be there.

Also, it was difficult to quote (i.e., remix) ideas from this article. E.g., I couldn’t easily quote the “Early examples” panel to make my point about it. I could probably have extracted the picture, fiddled with it in a drawing package, and saved a (blurry) image to include here. But how would that make my point about the illegibility of the original?

As a general rule, comix need to be pretty simple or they are impossible to read. This means that they can only deliver a very concise story. As Bach, et al. suggest, this is a feature, not a bug.

On the other hand, telling “only one message at a time” is not just “concise”, it is a Procrustean bed. For complicated data there isn’t one message, there are many. A data comic runs the risk of trivializing or misleading by omission. This is a bug, not a feature.

The challenge is to make “concise” be deep rather than shallow.

This is why trying to express the story in a storyboard (comic) is an extremely good design practice, even if the story isn’t ultimately published in the form of a comic.


  1. Benjamin Bach, Nathalie Henry Riche, Sheelagh Carpendale, and Hanspeter Pfister, The Emerging Genre of Data Comics. IEEE Computer Graphics and Applications, 38 (3):6-13, 2017. http://ieeexplore.ieee.org/document/7912272/
  2. Scott McCloud, Understanding Comics, HarperCollins, 1994.