
Reverse Engineering Carpentry

Over the last twenty years, we’ve all seen astonishing strides in 3D reconstruction from 2D imagery. We have also seen a blossoming of inexpensive 3D fabrication of many types, with all kinds of materials, at many scales.

Inevitably, the next steps will be to mash these innovations together, integrating computerized eyes and hands to create computerized craftwork. That is: knowledge-intensive, material-aware, ever-learning systems that integrate iterated design, analysis, and fabrication.

So, there have been efforts to develop 3D printed tools and factories, designs for wooden joinery, designs for architectural glass, and so on.

This summer, researchers at U. Washington reported a new system that not only captures the visual appearance of an object, but also reconstructs plausible blueprints for how to construct it via standard (human) carpentry [1]. Pretty neat.

Basically, this system encapsulates a bunch of knowledge about carpentry: boards, saws, joints, etc. Given an example of a piece of furniture, and assuming it was made with “normal” tools and materials, the system reverse engineers what the pieces are and how they fit together. The resulting digital plans are recognizable, plausible (i.e., they look like correct versions of the original plans), and usable to recreate the piece.
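To make that concrete, here is a toy Python sketch of what such a recovered plan might look like as data. The Board and Joint classes, their fields, and the dimensions are my own invention for illustration; the paper’s actual representation is certainly more sophisticated.

    from dataclasses import dataclass

    @dataclass
    class Board:
        """One piece of lumber, cut to size from standard stock."""
        name: str
        length_mm: float
        width_mm: float
        thickness_mm: float

    @dataclass
    class Joint:
        """How two boards meet, e.g., a plain butt joint."""
        kind: str    # "butt", "miter", ...
        part_a: str  # Board.name of one piece
        part_b: str  # Board.name of the other

    # A plausible recovered cut list and joint list for a small side table.
    cut_list = [
        Board("top", 450.0, 450.0, 19.0),
        Board("leg", 400.0, 40.0, 40.0),    # four of these
        Board("apron", 370.0, 80.0, 19.0),  # four of these
    ]
    joints = [Joint("butt", "apron", "leg"), Joint("butt", "top", "apron")]

    for board in cut_list:
        print(f"cut {board.name}: {board.length_mm} x {board.width_mm} x {board.thickness_mm} mm")

The point is that the output is structured, editable data rather than pixels, which is exactly what makes the sharing and machine learning below possible.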

These digital plans can be shared, edited, and, I assume, soon enough it will be possible to feed one to an automatic furniture factory. They can also be grist for Machine Learning mills, enabling the Silicon-based to learn the craft by copying, not so different from how Carbon-based craftbeings learn.

The authors emphasize that this system is really effective in combination with a Carbon-based intelligence, who can modify and customize the output blueprints and swiftly build the object.

Of course, like any skilled crafter, this system has limitations. It only knows what it knows, which in this case is small furniture made entirely of wood. I don’t think it would do well with, say, sections of upholstery, nor does it understand anything about decorative veneers or finishes.

It also can only reconstruct what it can see. Anything hidden from view can’t be directly discovered. The authors note that it would be interesting to develop inference models to expertly guess at missing or hidden features, as a Carbon-based expert would.

The system knows how to make relatively small and simple furniture.  It would be interesting to incorporate expertise in more complex designs, e.g., more complicated joinery.  (Or, indeed, any joinery at all.)

Some of the limitations stem from limits of their image acquisition methods.  In the current version, the target has to be small enough to walk around, and must be placed in a visually uncluttered setting.  (Anyone who has tried point cloud image reconstruction in natural settings will be acutely aware of the challenges of backgrounds and extraneous objects.)

But I’m sure that the basic techniques will work in principle from other image capture methods, including robots and UAVs. 

Interesting work.


  1. James Noeckel, Haisen Zhao, Brian Curless, and Adriana Schulz, Fabrication-Aware Reverse Engineering for Carpentry. arXiv, 2021. https://arxiv.org/abs/2107.09965

Robogami: “Democratizing” Robot Building?

In a recent paper, Cynthia Sung and colleagues at MIT describe their automated design system, noting that a “long-held goal in the robotics field has been to see our technologies enter the hands of the everyman [sic]” [1].

Well, I don’t know about that. Every nerd, maybe.

The idea is a high level design system that generates simple “fold up” robotic vehicles, suitable for fabrication with ubiquitous laser cutters and other shop tools. The computer system helps the designer create the “geometry”, the 3D shape of the vehicle, and the “gait”, how it moves. The system shows the results in a simulator, so the designer can rapidly iterate. The prototype is then sent to a printer, and snapped together with appropriate motors and wires.

One of the main challenges in robot design is the interdependence of the geometry and motion.

Cool!

As the paper makes clear, this idea was influenced by a number of current trends which I’m sure are bouncing around MIT CSAIL and everywhere else: computer-aided iterative design, rapid prototyping with personal fabrication, and, of course, Origami.

The system also reports performance metrics (e.g., speed of locomotion), and helps optimize the design.
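To give a feel for that loop, here is a minimal Python sketch of a design-evaluate-iterate cycle. The Design fields, the simulated_speed function, and the numbers are hypothetical stand-ins of my own, not Robogami’s actual API or results.

    from dataclasses import dataclass

    @dataclass
    class Design:
        geometry: str  # e.g., a fold-up body template
        gait: str      # e.g., "walk", "wheel", "crawl"

    def simulated_speed(design: Design) -> float:
        """Hypothetical stand-in for the simulator: locomotion speed in m/s."""
        # A real simulator would integrate the coupled body/gait dynamics;
        # this lookup table just makes the loop runnable.
        table = {"walk": 0.10, "wheel": 0.25, "crawl": 0.05}
        return table.get(design.gait, 0.0)

    # The designer iterates: propose geometry/gait pairs, check the metric, repeat.
    candidates = [
        Design("quadruped shell", "walk"),
        Design("wheeled cart", "wheel"),
        Design("low crawler", "crawl"),
    ]
    best = max(candidates, key=simulated_speed)
    print(f"fastest candidate: {best.geometry} ({best.gait}), ~{simulated_speed(best):.2f} m/s")

Swap a real physics simulator in for simulated_speed and you have the skeleton of the iterate-and-optimize workflow the paper describes.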

Of course, this isn’t really a general-purpose robot design system. Aside from the fact that the hard part in any design is figuring out what to design (and diving into iterative prototyping often distracts from careful thought and research), useful robots also have sensors and manipulators, plus machine learning, domain knowledge, or both, none of which is part of this system.

This system is really only about the body and the movement: essentially, the basic shell of the robot.  Important, but really only the foundation of a working, useful robot.

“The system enables users to explore the space of geometries and gaits”

It’s cool, but not the whole story.

And, let us not forget, the appearance and sociability of a robot are increasingly important. These cute little robogamis look like toys, and are of little more use than toys. These are certainly not social robots!

Now, if you sold this as a “toy factory”, perhaps with some stickers and funny voices, you’d have a bang-up product. Don’t give Suzie a doll, give her a machine to make as many dolls as she wants! And the dolls move and talk!

Now that would be cool!


  1. Adriana Schulz, Cynthia Sung, Andrew Spielberg, Wei Zhao, Robin Cheng, Eitan Grinspun, Daniela Rus, and Wojciech Matusik, Interactive robogami: An end-to-end system for design of robots with ground locomotion. The International Journal of Robotics Research, 2017. http://dx.doi.org/10.1177/0278364917723465

 

Robot Wednesday