Reverse Engineering Carpentry

Over the last twenty years, we’ve all seen astonishing strides in 3D reconstruction from 2D imagery.  We have also seen a blossoming of inexpensive 3D fabrication of many kinds, with all sorts of materials, at many scales.

Inevitably, the next step is to mash these innovations together, integrating computerized eyes and hands to create computerized craftwork: knowledge-intensive, material-aware, ever-learning cycles of design, analysis, and fabrication.

To that end, there have been efforts to develop 3D-printed tools and factories, designs for wooden joinery, designs for architectural glass, and so on.

This summer, researchers at the University of Washington reported a new system that not only captures the visual appearance of an object, but also reconstructs plausible blueprints for building it with standard (human) carpentry [1]. Pretty neat.

Basically, this system encapsulates a body of knowledge about carpentry: boards, saws, joints, and so on.  Given an example of a piece of furniture, and assuming it was made with “normal” tools and materials, the system reverse engineers what the pieces are and how they fit together.  The resulting digital plans are recognizable, plausible (i.e., they look like correct versions of the original plans), and usable for recreating the piece.
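To get a feel for one tiny sub-step such a pipeline might involve, here is a minimal sketch of my own (not from the paper, and all names and the stock-size list are illustrative assumptions): estimate a board’s dimensions from a segmented point cloud via PCA, then snap the measured thickness to a standard lumber size.

```python
import numpy as np

# Illustrative stock thicknesses in meters (roughly 1/4", 1/2", 3/4", 1.5");
# a hypothetical list, not the one used by the actual system.
STANDARD_THICKNESSES = [0.006, 0.012, 0.019, 0.038]

def board_dimensions(points):
    """Return (length, width, thickness) of the best-fit oriented box.

    Uses PCA (via SVD of the centered points) to find the board's
    principal axes, then measures the extent along each axis.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Right singular vectors are the principal axes of the point set.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    extents = centered @ vt.T          # coordinates in the principal frame
    dims = extents.max(axis=0) - extents.min(axis=0)
    return np.sort(dims)[::-1]         # length >= width >= thickness

def snap_thickness(t):
    """Snap a measured thickness to the nearest standard stock size."""
    return min(STANDARD_THICKNESSES, key=lambda s: abs(s - t))
```

The idea is that once a scanned piece is segmented into parts, each part’s measurements can be reconciled against what a lumberyard actually sells, which is one way a system can turn noisy geometry into a plausible cut list.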

These digital plans can be shared and edited, and I assume it will soon be possible to feed one to an automatic furniture factory.  They can also be grist for Machine Learning mills, enabling the Silicon-based to learn the craft by copying, which is not so different from how Carbon-based craftbeings learn.

The authors emphasize that the system is most effective in combination with a Carbon-based intelligence, who can modify and customize the output blueprints and swiftly build the object.

Of course, like any skilled crafter, this system has limitations.  It only knows what it knows, which in this case is small furniture made entirely of wood.  I don’t think it would do well with, say, upholstered sections, nor does it understand anything about decorative veneers or finishes.

It also can only reconstruct what it can see.  Anything hidden from view can’t be directly discovered.  The authors note that it would be interesting to develop inference models that expertly guess missing or hidden features, as a Carbon-based expert would.

The system knows how to make relatively small and simple furniture.  It would be interesting to incorporate expertise in more complex designs, e.g., more complicated joinery.  (Or, indeed, any joinery at all.)

Some of the limitations stem from the image-acquisition method.  In the current version, the target has to be small enough to walk around, and must be placed in a visually uncluttered setting.  (Anyone who has tried point-cloud reconstruction in natural settings will be acutely aware of the challenge of backgrounds and extraneous objects.)
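For what it’s worth, the most basic defense against clutter is simply cropping the scan to a region of interest before doing any fitting.  A minimal sketch (my own illustration, not the authors’ pipeline):

```python
import numpy as np

def crop_to_region(points, lo, hi):
    """Keep only points inside the axis-aligned box [lo, hi].

    `points` is an (N, 3) array; `lo` and `hi` are opposite box corners.
    Illustrative helper, not part of the system described in [1].
    """
    pts = np.asarray(points, dtype=float)
    mask = np.all((pts >= np.asarray(lo)) & (pts <= np.asarray(hi)), axis=1)
    return pts[mask]
```

Real pipelines layer ground-plane removal and statistical outlier filtering on top of this, but a tight bounding box discards most of the background for free.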

But I’m sure the basic techniques will, in principle, work with other image-capture methods, including robots and UAVs.

Interesting work.


  1. James Noeckel, Haisen Zhao, Brian Curless, and Adriana Schulz, “Fabrication-Aware Reverse Engineering for Carpentry,” arXiv preprint arXiv:2107.09965, 2021. https://arxiv.org/abs/2107.09965
