Fourier Analysis of Deep Learning

One of the perennial challenges of contemporary machine learning is that it works, but we usually don’t understand what it is doing. As ML models have grown larger and more complicated, the gap between outcome and explanation has only widened.

We are in the fascinating situation where an applied math technique is roaring along, while theoreticians are chasing it, trying to grok what’s going on. The theoretical results are generally pretty interesting.

This winter, researchers at Rice U. reported a fascinating exploration of yet another technique: Fourier Analysis [2]. If I understand correctly, the basic idea is to apply Fourier Analysis to a neural net trained to model turbulent flow, to find patterns in what the model has learned. (That may not be a correct technical description; please refer to the paper.)

I have to say I don’t understand the math very well. At least part of the insight is that “what the neural net is learning” can be described as a combination of multiple spectral filters [1]. The ML algorithms are not framed this way, so this viewpoint is not apparent from the algorithm or the data. Fourier Analysis pulls out these filters and connects them clearly to the input and output data.
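As a toy illustration of what a “spectral filter” means here (this is not the paper’s actual analysis, and the kernel below is a standard edge detector standing in for a trained one): the 2-D FFT magnitude of a convolution kernel is its spectral response, showing which spatial frequencies the layer passes or suppresses. A minimal sketch in NumPy:

```python
import numpy as np

# Hypothetical 3x3 convolution kernel, standing in for a trained filter
# (a Sobel-style horizontal gradient, since we don't have real weights).
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

# Zero-pad the kernel onto a working grid and take the 2-D FFT: the
# magnitude is the kernel's spectral transfer function, i.e. how
# strongly the layer responds to each spatial frequency.
N = 32
padded = np.zeros((N, N))
padded[:3, :3] = kernel
transfer = np.abs(np.fft.fft2(padded))

# A gradient-type kernel cancels the zero-frequency (mean) component
# and passes higher horizontal frequencies -- a high-pass filter.
print(transfer[0, 0])       # response at zero frequency (0.0)
print(transfer[0, N // 4])  # response at a mid horizontal frequency (8.0)
```

Reading the transfer function off the weights like this is what makes the filter interpretation visible, even though nothing in the training algorithm mentions frequencies.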

This work suggests that ML algorithms are pretty good at creating these complicated combinations of multiple spectral analyzers, which is a very difficult problem to solve by hand. To the degree that this is true, it could be a big reason why ML is so successful, and so much more successful than other, less general techniques.
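One way to see why stacked layers amount to a combined spectral filter (again a sketch of the general principle, not the paper’s method): for linear layers, composing them convolves their kernels, and by the convolution theorem the spectrum of the composition is the product of the individual spectra. A deep stack can therefore shape an intricate overall frequency response out of simple pieces:

```python
import numpy as np

# Two simple 1-D kernels, stand-ins for two successive linear layers:
low_pass = np.array([0.25, 0.5, 0.25])  # smoothing / low-pass
diff = np.array([1.0, -1.0])            # differencing / high-pass

# Composing the layers = convolving their kernels...
composed = np.convolve(low_pass, diff)

# ...and by the convolution theorem, the composed spectrum equals the
# product of the individual spectra (zero-padded to a common length).
N = 64
lhs = np.fft.fft(composed, N)
rhs = np.fft.fft(low_pass, N) * np.fft.fft(diff, N)
print(np.allclose(lhs, rhs))  # True
```

The combined filter here is band-pass: the product of a low-pass and a high-pass response, which neither kernel is on its own.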

Kind of cool.  Nice work.

I guess anyone who has hung around an engineering school won’t be too surprised. Fourier Analysis is extremely general and extremely powerful. It is used to analyze a vast array of complex problems, to the degree that there are people who basically spend their careers doing Fourier Analysis, and building systems that do FA.

I also note that this approach and others <<link earlier>> are quite spatial, giving us a visual metaphor for what ML is up to. This is not a coincidence, IMO. The road to grokking ML is going to be geometric.


  1. Charles Q. Choi, “200-Year-Old Math Opens Up AI’s Mysterious Black Box”, IEEE Spectrum – Artificial Intelligence, February 25, 2023. https://spectrum.ieee.org/black-box-ai
  2. Adam Subel, Yifei Guan, Ashesh Chattopadhyay, and Pedram Hassanzadeh, “Explaining the physics of transfer learning in data-driven turbulence modeling”, PNAS Nexus, pgad015, 2023. https://doi.org/10.1093/pnasnexus/pgad015
