AI Learning To Invent Algorithms

These days “The Singularity” has become very mystical and Hollywood, and, frankly, seems to be a techno version of racist replacement theory / Elders of Zion shtick.  (‘You will be replaced by godless aliens’) Back in the day—even Alan Turing’s day—the original concept was really about using ‘computation to improve computation’, accelerating computational capabilities by self-improvement of algorithms.

In the 80-some years of the current computational age, computers have indeed accelerated technological developments on all fronts, though “AI” has been a relatively small contribution to this process.  Miniaturization, precise measurement, and rapid data networks have been far, far more revolutionary than anything to do with super “brains” or human-like “thinking”.

In recent years, we have (finally?) seen “AI” improving computational capabilities, with useful contributions such as generating, optimizing, and debugging code.  But, honestly, the growth in AI has been mostly due to growth in hardware—Moore’s Law, not technological bootstrapping.

So I was interested to see some cool new results from DeepMind, whose AlphaTensor system has discovered new, more efficient algorithms for matrix multiplication [1].

OK, even I can do matrix multiplication, and I know that a lot of important code, including lots of AI-y stuff, does a lot of matrix multiplications.  In fact, a significant chunk of total compute time is tied up doing zillions of matrix multiplications, every day, all the time.

Which means that shaving even a few percent off the time is potentially “huge”, as my benchmarking friends would say [2].
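
For concreteness: “matrix multiplication” here means the textbook three-loop algorithm, which spends n³ scalar multiplications on two n x n matrices, and it is those multiplications everyone wants to shave.  A minimal sketch (mine, not from the paper):

```python
def matmul_naive(A, B):
    """Textbook matrix multiply of two n x n lists-of-lists."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # n**3 scalar multiplies in total
    return C
```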

How does it work?  I’m sure I don’t understand it, exactly.  But the general idea is to represent matrix multiplication as a higher-order tensor.  (E.g., 2 x 2 matrix multiplication becomes a 4 x 4 x 4 tensor.)  Among other things, this trades space for time, which is the oldest trick in the book.
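
The canonical example of this kind of rearrangement, and the baseline the paper starts from, is Strassen’s 1969 algorithm, which multiplies two 2 x 2 matrices with 7 scalar multiplications instead of the naive 8, paying for it with extra additions.  (The sketch is mine; the formulas are Strassen’s.)

```python
def strassen_2x2(A, B):
    """Strassen (1969): 2 x 2 matrix multiply using only 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Applied recursively to matrix blocks, that one saved multiplication compounds, which is why beating small base cases matters so much.  AlphaTensor searches for decompositions of exactly this kind [1].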

The other interesting thing is that the AI was given the problem as a game to be solved.  Basically, the goal is to find a sequence of moves that represents the matrix multiply.  At each turn there are zillions of possible moves, just like Go or other games.  And, sure enough, they adapted their champion Go-playing system (AlphaZero) to learn to solve these problems [2].
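
In the paper’s formulation (they call it TensorGame), the state is a residual tensor, each move subtracts one rank-1 tensor, and you win by reaching the all-zero tensor in as few moves as possible; each move corresponds to one scalar multiplication in the final algorithm.  A rough sketch of a single move (my notation, assuming NumPy):

```python
import numpy as np

def play_move(state, u, v, w):
    """One TensorGame-style move: subtract the rank-1 tensor built from u, v, w.

    The game is won when the residual hits all zeros; the number of
    moves taken is the number of multiplications in the algorithm.
    """
    return state - np.einsum('i,j,k->ijk', u, v, w)
```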

Given this representation and some examples of right answers, the AI was tasked to search for the best way(s) to decompose these tensors.  The results were new algorithms that require fewer costly multiplications to compute the same answer.
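
Conveniently, a “right answer” is mechanically checkable: a set of factors is a correct matrix-multiplication algorithm exactly when its rank-1 terms sum back to the matmul tensor.  A small verification sketch (function names and layout are my own, assuming NumPy):

```python
import numpy as np

def matmul_tensor(n):
    """The n x n matmul tensor T: T[i, j, k] = 1 exactly when
    vec(A)[i] * vec(B)[j] contributes to vec(C)[k] (row-major vec)."""
    T = np.zeros((n * n, n * n, n * n), dtype=int)
    for r in range(n):
        for c in range(n):
            for s in range(n):
                T[r * n + s, s * n + c, r * n + c] = 1
    return T

def is_valid_algorithm(U, V, W, n):
    """Each column of U, V, W encodes one rank-1 term, i.e. one multiplication."""
    return np.array_equal(np.einsum('ir,jr,kr->ijk', U, V, W),
                          matmul_tensor(n))
```

By this measure the naive 2 x 2 algorithm is a rank-8 decomposition and Strassen’s is rank 7; AlphaTensor’s headline result is rank 47 for 4 x 4 matrices in mod-2 arithmetic, beating the 49 multiplications of Strassen applied recursively [1].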

And, as in the case of Go, the results are superhuman and also inhuman.  The algorithms discovered are not intuitive; indeed, they are alien.

This is, indeed, huge.  But even though the results show an alien intelligence, it really doesn’t feel like a coming singularity to me.

For one thing, computers are already insanely better at matrix multiplication than puny Carbon-based units.  My laptop can compute many orders of magnitude faster than me, so a few percent more doesn’t seem like much.  And in any case, the matrix multiplication doesn’t seem very “intelligent”, even though it is an important part of a lot of “intelligent” computations (such as NLP, vision, and every kind of search).

However, feed this performance improvement back into DeepMind, and put the slightly faster system to work on other key targets, and you’ve finally started the march.  There are plenty of targets, including optimizing the tensors themselves.

“a limitation of AlphaTensor is the need to pre-define a set of potential factor entries F, which discretizes the search space but can possibly lead to missing out on efficient algorithms. An interesting direction for future research is to adapt AlphaTensor to search for F.”

([1], p. 52)

  1. Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J. R. Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, David Silver, Demis Hassabis, and Pushmeet Kohli, Discovering faster matrix multiplication algorithms with reinforcement learning.  Nature, 610(7930):47-53, October 2022.  https://doi.org/10.1038/s41586-022-05172-4
  2. Matthew Hutson, DeepMind AI invents faster algorithms to solve tough maths puzzles.  Nature News, October 5, 2022.  https://www.nature.com/articles/d41586-022-03166-w
