Most of my career in software can be summed up, to a first approximation, as "making software run faster". There are many ways to speed up run time, including my own favorite, "figure out how to solve the problem without any software". :-)
Inevitably, ChatGPT and friends have been given a shot at this game, too.
This isn't a stupid idea, far from it. A lot of what experts like me do when we are optimizing code is searching through things that have worked in the past, or that should work on general principles. And in some cases we automate this mindless process, using code to generate many variations that all do the same thing, and keeping the fastest.
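That generate-and-test loop is simple to sketch. The study itself worked on Java, but the idea carries over; here is a minimal Python illustration in which the hand-written variants (`v_loop`, `v_builtin`, `v_formula`) are hypothetical stand-ins for machine-generated rewrites. Each candidate is checked against a reference implementation on a few test inputs, and the surviving ones are timed:

```python
import timeit

# Hypothetical variants of the same function (sum of squares 0..n-1),
# standing in for machine-generated rewrites of an original program.
def v_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def v_builtin(n):
    return sum(i * i for i in range(n))

def v_formula(n):
    # Closed form for the sum of squares 0..n-1.
    return (n - 1) * n * (2 * n - 1) // 6

def search(candidates, reference, tests):
    """Keep only variants that agree with the reference, then time them
    and return the fastest correct one."""
    correct = [f for f in candidates
               if all(f(t) == reference(t) for t in tests)]
    timed = [(timeit.timeit(lambda: f(10_000), number=100), f)
             for f in correct]
    return min(timed, key=lambda pair: pair[0])[1]

best = search([v_loop, v_builtin, v_formula],
              reference=v_loop, tests=[0, 1, 17, 1000])
```

The interesting research question is where the candidates come from: random mutation, hand-built rule sets, or, as here, a language model.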
This winter, researchers at the University of Stirling reported a study that augments this kind of search with ChatGPT [1]. They asked ChatGPT (GPT-3.5) to generate five variants of a given piece of Java code that do the same thing. Presumably, the results come from Java code on the Internet, or more precisely, the AI's prediction of what the Internet would say.
Naturally, many of the answers aren't legal code. But this happens with any generative search method, and ChatGPT actually does a bit better than random search here, so, 'yay!'
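Weeding out illegal candidates is the cheap first filter in any such pipeline. As a rough sketch (again in Python rather than the study's Java, and with made-up candidate strings), a variant only moves on to correctness and timing checks if it at least compiles:

```python
# Hypothetical generated outputs: some valid Python, some not.
candidates = [
    "def double(x):\n    return x * 2",
    "def double(x) return x * 2",      # missing colon: not legal code
    "def double(x):\n    return x << 1",
]

def parses(src):
    """A candidate is 'legal' if it at least compiles."""
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

legal = [src for src in candidates if parses(src)]
```

Only the legal survivors are worth the (much more expensive) steps of running tests and benchmarks.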
More importantly, some of the answers are not only legal and correct but actually improve the performance of the original code. Overall, the AI-augmented search found more improvements than random search, though it did not find the single best one.
The researchers note that the machine-learning-augmented search explored a narrower space than the random search. Unsurprisingly, the prompts had a huge effect on the results.
One interesting finding was that highly detailed prompts found fewer improvements than "medium" prompts. The trick is to give enough information, but not so much that the AI is constrained too narrowly.
The researchers also note that the benefits of the improved code should probably be balanced against the cost of developing and running the gigantic machine learning model [3]. Expending vast amounts of energy, emissions, and money may turn up rewrites that speed up a piece of code, but the impact of those speedups has to be weighed against the cost of finding them.
1. Alexander E. I. Brownlee, James Callan, Karine Even-Mendoza, Alina Geiger, Carol Hanna, Justyna Petke, Federica Sarro, and Dominik Sobania. Enhancing Genetic Improvement Mutations Using Large Language Models. In Search-Based Software Engineering, 2024, pp. 153–159. https://link.springer.com/chapter/10.1007/978-3-031-48796-5_13
2. Alexander E. I. Brownlee, James Callan, Karine Even-Mendoza, Alina Geiger, Carol Hanna, Justyna Petke, Federica Sarro, and Dominik Sobania. Enhancing Genetic Improvement Mutations Using Large Language Models. arXiv:2310.19813, 2023. https://arxiv.org/abs/2310.19813
3. University of Stirling. AI study creates faster and more reliable software. University of Stirling – News, December 11, 2023. https://www.stir.ac.uk/news/2023/12/university-of-stirling-ai-study-creates-faster-and-more-reliable-software/