“AutoAnnotate: Reinforcement-learning-based code annotation for high-level synthesis,” a paper resulting from a project at the Red Hat Collaboratory at Boston University, was selected as one of four Best Papers at the 25th International Symposium on Quality Electronic Design (ISQED’24). ISQED is a widely recognized and established conference in the field of electronic design, with submissions from prestigious organizations across academia, industry, and government. Receiving a Best Paper award means that “AutoAnnotate” was ranked in the top 2% of all papers submitted. The authors are Hafsah Shahzad and Martin Herbordt of Boston University and Ulrich Drepper, Sanjay Arora, and Ahmed Sanaullah of Red Hat Research.
The paper represents an important milestone in the research project “Practical programming of FPGAs with open source tools.” The tooling being researched as part of the project aims to substantially improve developer productivity by leveraging machine learning, which in turn reduces the effort required to generate high-quality software and hardware for our target compute platforms.
Specifically, the goal of the project is to build practical and extensible frameworks that leverage machine learning techniques, such as reinforcement learning and graph neural networks, to automatically improve the quality of binaries generated by FPGA and CPU compilers. This includes, but is not limited to, compiler pass reordering, compiler flag tuning, and code annotation. Through these approaches, the project aims to significantly improve generated binary size, performance, and other output-quality metrics, without requiring developers to manually modify the source code or the compiler.
About AutoAnnotate and high-level synthesis
High-level synthesis (HLS) is a process through which applications written in software programming languages can be compiled down to functionally equivalent hardware designs. However, the automatically generated hardware designs may not be of high quality, because the nuances of hardware design are abstracted away by the software programming language.
Code annotations are a simple yet powerful solution to this problem, because they can guide the HLS compiler to more effectively optimize the code. The challenge with code annotations, however, is that the possible design space is huge. Knowing which annotations to apply, and where to apply them, requires a high level of expertise, and the list of annotations and their effects can change even between versions of the same compiler. The consequence of improperly applied annotations can be reduced output quality, incorrect functionality, or even a failure to compile.
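To make this concrete, here is a hedged sketch of what such annotations look like in practice. The pragma spellings below follow the style of common HLS tools, but exact names and semantics vary by compiler and version, and this is an illustrative example rather than output from AutoAnnotate:

```c
#include <stddef.h>

/* A simple kernel with a hypothetical HLS annotation. The pragma is
 * styled after common HLS compiler directives; a software compiler
 * ignores it, so the function behaves as a plain vector add either way. */
void vector_add(const int *a, const int *b, int *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Annotation asking the HLS compiler to pipeline this loop so
         * that a new iteration can start every clock cycle (II = 1). */
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
```

A one-line annotation like this can change the generated hardware dramatically, which is exactly why choosing the right annotations, and placing them correctly, matters so much.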
AutoAnnotate presents an extensible framework for automatically applying annotations to HLS code using reinforcement learning. It supports multiple compilers, each with its own set of annotations, and automatically validates output hardware to ensure functional correctness. Through the use of machine learning for design space exploration, AutoAnnotate can effectively discover annotations that improve user-defined metrics of hardware quality. And if the compiler or its annotations change, the model can simply be retrained. Results across a number of benchmarks, detailed in the paper, demonstrated orders-of-magnitude improvements in performance over unannotated code. The paper also demonstrated the value of combining code annotations with effectively structured input code.
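The learn-from-reward loop at the heart of such a framework can be sketched in a drastically simplified form. In this toy sketch, each "action" is one candidate annotation, the reward is a synthetic stand-in for a measured hardware-quality metric, and the policy is a greedy value estimate with scheduled exploration; AutoAnnotate's actual reinforcement-learning formulation is more sophisticated than this:

```c
#define N_ANNOTATIONS 4
#define EPISODES 40

/* Synthetic reward standing in for "compile with this annotation and
 * score the resulting hardware." Here we pretend annotation 2 is best.
 * In a real framework this would run the HLS compiler and validate
 * the output before measuring quality. */
static double measure_quality(int annotation)
{
    static const double reward[N_ANNOTATIONS] = {0.1, 0.4, 0.9, 0.3};
    return reward[annotation];
}

/* Explore the annotation design space: keep a running-average value
 * estimate per annotation, sweep through candidates periodically, and
 * otherwise exploit the current best estimate. */
int best_annotation(void)
{
    double value[N_ANNOTATIONS] = {0};
    int count[N_ANNOTATIONS] = {0};

    for (int ep = 0; ep < EPISODES; ep++) {
        int a;
        if (ep % 5 == 0) {
            /* Scheduled exploration: try each candidate in turn. */
            a = (ep / 5) % N_ANNOTATIONS;
        } else {
            /* Exploitation: pick the highest-valued annotation so far. */
            a = 0;
            for (int i = 1; i < N_ANNOTATIONS; i++)
                if (value[i] > value[a]) a = i;
        }
        double r = measure_quality(a);
        count[a]++;
        value[a] += (r - value[a]) / count[a]; /* incremental average */
    }

    int best = 0;
    for (int i = 1; i < N_ANNOTATIONS; i++)
        if (value[i] > value[best]) best = i;
    return best;
}
```

The key property this sketch shares with the real system is that nothing about the compiler's internals is hard-coded: the search learns purely from observed rewards, which is why the approach can adapt when the compiler or its annotation set changes.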