AutoAnnotate: Reinforcement Learning-Based Code Annotation for High-Level Synthesis

April 18, 2024

Hafsah Shahzad, Boston University; Ahmed Sanaullah, Red Hat; Sanjay Arora, Red Hat; Uli Drepper, Red Hat; and Martin Herbordt, Boston University

High-Level Synthesis (HLS) allows custom hardware generation using developer-friendly programming languages. Often, however, the HLS compiler is unable to output high-quality results. One approach is to pre-process the source code, e.g., to restructure the computational flow or to insert compiler hints using annotations (pragmas). But while the latter approach appears to enhance programmability, it also requires developer expertise, both in hardware design patterns and even in compiler internals: an incorrect annotation strategy can worsen performance or result in compilation deadlocks.

To address these challenges, this work presents AutoAnnotate, an automatic code annotation framework for HLS. It demonstrates the efficacy, novelty, and benefit of applying ML methods to code annotation. AutoAnnotate replaces the need for developer expertise by using Reinforcement Learning (RL) to determine the best set of annotations for a given input code. To demonstrate the effectiveness of this approach, we ran AutoAnnotate on a number of common FPGA benchmarks derived, e.g., from Rodinia and OpenDwarfs, with state-of-the-art HLS tools (AMD Vitis and Intel HLS). We obtained a geometric mean of 42× performance improvement for Vitis HLS and 3.42× for Intel HLS. We then hand-optimized these codes using standard best practices and again applied AutoAnnotate, this time still achieving 32.3× performance improvement for Vitis HLS and 3.1× for Intel HLS. Interestingly, the best overall performance obtained by AutoAnnotate was generally with unoptimized codes.

Read the paper

This paper was presented at ISQED 2024, April 3-5, San Francisco, CA, USA