Candidate: Gurshaant Singh Malik
Date: November 27, 2024
Time: 2:00 PM
Location: Online - contact the candidate for more information.
Supervisor: Kapre, Nachiket
All are welcome!
Abstract:
We can exploit FPGA configurability and maximize the performance of communication-intensive FPGA applications by designing customized Networks-on-Chip (NoCs) using Machine Learning (ML). As transistor density growth stalls, NoCs play an increasingly critical role in the deployment of FPGA applications for modern-day use cases. Unlike ASICs, FPGA configurability allows the design of application-aware NoCs that can outperform statically configured NoCs in both performance and efficiency. The conventional NoC design process typically centers on universally sound, one-size-fits-all design decisions and does not take the underlying application into account. In contrast, we present application-aware designs that learn their NoC parameters by casting the NoC design space as a function of application performance using ML algorithms. Through this work, we observe that application-aware NoCs designed using ML algorithms such as MLE and CMA-ES can decrease routing latency by 2.5--10.2x, increase workload feasibility by 2--3x, and increase injection rates by up to 3.1x. By leveraging GNNs trained using supervised learning, we can accelerate the design time of such NoCs by up to 4.3x.
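To give a flavour of the approach described above, the following is a minimal sketch of tuning NoC parameters with CMA-ES using the pycma library. The simulate_noc() cost function, the chosen parameters (virtual channels, buffer depth, link width), and the toy latency model are illustrative assumptions only; they are not the candidate's actual framework or simulator.

```python
# Hedged sketch: application-aware NoC parameter search with CMA-ES (pycma).
# simulate_noc() is a hypothetical placeholder cost function.
import cma
import numpy as np

def simulate_noc(params):
    """Return a surrogate routing-latency cost (cycles) for one candidate
    NoC configuration. In practice this would call an FPGA NoC simulator
    or performance model driven by the target application's traffic."""
    vc_count, buffer_depth, link_width = np.clip(np.round(params), 1, None)
    # Toy model so the sketch runs end to end: more VCs and deeper buffers
    # reduce latency, wider links add a small cost.
    return 100.0 / (vc_count * buffer_depth) + 0.5 * link_width

# Start from a generic one-size-fits-all configuration and let CMA-ES
# search for an application-aware one by minimizing the latency cost.
es = cma.CMAEvolutionStrategy(x0=[2, 4, 32], sigma0=2.0)
while not es.stop():
    candidates = es.ask()                        # sample NoC configurations
    es.tell(candidates, [simulate_noc(c) for c in candidates])

best = np.round(es.result.xbest)
print("Learned NoC parameters (VCs, buffer depth, link width):", best)
```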