Nov 18 – 22, 2024
America/New_York timezone

Empowering AI Implementation: The Versatile SLAC Neural Network Library (SNL) for FPGA

Nov 21, 2024, 1:45 PM
15m
262A (Student Union)

Parallel Presentation: RDC 05 - Trigger and DAQ Parallel Session

Speaker

Abhilasha Dave (SLAC National Lab)

Description

SLAC has developed a library-based framework that enables the deployment of machine learning (ML) models on Field Programmable Gate Arrays (FPGAs) located at the edge of the data chain, near the instrumentation. The SLAC Neural Network Library (SNL) utilizes Xilinx's High-Level Synthesis (HLS) and offers an API inspired by Keras for TensorFlow. By adopting a streaming data approach, SNL optimizes the data flow between neural network layers, minimizing the need for buffering and achieving the high frame rates and low latency critical for real-time applications in experimental environments.
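To make the streaming idea concrete, below is a minimal Vitis HLS C++ sketch of two layers connected by a shallow FIFO inside a DATAFLOW region, so activations flow layer to layer without full-frame buffering. All names (data_t, dense_stream, relu_stream, net_top), sizes, and pragma choices are illustrative assumptions and do not reproduce SNL's actual API.

#include <hls_stream.h>
#include <ap_fixed.h>

// Quantized fixed-point activation type (assumed width).
typedef ap_fixed<16, 6> data_t;

const int N_IN  = 16;
const int N_OUT = 8;

// Fully connected layer: reads one input frame from the stream and
// emits one output frame; weights and biases live in on-chip memory.
void dense_stream(hls::stream<data_t> &in, hls::stream<data_t> &out,
                  const data_t w[N_OUT][N_IN], const data_t b[N_OUT]) {
    data_t x[N_IN];
    for (int i = 0; i < N_IN; i++)
        x[i] = in.read();
    for (int o = 0; o < N_OUT; o++) {
#pragma HLS PIPELINE II=1
        data_t acc = b[o];
        for (int i = 0; i < N_IN; i++)
            acc += w[o][i] * x[i];
        out.write(acc);
    }
}

// ReLU operates element by element directly on the stream: no buffering.
void relu_stream(hls::stream<data_t> &in, hls::stream<data_t> &out) {
    for (int i = 0; i < N_OUT; i++) {
#pragma HLS PIPELINE II=1
        data_t v = in.read();
        out.write(v > data_t(0) ? v : data_t(0));
    }
}

// Top level: DATAFLOW lets both layers run concurrently, with activations
// passed through a shallow FIFO instead of a full-frame buffer.
void net_top(hls::stream<data_t> &in, hls::stream<data_t> &out,
             const data_t w[N_OUT][N_IN], const data_t b[N_OUT]) {
#pragma HLS DATAFLOW
    hls::stream<data_t> mid("mid");
#pragma HLS STREAM variable=mid depth=4
    dense_stream(in, mid, w, b);
    relu_stream(mid, out);
}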
A key feature of SNL is its ability to reload neural network weights and biases after training without re-synthesis. This allows rapid updates to deployed models, enhancing adaptability and performance in dynamic environments. Additionally, the framework supports network quantization, which optimizes the use of FPGA digital signal processing (DSP) and memory resources, crucial for maximizing efficiency in resource-constrained edge computing scenarios.
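One common HLS pattern for reloading parameters without re-synthesis is to map the weight and bias arrays to an AXI-Lite address space that the host rewrites after each training run; the abstract does not state that SNL uses exactly this mechanism, so the sketch below, including all names and sizes, should be read as a hypothetical illustration. The narrow ap_fixed type also shows how quantization trims DSP and memory usage.

#include <hls_stream.h>
#include <ap_fixed.h>

// Quantized fixed-point type: narrower widths use fewer DSPs and less memory.
typedef ap_fixed<16, 6> data_t;

const int N_IN  = 16;
const int N_OUT = 8;

void net_top(hls::stream<data_t> &in, hls::stream<data_t> &out,
             data_t w[N_OUT][N_IN], data_t b[N_OUT]) {
    // Streaming I/O; parameters sit behind AXI-Lite so the host can write
    // new weights and biases at runtime while the datapath stays fixed.
#pragma HLS INTERFACE axis      port=in
#pragma HLS INTERFACE axis      port=out
#pragma HLS INTERFACE s_axilite port=w      bundle=params
#pragma HLS INTERFACE s_axilite port=b      bundle=params
#pragma HLS INTERFACE s_axilite port=return bundle=params

    data_t x[N_IN];
    for (int i = 0; i < N_IN; i++)
        x[i] = in.read();

    for (int o = 0; o < N_OUT; o++) {
#pragma HLS PIPELINE II=1
        data_t acc = b[o];
        for (int i = 0; i < N_IN; i++)
            acc += w[o][i] * x[i];
        out.write(acc);
    }
}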

Primary authors

Abhilasha Dave (SLAC National Lab)
J.J. Russell (SLAC National Accelerator Laboratory (US))
Ryan Herbst (SLAC National Accelerator Laboratory (US))

Presentation materials