Workshop IX on Streaming Readout


Welcome to Workshop IX on Streaming Readout

The workshop will be held online from December 8-10, 2021

The purpose of this workshop is to bring together researchers interested in detector design, electronics, data acquisition, and analysis to present and discuss ways to realize a streaming readout system for future nuclear and particle physics experiments.

Streaming readout leverages advances in electronics and computing to read out all detector signals continuously, without the hardware "trigger" common in traditional readout schemes.  Detector signal processing, zero suppression, and extraction of signal information (amplitude, timing, integrated charge, etc.) are performed in parallel on the data streams using ASICs or FPGAs, with unbiased event selection and reconstruction downstream on clusters of CPUs or GPUs.
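As an illustration of the front-end processing described above, the sketch below applies threshold-based zero suppression to a digitized waveform and extracts per-pulse amplitude, timing, and integrated charge. It is a minimal software model of what an ASIC or FPGA might do on the live stream; the threshold, baseline, and waveform values are invented for the example.

```python
def extract_pulses(samples, threshold, baseline=0):
    """Return (start_index, amplitude, integral) for each pulse above threshold."""
    pulses = []
    i = 0
    n = len(samples)
    while i < n:
        if samples[i] - baseline > threshold:
            start = i          # timing: first sample over threshold
            peak = 0           # amplitude: maximum sample in the pulse
            integral = 0       # integrated charge: sum over the pulse
            while i < n and samples[i] - baseline > threshold:
                val = samples[i] - baseline
                peak = max(peak, val)
                integral += val
                i += 1
            pulses.append((start, peak, integral))
        else:
            i += 1  # below threshold: sample is suppressed at the front end
    return pulses

waveform = [0, 1, 0, 2, 8, 15, 9, 3, 0, 0, 1, 6, 7, 2, 0]
print(extract_pulses(waveform, threshold=2))
# -> [(4, 15, 35), (11, 7, 13)]
```

Only the extracted pulse features travel downstream; the suppressed samples never leave the front end, which is what makes continuous readout of every channel tractable.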

Almost all future experiments, particularly at the EIC, will use streaming readout in some form, and many existing experiments are converting to a streaming readout paradigm.  This workshop will present current work and plans, illustrate existing designs and available hardware, and discuss signal processing, data formats, software libraries, and networking. The goal is a common streaming readout framework that can be used by many experiments.

Everyone is welcome to attend, listen, and participate in the discussion, but we are especially interested in the work you are doing in this area and invite you to give a brief (20-minute) presentation on your detector, expected data rate, and readout strategy.  Please submit an abstract using the link to the left, or contact us if you would be willing to give a presentation.

7 Dec 2021: Registration for the workshop is now closed. Everyone registered for the workshop should have received an email with the Zoom link for the sessions. If you registered and have not received this email, please contact us directly via email.


A read-only version of the live-notes Google Doc compiled throughout the workshop can be found here.

Wednesday Recordings (passcode: 1=Hc57AD)

Thursday Recordings (passcode: j*#a?c4+)

Friday Recordings (passcode: 29BG3@b?)

Here is a link to the previous workshop, held at MIT in April of this year.


  • Alberto Lucchesi
  • Alexandre Camsonne
  • Ana Gainaru
  • Anders Pedersen
  • Apar Agarwal
  • Aristeidis Fkiaras
  • Austin Schmier
  • Ayman Al-bataineh
  • Balint Joo
  • Benjamin Mintz
  • Benjamin Moritz Veit
  • Brooke Russell
  • Cameron Dean
  • Carl Timmer
  • Carl Zorn
  • Carlo Tintori
  • Chris Cuevas
  • Christopher Crawford
  • Dalius Baranauskas
  • Damien Neyret
  • Daniel Tapia Takaki
  • David Abbott
  • David Emschermann
  • David Lawrence
  • David Rohr
  • Deepak Samuel
  • Dmitry Romanov
  • Dorothea vom Bruch
  • Douglas Hasell
  • Dylan Rankin
  • Ed Jastrzembski
  • Eduard Atkin
  • Eli Dart
  • Elke-Caroline Aschenauer
  • Enrico Gamberini
  • Eric Pouyoul
  • Esko Mikkola
  • Ethan Cline
  • Evgeny Shulga
  • Filippo Costa
  • Flavio Pisani
  • Florian Grötschla
  • Greg Kibilko
  • Gustav R. Jansen
  • Hamlet Mkrtchyan
  • Hao Xu
  • Hao-Ren Jheng
  • Isar Mostafanezhad
  • Ivica Friščić
  • Jacopo Pazzini
  • Jan Bernauer
  • Jeff Landgraf
  • Jin Huang
  • Joachim Schambach
  • Joe Osborn
  • John Comish
  • John Lajoie
  • Kenneth Read
  • Kevin Flood
  • Kostas Alexopoulos
  • Laura Cappelli
  • Leo Greiner
  • Lynn Wood
  • Marcel Demarteau
  • Marco Battaglieri
  • Marco Boretto
  • Marco Locatelli
  • Mariangela Bondi
  • Mario Cromaz
  • Markus Diefenthaler
  • Martin Purschke
  • Martin Zemko
  • Matteo Lupi
  • Michael Goodrich
  • Ming Liu
  • Miroslav Finger
  • Niko Neufeld
  • Noah Oblath
  • Ola Groettvik
  • Paola Garosi
  • Patrick Moran
  • Rick Archibald
  • Robert Varner
  • Roland Sipos
  • Sergey Furletov
  • Stefano Levorato
  • Tea Bodova
  • Thomas Cormier
  • Tommaso Colombo
  • Torre Wenaus
  • Vardan Gyurjyan
  • Vitaly Shumikhin
  • Volodymyr Aushev
  • William Gu
  • Yaping Wang
  • Yasser Corrales Morales
  • Yatish Kumar
  • Yihui Ren
  • Yuri Venturini
  • Zhenyu Ye
    • 09:00 - 12:00
      Detector Status Updates 1
      Convener: Joachim Schambach (Oak Ridge National Laboratory)
      • 09:00
        Welcome 15m
        Speaker: Dr Joachim Schambach (Oak Ridge National Laboratory)
      • 09:15
        ALICE Streaming Readout 25m
        Speaker: Filippo Costa (CERN)
      • 09:45
        LHCb: Trigger-less Readout at 40 MHz 25m
        Speakers: Aristeidis Fkiaras (CERN), Niko Neufeld (CERN)
      • 10:15
        The readout system of the DUNE experiment 25m

        The Deep Underground Neutrino Experiment (DUNE) is a neutrino experiment under construction, with a near detector at Fermilab and a far detector at the Sanford Underground Research Facility that will observe neutrinos produced at Fermilab. The far detector electronics stream out ADC data continuously at 2 MHz; because the signals can be very small, no zero suppression is applied. The new technical design of the experiment's DAQ system differs substantially from its prototypes. Its interfaces with the front-end electronics rely on high-speed I/O cards hosted in commodity servers. Data is received over DMA into memory buffers dedicated to the devices, from where it is serialized and moved to a parallel processing pipeline that implements data-driven algorithms such as hit finding, calibration, and error handling. The data is then stored in high-performance software buffers. A parallelized request-response domain answers data requests based on unique identifiers (e.g. timestamps) in the front-end frames; the latency-buffer lookup mechanism copies out the requested data and forms responses with additional metadata. The experiment must also be capable of persisting incoming data, upon a specific request, for up to 100 seconds at a throughput of 1.5 TB/s, for an aggregate size of 150 TB. The modular nature of the apparatus allows the problem to be split into 150 identical units operating in parallel, each at 10 GB/s. These numbers correspond to one detector super module; two are planned for initial construction, and four will form the final configuration of the experiment. To meet the performance requirements of such a system, a generic, modular, and scalable readout system was designed and developed.

        Speaker: Roland Sipos (CERN)
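The latency-buffer lookup described in the abstract above can be pictured with a short sketch: frames are held in a bounded, timestamp-indexed buffer, and a request-response path copies out the frames in a requested time window and wraps them with metadata. The class name, capacity, and field names here are invented for illustration, not part of the actual DUNE software.

```python
from collections import OrderedDict

class LatencyBuffer:
    """Toy timestamp-indexed latency buffer with request-response lookup."""

    def __init__(self, capacity):
        self.capacity = capacity      # max frames held before eviction
        self.frames = OrderedDict()   # timestamp -> payload, insertion-ordered

    def push(self, timestamp, payload):
        self.frames[timestamp] = payload
        if len(self.frames) > self.capacity:
            self.frames.popitem(last=False)  # evict the oldest frame

    def request(self, t_begin, t_end, request_id):
        """Copy out frames in [t_begin, t_end) and add response metadata."""
        data = [(t, p) for t, p in self.frames.items() if t_begin <= t < t_end]
        return {"request_id": request_id, "n_frames": len(data), "frames": data}

buf = LatencyBuffer(capacity=4)
for ts in (100, 101, 102, 103, 104):
    buf.push(ts, b"adc")              # frame 100 is evicted by frame 104
resp = buf.request(101, 104, request_id=7)
print(resp["n_frames"])
# -> 3  (frames 101, 102, 103)
```

The key property is that lookups copy data out rather than removing it, so the buffer keeps streaming while multiple requests are served.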
      • 10:45
        Virtual Coffee Break 15m
      • 11:00
        A novel continuous readout for the NA62 data acquisition system 25m
        Speaker: Marco Boretto (CERN)
      • 11:30
        Free-running data acquisition system for the AMBER experiment 25m

        Triggered data acquisition systems provide only limited triggering possibilities. In our paper, we propose a novel approach that completely removes the hardware trigger and its logic, introducing instead an innovative free-running mode that opens unprecedented possibilities for physics experiments. We present such a system, which is being developed for the AMBER experiment at CERN. It is based on an intelligent data acquisition framework comprising FPGA modules and advanced software processing. The triggerless mode allows more time for data filtering and for the implementation of more complex algorithms, and the system utilises a custom data protocol optimized for the needs of free-running operation. The filtering procedure takes place in a server farm playing the role of the high-level trigger. For this purpose, we introduce a high-performance filtering framework providing optimized algorithms and load balancing to cope with excessive data rates. This paper also describes the filter pipeline as well as the simulation chain used to produce artificial data for testing and validation.

        Speaker: Mr Martin Zemko (CERN)
    • 12:00 - 12:15
      Group Photo 15m
    • 12:15 - 13:30
      Lunch 1h 15m
    • 13:30 - 17:05
      Detector Status Updates 2
      Convener: Dr Douglas Hasell (MIT)
      • 13:30
        The readout system of the CBM experiment 25m
        Speaker: Dr David Emschermann (GSI)
      • 14:00
        CLAS12 Streaming Readout 25m
        Speaker: Patrick Moran (MIT)
      • 14:30
        Streaming Mode Data Acquisition and Data Processing at Jefferson Lab 25m
        Speaker: Vardan Gyurjyan
      • 15:00
        Virtual Coffee Break 30m
      • 15:30
        Simple and Scalable Streaming: The GRETA Data Pipeline 25m
        Speaker: Mario Cromaz (LBNL)
      • 16:00
        sPHENIX Streaming Readout 25m
        Speaker: Martin Purschke (BNL)
      • 16:30
        Discussion on Detector Updates 30m
        Speaker: Dr Douglas Hasell (MIT)
    • 09:30 - 15:30
      DAQ, Future & Test Plans: Data Acquisition, Future & Test Plans
      Conveners: David Abbott (Jefferson Lab), Jin Huang (Brookhaven National Lab)
      • 09:30
        TriDAS SRO Framework 25m

        The Trigger and Data Acquisition System, or TriDAS for short, is triggerless streaming readout software developed initially for the NEMO neutrino telescope project. Thanks to its scalability and modularity, it has since been adapted to collect data from different phases of the project and from other detectors. In summer 2020, TriDAS was used to implement a prototype streaming readout data-taking with the CLAS12 detector at JLab. A new TriDAS version is now under development for integration into the ERSAP microservices architecture; this implementation supports preliminary studies toward streaming readout for the EIC project.

        Speaker: Ms Laura Cappelli (INFN - CNAF)
      • 10:00
        Scalable Online Processing for Trigger-less DAQ 25m
        Speaker: Jacopo Pazzini
      • 10:30
        EJFAT - A Joint ESnet / JLAB prototype load balancer for large scale DAQ processing 25m

        JLab and ESnet have jointly identified a need for terabit-scale processing of DAQ data from new large-scale accelerator facilities. Most DOE instruments are evolving to produce hundreds of gigabits to many terabits of raw data, which must be processed in hundreds of hardware-accelerated, DSP-equipped servers for event extraction and data-set recording. In this talk we describe the implications for the IP-based protocols used to transport this data, as well as a trigger- and event-aware load balancer that can sort data with nanosecond trigger granularity and direct it to a flexible cloud of processing elements. The load balancer can be implemented in hardware with a combination of FPGAs and terabit data-center top-of-rack switches.

        Speaker: Yatish Kumar (ESnet)
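The essential idea of an event-aware load balancer like the one described above can be sketched in a few lines: fragments carrying the same event timestamp must land on the same processing node, so routing is derived deterministically from the timestamp rather than round-robining over packets. The epoch size and node names below are invented for illustration; the real EJFAT design implements this in FPGAs and switches, not Python.

```python
NODES = ["proc-0", "proc-1", "proc-2"]   # hypothetical processing farm
EPOCH_NS = 1000                          # group timestamps into fixed event windows

def route(timestamp_ns):
    """Map an event timestamp to a processing node deterministically."""
    event_id = timestamp_ns // EPOCH_NS  # all fragments of an event share this id
    return NODES[event_id % len(NODES)]

# Fragments of one event (same window) always reach the same node:
assert route(2500) == route(2999)

# Consecutive events spread across the farm:
print([route(t) for t in (0, 1000, 2000, 3000)])
# -> ['proc-0', 'proc-1', 'proc-2', 'proc-0']
```

Because the mapping is a pure function of the timestamp, any balancer instance (or hardware pipeline stage) computes the same route with no shared state, which is what makes the scheme amenable to an FPGA/switch implementation.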
      • 11:00
        Break 15m
      • 11:15
        Development of FELIX for DAQ System for Nuclear and Particle Physics experiments 25m
        Speaker: Hao Xu (Brookhaven National Laboratory)
      • 11:45
        Machine Learning for HF identification in sPHENIX and EIC Streaming Readout 30m
        Speaker: Cameron Dean (Los Alamos National Laboratory)
      • 12:20
        Lunch 1h 10m
      • 13:30
        INTERSECT 25m
        Speaker: Ben Mintz (ORNL)
      • 14:00
        Real-time Machine Learning at BNL CSI 25m
        Speaker: Yihui Ren (Brookhaven National Laboratory)
      • 14:30
        Discussion: challenges and opportunities in high noise/background mitigation 30m
        Speakers: Jan Bernauer (Stony Brook University), Jin Huang (Brookhaven National Lab)
      • 15:00
        Closeout 5m
        Speaker: Joe Osborn (ORNL)