The Snowmass 2021 Workshop on Quantum Computing for High-Energy Physics aims to bring together HEP scientists currently working on, or interested in, quantum computing applications to HEP, and to showcase state-of-the-art algorithms, data analysis, and simulation applications. The goal of the workshop is to identify areas to pursue as a community, in search of the quantum advantage.
Using Tensor Lattice Field Theory (for a recent review see arXiv:2010.06539),
we construct a gauge-invariant transfer matrix for compact scalar electrodynamics in arbitrary dimensions. We propose a noise-robust way to implement Gauss's law. We discuss quantum simulation experiments with Rydberg atoms where the electric field Hilbert space is approximated by a spin-1 triplet. We propose two different quantum simulators for this model obtained by assembling arrays of Rydberg atoms with ladder structures. We compare observables associated with the real-time evolution of (small) systems for the simulators and the target model. We briefly discuss recent experimental progress.
The Schwinger model exhibits several features which are also present in quantum chromodynamics (QCD), such as confinement and spontaneous chiral symmetry breaking. Using the Schwinger model, I will discuss the real-time dynamics of the string breaking mechanism, non-equilibrium dynamics, and the preparation of thermal states. I will present numerical results using both simulators and quantum devices from IBM.
The interpretation of measurements from high energy collisions at experiments like the Large Hadron Collider (LHC) relies heavily on the performance of full event generators, specifically their accuracy and speed in simulating complex multi-particle final states. With the rapid and continuous improvement in quantum computers, these devices present an exciting opportunity for high energy physics. Dedicated quantum algorithms are needed to exploit the potential that quantum computers can provide. In this talk, I will present general and extendable quantum computing algorithms for the simulation of the parton shower in a high energy collision. The algorithms utilise the quantum nature of the parton shower calculation, and the quantum device’s ability to remain in a quantum state throughout the computation, to efficiently perform the simulation. Furthermore, it will be shown that reframing the parton shower in the quantum walk framework dramatically improves the performance of the parton shower simulation, increasing the number of shower steps that can be simulated, whilst reducing the required Quantum Volume on the device. These algorithms are the first step towards simulating a full and realistic high energy collision event on a quantum computer.
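The quantum walk framework mentioned above can be illustrated with a minimal, purely classical statevector simulation of a coined (Hadamard) walk on a line. This is a generic textbook walk, not the authors' parton-shower construction; the mapping of shower steps onto walk steps is theirs, and the step count here is illustrative.

```python
import math

def walk_step(state):
    """One step of a Hadamard-coined quantum walk on a line.
    state: dict mapping (position, coin) -> complex amplitude."""
    h = 1 / math.sqrt(2)
    new = {}
    for (pos, coin), amp in state.items():
        # Hadamard coin: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2
        for new_coin, factor in ((0, h), (1, h if coin == 0 else -h)):
            # Conditional shift: coin |0> moves left, coin |1> moves right
            new_pos = pos - 1 if new_coin == 0 else pos + 1
            key = (new_pos, new_coin)
            new[key] = new.get(key, 0j) + factor * amp
    return new

state = {(0, 0): 1 + 0j}          # walker at the origin, coin |0>
for _ in range(3):                # three walk steps (illustrative)
    state = walk_step(state)
norm = sum(abs(a) ** 2 for a in state.values())
```

Because the coin and shift are unitary, the total probability `norm` stays 1, and the walker's position distribution spreads ballistically rather than diffusively, which is the source of the performance gain the talk exploits.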
The Standard Model of Particle Physics, encapsulating the vast majority of our understanding of the fundamental nature of our Universe, is at its core a gauge theory. In order to harness the full potential of quantum computers, an efficient implementation of the Hamiltonian of gauge theories on quantum processors is a mandatory first step. This is no simple task due to the redundancies present in any gauge theory, as well as the finite number of degrees of freedom inherent to any simulation. In this talk, I present a novel gauge-redundancy free formulation of U(1) gauge theories that allows for such a resource-efficient implementation. The representation minimally violates the canonical commutation relations while achieving per-mille accuracy in the energies of the low-lying states.
The light-front quantization provides a natural framework for digital quantum simulation of quantum field theory. In our work (2002.04016, 2105.10941), we demonstrated this by developing quantum algorithms based on simulating time evolution and adiabatic state preparation. We discussed various ways of encoding physical states in the quantum computer and provided resource estimates for Yukawa model in 1+1D and for QCD in 3+1D. We explained how recently developed optimal and nearly-optimal oracle-based quantum simulation algorithms can be used for quantum simulation of QFT in the second-quantized formulation. We also discussed the measurement of various static observables. This work laid the basis for our research on near-term simulation algorithms, based on the Variational Quantum Eigensolver and Basis Light-Front Quantization (2011.13443, 2009.07885). Having much in common with ab initio quantum chemistry and nuclear theory, the BLFQ formulation provides an ideal framework for benchmarking NISQ devices and testing existing algorithms on physically relevant problems such as the calculation of hadronic spectra and parton distribution functions.
Oscillating neutrino beams exhibit quantum coherence over distances of thousands of kilometers. Their unambiguously quantum nature suggests an appealing test system for direct quantum simulation. Such techniques may enable presently analytically intractable calculations involving multi-neutrino entanglements, such as collective neutrino oscillations in supernovae, but only once oscillation phenomenology is properly re-expressed in the language of quantum circuits. Here we resolve outstanding conceptual issues regarding encoding of arbitrarily mixed neutrino flavor states in the Hilbert space of an n-qubit quantum computer. We introduce algorithms to encode mixing and oscillation of any number of flavor-mixed neutrinos, both with and without CP-violation, with an efficient number of prescriptive input parameters in terms of sub-rotations of the PMNS matrix in standard form. Examples encoded for an IBM-Q quantum computer are shown to converge to analytic predictions both with and without CP-violation.
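The sub-rotation decomposition of the PMNS matrix referenced above can be sketched numerically. The snippet below builds $U = R_{23}\,U_{13}(\delta)\,R_{12}$ in the standard parameterization with pure Python; the angle values are illustrative stand-ins, not the paper's fit inputs, and the qubit encoding itself is not shown.

```python
import cmath
import math

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def pmns(t12, t13, t23, delta):
    """PMNS matrix in standard form, U = R23 . U13(delta) . R12,
    i.e. the sub-rotations the encoding algorithms take as input."""
    c12, s12 = math.cos(t12), math.sin(t12)
    c13, s13 = math.cos(t13), math.sin(t13)
    c23, s23 = math.cos(t23), math.sin(t23)
    e_m, e_p = cmath.exp(-1j * delta), cmath.exp(1j * delta)
    r23 = [[1, 0, 0], [0, c23, s23], [0, -s23, c23]]
    u13 = [[c13, 0, s13 * e_m], [0, 1, 0], [-s13 * e_p, 0, c13]]
    r12 = [[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]
    return matmul(matmul(r23, u13), r12)

# Illustrative mixing angles and CP phase (radians); not fit values
U = pmns(0.59, 0.15, 0.84, 1.36)
# Unitarity: each flavor row's probabilities sum to 1
row_norms = [sum(abs(x) ** 2 for x in row) for row in U]
```

Unitarity of $U$ is what lets each sub-rotation be realized directly as a quantum gate, with the CP phase $\delta$ entering only through the complex entries of $U_{13}$.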
In the context of high-energy physics, perturbation theory is the most widely used strategy for extracting accurate theoretical predictions. However, higher-order contributions require the evaluation of complicated multi-loop Feynman integrals, which constitute a serious bottleneck in current computational frameworks. In this talk we present the first application of a quantum algorithm to multi-loop Feynman integrals. We introduce an efficient modification of Grover's algorithm to select all causal configurations of internal propagators. Causal configurations arise naturally in the Loop-Tree duality (LTD), and lead to integrand representations that are more stable numerically than the corresponding Feynman representation. Moreover, causal configurations can also be interpreted in graph theory as acyclic directed graphs.
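The selection mechanism can be illustrated with a plain statevector simulation of textbook Grover iterations amplifying a marked subset. This is the unmodified algorithm, not the authors' efficient variant, and the marked indices standing in for causal configurations are hypothetical.

```python
import math

def grover_amplify(n_items, marked, iterations):
    """Statevector simulation of Grover iterations amplifying a marked
    subset (standing in for the causal propagator configurations)."""
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        # Oracle: flip the sign of marked (causal) configurations
        amp = [-a if i in marked else a for i, a in enumerate(amp)]
        # Diffusion operator: reflect every amplitude about the mean
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    return amp

marked = {3, 5}        # hypothetical causal configurations
n = 16                 # toy search space of propagator sign assignments
k = round(math.pi / 4 * math.sqrt(n / len(marked)))  # near-optimal count
amp = grover_amplify(n, marked, k)
p_marked = sum(amp[i] ** 2 for i in marked)
```

After roughly $\frac{\pi}{4}\sqrt{N/t}$ iterations the probability of measuring a causal configuration is close to 1, which is the quadratic speedup over sampling configurations uniformly.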
The search for supersymmetric particles is one of the major goals of the Large Hadron Collider (LHC). Supersymmetric top (stop) searches play a very important role in this respect, but the unprecedented collision rate to be attained at the next high luminosity phase of the LHC poses new challenges for the separation between any new signal and the standard model background. In this talk, I show a novel application of the zoomed quantum annealing machine learning approach to classify the stop signal versus the background, and implement it in a quantum annealer machine. This approach together with the preprocessing of the data with principal component analysis may yield better results than conventional multivariate approaches.
Quantum computers have the potential to speed up certain computational tasks. One possibility this opens up within the field of machine learning is the use of quantum techniques that may be inefficient to simulate classically but could provide superior performance in some tasks. Machine learning algorithms are ubiquitous in particle physics, and as quantum machine learning technology advances there may be a similar adoption of these quantum techniques.
In this work a quantum support vector machine (QSVM) is implemented for signal-background classification of B meson decays. We investigate the effect of different quantum encoding circuits, the process that transforms classical data into a quantum state, on the final classification performance. We show an encoding approach that achieves an average Area Under Receiver Operating Characteristic Curve (AUC) of 0.848 determined using quantum circuit simulations. For this same dataset the best classical method tested, a classical Support Vector Machine (SVM) using the Radial Basis Function (RBF) Kernel achieved an AUC of 0.793. Using a reduced version of the dataset we then ran the algorithm on the IBM Quantum ibmq_casablanca device achieving an average AUC of 0.703. As further improvements to the error rates and availability of quantum computers materialise, they could form a new approach for data analysis in high energy physics.
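The role of the encoding circuit can be made concrete with a toy fidelity kernel. For a one-feature-per-qubit $R_Y$ angle encoding on unentangled qubits, the kernel $|\langle\phi(x)|\phi(y)\rangle|^2$ has the closed form below; real QSVM encodings are entangling circuits evaluated on hardware or simulators, so this product-state version is only a minimal sketch of the idea.

```python
import math

def quantum_kernel(x, y):
    """Fidelity kernel |<phi(x)|phi(y)>|^2 for a toy one-feature-per-qubit
    RY angle encoding. A QSVM feeds such a kernel matrix to a classical
    SVM solver in place of, e.g., the RBF kernel."""
    # RY(t)|0> = cos(t/2)|0> + sin(t/2)|1>, so the per-qubit overlap of
    # two encoded states is cos((x_i - y_i)/2); qubits are unentangled here
    overlap = 1.0
    for xi, yi in zip(x, y):
        overlap *= math.cos((xi - yi) / 2)
    return overlap ** 2

k_same = quantum_kernel([0.3, 1.2], [0.3, 1.2])  # identical points
k_far = quantum_kernel([0.3, 1.2], [0.8, 0.1])   # distinct points
```

Choosing a different encoding circuit changes this kernel, which is why the encoding choice directly drives the AUC differences reported above.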
We study the quantum counterpart of Support Vector Machines, namely Quantum Support Vector Machines (QSVM), and a Quantum Machine Learning (QML) architecture that combines a classical encoder neural network and a Variational Quantum Circuit (VQC) into a single model, a Neural Network Variational Quantum Circuit (NNVQC), for the binary classification of High Energy Physics data. Specifically, we focus on the identification of the Higgs boson in the $t\bar{t}H(b\bar{b})$ channel. Quantum computing approaches can potentially tackle this computationally expensive task by leveraging so-called quantum feature maps to encode classical data into quantum states. Recent proposals based on the kernel trick assume a one-feature-to-one-qubit mapping of the data. The limited number of available qubits on Noisy Intermediate-Scale Quantum (NISQ) devices imposes the need for feature compression on complex datasets. The challenge is to maintain sufficient information to achieve a high classification accuracy while performing an effective reduction.
This contribution assesses the effect of different data compression and dimensionality reduction techniques on quantum machine learning algorithms. We develop five distinct Auto-Encoder architectures, including a Variational and an end-to-end Sinkhorn Auto-Encoder with a classical classification neural network attached to its latent space. The latent spaces produced with optimal hyperparameters and data normalisation were passed to a QSVM that was used to perform the $t\bar{t}H(b\bar{b})$ classification. The QSVM performance is improved for some of the considered Auto-Encoder latent spaces. The classification power of the NNVQC and of its classical counterparts is comparable.
The training and performance of the quantum models are affected by noise inherent to NISQ devices. We investigated the influence of different types of quantum hardware noise and concluded that the tested QML models are suitable for operation on current NISQ devices.
The identification of jets coming from heavy-flavour quarks, namely $b$- and $c$-quarks, is an important and non-trivial task at the LHC experiments. The classification of jets coming from $b$- and $\bar{b}$-quarks at the LHCb experiment makes it possible to perform physics measurements, such as the forward-central charge asymmetry, to constrain the Standard Model predictions and/or find possible signals of New Physics. While Machine Learning algorithms have recently played an important role in exploiting the jet substructure, there is room for improvement in jet identification by exploiting particle correlations. In this paper, we present a new approach to identify the charge of jets produced by $b$-quarks, based on Quantum Machine Learning techniques. Data are embedded in a quantum circuit through a quantum feature map, a training procedure is performed, and the measurements of final state observables are mapped to a binary classification label. The models are trained and evaluated using LHCb Open Data obtained from LHCb detailed simulations, and the tagging performance is compared with the Muon Tagging algorithm used so far at LHCb, as well as with a classical Deep Neural Network model.
Quantum Machine Learning is among the most promising applications for near-term quantum devices, which have the potential to tackle certain problems faster than traditional computers. Classical Machine Learning (ML) is taking up a significant role in particle physics to speed up detector simulations. Generative Adversarial Networks (GANs) have been shown to achieve a level of accuracy similar to Monte Carlo based simulations while decreasing the computation time by orders of magnitude. In this research we go one step further and apply quantum computing to GAN-based detector simulations. Given the practical limitations of current quantum hardware in terms of number of qubits, connectivity, and coherence time, we perform initial tests with a simplified GAN model running on quantum simulators. The model is a classical-quantum hybrid ansatz. It consists of a quantum generator, defined as a parameterised circuit based on single- and two-qubit gates, and a classical discriminator network. Our initial qGAN prototype focuses on a one-dimensional toy distribution, representing the energy deposited in a detector by a single particle. It employs three qubits and achieves high physics accuracy thanks to hyper-parameter optimisation. Furthermore, we study the influence of hardware noise on the qGAN training and inference. A second qGAN is developed to simulate 2D images with a 64-pixel resolution, representing the energy patterns in the detector. Different quantum ansatzes are studied. We obtained the best results using a tree tensor network architecture with six qubits. Additionally, we discuss challenges and potential benefits of quantum computing as well as our plans for future development.
The Large Hadron Collider is a very complex machine providing millions of collisions per second. Simulating events to compare theory and data requires a lot of computing power, in particular for the event generation and the whole analysis toolchain. Machine-learning techniques may provide new avenues to optimize the computing power. This talk presents a novel quantum generator in the context of generative adversarial networks for Monte Carlo event generation that is able to learn the underlying distributions of observables and generate a larger sample out of a smaller training sample. The proposed quantum algorithm has been deployed on real quantum hardware of two different types and shows good results with very shallow circuits, which is of great advantage in the current era of noisy intermediate-scale quantum computers.
CERN has recently started its Quantum Technology Initiative in order to investigate the use of quantum technologies in High Energy Physics (HEP). A three-year roadmap and research programme has been defined in collaboration with the HEP and quantum-technology research communities. In this context, initial pilot projects have been set up at CERN, in collaboration with other HEP institutes worldwide, on Quantum Computing and Quantum Machine Learning in particular. These projects are studying basic prototypes of quantum algorithms, which are being evaluated by LHC experiments for different types of workloads. This talk will provide an overview of recent results obtained by the different studies, including applications in areas ranging from accelerator beam optimization to data analysis and classification.
Currently, the vast amount of data presents a challenge for high-energy physics experiments, and most data must be discarded, keeping only data which passes templated triggers. Since we do not know the form new physics will take, these templated triggers may be excluding interesting events. This problem will only be exacerbated in the future as the size, intensity, and complexity of the apparatus increase. The advent of quantum computing, and specifically quantum random access memory (QRAM), will allow experiments to store exponentially large amounts of data. In this contribution, I will outline current efforts to efficiently implement a QRAM protocol capable of storing $3^{n_{qbit}}$ bits of classical information in qubit spin correlations.
High-energy physics is replete with hard computational problems, and it is one of the areas where quantum computing could be used to speed up calculations. We present an implementation of likelihood-based regularized unfolding on a quantum computer. The inverse problem is recast in terms of quadratic unconstrained binary optimization (QUBO), which has the same form as the Ising Hamiltonian and hence is solvable on a programmable quantum annealer. We tested the method using a model that captures the essence of the problem, and compared the results with a baseline method commonly used in precision measurements at the Large Hadron Collider (LHC) at CERN. The unfolded distribution is in very good agreement with the original one. We also show how the method can be extended to include the effect of nuisance parameters representing sources of systematic uncertainties affecting the measurement.
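The QUBO recasting can be sketched as follows: the regularized unfolding objective $\lVert Rx - d\rVert^2 + \lambda\lVert Lx\rVert^2$ is expanded, after binary encoding of $x$, into a quadratic form $z^{T}Qz$ over bits $z_i \in \{0,1\}$, which an annealer minimizes physically. The toy matrix below uses hypothetical coefficients, and exhaustive enumeration stands in for the annealer.

```python
import itertools

def qubo_energy(Q, z):
    """Energy z^T Q z of a binary configuration z (the annealer objective;
    equivalent to an Ising Hamiltonian up to a change of variables)."""
    n = len(z)
    return sum(Q[i][j] * z[i] * z[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Exhaustive stand-in for the quantum annealer: return the
    lowest-energy bitstring."""
    n = len(Q)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda z: qubo_energy(Q, z))

# Toy 3-bit QUBO (hypothetical coefficients, upper-triangular convention):
# diagonal entries are linear terms, off-diagonals are pairwise couplings
Q = [[-1.0, 0.5, 0.0],
     [0.0, -1.2, 0.6],
     [0.0, 0.0, -0.8]]
best = brute_force_minimum(Q)
```

On real problems the exhaustive search is exponential, which is precisely the step delegated to the annealer; the encoding resolution of $x$ and the regularization strength $\lambda$ both enter through the entries of $Q$.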
Microwave-optical quantum transducers that convert quantum information between microwave and optical frequencies with high fidelity play a crucial role in long-distance quantum networks and promise breakthroughs in quantum sensing. High-efficiency, low-noise quantum transduction at the quantum level remains challenging in current designs and demonstrations. At Fermilab we have developed bulk Nb superconducting cavities with a record-high 2-second photon lifetime (Q=$10^{11}$), which represents a significant improvement over previous efforts. Coupling these SRF cavities with nonlinear optical resonators would herald a powerful quantum internet network. We are exploring coherent resonance hybrid systems and bi-directional quantum technology to up/down-convert the information to/from the optical regime. These novel quantum systems with very low parasitic losses are designed to achieve an optimal overlap of optical and microwave fields and maximize conversion efficiency at very low $mK$ temperatures, as well as high fidelity in quantum state transfer.
This work generalizes the quantum amplitude amplification (Grover’s) and amplitude estimation algorithms to work with non-Boolean oracles, leading to two new algorithms. Unlike Boolean oracles, the eigenvalues of a non-Boolean oracle are not restricted to be ±1. 1) The non-Boolean amplitude amplification algorithm preferentially amplifies the amplitudes of the eigenstates based on a given objective function. 2) The quantum mean estimation algorithm estimates the expected value of a unitary operator under a given state with a quadratic speedup over the corresponding classical algorithm. 3) I will briefly discuss how these algorithms allow for training quantum neural networks in an inherently quantum manner.
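The quantity targeted by the mean estimation algorithm can be written down exactly for a diagonal non-Boolean oracle $U|i\rangle = e^{i\theta_i}|i\rangle$: it is $\langle\psi|U|\psi\rangle = \sum_i |a_i|^2 e^{i\theta_i}$. The snippet below computes this target value classically for hypothetical eigenphases; the quantum algorithm's contribution is estimating it with quadratically fewer oracle calls than classical sampling.

```python
import cmath
import math

def oracle_expectation(amplitudes, thetas):
    """Exact value the quantum mean estimation algorithm targets:
    <psi|U|psi> for a diagonal oracle U|i> = exp(i*theta_i)|i>.
    The Boolean case is recovered when every theta is 0 or pi
    (eigenvalues +1/-1)."""
    return sum(abs(a) ** 2 * cmath.exp(1j * t)
               for a, t in zip(amplitudes, thetas))

# Uniform 4-state superposition with hypothetical eigenphases
amps = [0.5] * 4
thetas = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
mean = oracle_expectation(amps, thetas)
```

Because the eigenphases here are spread uniformly around the circle, the expectation vanishes; an objective function that favored some eigenstates would tilt this mean, which is what the non-Boolean amplification algorithm exploits.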
I provide explicit circuits implementing the Kitaev–Webb algorithm for the preparation of multi-dimensional Gaussian states on quantum computers. While asymptotically efficient due to its polynomial scaling, I show that circuits implementing the preparation of one-dimensional Gaussian states and those subsequently entangling them to reproduce the required covariance matrix
differ substantially in terms of both the gates and ancillae required. The operations required for the preparation of one-dimensional Gaussians are sufficiently involved that generic exponentially-scaling state-preparation algorithms are likely to be preferred in the near term for many states of interest. Conversely, polynomial-resource algorithms for implementing multi-dimensional rotations quickly become more efficient for all but the very smallest states, and their deployment will be a key part of any direct multidimensional state preparation method in the future.
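The target of either preparation route (Kitaev-Webb or a generic exponential-cost routine) is the same: a state whose computational-basis amplitudes trace a discretized Gaussian. The sketch below computes those target amplitudes for a 1D Gaussian on a $2^n$-point grid; the qubit count, mean, and width are illustrative only.

```python
import math

def gaussian_amplitudes(n_qubits, mu, sigma):
    """Normalized amplitudes of a 1D Gaussian state on a 2^n-point grid,
    i.e. the state a preparation circuit is asked to produce. Probabilities
    |a_x|^2 follow exp(-(x - mu)^2 / (2 sigma^2)), so the amplitudes carry
    half that exponent."""
    n = 2 ** n_qubits
    raw = [math.exp(-((x - mu) ** 2) / (4 * sigma ** 2)) for x in range(n)]
    norm = math.sqrt(sum(a * a for a in raw))
    return [a / norm for a in raw]

# Illustrative parameters: 4 qubits, Gaussian centered on the grid
amps = gaussian_amplitudes(4, mu=7.5, sigma=2.0)
total = sum(a * a for a in amps)
```

A generic state-preparation routine loads these $2^n$ amplitudes directly at exponential gate cost, while Kitaev-Webb builds them with polynomially many operations, which is exactly the trade-off the abstract quantifies.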
Quantum Technologies, and especially Quantum Computing, are strongly evolving fields in HEP. There are several European initiatives, for example the CERN QTI (Quantum Technology Initiative), as well as initiatives at the national level. In Germany, DESY, in its role as the German national HEP hub, established a Quantum Technology Task Force about one year ago. This Task Force is in the process of building "DESY Quantum", an institutional organization of the various Quantum Technology activities at DESY, which comprises Quantum Computing, Quantum Sensing, as well as Quantum Materials and tailor-made Quantum Devices with DESY's brilliant photon sources. Following the pioneering in-depth work in Quantum Computing in HEP Theory, the presentation will report on DESY's activities in lattice gauge calculations, error mitigation algorithms, as well as methods to analyze and optimize quantum gate expressivity.
To investigate the fundamental nature of matter and its interactions, particles are accelerated to very high energies and collided inside detectors, producing a multitude of other particles that are scattered in all directions. As charged particles traverse the detector, they leave signals of their passage. The problem of track reconstruction is to recover the original trajectories from these signals. This challenging data analysis task will become even more demanding as the luminosity of future accelerators increases, leading to collision events with a more complex structure. We identify four fundamental routines present in every local tracking method and analyse how they scale in the context of a standard tracking algorithm. We show that for some of these routines we can reach a lower computational complexity with quantum search algorithms. To the best of our knowledge, this constitutes the first theoretical proof of a quantum advantage for a state-of-the-art track reconstruction method.