The problem: Experiments involving elementary particle collisions in the Large Hadron Collider require a finely tuned and orchestrated system of particle detectors and algorithms to collect information about events, which is essential for obtaining high-quality results. A failure in any link of this system can cast doubt on the validity of the data, forcing the experiment to be repeated at enormous additional expense.
The solution: Scientists have proposed a novel AI-based approach to optimise the system for running the experiments. Rather than tuning each of its elements separately, they propose optimising all design parameters simultaneously. This will ensure maximum accuracy of the data, making it ready for immediate analysis and, consequently, reducing the costs of the experiments.
An international collaboration including HSE Faculty of Computer Science researchers Alexey Boldyrev, Fedor Ratnikov, and Denis Derkach has proposed a new approach to designing detectors used for experiments in elementary particle physics. The novel approach, which involves differentiable programming and deep neural networks, will optimise the instruments’ performance and enhance the scientific value of the experimental results. The study findings have been published in Reviews in Physics.
Contemporary elementary particle physics heavily depends on experiments conducted with particle accelerators to gain fresh insights into the laws of nature and to refine the values of fundamental constants. In these experiments, particles such as protons, electrons, and ions are accelerated to tremendous speeds and energies before being collided with one another. A specialised system of detectors records the outcomes of these collisions, and the resulting data is then processed and analysed.
The primary challenge for physics today is to finalise the Standard Model, the modern theory describing the structure and interactions of elementary particles. In the future, this endeavour is expected to contribute to the Theory of Everything, which aims to reconcile Albert Einstein's General Theory of Relativity (which explains the behaviour of stars and galaxies and the structure of space and time) with quantum mechanics (which describes the behaviour of elementary particles at subatomic scales).
Particle accelerators are valuable tools that enable researchers to replicate in experimental settings the events which occurred in the early stages of the universe. For example, the Large Hadron Collider (LHC) allows scientists to observe particle interactions with remarkable precision. In the future, they will help researchers comprehend the full scope of interactions among matter, space, and energy.
HSE physicists and computer scientists work mainly with the detectors of the Large Hadron Collider beauty experiment (LHCb), which are specifically designed for investigating the physics of beauty quarks (b-quarks). In doing so, the scientists are exploring physics beyond the Standard Model, potentially paving the way towards discovering the Theory of Everything.
The LHCb employs several types of detectors, including tracking detectors (among them a vertex locator), ring-imaging Cherenkov detectors, electromagnetic and hadronic calorimeters, and muon chambers.
The costs of the LHCb detector system and experiments are estimated to be approximately $84 million.
Researchers face resource constraints, and an experiment's outcome is objectively determined by the weakest and least precise element within the system. Individual software and data processing algorithms have been developed for each detector, and it is crucial to effectively coordinate the setup and optimisation of all detectors used for the experiment, alongside the corresponding data-processing algorithms. Every component of the system must maintain a consistently high level of accuracy and sensitivity; any deficiency in the performance of a single detector can undermine the efforts invested in optimising the others.
The requirements for detectors, algorithms, software, and hardware are established beforehand, drawing from the experience and expertise of the scientists who design the experiment. Subsequently, digital models are constructed to simulate the properties of individual detectors, and these models are integrated into a unified digital representation of the experiment, making it possible to assess the expected accuracy of the outcomes under the specified conditions.
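As a rough illustration of this traditional workflow (not taken from the paper), the Python sketch below combines two invented sub-detector models into a single 'virtual experiment' and evaluates one possible accuracy figure for a fixed design; the detector models, resolution values, and figure of merit are all hypothetical.

```python
# Illustrative sketch only: hypothetical per-detector digital models combined into a
# single "virtual experiment" whose expected accuracy is evaluated for one fixed design.
# All function names, resolutions, and the figure of merit are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def simulate_tracker(true_momentum, resolution=0.01):
    """Toy tracker model: smears the true momentum by a relative resolution."""
    return true_momentum * (1.0 + resolution * rng.standard_normal(true_momentum.shape))

def simulate_calorimeter(true_energy, stochastic_term=0.10):
    """Toy calorimeter model: energy smearing that grows like sqrt(E)."""
    sigma = stochastic_term * np.sqrt(true_energy)
    return true_energy + sigma * rng.standard_normal(true_energy.shape)

def expected_accuracy(n_events=100_000):
    """Combine the sub-detector models and report a single quality figure."""
    p_true = rng.uniform(5.0, 100.0, n_events)   # toy particle spectrum, GeV
    p_meas = simulate_tracker(p_true)
    e_meas = simulate_calorimeter(p_true)        # treat momentum ~ energy in this toy
    combined = 0.5 * (p_meas + e_meas)           # one possible combined estimate
    return np.std((combined - p_true) / p_true)  # relative resolution as figure of merit

print(f"expected relative resolution: {expected_accuracy():.4f}")
```

In this scheme, changing the design means editing the fixed numbers by hand and re-running the whole simulation for each candidate configuration.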
While this approach generally works, it comes with a notable drawback: although the expected quality of the outcome can be assessed for a specific detector setup, identifying which changes to this intricate system would actually enhance that outcome remains a challenge.
'Particle interaction with matter is fundamentally a probabilistic process governed by the laws of the quantum world. To address uncertainties, one needs a substantial number of iterations in interaction modelling, and modelling even a single configuration of a complex detector involves costly and extensive computations. Thus, an alternative approach is required to determine the optimal detector characteristics,' explains Alexey Boldyrev, Research Fellow at the HSE Laboratory of Methods for Big Data Analysis and participant in the LHCb (CERN) and MODE collaborations.
For a straightforward experiment, the optimisation problem is resolved by exhaustively testing every possible combination of physical configurations and algorithms. However, for a complex setup, it would involve sifting through numerous options for various attributes of both the detectors and the software, and thus require an excessive amount of computing resources. Effective automation is therefore essential for choosing the optimal configuration.
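A back-of-the-envelope sketch, with entirely hypothetical numbers, illustrates how quickly the brute-force route becomes unaffordable:

```python
# Back-of-the-envelope illustration with hypothetical numbers: the cost of exhaustively
# scanning a design grid versus the far smaller number of evaluations a gradient-based
# search typically needs.

n_parameters = 12            # hypothetical number of tunable design parameters
values_per_parameter = 10    # hypothetical grid points per parameter
hours_per_simulation = 2.0   # hypothetical cost of one full detector simulation

configurations = float(values_per_parameter ** n_parameters)
print(f"grid configurations: {configurations:.1e}")                       # 1.0e+12
print(f"CPU-hours for a full scan: {configurations * hours_per_simulation:.1e}")
# A gradient-based optimiser, by contrast, needs on the order of hundreds or thousands
# of objective evaluations, largely independent of how many parameters are tuned jointly.
```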
The complexity and interconnectedness of computational algorithms for data processing also necessitate a holistic consideration of the entire system. Furthermore, the very architecture of contemporary algorithms imposes certain requirements for detector configurations, rendering the traditional approach of building software add-ons for pre-optimised detectors suboptimal. It is essential to optimise the algorithmic support for experimental data collection and processing concurrently with optimising the detector parameters.
MODE (Machine-learning Optimised Design of Experiments), a collaboration of scientists that includes HSE researchers, has proposed a comprehensive approach to the challenge of system optimisation. Within this framework, scientists seek optimal configurations for each detector component using deep neural networks and differentiable programming, enabling simultaneous optimisation of all experimental parameters. Rather than adjusting each element independently, the approach orchestrates all components of the experimental system in a coordinated way, ensuring maximum precision and data readiness for immediate analysis.
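The sketch below illustrates the general idea in JAX with deliberately simplistic, fully differentiable stand-ins for the subsystems; it is not the MODE collaboration's code, and the parameter names and toy resolution formulas are invented. The point is that a single gradient call returns the sensitivity of the overall physics objective to every design parameter at once, which is what makes coordinated, simultaneous optimisation possible.

```python
# Minimal JAX sketch of the idea (not the MODE collaboration's actual code): every
# subsystem is written as a differentiable function of the design parameters, so one
# gradient call yields the sensitivity of the final physics objective to all design
# choices at once. The "subsystems" below are deliberately simplistic toys.

import jax
import jax.numpy as jnp

def tracker_resolution(design):
    # Toy model: more layers improve resolution, with diminishing returns.
    return 0.005 + 0.02 * jnp.exp(-design["layers"] / 4.0)

def calorimeter_resolution(design):
    # Toy model: finer granularity improves the stochastic term.
    return 0.10 / jnp.sqrt(design["granularity"])

def physics_objective(design):
    # Combined figure of merit for the whole experiment (smaller is better),
    # including a crude cost penalty for more complex hardware.
    sigma_trk = tracker_resolution(design)
    sigma_cal = calorimeter_resolution(design)
    cost_penalty = 1e-3 * (design["layers"] + design["granularity"])
    return jnp.sqrt(sigma_trk**2 + sigma_cal**2) + cost_penalty

design = {"layers": jnp.array(6.0), "granularity": jnp.array(4.0)}

# Gradients with respect to *all* design parameters in a single call:
grads = jax.grad(physics_objective)(design)
print({name: float(g) for name, g in grads.items()})
```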
The novel approach extracts the highest achievable scientific yield from experiments, thereby reducing their costs.
Machine learning algorithms are designed to seek out the optimal solutions to complex multidimensional problems. However, their implementation requires the descriptions of an experiment's individual components to be integrated into a sophisticated, yet meticulously defined unified system.
Differentiable models have demonstrated remarkable effectiveness in optimising complex multidimensional systems for elementary particle physics experiments. In such models, the impact of each of the numerous parameters can be computed analytically within a given subsystem, with the result automatically propagated to all other components of the entire system.
Furthermore, the use of neural networks facilitates automatic adaptation of data processing algorithms to current detector configurations. Consequently, the problem of comprehensive physical detector optimisation can be reframed as a standard machine learning optimisation problem.
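A minimal sketch of this reframing, again with invented toy models rather than the authors' implementation: once the detector response is written as a differentiable function, its design parameters can sit alongside the weights of the reconstruction network, and a standard gradient-based optimiser (here Adam, via the optax library) updates both jointly against a single physics loss.

```python
# Hypothetical sketch (not the authors' implementation): a differentiable stand-in for
# the detector response lets its design parameters be optimised with the same gradient
# machinery used to train the reconstruction network, jointly and against one loss.

import jax
import jax.numpy as jnp
import optax

def detector_response(design, true_signal, noise):
    # Toy differentiable detector: the design parameter controls how much noise
    # is mixed into the recorded signal.
    return true_signal + design["noise_scale"] * noise

def reconstruct(weights, measured):
    # One-layer "neural network" standing in for the data-processing algorithm.
    return weights["w"] * measured + weights["b"]

def loss(params, true_signal, noise):
    measured = detector_response(params["design"], true_signal, noise)
    estimate = reconstruct(params["nn"], measured)
    return jnp.mean((estimate - true_signal) ** 2)

true_signal = jax.random.uniform(jax.random.PRNGKey(0), (1024,), minval=5.0, maxval=100.0)
noise = jax.random.normal(jax.random.PRNGKey(1), (1024,))

params = {
    "design": {"noise_scale": jnp.array(0.5)},
    "nn": {"w": jnp.array(1.0), "b": jnp.array(0.0)},
}

optimiser = optax.adam(1e-2)
opt_state = optimiser.init(params)

for step in range(300):
    grads = jax.grad(loss)(params, true_signal, noise)
    updates, opt_state = optimiser.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)

print(params["design"])  # the optimiser drives the toy noise term towards zero
```

In this toy example the optimiser simply suppresses the noise term; in a realistic setting, cost and engineering constraints enter the loss and shape the resulting trade-off.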
The members of the MODE collaboration have examined the ways in which the typical components of modern experimental setups can be represented through automatically differentiable models, while also demonstrating how a multitude of components can be integrated into a comprehensive system for experiment optimisation.
'Consistent application of the proposed approach will necessitate the development of differentiable versions of stochastic models for physical simulation of the experimental data. However, even now, using the already developed components of this approach, we have been able to analyse how the complexity of a new electromagnetic calorimeter (a critically important detector in the LHCb physics experiment) will impact the accuracy of future measurements. The old calorimeter has nearly reached the end of its service life and needs to be replaced in any case. These findings will guide us in selecting the optimal configuration of the new detector based on the cost-to-quality ratio of expected scientific outcomes,' explains Fedor Ratnikov, Leading Research Fellow at the HSE Laboratory of Methods for Big Data Analysis and participant in the MODE collaboration and the LHCb collaboration at CERN.
In their paper, the scientists discuss the advantages of incorporating automatic differentiation into a comprehensive computational methodology for particle physics experiments. Their approach acknowledges that the optimal configuration of the entire system of detectors and algorithms used for the experiment will be different from a mere combination of independently optimised components. The authors of the paper emphasise that the implementation of the proposed method will rely on cloud infrastructure to provide scalable and manageable computational resources.
The global advancement of technologies has already placed numerous industries and their employees in an environment akin to that of major scientific laboratories. Biologists and medical professionals at pharmaceutical and agricultural companies investigate living organisms whose behaviour is shaped not only by intricate genomic interactions, but also by environmental factors eliciting a spectrum of genetic responses. Manufacturers of aircraft, ships, consumer electronics, robotic assembly lines, and urban water treatment systems all grapple with sophisticated technological installations that tend to degrade and malfunction over time. All of them seek to optimise the operational modes of both electronic and mechanical components while minimising the production and operational costs of these systems. According to the HSE researchers, the optimisation scheme described in their paper holds value for any knowledge-intensive industry.
'These approaches can extend beyond the design of physics experiments and find broader application in various industries. In the near future, the proposed methods have the potential to substantially reduce both equipment and operational costs. The development and transfer of these technologies are our team’s key objectives,' comments Denis Derkach, Head of the Laboratory of Methods for Big Data Analysis, HSE Faculty of Computer Science.