This event is part of the Laser-Plasma Accelerator Seminars.
We are proud to announce the 1st LPA Special Workshop with its focus on "Control Systems and Machine Learning".
Big data and machine learning techniques have the potential to revolutionize research on intense laser-matter interaction and laser-plasma acceleration. With many groups adopting these technologies, we propose to survey and potentially consolidate on-going development efforts. The LPA online workshop will feature plenary talks, invited contributions and discussion sessions on specific implementations of control systems, data acquisition and machine learning in laser accelerator laboratories. Furthermore, we invite submissions for a limited number of virtual poster presentations and contributed talks. As a unique feature of the online format, the workshop will be preceded by a number of seminar talks and lectures as part of the LPA Online Seminars.
Among the questions which the workshop shall address are:
Scientists use the LCLS X-ray free-electron laser to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials and explore fundamental processes in living things. In the fall of 2022, the LCLS-II superconducting linac will be commissioned and the X-ray shot rate will increase from 120 Hz to 1 MHz. Correspondingly, the raw data volume will increase from 2 GB/s to 200 GB/s. We will discuss the data acquisition and analysis techniques used to handle this, as well as the real-time data reduction techniques used to lower the recorded data volume to 20 GB/s.
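As a schematic illustration of the kind of real-time reduction involved (our own sketch, not the actual LCLS pipeline), the snippet below vetoes empty detector frames and crops the rest to a region of interest; the threshold, ROI and synthetic data are all invented for the example:

```python
import random

random.seed(0)

THRESHOLD = 50          # hypothetical hit threshold (ADU)
ROI = slice(40, 60)     # hypothetical region of interest on the detector axis

def reduce_frame(frame):
    """Keep a frame only if it contains a real hit, and then only the ROI."""
    if max(frame) < THRESHOLD:
        return None                 # veto: nothing above threshold, drop frame
    return frame[ROI]               # crop to the region of interest

# Synthetic raw stream: mostly empty frames, a few with a peak inside the ROI.
def make_frame(has_hit):
    frame = [random.gauss(10, 3) for _ in range(100)]
    if has_hit:
        frame[50] += 200            # injected photon peak
    return frame

raw = [make_frame(has_hit=(i % 10 == 0)) for i in range(1000)]
kept = [r for f in raw if (r := reduce_frame(f)) is not None]

raw_size = sum(len(f) for f in raw)
kept_size = sum(len(r) for r in kept)
print(f"reduction factor: {raw_size / kept_size:.0f}x")
```

The same veto-and-crop logic scales to streaming pipelines; the constants here are purely illustrative.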
The PALLAS project aims to develop a laser-plasma injector (LPI) prototype delivering 150-200 MeV, 10-50 pC, 1 mm·mrad beams at 10 Hz, with reliability and control performance at the level of conventional RF accelerators. The LPI is driven by the 40 TW laser of the Université Paris-Saclay LaseriX facility. The project is built as an accelerator test facility with a state-of-the-art accelerator control-command system, a data acquisition system and open data sharing.
We present the approach and design of the control-command and data acquisition system (CCSA) for PALLAS, currently in development and already deployed for some subsystems. PALLAS comprises around 100 networked components, from 'simple' motors to high-quality (and therefore high-bandwidth) cameras. To qualify the LPI prototype, the control-command system has to manage this large variety of instruments with pertinent and consistent data archiving, while the data acquisition system has to handle a consolidated raw-data bandwidth of ~10 Gb/s coming from multiple sources with non-trivial time coordination.
To reach these goals, the CCSA is based on the open-source distributed control system Tango Controls [1] for the accelerator part and on the ElliOOs [2] system for the laser. The graphical user interfaces are web pages served by the Ada Web Server [3], and online monitoring of the laser system and accelerator subsystems is based on Grafana [4].
The data collected by the CCSA will be described, with details on data timestamping, and its readiness for machine learning applications will be discussed. The definition of the project's data management plan will be reviewed from an open-science perspective [5] for the community.
<span style="font-size:75%">
[1] Tango controls, https://www.tango-controls.org/
[2] ElliOOs, monitoring and control library based on a distributed multi-client and multi-server architecture used by Amplitude Laser
[3] An Ada-Based Framework to Develop Web-Based Applications, https://www.adacore.com/gnatpro/toolsuite/ada-web-server
[4] Grafana, open source monitoring and analytics platform https://grafana.com/oss/grafana/
[5] CNRS data management plan, Recommendations for Services in a FAIR Data Ecosystem
</span>
The control, management and analysis of data arising from laser-plasma experiments are key issues that, when well addressed, enable rapid insights into underlying physics and time-critical decision making for a given investigation. This issue now demands more attention if we are to take full advantage of developments in high-repetition rate, high-power laser technology.
Over the past decade, at Strathclyde and supported by work at the Central Laser Facility, we have developed a series of data control and analysis libraries (DARB, LPI-Py and BISHOP) which have supported a number of experiments. We introduce and review those developments and highlight potential future directions for these projects including advancements in data analysis, visualisation and machine-learning guided experiments.
Gemini, at the Central Laser Facility, provides access for academic and industrial users to perform a variety of cutting-edge experiments. As well as two beams, each with 12 J in 40 fs, we provide mechanical and electrical services to support experiments. These now include systems to support data acquisition, analysis, and storage, as well as experiment control. This has allowed us to conduct experiments involving active feedback.
This talk will provide an overview of these systems and how they work together to help us deliver experiments. We will then take a brief look at planned control systems for EPAC, the CLF’s next-generation laser facility, which is currently under construction.
In this talk I will describe the development of the BELLA Center control system, from requirements to implementation on all of our beamlines. The control system, named GEECS (Generalized Equipment and Experiment Control System), monitors, logs, and controls equipment distributed across a network. It was designed to be a complete software package that is straightforward to install, use, and develop. It is modular and scalable, and built using LabVIEW graphical object-oriented programming (GOOP). It has been the control system used for experiments at the BELLA Center for over 10 years. I will explain why we went down the path we did, and give my perspectives on what has been successful and on what changing requirements mean for the future of controls at the BELLA Center.
Open and standardized data formats have numerous advantages, including streamlining collaboration, improving software interoperability, and lowering access barriers. However, most data in high energy density physics (HEDP) are currently stored in idiosyncratic instrument- or laboratory-specific formats. Analysis software is likewise often written for use by a single author (often duplicating effort) and is rarely openly available or well documented. We will present efforts to develop open-source, user-friendly analysis software for HEDP as part of the PlasmaPy project, as well as an effort to develop a standardized data format for proton radiography data (“Pradformat”).
Laser-driven energetic proton accelerators have the potential to provide compact sources of MeV-energy, low-emittance, sub-picosecond-duration proton beams for a variety of applications. The primary impediment to their wider adoption is the challenge of shot-to-shot reproducibility and of tuning the parameters to optimize desirable proton beam qualities in a multi-dimensional parameter space. Recent developments in laser technology and control systems, making available multi-Hz delivery of joule-class, relativistically intense laser pulses with automated control, combined with online diagnostics, have enabled the automated scanning of parameter space, quantification of uncertainty and the use of feedback loops for optimization of desirable outputs (e.g. proton beam maximum energy). Bayesian optimization has already demonstrated impressive gains in x-ray generation when used in conjunction with a laser wakefield accelerator [1]. Here, we discuss preliminary results from experiments extending this tool to laser-driven proton acceleration and the challenges facing the adaptation of this Gaussian-process-regression-based Bayesian optimizer to the sharply varying parameter space of laser-solid interactions.
[1] R. Shalloo et al., Nature Communications 11 (2020)
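The Bayesian-optimization loop referred to above can be sketched in a few lines. The toy example below (our own illustration, not the experiment's actual optimizer) fits a Gaussian process to the shots taken so far and picks the next laser setting with an upper-confidence-bound rule; the one-dimensional objective standing in for the proton cutoff energy, the kernel length scale and the grid are all invented:

```python
import math

def rbf(a, b, ell=0.15):
    # Squared-exponential kernel on a normalised scan parameter.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # Gauss-Jordan solve of A x = b; adequate for the handful of shots used here.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, x):
    # Posterior mean/variance of a zero-mean GP with small observation noise.
    K = [[rbf(xi, xj) + (1e-4 if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    ks = [rbf(x, xi) for xi in xs]
    alpha = solve(K, ys)
    v = solve(K, ks)
    mean = sum(a * k for a, k in zip(alpha, ks))
    var = max(rbf(x, x) - sum(k * w for k, w in zip(ks, v)), 0.0)
    return mean, var

def proton_energy(x):
    # Hypothetical stand-in objective: normalised cutoff energy vs. one laser parameter.
    return math.exp(-((x - 0.6) ** 2) / 0.02)

grid = [i / 100 for i in range(101)]
xs = [0.1, 0.5, 0.9]                        # initial exploratory shots
ys = [proton_energy(x) for x in xs]

def ucb(x):
    m, v = gp_predict(xs, ys, x)
    return m + 2.0 * math.sqrt(v)           # upper-confidence-bound acquisition

for _ in range(12):                          # fit GP, pick most promising setting, shoot
    x_next = max((x for x in grid if x not in xs), key=ucb)
    xs.append(x_next)
    ys.append(proton_energy(x_next))

best_x = xs[ys.index(max(ys))]
print(f"best setting {best_x:.2f}, best energy {max(ys):.3f}")
```

In practice each "shot" would be a real laser shot with a noisy diagnostic readout; the same acquire-measure-refit loop carries over unchanged.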
We present ongoing experimental work on machine learning for tuning of the RF gun and linac based on the associated diagnostics.
Our machine, currently operating at 10 Hz, is based on the Tango control system; it was designed to control the machine (magnets, laser, RF, diagnostics, etc.) and to perform dedicated physics measurements. This architecture allows us to easily integrate ML methods and tools to measure, predict and improve the quality of the transported electron beam.
We discuss tuning of the machine using a surrogate model based on numerical simulations complemented with experimental data.
We present a procedure for optimizing the laser pulse duration to the shortest possible value using a feedback control loop between the FC SPIDER from APE and the DAZZLER from Fastlite. New SPIDER software was developed in collaboration with APE. It was integrated into the laser control system and enables real-time measurement on real-time hardware with a pulse reconstruction time of less than 25 ms. The SPIDER measurement is published live using EPICS 3.14. This solution uses the Channel Access protocol and is also linked to an archiver. Data is read from EPICS using a custom-developed LabVIEW library (LabIOC, developed in collaboration with Observatory Sciences). A combination of gradient descent and a differential genetic algorithm provides optimization by changing three DAZZLER parameters: GDD, TOD and FOD. The optimization algorithm is written as a function in Python and implemented in LabVIEW code through a LabVIEW Python node. Optimization steps are performed at a laser repetition rate of 3.3 Hz, and the new values of the three parameters are saved to a text file that is uploaded to the DAZZLER every shot. Although the complete implementation is not yet fully tested, simulations show several problems with algorithm speed and convergence.
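As an illustration of the evolutionary half of such a scheme (a sketch under invented assumptions, not the authors' actual code), a bare-bones differential evolution over the three DAZZLER parameters might look as follows; the pulse-duration model, bounds and hyperparameters are toy stand-ins:

```python
import math, random

random.seed(2)

TAU0 = 25.0  # hypothetical transform-limited pulse duration (fs)

def pulse_duration(gdd, tod, fod):
    # Illustrative model: residual spectral phase stretches the pulse.
    return TAU0 * math.sqrt(1.0 + (gdd / 500.0) ** 2
                                + (tod / 5000.0) ** 2
                                + (fod / 50000.0) ** 2)

BOUNDS = [(-2000.0, 2000.0), (-20000.0, 20000.0), (-200000.0, 200000.0)]

def differential_evolution(f, bounds, pop_size=20, gens=60, F=0.7, CR=0.9):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(*p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1/bin: mutate three distinct others, crossover, select greedily.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [x if random.random() > CR else
                     min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                     for d, x in enumerate(pop[i])]
            t_cost = f(*trial)
            if t_cost < cost[i]:
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

params, tau = differential_evolution(pulse_duration, BOUNDS)
print(f"optimised GDD/TOD/FOD -> {tau:.1f} fs")
```

In a live loop, `pulse_duration` would be replaced by a SPIDER readback, limiting the population size and generation count to what the 3.3 Hz shot rate allows.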
As high-intensity, ultra-short PW-class laser systems operating at high repetition rates become a reality, laser-driven ion sources will become better suited to a variety of their potential applications. For many applications, shot-to-shot reproducibility and tuning of the beam parameters toward a desirable proton source are crucial. The laser-driven ion beam depends on both laser and target conditions, making optimisation of the beam challenging, given the multidimensional parameter space and operation at low repetition rate to date (∼100 shots per experiment). Preliminary results are presented here from a recent experiment at Astra (CLF) in which optimisation of a proton beam was achieved at up to 5 Hz by tuning the spectral phase of the laser. Automated control of the laser and data acquisition, accompanied by online diagnostics with live feedback into a Bayesian optimisation algorithm, allowed for optimisation of laser-driven ions at high repetition rates. This method, assisted by machine learning techniques, provides a potential framework for optimisation at new petawatt, high-repetition-rate light sources.
Laser wakefield acceleration (LWFA) is a process in which high-gradient plasma waves excited by a laser lead to the acceleration of electrons. The process is highly nonlinear, leading to difficulties in developing three-dimensional models for a priori and/or ab initio prediction.
Recent experiments at the Rutherford Appleton Laboratory's (RAL) Central Laser Facility (CLF) in the United Kingdom using the 5 Hz repetition rate Astra-Gemini laser have produced new results in LWFA research, inviting analysis of data with unprecedented resolution. Additionally, data-driven modeling, scaling laws and models can be extended into new ranges or refined with less bias.
We will present results of training deep neural networks to learn latent representations of experimental diagnostic data and validate the latent space by comparing the distribution of beam divergences and other metrics of randomly generated spectra against the distribution in the training data. We will discuss the ability of the model to generalize results to different conditions. This work will use architectures which rely on reparameterization using a small dense network connected to a larger, generative, convolutional neural network.
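The reparameterization step these architectures rely on can be sketched independently of any particular network library. In this illustrative snippet (our own toy numbers, not the experimental model) the encoder outputs and the two-dimensional latent code are invented:

```python
import math, random

random.seed(3)

def reparameterize(mu, logvar):
    # Draw z = mu + sigma * eps with eps ~ N(0, 1); the stochasticity lives in
    # eps, so gradients can flow through mu and logvar during training.
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

# Hypothetical encoder output for one diagnostic image: a 2-d latent code.
mu, logvar = [0.8, -1.2], [math.log(0.25), math.log(0.04)]

samples = [reparameterize(mu, logvar) for _ in range(20000)]
mean0 = sum(s[0] for s in samples) / len(samples)
std0 = math.sqrt(sum((s[0] - mean0) ** 2 for s in samples) / len(samples))
print(f"latent dim 0: mean {mean0:.2f}, std {std0:.2f}")   # ~0.80 and ~0.50
```

Sampling random latent codes from such a distribution and decoding them is exactly how the generated spectra mentioned above are produced and compared against the training data.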
Experimental scientists always need high flexibility in realizing their setups. This poses a huge challenge for the engineers who try to keep up while aiming to minimize the variety of devices and to standardize the interfaces between components. This area of conflict mostly covers motion tasks, machine safety and environmental conditioning.
This talk touches on some of the solutions that are currently under development or already in use at HZDR, where scientists and engineers work hand in hand to find the best-matching system architecture.
One of the presented implementations is the Motion-Control software, which combines two types of industrial motion controllers into one flexible user interface.
Another solution is a device called the piezo rack, which was designed and built to connect a large number of piezo/pico motors in one cabinet rack. The software interface is completely in our hands: we have already integrated it into Karabo, EPICS and LabVIEW, and a TCP/IP interface is available for external control.
In the field of machine safety and environmental conditioning, we typically use reliable and safe industrial techniques, such as Siemens PLCs communicating via standardized industrial protocols like OPC UA or PROFINET.
Tango Controls has been adopted in recent years as the main system for supervisory control and data acquisition at the Centre for Advanced Laser Applications (CALA). As a free, open-source and software-independent toolkit, it is highly customizable and applicable to almost any measurement device. In its current implementation at CALA, the main laser system, as well as each experimental cave, has an independently operating database server that supervises the measurement instruments in a decentralized manner.
The developed Tango Controls architecture allows communication between experimental devices and laser instruments, while enabling the inclusion of security features. Such security features help prevent the destruction of experimental equipment and laser components. Furthermore, Tango Controls allows for a simple and streamlined integration process of new measurement instruments.
In this poster, the overall design of Tango Controls as well as its specific implementation at CALA will be presented.
The Centre for Advanced Laser Applications (CALA) in Munich is home to the ATLAS-3000 high power laser dedicated to research on laser particle acceleration and applications thereof. The laser and each experimental area are running control systems based on Tango controls. In addition to the hardware control, this is used to record experimental data in an automated fashion with every laser shot. As the laser shots are executed via software, the system emits a software trigger to acquire data on slow diagnostics, as well as an electrical trigger for hardware-triggered devices. In this poster, the current design of this data archiving system including file formats, call hierarchy, timings and some example diagnostics will be presented.
In recent research, reinforcement learning algorithms have been shown to be capable of solving complex control tasks, showing potential for beam control and for the optimization and automation of tasks in accelerator operation.
As part of the Helmholtz AI project "Machine Learning Toward Autonomous Accelerators" -- a collaboration between DESY and KIT -- reinforcement learning applications for the automatic control of an electron linear accelerator are investigated. In this contribution, we present the first steps taken toward developing a framework for training reinforcement learning agents on specific tasks in simulation environments and applying these agents to an actual particle accelerator. In the future, this framework will allow for the fast application of reinforcement learning to a multitude of optimization tasks on particle accelerators, eventually enabling autonomous operation to improve reproducibility and machine availability.
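A minimal sketch of the simulate-then-transfer pattern described, assuming an invented toy beamline (not the DESY/KIT framework itself): a gym-style environment exposing `reset`/`step`, driven here by a simple proportional policy standing in for a trained agent:

```python
import random

random.seed(4)

class BeamSteeringEnv:
    """Toy gym-style environment: steer the beam offset to zero with corrector kicks."""

    def reset(self):
        self.offset = random.uniform(-5.0, 5.0)   # beam position on a screen (mm)
        return self.offset

    def step(self, action):
        # Simplified linear beamline response to a corrector kick, plus jitter.
        self.offset += action + random.gauss(0.0, 0.05)
        reward = -self.offset ** 2                # penalise distance from axis
        done = abs(self.offset) < 0.1
        return self.offset, reward, done

env = BeamSteeringEnv()
state = env.reset()
initial = abs(state)
for _ in range(50):                               # stand-in policy: proportional kick
    state, reward, done = env.step(-0.5 * state)
    if done:
        break
print(f"|offset| reduced from {initial:.2f} to {abs(state):.2f} mm")
```

The point of the interface is that the same agent code can be pointed at this simulated environment during training and at the machine's control system afterwards.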
Laser-plasma acceleration (LPA) promises compact sources of high-brightness electron beams for science and industry. However, transforming LPA into a technology to drive real-world applications remains a challenge. In this talk, we discuss how the design and operational principles of the LUX experiment allow us to adopt data-driven approaches to understanding and improving the performance of laser-plasma accelerators. The basis of this development is the deep integration of the machine into a control system that enables real-time monitoring and active stabilization at 1 Hz. In consequence, stable and reproducible conditions can be maintained over many hours of operation and thousands of individual events, which opens the path for applying machine learning techniques to analyze and control the experiment. Featured results include the use of Bayesian optimization to autonomously tune the accelerator to improve the electron bunch quality, and the demonstration of a predictive model that precisely links the LPA stability to fluctuations of the drive laser pulse. Our findings provide guidelines for the development of the KALDERA laser system and highlight the potential of active stabilization at kHz repetition rates.
Deep learning can be used to replace time-consuming diagnostic analysis. With further development, experimental data from high-rep-rate capable laser facilities can be analyzed accurately and at rates fast enough to enable self-driving experiments. The general process for developing neural networks for analyzing data from a proton beam diagnostic will be discussed.
While the latest technologies have enabled unprecedented laser intensities, the highest-energy laser facilities usually operate in single-shot mode, firing only a few shots a day. On the other hand, the development of laser systems with higher repetition rates but lower peak power is always of fundamental interest, particularly for applications. Applications of the particle beams and radiation sources from laser-plasma interactions usually demand more than single-shot operation, and most do not require record-high beam energies. Moreover, high-repetition-rate operation of both the laser systems and the plasma targets allows the use of statistical methods to assist the experiments. A popular control system for this purpose is an adaptive optical system, which usually consists of an adaptive optic, a measurement device, and a controller. Statistical methods such as optimization algorithms can then be applied in these closed-loop systems to achieve real-time improvement in experiments.
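A closed adaptive-optic loop of this kind can be sketched as a simple hill-climbing optimizer; the actuator count, figure of merit and mirror response model below are all invented for illustration:

```python
import random

random.seed(5)

N_ACTUATORS = 10
TARGET = [random.uniform(-1.0, 1.0) for _ in range(N_ACTUATORS)]  # unknown flat-phase setting

def focal_metric(voltages):
    # Hypothetical figure of merit: peaks when the actuators cancel the aberration.
    err = sum((v - t) ** 2 for v, t in zip(voltages, TARGET))
    return 1.0 / (1.0 + err)

# (1+1) hill-climbing loop: perturb the mirror, keep the change if the focus improves.
voltages = [0.0] * N_ACTUATORS
metric0 = focal_metric(voltages)
best = metric0
for step in range(3000):
    trial = [v + random.gauss(0.0, 0.05) for v in voltages]
    score = focal_metric(trial)
    if score > best:
        voltages, best = trial, score
print(f"focal metric improved from {metric0:.3f} to {best:.3f}")
```

In a real loop, `focal_metric` would be a camera readout of the focal spot, and the iteration rate would be set by the laser's repetition rate.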
The open standard for particle-mesh data (openPMD) is a community meta-data standard. openPMD adds scientific self-description in a machine-actionable format to data sets, which allows sharing data processing frameworks, chaining data through simulations and designing long analysis pipelines.
Developed in the open as a FAIR data standard, openPMD is based on portable and scalable file formats such as HDF5 and ADIOS2, among others. As of today, openPMD has been adopted in many popular modeling and analysis projects in laser-plasma physics, particle accelerator physics and beyond.
In this talk, we will present the community principles, existing implementations, recent research results for file and file-less analysis, and possible integration directions.
The smallest laboratories in our field often have the highest-repetition-rate experiments and the opportunity for the fattest data pipelines. One shortcut to achieving high-repetition-rate data analysis and storage, while retaining high fidelity of datasets for post-analysis, is to implement an automated global shot counter in the lab and stamp it on every bit of scientific data. Data can then be stored separately, without interdependency between devices, and centralized and analyzed later. Small teams can also benefit from leveraging existing modular control systems such as EPICS, and self-describing data formats such as HDF5. For an iterative and decoupled approach to improving data infrastructure, "sidekick systems" mimicking laboratory infrastructure to various fidelity can be built at very low cost. We have demonstrated construction of such data systems with undergraduates at California State University Channel Islands.
This work is supported by Lawrence Livermore National Laboratory, LLC under Subcontract No. B645313, and LDRD 21-ERD-015 and DOE Early Career SCW1651. LLNL is under Prime Contract No. DE-AC52-07NA27344 with the DOE/NNSA.
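The global-shot-counter idea can be sketched in a few lines of Python (an illustration, not the CSUCI systems themselves): each device records data keyed only by the shot number, and the independent streams are joined afterwards:

```python
import itertools

class ShotCounter:
    """Global shot counter; every device stamps its data with the current shot id."""
    def __init__(self):
        self._counter = itertools.count(1)
        self.shot = 0
    def fire(self):
        self.shot = next(self._counter)
        return self.shot

counter = ShotCounter()
camera_records, spectrometer_records = {}, {}

for _ in range(3):
    shot = counter.fire()
    # Each device writes independently, keyed only by the shot number; in the lab
    # these would be separate files (e.g. HDF5) on separate machines.
    camera_records[shot] = {"image_sum": 1e4 + shot}
    spectrometer_records[shot] = {"cutoff_MeV": 10.0 + 0.1 * shot}

# Centralise later: join the device streams on the shot number alone.
merged = {s: {**camera_records[s], **spectrometer_records[s]}
          for s in camera_records.keys() & spectrometer_records.keys()}
print(sorted(merged))
```

Because every record carries the shot id, a device that drops a shot simply leaves a gap rather than corrupting the join.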
Flexible data structures are critical when designing control systems and data storage for high-repetition-rate experiments, and they must take into account the full lifetime of the experiment from facility preparation to data analysis. We present lessons learned from our experience creating software to control high-rep-rate laser experiments at UCLA. We overview the main motivations and key considerations for the software before discussing how we incorporated flexible data structures into all stages of the experimental process.
We present a cloud-based platform for experimental data storage, management, and sharing. The platform's UI runs as a containerized Pythonic web application hosted by a container service, fronted by a lightweight username-and-password authentication portal. The experimental data is stored in a database service and an object storage service. The container service, database, object storage, and authentication are managed by a cloud provider, and therefore require minimal intervention and configuration and can rapidly scale to accommodate high throughput and large volumes of data. The platform is accessible through a web browser, where one can perform web-based data entry as well as data visualization and download. The lightweight web app can be customized with further functionality by scientists comfortable with Python. Data is available for further post-processing and machine learning through the cloud platform's high-speed internal network backbone and its SDKs and APIs. We show an ad-hoc use case in which experimental data stored on the platform is post-processed using a hosted Jupyter Notebook and used for downstream machine learning. The platform is built using infrastructure-as-code for version control and extensibility.
Experiments have, by definition, an unknown outcome and need adaptation and improvement during their runtime. User experiments especially are frequently changed or even just temporarily set up. On the other hand, policies are developing toward open science, which is undeniably useful in our field: drive lasers, facilities and experiments are cutting-edge but always have their peculiarities, hence general reproduction of results at other facilities is limited by their capabilities, manpower, funding and research programs.
If experiments could be stored in a generic data format with all data and metadata, that format could support analysis by the authoring group just as they would do without it. At the same time, the format would immediately serve FAIR data storage without further effort by the user group. Interoperability and re-usability in particular would be ensured, because the authors of the data would have no privileged access compared to everyone else. Interoperability also enables machine-learning-based data analysis, strengthening the potential outcome of the experiment. With that ML processing running online, live feedback is conceivable if the facility provides the required means. If many facilities employed that format, users could re-use their analysis workflows, increasing productivity and enabling cross-facility studies.
We will illustrate this idea and discuss challenges.
Experimental research on laser-plasma acceleration processes and beam properties is important, yet requires substantial resources to perform highly complex experiments and to analyse the obtained data. A surrogate of an experiment provides novel insights into the experimental study of laser-plasma acceleration and promises optimization of the experiments: a reliable surrogate model can give an overview of the acquired diagnostics in terms of the observed data stability, and guide the conduct of the next experiment. In our work, we developed a surrogate model that predicts the distribution of target diagnostics from an input set of parameters relating to the experimental facility. We developed a conditional invertible neural network for experimental data from two acceleration processes, target normal sheath acceleration (TNSA) and laser wakefield acceleration (LWFA). These correspond to completely different physical phenomena, and applicability to both demonstrates the large domain of use of the chosen method. We found a significant benefit of conditional invertible neural networks: the ability to learn an approximation of the data-dependent posterior distribution, resolving even ambiguous configurations. The achieved results demonstrate that the model is able to resolve ambiguities in the data by predicting a posterior distribution and to provide insights into the obtained experimental results.
Laser wakefield acceleration (LWFA) has been demonstrated to be a small-scale alternative for accelerating electrons. With the discovery and experimental realization of the so-called blowout regime, quasi-monoenergetic electron bunches could be produced. Progress in LWFA beam quality and stability has always been tied to improvements in machine control and experimental diagnostics. One such diagnostic is the so-called wave-breaking radiation, a broadband emission produced during the self-injection process in which electrons are accelerated. Although measurement of this wave-breaking radiation makes it possible to determine the spatial origin of the electron injection and the amount of injected charge, it has so far seen only limited application as a diagnostic, as the characteristic spectral signatures corresponding to self-injection are hard to derive. We tackle that challenge by introducing an ML-based diagnostic that, for the first time, translates this broadband radiation into injected charge per unit time. An invertible neural network has been successfully trained to solve this task on synthetic data. Besides its high accuracy, we are also able to learn more about the actual radiation signatures induced during self-injection.
A key problem in experimental diagnostics of laser-plasma interactions is identifying the interaction scenario using only limited measurement data. Using numerical simulations to train ML models to recognize interactions from measurements has the potential to become a powerful paradigm for complex experimental arrangements, because such ML-based diagnostics do not require either the description or the presence of notable signals that would commonly be used for comparison of experimental and theoretical data. One of the difficulties lies in reaching ML-model invariance, i.e. making the model tolerant to the transition between simulated and experimental data, which can differ due to limitations of measurement techniques or the simplifications and assumptions used in simulations. In this work we study the applicability of various approaches to improving ML-model invariance, including noise tolerance via artificial contamination of simulated results and data augmentation via physics-governed composability of simulated outcomes. We assess the prospects in view of the following long-standing problems in laser-plasma physics: (1) laser-solid interaction characterization via generated high-order harmonic spectra [1, 2]; (2) peak intensity determination from the interaction with particle beams [2]; (3) diagnostics of tight focusing of short laser pulses.
[1] A. Gonoskov et al., Sci. Rep. 9 (1), 1-15 (2019)
[2] Y. Rodimkov et al., Sensors 21 (21), 6982 (2021)
[3] Y. Rodimkov et al., Entropy 23 (1), 21 (2021)
Numerical simulations of complex systems such as laser-plasma acceleration are computationally very expensive and have to be run on large-scale HPC systems. Analysis of experimental data is typically carried out offline via expensive grid scans or optimisation runs of particle-in-cell codes like PIConGPU, which model the corresponding physical processes. Neural-network-based surrogate models of these simulations drastically speed up the analysis due to their fast inference times, promising in-vivo analysis of experimental data. The quality of such a surrogate model, in terms of generalisation, depends on the stiffness of the problem along with the amount and distribution of training data. Unfortunately, the generation of training data is very storage-intensive, especially for high-fidelity simulations in the upcoming exascale era. We therefore need to rethink the training of surrogate models to tackle the memory and storage limitations of current HPC systems. This is achieved by translating continual learning from computer vision to surrogate modeling, while additional regularization terms are introduced to foster generalisation of the surrogate model. The training of the neural network is carried out simultaneously with a concurrent PIConGPU simulation, without the need to write training data to disk. The IO system for moving data via streaming from the simulation to the concurrently running training task is built with the openPMD-api and ADIOS2. A proof of principle is demonstrated by training a 3D convolutional autoencoder that learns a compressed representation of a laser wakefield acceleration simulation performed by PIConGPU via streaming.
We present a novel method to efficiently implement machine learning methods within particle-in-cell (PIC) simulation codes. Such codes are vital to fully understand the kinetic processes involved in laser wakefield acceleration and constitute a key tool for comprehending experimental setups and their diagnostic data. However, their computational cost prevents large parameter scans in 3D simulations. We showcase our preliminary implementation within the OSIRIS PIC code, with a Compton scattering AI-based module that employs models trained directly on analytical data. We compare and show how the code can leverage the advantages of machine learning methods. These results offer a very promising avenue for future applications of machine learning methods within PIC codes.
In this talk, I'd like to present modern machine learning tools for estimating the posterior of the inverse problem posed in a beam control setting. That is, given an experimental beam profile, I'd like to demonstrate tools that help estimate which simulation parameters might have produced a similar beam profile with high likelihood.
We summarize preliminary findings aimed at optimizing an X-ray beamline located at a synchrotron accelerator. With this, we hope to tackle the challenge of characterizing beam quality as minimally invasively as possible. The basis of the discussion will be a surrogate model that emulates the experimental conditions of beam-profile knife-edge scans. We hope that this discussion is of interest to the accelerator physics community at LPA.
Standard computer simulations for indirect drive inertial confinement fusion, without platform-specific corrections, often show discrepancy with experiments. In this talk, we present a machine learning based method for training models that correct for this discrepancy.
We combine simulation and experimental data via a technique called “transfer learning” to produce a model that is predictive of NIF experiments from a wide variety of campaigns, and becomes more accurate as more experimental data are acquired. This model has been used to predict the outcome of recent DT experiments at the NIF with progressively increasing accuracy.
This data-driven model can play a valuable role in future design exploration by providing empirically realistic sensitivities to design parameters. We illustrate how transfer learned corrections to simulation predictions could guide us toward high performing designs more efficiently than simulations alone.
Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-824223.
We will review recent machine learning techniques from the perspective of compact laser-particle accelerators (electron and ion). High-fidelity simulations of the involved physical phenomena are carried out by computationally expensive particle-in-cell codes, which are used both to plan experiments and for subsequent analysis. We will discuss methods for surrogate modeling and reduced-order modeling that reduce the computational complexity and storage footprint of these simulations. From an experimental perspective, one important task is the recovery of the initial physics conditions using simulations that mimic the experiment. In addition, advanced spectral diagnostics promise novel insights into time-dependent processes, e.g. inside the plasma. Analysis of such data frequently touches on inverse problems, such as phase retrieval in laser diagnostics, analysis of plasma expansion via pump-probe experiments (SAXS reconstruction), or novel experimental diagnostics such as coherent transition radiation. Modern data-driven methods promise fast solutions and can quantify uncertainty even for ambiguous inverse problems, although their reliability on out-of-distribution data has to be considered. Finally, one can derive novel synthetic diagnostics of the state of the machine and experimental setup, which can be used for automatic experimental parameter tuning via machine learning during live data acquisition.
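One of the reduced-order-modeling ideas mentioned above can be made concrete in a few lines: extract the dominant spatial mode from a set of simulation snapshots and summarize each snapshot by its coefficient along that mode. The sketch below uses power iteration on synthetic rank-1 data; it is a generic illustration, not tied to any specific PIC code or dataset.

```python
import math
import random

def dominant_mode(snapshots, iters=200):
    """Power iteration for the leading eigenvector of S^T S, where the
    rows of S are the snapshots. Returns the dominant spatial mode."""
    rng = random.Random(0)
    n = len(snapshots[0])
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        w = [0.0] * n                      # w = (S^T S) v
        for s in snapshots:
            c = sum(si * vi for si, vi in zip(s, v))
            for i in range(n):
                w[i] += c * s[i]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic rank-1 data: every snapshot is a multiple of one profile,
# so a single mode captures everything.
profile = [math.sin(0.1 * i) for i in range(50)]
snaps = [[a * p for p in profile] for a in (0.5, 1.0, 2.0, 3.0)]

mode = dominant_mode(snaps)
# Each 50-element snapshot compresses to a single coefficient.
coeffs = [sum(si * mi for si, mi in zip(s, mode)) for s in snaps]
```

Real simulation data needs several modes (and robust SVD implementations), but the storage argument is the same: a few coefficients per snapshot instead of full fields.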
The operation of the High Power Laser System (HPLS) at ELI-NP / IFIN-HH produces a large quantity of data. The laser beam profile images collected from the diagnostics bench help characterize the quality of the system’s operation. The present work focuses on the problem of image classification in order to augment the qualitative analysis of beam profiles, which leads to the application of machine learning models to various data provided by the laser systems used at the ELI-NP facility. In this sense, the proposed approach compares the performance of machine learning algorithms on the problem of abnormality detection in laser beam profiles. The main goal of the study is to deliver a reliable classification model by creating and comparing different models based on supervised machine learning methods. To this end, classification models based on convolutional neural networks and support vector machines were trained and validated on datasets consisting of images recorded by the system’s benchmark cameras during operation, resulting in three models with a validated accuracy of over 90%. These classifiers were then compared using performance metrics selected to fit the studied problem. The comparison brought forward a laser beam profile classifier with accuracy, F-score and true-positive rate all above 95%. The output of the classifier can help laser system operators better align and focus the laser beam.
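As a deliberately simple stand-in for the CNN and SVM classifiers in the abstract, abnormality detection on a beam profile can be illustrated with hand-crafted features: reduce the intensity map to a centroid and an RMS width and flag profiles that deviate from nominal. The nominal values and tolerance below are hypothetical.

```python
def profile_features(image):
    """Centroid (cx, cy) and RMS width of a 2-D intensity map,
    given as a list of rows of non-negative pixel values."""
    total = sum(sum(row) for row in image)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    var = sum(((x - cx) ** 2 + (y - cy) ** 2) * v
              for y, row in enumerate(image)
              for x, v in enumerate(row)) / total
    return cx, cy, var ** 0.5

def is_abnormal(image, nominal=(4.0, 4.0, 2.0), tol=1.0):
    """Flag a profile whose centroid or width deviates from the
    (hypothetical) nominal values by more than tol pixels."""
    cx, cy, w = profile_features(image)
    ncx, ncy, nw = nominal
    return (abs(cx - ncx) > tol or abs(cy - ncy) > tol
            or abs(w - nw) > tol)
```

The trained CNN/SVM models in the study learn far richer features (hot spots, fringes, clipping) directly from labeled images, which is why they are needed in practice, but the operator-facing output, a normal/abnormal decision per profile, is the same.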
Radiation reaction, the recoil of a charge upon emitting radiation, is the subject of ongoing theoretical and experimental research, particularly in highly intense electromagnetic fields in which quantum effects become significant. In such environments, a QED treatment of radiation reaction is required. Various suitable theories have been proposed but have yet to be validated experimentally.
This work considers experiments in which electrons are accelerated to relativistic energies using a laser wakefield accelerator before colliding with a tightly focused, counter-propagating laser pulse. This allows electric field strengths up to some fraction of the critical field (Schwinger limit) to be accessed. In these experiments, the electron beam may be chirped and have a bunch length comparable to the Rayleigh range of the colliding laser. We show that these properties may affect the radiative losses experienced by the electron beam but are extremely difficult to measure in practice.
We have developed a Bayesian inference method which can retrieve the parameters that govern the collision between an electron bunch and laser pulse, including the electron phase space distribution, for different models of radiation reaction. The errors on the inferred parameters and on the predictions made by each model follow naturally from the Bayesian framework we have utilized. We demonstrate the strong effect of the inferred parameters on our ability to ascertain whether experimental data supports a given model and provide a quantitative comparison of different models of radiation reaction.
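The Bayesian machinery behind such a model comparison can be sketched on a toy problem: for each candidate model, average the likelihood over a prior on the parameter (the evidence) and compare models via the ratio of evidences (the Bayes factor). The two "models" and the data below are illustrative placeholders, not the actual radiation-reaction models or experimental data.

```python
import math

def gaussian(x, mu, sigma):
    """Normal density, used as the per-observation likelihood."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def evidence(model, data, grid, sigma_noise=0.1):
    """Marginal likelihood: likelihood of the data averaged over a
    uniform prior on the model parameter theta."""
    totals = []
    for theta in grid:
        L = 1.0
        for d in data:
            L *= gaussian(d, model(theta), sigma_noise)
        totals.append(L)
    return sum(totals) / len(totals)

grid = [0.01 * i for i in range(1, 300)]         # uniform prior support
model_a = lambda theta: theta                    # toy "model A"
model_b = lambda theta: theta ** 2               # toy "model B"

data = [1.02, 0.98, 1.01]                        # toy observations
bayes_factor = evidence(model_a, data, grid) / evidence(model_b, data, grid)
```

Here both models can fit the data equally well, so the Bayes factor mainly reflects how much prior volume each model "wastes", which is the same Occam-style penalty that a full framework applies when ranking radiation-reaction models, with the important difference that the real analysis also marginalizes over the electron phase space distribution and collision parameters.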
Bayesian optimization has proven to be an efficient method to optimize expensive-to-evaluate systems such as a Laser Wakefield Accelerator (LWFA). However, depending on the cost of single observations, multi-dimensional optimizations of one or more objectives (Pareto optimization) may still be prohibitively expensive. Multi-fidelity optimization remedies this issue by including multiple, cheaper information sources such as low-resolution approximations in numerical simulations. Acquisition functions for multi-fidelity optimization are typically based on exploration-heavy algorithms that are difficult to combine with optimization towards multiple objectives. Here we show a technique that enables the expected hypervolume improvement (EHVI) policy to be used as a multi-objective multi-fidelity acquisition function. We incorporate the evaluation cost either via a two-step evaluation or within a single acquisition function with an additional fidelity-related objective. This permits simultaneous multi-objective and multi-fidelity optimization, allowing the Pareto set and front to be established accurately at a fraction of the cost. Our method has the potential to allow Pareto optimization of LWFA beams by intelligently using cheaper PIC simulations to predict the best input parameters for more expensive simulations. The presented methods are simple to implement in existing, optimized Bayesian optimization frameworks and thus also allow for an immediate extension to batch optimization. The techniques can also be used to combine different continuous and/or discrete fidelity dimensions.
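The geometric quantity at the heart of EHVI can be illustrated without any framework: the dominated hypervolume of a 2-D Pareto front, and the improvement a candidate point would contribute. The sketch below handles the two-objective maximization case for a noiseless candidate; the expectation over the surrogate posterior that makes it *expected* HVI, and the fidelity handling described in the abstract, are omitted.

```python
def hypervolume_2d(points, ref):
    """Area jointly dominated by `points` with respect to the reference
    point `ref`, with both objectives maximized. Sweep in decreasing x,
    adding the new slab each time a point raises the best y seen."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points, key=lambda p: -p[0]):
        if y > prev_y:                    # point is non-dominated so far
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def hv_improvement(candidate, front, ref):
    """Gain in dominated hypervolume from adding `candidate` to the
    current Pareto front (zero if the candidate is dominated)."""
    return hypervolume_2d(front + [candidate], ref) - hypervolume_2d(front, ref)
```

An EHVI acquisition function integrates `hv_improvement` over the Gaussian-process posterior of the candidate's objective values; the multi-fidelity variants in the abstract additionally weigh that gain against the evaluation cost of each fidelity.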