Machine Learning and Software R&D
Data reconstruction is the process of extracting high-level, abstract information, such as the energy and flavor of an interacting neutrino (just two values!), from raw, granular data such as an image with millions of pixels.
Our group is actively engaged in and leads the LArTPC data reconstruction effort at all levels, from the analysis of raw waveforms to 3D geometrical and calorimetric pattern recognition using machine learning (ML) techniques.
Our goal is to develop innovative, high-quality data reconstruction algorithms that maximize the physics output using advanced computing hardware.
A LArTPC is a high-resolution (mm/pixel) particle imaging detector, and its data is visually intuitive for particle physicists to interpret. When we look at the data, our eyes and brains automatically identify and correlate many physics features at different scales to draw a high-level conclusion, like "oh, I see a muon neutrino interaction."
Writing an image processing algorithm for something visually intuitive is, however, non-trivial. The computer vision community has faced a similar challenge, namely writing an algorithm to identify a cat or a dog (see this TED talk by Fei-Fei Li, a Stanford CS professor). Recently this problem has been addressed by advances in ML techniques, in particular Deep Neural Networks (DNNs), and algorithmic accuracy has surpassed the human average on a large public image dataset. Today DNNs are used in many areas including facial recognition, self-driving cars, and even playing Go.
The SLAC team leads ML R&D for LArTPC experiments. Terao led the first demonstration of DNNs for LArTPC image analysis, and also studies important topics such as network response discrepancies between synthetic and real data. The team's goal is to develop an ML-based, high-quality data reconstruction chain. Our group is ideally positioned for this innovation, backed by members with ML expertise and a years-long history of LArTPC data reconstruction software development.
Particle Imaging in 3D
Does a LArTPC image particles in 2D or 3D?
The answer is both. There are two types of LArTPCs: wire-based and pixel-based. Although both are particle imaging detectors, the former is cheaper and outputs multiple 2D projection images of particle trajectories, while the latter records 3D particle trajectories in its raw output. For wire-based LArTPCs, the 2D images are created with different 3D-to-2D projection angles, so that one can reconstruct the original 3D particle trajectories by analyzing the correlation of pixels across the 2D images.
Our group primarily focuses on data reconstruction in 3D, which, for wire-based LArTPC detectors, requires reconstructing a 3D point representation, called a point cloud, from the 2D images. Usher has been leading the reconstruction of such 3D point clouds for wire-based LArTPC detectors in experiments including MicroBooNE, ProtoDUNE, and ICARUS.
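The 3D-to-2D projection logic can be sketched numerically. Below is a toy illustration with an idealized two-plane geometry: the plane angles, drift velocity, and function names are invented for this example and do not correspond to any experiment's real geometry.

```python
import numpy as np

def wire_coordinate(y, z, angle):
    """Project a transverse position (y, z) onto a wire plane whose wires
    are tilted by `angle`; the measured coordinate runs perpendicular
    to the wires. (Illustrative convention, not a real detector's.)"""
    return z * np.cos(angle) - y * np.sin(angle)

def reconstruct_yz(w1, w2, angle1, angle2):
    """Solve the 2x2 linear system relating two wire coordinates,
    measured at different projection angles, back to (y, z)."""
    A = np.array([[-np.sin(angle1), np.cos(angle1)],
                  [-np.sin(angle2), np.cos(angle2)]])
    return np.linalg.solve(A, np.array([w1, w2]))

# The drift coordinate comes directly from the measured drift time.
drift_velocity = 1.6  # mm/us, illustrative value

def reconstruct_point(t_drift, w1, w2, angle1, angle2):
    y, z = reconstruct_yz(w1, w2, angle1, angle2)
    return np.array([t_drift * drift_velocity, y, z])

# Example: a point at (y, z) = (10.0, 20.0) seen by two planes at +/-30 deg.
true_y, true_z = 10.0, 20.0
a1, a2 = np.deg2rad(+30), np.deg2rad(-30)
w1 = wire_coordinate(true_y, true_z, a1)
w2 = wire_coordinate(true_y, true_z, a2)
y, z = reconstruct_yz(w1, w2, a1, a2)  # recovers (10.0, 20.0)
```

In practice the hard part is not this geometry but deciding which hits across the planes belong to the same 3D point; that ambiguity is what the reconstruction software must resolve.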
Machine Learning for Data Reconstruction
R&D of ML algorithms is one of the group's core efforts. We have developed multiple DNNs to perform data reconstruction tasks such as vertex finding, pixel clustering, particle type prediction, and energy regression. Our goal is to effectively combine these DNNs into a single-path data transformation from an input image to the extraction of high-level physics information (e.g. neutrino flavor and energy).
By training the chained model end to end, the DNNs are forced to learn the hierarchical correlation of physical features (i.e. reconstruction outputs) and draw a logical conclusion as a human physicist would. This reduces the "black-box" aspect of a neural network: when a mistake is made, the reconstructed information is available to inspect how the conclusion was drawn. Further, thanks to the nature of ML algorithms, the chained DNNs have a well-defined optimization routine: minimizing the combined errors. Such optimization of a chain is a difficult, if not impossible, task when combining human-engineered algorithms.
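As a toy illustration of why a chain has a well-defined optimization routine, the sketch below jointly trains two linear "stages" by gradient descent on the sum of an intermediate loss and a final loss. It is a minimal stand-in for the idea, not our actual DNN chain; all shapes and targets are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))        # raw input (stand-in for image features)
h_true = rng.normal(size=(3, 1))   # intermediate target (e.g. clustering output)
y_true = rng.normal(size=(2, 1))   # final target (e.g. particle-type scores)

W1 = rng.normal(size=(3, 4)) * 0.1  # stage 1: input -> intermediate
W2 = rng.normal(size=(2, 3)) * 0.1  # stage 2: intermediate -> final

def combined_loss(W1, W2):
    h = W1 @ x
    y = W2 @ h
    return float(np.sum((h - h_true) ** 2) + np.sum((y - y_true) ** 2))

lr = 0.01
initial = combined_loss(W1, W2)
for _ in range(500):
    h = W1 @ x
    y = W2 @ h
    g_y = 2 * (y - y_true)               # gradient of the final loss w.r.t. y
    g_h = 2 * (h - h_true) + W2.T @ g_y  # BOTH losses contribute to stage 1
    W2 -= lr * g_y @ h.T
    W1 -= lr * g_h @ x.T
final = combined_loss(W1, W2)  # far below the initial loss
```

The key line is `g_h`: the upstream stage receives gradient from its own intermediate error and from the downstream error, which is exactly the joint optimization that a chain of hand-engineered algorithms lacks.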
Machine Learning for Domain Discrepancies
How do we train ML algorithms? We take advantage of sophisticated simulation software that can generate large, labeled training sets (i.e. with "answers" for the algorithms' tasks). This allows us to use a supervised training scheme. However, simulation is never perfect. A discrepancy between the real and simulated data domains can cause unexpected behavior in the algorithms (e.g. poorer performance).
We explore two techniques to mitigate this issue: generative models and adversarial training. For example, a generative adversarial network (GAN) can learn a mapping between two domains, and can be used to make a simulated image look like real data. Domain-adversarial training is a general technique that discourages an algorithm from keying on any domain-specific information, hence avoiding discrepancies in performance across different domains.
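A minimal sketch of the domain-adversarial idea (in the spirit of gradient-reversal training) is shown below. The data, features, and hyperparameters are invented for illustration: feature 0 carries the physics signal in both domains, while feature 1 is a pure domain artifact (e.g. a simulation-only offset). The adversarial term pushes a learnable feature scale to discard the domain-specific input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n = 2000
x0 = rng.normal(size=n)                   # signal feature, shared by domains
d = (rng.random(n) < 0.5).astype(float)   # domain label: 0 = sim, 1 = real
x1 = d + 0.1 * rng.normal(size=n)         # domain-specific artifact
y = (x0 > 0).astype(float)                # task label depends only on x0
X = np.stack([x0, x1], axis=1)

s = np.ones(2)             # learnable feature scales
w, b = np.zeros(2), 0.0    # task head (logistic)
u, c = np.zeros(2), 0.0    # domain head (logistic)
lr, lam = 0.1, 1.0

for _ in range(300):
    f = X * s                     # scaled features
    p_y = sigmoid(f @ w + b)      # task prediction
    p_d = sigmoid(f @ u + c)      # domain prediction
    gy = (p_y - y) / n            # d(task BCE)/d(logit)
    gd = (p_d - d) / n            # d(domain BCE)/d(logit)
    # The task head descends the task loss...
    w -= lr * (f.T @ gy)
    b -= lr * gy.sum()
    # ...the domain head descends the domain loss on the same features...
    u -= lr * (f.T @ gd)
    c -= lr * gd.sum()
    # ...while the feature scales ASCEND the domain loss (gradient
    # reversal), starving the domain head of usable information.
    grad_s = w * (X.T @ gy) - lam * u * (X.T @ gd)
    s -= lr * grad_s

f = X * s
p_y = sigmoid(f @ w + b)
task_acc = float(((p_y > 0.5) == (y > 0.5)).mean())
```

After training, the scale on the domain-specific feature shrinks relative to the signal feature while the task accuracy stays high: the model keeps the physics and drops the artifact.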
Building a Machine Learning Hub
ML is a common denominator across many research areas, from physical to social science, from data analytics to the operation and control of systems. We actively engage with researchers outside our field to get inspired and to accelerate our development. Some large efforts we have led include the following.