We review the current status of neutrino oscillation experiments, mainly focusing on T2(H)K, NOνA, and DUNE. Their capability to probe high-energy physics lies in the precision measurement of the CP phase δCP and the atmospheric mixing angle θ23. In general, neutrino mass models predict correlations among the mixing angles that can be used to scan and shrink their parameter space. We update previous analyses and present a list of models that contain such structures.

1. Introduction

The upcoming sets of long-baseline neutrino experiments will establish a new standard in the search for new physics. Two distinct directions arise. The first, phenomenological, approach consists of searching for new, as-yet-unobserved phenomena that are present in a large class of models. These have been extensively studied in the literature and are subdivided into three main groups: Nonstandard Interactions (NSI) searches [1–14], Light Sterile Neutrinos [15–19], and Nonunitarity [20–28]. The second approach is more theory-based and has been less explored. It focuses on correlations among neutrino mixing angles predicted by high-energy models. This opens the possibility of testing models that contain almost no low-energy phenomenological effects different from the Standard Model.

Since the discovery of neutrino oscillations, a plethora of models has been put forward to try to explain the origin of neutrino masses. The first proposal was the see-saw mechanism [29–34], which tries to explain the smallness of neutrino masses through suppression by a heavy mass scale. Another possible path uses loop mechanisms, in which neutrino masses can be suppressed at zeroth [35] or even first order [36]. Nevertheless, such theories usually do not explain the structure of the oscillation parameters, which remain merely free parameters.

This changes with the addition of a discrete symmetry that controls the pattern of the leptonic mass matrix [37–39]; for a review on the subject see, e.g., [40, 41]. Such symmetries can predict relations among the neutrino mixing angles [42–53] which can be used to constrain the parameter space of these theories [54].

This manuscript is divided into eight sections. In Section 2 we describe current and future neutrino oscillation experiments, T2K, NOνA, DUNE, and T2HK, and their simulation. In Section 3 we briefly discuss the statistical analysis and the methods used to scan the parameter space. In Section 4 we present the sensitivity to neutrino mixing parameters expected in each experiment. In Section 5 we review the possibilities of using the δCP–θ23 correlation in long-baseline experiments by updating previous analyses of two models [55, 56]. In Section 6 we review the possibility of using the θ13–θ23 correlation by combining long-baseline experiments with reactor measurements of θ13. In Section 7 we comment on correlations beyond flavor models. In Section 8 we present a summary of the results.

2. Long-Baseline Experiments and Their Simulation

Here we focus on four experimental setups: two of them are already running, T2K [57] and NOνA [58], and two have had their construction approved: DUNE [59] and T2HK [60]. Their sensitivity to the two least known parameters of the leptonic sector, the CP violation phase δCP and the atmospheric mixing angle θ23, makes them ideal to probe correlations among the mixing angles. As shown in [54], they can be used to shrink the parameter space of predictive models. A short description of each experiment can be found below and in Table 1.

(1) T2K. The Tokai to Kamioka (T2K) experiment [57, 61] uses Super-Kamiokande [62] as the far detector for the J-PARC neutrino beam, which consists of an off-axis (by an angle of 2.5°) predominantly muon-neutrino flux with energy around 0.6 GeV. The Super-Kamiokande detector is a 22.5 kt fiducial-mass water tank located 295 km from the J-PARC facility. It detects neutrinos through the Cherenkov radiation emitted by charged particles created in neutrino interactions. There is also a near detector (ND280); thus the shape of the neutrino flux is well known, and the total normalization errors for signal and background are at the few-percent level. T2K is already running, and its current results, which can be found in [63], correspond to roughly 10% of the approved exposure of 7.8 × 10^21 POT, split between neutrino and antineutrino modes. There are also plans for extending the exposure to 20 × 10^21 POT.

(2) NOνA. The NuMI Off-axis νe Appearance (NOνA) experiment [58, 64, 65] is an off-axis experiment (by an angle of about 0.8°) that uses a neutrino beam from the Main Injector of Fermilab's beamline (NuMI). This beam consists mostly of muon neutrinos with energy around 2 GeV, traveling 810 km until arriving at the 14 kt liquid-scintillator far detector placed at Ash River, Minnesota. The far and near detectors are highly active tracking calorimeters segmented into PVC cells and can give a good estimate of the total signal and background, with total normalization errors of 5% and 10%, respectively. The planned exposure consists of 3.6 × 10^21 POT, which can be achieved in 6 years of running time, working 3 years in neutrino mode and 3 years in antineutrino mode. NOνA is already running; current results can be found in [66, 67].

(3) DUNE. The Deep Underground Neutrino Experiment (DUNE) [59, 68–71] is a next-generation on-axis long-baseline experiment also hosted by Fermilab. Its flux will be generated at the LBNF neutrino beamline and aimed at a 40 kt liquid argon time projection chamber (LArTPC) located 1300 km away from the neutrino source, at the Sanford Underground Research Facility (SURF). The beam consists mostly of muon neutrinos with energy around 2.5 GeV, and the experiment expects a total exposure corresponding to 3.5 years of running in neutrino mode plus 3.5 years in antineutrino mode. The near and far detectors are projected to obtain a total signal (background) normalization uncertainty of 4% (10%). The experiment is expected to start taking data around 2026.

(4) T2HK. The Tokai to Hyper-Kamiokande (T2HK) experiment [60, 72–75] is an upgrade of the successful T2K experiment at J-PARC. It uses the same beam as its predecessor T2K, an off-axis beam from the J-PARC facility 295 km away from its new far detector: two water Cherenkov tanks with 190 kt of fiducial mass each. The expected total exposure is to be delivered within 2.5 years of neutrino mode and 7.5 years of antineutrino mode in order to obtain a similar number of events of both neutrino types. The new design includes improvements in the detector systems and particle identification that are still in development. For simplicity, we take a capability similar to that of the T2K experiment and will assume a 5% (10%) signal (background) normalization error. The first data taking is expected to start with one tank in 2026 and the second tank in 2032.

In order to perform simulations of a neutrino experiment, the experimental collaborations use Monte Carlo methods, implemented in several event generators like GENIE [76], FLUKA [77], and many others; see the PDG [78] for a review. Such techniques require enormous computational power and detector knowledge, as they rely on the simulation of each individual neutrino interaction and of how its products evolve inside the detector. A simpler, but faster, simulation can be accomplished by using a semianalytic calculation of the event-rate integral [79],

N_i = N ∫ dE ∫ dE′ φ(E) P_αβ(E) σ(E) R(E, E′),

where N_i is the number of detected neutrinos with reconstructed energy E′ between the bin edges E_i and E_{i+1} (the E′ integration runs over bin i), N is a normalization constant, φ(E) describes the flux of neutrinos arriving at the detector, P_αβ(E) is the oscillation probability, and σ(E) is the cross section of the detection reaction.

The function R(E, E′), also known as the migration matrix, describes how the detector interprets a neutrino with true energy E as being detected with reconstructed energy E′, and summarizes the effect of the Monte Carlo simulation of the detector in a single function. A perfect neutrino detector is described by a delta function, R(E, E′) = δ(E − E′), while a more realistic simulation can use a Gaussian function,

R(E, E′) = (1/(√(2π) σ_E)) exp[−(E − E′)² / (2σ_E²)],

where σ_E parametrizes the error in the neutrino energy reconstruction, or a migration matrix provided by the experimental collaboration.
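As an illustration, the event-rate integral with a Gaussian resolution function can be sketched in a few lines of Python. Every ingredient below (flux shape, cross section, resolution width, normalization, and oscillation parameters) is an invented toy stand-in, not any experiment's actual input:

```python
import numpy as np

# Toy semianalytic event-rate calculation in the spirit described above.
def flux(E):
    # beam spectrum peaked near 2.5 GeV, arbitrary normalization (toy)
    return np.exp(-0.5 * ((E - 2.5) / 0.5) ** 2)

def prob(E, L=1300.0, dm31=2.5e-3, s22th=0.085):
    # two-flavor-like appearance probability; 1.27 dm2[eV^2] L[km] / E[GeV]
    return s22th * np.sin(1.27 * dm31 * L / E) ** 2

def xsec(E):
    # toy cross section rising linearly with energy
    return E

def migration(E, Erec, sigmaE=0.15):
    # Gaussian resolution function R(E, E') with fixed width sigmaE
    return np.exp(-0.5 * ((E - Erec) / sigmaE) ** 2) / (sigmaE * np.sqrt(2 * np.pi))

def event_rates(bin_edges, norm=100.0, nE=600):
    """N_i = norm * integral over bin i of dE' integral dE flux*P*sigma*R."""
    E = np.linspace(0.5, 6.0, nE)          # true-energy grid
    dE = E[1] - E[0]
    integrand = flux(E) * prob(E) * xsec(E)
    rates = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        Erec = np.linspace(lo, hi, 50)     # reconstructed energies in bin i
        dErec = Erec[1] - Erec[0]
        R = migration(E[:, None], Erec[None, :])           # (nE, nErec)
        inner = (integrand[:, None] * R).sum(axis=0) * dE  # integral over E
        rates.append(norm * inner.sum() * dErec)           # integral over E'
    return np.array(rates)

edges = np.linspace(1.0, 5.0, 9)  # eight 0.5 GeV reconstructed-energy bins
print(np.round(event_rates(edges), 1))
```

Replacing the toy functions with tabulated fluxes, cross sections, and a collaboration-provided migration matrix recovers the structure used by tools like GLoBES.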

The publicly available software GLoBES [79, 80] follows this approach and is commonly used to perform numerical simulations of neutrino experiments. There is also another tool, the NuPro package [81], which will be publicly released soon. All the simulations in this manuscript are performed using GLoBES.

3. Statistical Analysis and Probing Models: A Brief Discussion

We are interested in a rule to distinguish between two neutrino oscillation models that can modify the spectrum of detected neutrinos in a long-baseline neutrino experiment. From the experimental point of view, one may apply a statistical analysis to quantitatively decide between two (or more) distinct hypotheses given a set of data points .

Each model M_i defines a probability distribution function (p.d.f.) L_i(x|θ_i), where the test statistic depends on the real data points x and the model parameters θ_i. The best fit of a model is defined as the set of values θ̂_i of the model parameters that maximizes the p.d.f.: L_i(x|θ̂_i) = max L_i(x|θ_i). Thus, one can reject model M_1 in favor of model M_2 at some confidence level β if L_1(x|θ̂_1)/L_2(x|θ̂_2) < c_β, where c_β is a constant that depends on the probability test, the number of parameters, and the confidence level β.

From the theoretical point of view, the real data points have not yet been measured; this means that, in order to find the expected experimental sensitivity, we need to produce pseudo-data points by adding an extra assumption about which model generates the yet-to-be-measured data points. Consequently, there are various ways of obtaining sensitivity curves; each of them is described in Table 2.

Although one can always generate the pseudo-data points using any desired model at any point in its parameter space, the usual approach is to assume that the data points are generated by the standard three-neutrino oscillation (Standard-3ν) model with parameters given by current best fit values. We will use this approach in this work. Current best fit values are described in Table 3 and were taken from [82].

3.1. Frequentist Analysis

The chi-square test [78, 83, 84] is the most common statistical analysis chosen to test the compatibility between the experimental data and the expected outcome of a given neutrino experiment. It is based on the construction of a Gaussian chi-squared estimator χ² = −2 ln L, so that maximizing the likelihood is equivalent to minimizing χ²; the best fit values are obtained by the set of parameter values that globally minimizes the function χ². For long-baseline neutrino oscillation experiments the chi-square function can be divided into three terms,

χ² = χ²_stat + χ²_sys + χ²_prior,

where χ²_stat in the simplest case reduces to the Poissonian Pearson statistic,

χ²_stat = Σ_i [O_i − (1 + a) S_i − (1 + b) B_i]² / O_i.

Here O_i is the number of observed neutrinos in bin i; it represents the pseudo-data points generated by a given model. S_i (B_i) is the signal (background) event number expected by the model under test; it depends on the model parameters, while a (b) shifts the signal (background) normalization. χ²_sys comprises the experimental uncertainties and systematics; for the normalization errors above it is given by

χ²_sys = (a/σ_a)² + (b/σ_b)².

Here σ_a (σ_b) is the total normalization error on the signal (background) flux. Finally, χ²_prior contains all the prior information one wishes to include on the model parameters. In this work we will assume χ²_prior = 0 unless stated otherwise.
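A minimal numerical sketch of this pulled chi-square, with invented bin counts (not any experiment's data) and the normalization pulls minimized by brute force:

```python
import numpy as np

# Toy inputs: observed counts, predicted signal/background, norm. errors.
O = np.array([52.0, 80.0, 61.0, 30.0])   # observed (pseudo-data) per bin
S = np.array([50.0, 75.0, 60.0, 28.0])   # predicted signal of model under test
B = np.array([5.0, 6.0, 4.0, 3.0])       # predicted background
sigma_a, sigma_b = 0.05, 0.10            # signal / background norm. errors

def chi2(a, b):
    # statistical term with normalization pulls, plus the systematics penalty
    T = (1.0 + a) * S + (1.0 + b) * B
    return np.sum((O - T) ** 2 / O) + (a / sigma_a) ** 2 + (b / sigma_b) ** 2

# brute-force minimization over the nuisance parameters a, b
a_grid = np.linspace(-0.15, 0.15, 301)
b_grid = np.linspace(-0.30, 0.30, 301)
vals = np.array([[chi2(a, b) for b in b_grid] for a in a_grid])
print("chi^2 minimized over pulls:", round(float(vals.min()), 3))
```

In practice the minimization over nuisance parameters is done with a proper optimizer; the grid scan above only illustrates the structure of the statistic.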

The exponential nature of the chi-squared estimator makes it straightforward to find the confidence levels for the model parameters. It suffices to define the function

Δχ²(θ) = χ²_{M2}(θ) − χ²_{M1,min},

where χ²_{M1,min} is the chi-squared function assuming model M1, calculated at its best fit, and χ²_{M2}(θ) is the chi-squared function assuming model M2, minimized over all the desired free parameters. Thus, the confidence levels are obtained by finding the solutions of Δχ²(θ) = c_β, where θ are all the fixed parameters of model M2 and c_β are the constants that define the probability cuts, depending on the number of parameters in θ and on the confidence probability. For 1σ intervals and one parameter, c_β = 1.
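The constants c_β follow from the quantiles of the chi-squared distribution; for example, with SciPy, for the usual 1σ, 90%, and 3σ probabilities and one or two parameters:

```python
from scipy.stats import chi2

# Delta chi^2 cuts c_beta from the chi-squared quantile (inverse CDF).
for ndof in (1, 2):
    for cl in (0.6827, 0.90, 0.9973):
        print(f"ndof={ndof}  CL={cl:.4f}  c={chi2.ppf(cl, ndof):.3f}")
```

For one parameter this reproduces the familiar cuts Δχ² = 1 (1σ) and Δχ² ≈ 9 (3σ).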

Notice that Δχ² is in fact a function of both the parameters one assumes to generate the pseudo-data points, which we call True Values and denote as θ_true, and the parameters of the model we wish to test, which we call Test Values and denote as θ_test.

4. Measurement of Oscillation Parameters in Long-Baseline Experiments

The main goal of long-baseline experiments is to measure with high precision the two least known oscillation parameters, the CP phase δCP and the atmospheric mixing angle θ23, through the measurement of the survival and transition probabilities of the neutrinos and antineutrinos from the beamline. Many authors have studied the power of long-baseline experiments to obtain the neutrino mixing parameters [85–93]. In particular, only the νμ → νe transition is sensitive to δCP; to first order in matter effects it is described by the probability function

P(νμ → νe) ≈ sin²2θ13 sin²θ23 [sin²((1−A)Δ)/(1−A)²] + α² cos²θ23 sin²2θ12 [sin²(AΔ)/A²] + α J cos(Δ + δCP) [sin(AΔ)/A] [sin((1−A)Δ)/(1−A)],

where Δ = Δm²₃₁L/4E, A = 2√2 G_F n_e E/Δm²₃₁, α = Δm²₂₁/Δm²₃₁, and J = cosθ13 sin2θ12 sin2θ13 sin2θ23. G_F is the Fermi constant and n_e is the electron density in the medium. E is the neutrino energy and L is the baseline of the experiment, and they are chosen to obey Δ ≈ π/2 in order to enhance the effect of the CP phase. The antineutrino probability is obtained by changing δCP → −δCP and A → −A. Thus, the difference between neutrino and antineutrino rates comes from matter effects and the CP phase. It turns out that T2HK is the most sensitive to δCP, as it has larger statistics and a smaller matter effect, and can reach an 8σ difference between CP conservation and maximal CP violation [73], in contrast with DUNE's 5σ [68]. In Figure 1 we plot the expected allowed regions of δCP versus θ23 for each experiment. We assume the true values of the parameters as those given in Table 3. The black region is the current 90% CL region and the black points are the best fit points. T2HK is the most sensitive experiment in reconstructing both parameters, followed by DUNE. NOνA and T2K will be the first experiments to measure a difference between matter and antimatter in the leptonic sector but cannot measure the CP phase with more than 3σ significance. Notice that the experiments cannot discover the correct octant of θ23; that is, they cannot tell if θ23 > 45° (Higher Octant) or θ23 < 45° (Lower Octant) unless they are supplemented by an external prior. This effect is independent of the value of θ23, as can be observed in Figure 2(a), where we plot the reconstruction of θ23 given a fixed true value of θ23 for each experiment.
The black line corresponds to the current best fit and the gray area is the 1σ region. The pattern of the region shows that, given any true value of θ23, there is an allowed region in the correct octant and another in the wrong octant. Nevertheless, the octant can be obtained if one incorporates a prior on the θ13 angle [94–98], and future prospects on the measurement of θ13 by reactor experiments will allow both DUNE and T2HK to measure the octant if the atmospheric angle is not too close to maximal mixing [99].
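The first-order appearance probability discussed above can be sketched in Python to reproduce the qualitative neutrino/antineutrino asymmetry. Conventions and signs vary between references, and the matter-potential normalization below is a rough toy value (resonance near 11 GeV for crust-like density), so this is illustrative only:

```python
import numpy as np

# First-order nu_mu -> nu_e appearance probability, in the spirit of the
# standard expansion in matter and in alpha.  Default mixing parameters
# are typical global-fit-like values; all conventions here are toy choices.
def p_mue(E, L, delta_cp, antinu=False,
          s2th12=0.306, s2th13=0.0217, s2th23=0.441,
          dm21=7.5e-5, dm31=2.52e-3):
    th12, th13, th23 = (np.arcsin(np.sqrt(s)) for s in (s2th12, s2th13, s2th23))
    alpha = dm21 / dm31
    Delta = 1.267 * dm31 * L / E      # dm2 [eV^2], L [km], E [GeV]
    A = E / 11.0                      # toy matter term: A = 1 near E ~ 11 GeV
    if antinu:                        # antineutrinos: flip delta_cp and A
        delta_cp, A = -delta_cp, -A
    f = np.sin((1 - A) * Delta) / (1 - A)
    g = np.sin(A * Delta) / A
    J = np.cos(th13) * np.sin(2 * th12) * np.sin(2 * th13) * np.sin(2 * th23)
    return (np.sin(2 * th13) ** 2 * np.sin(th23) ** 2 * f ** 2
            + alpha ** 2 * np.cos(th23) ** 2 * np.sin(2 * th12) ** 2 * g ** 2
            + alpha * J * f * g * np.cos(Delta + delta_cp))

# DUNE-like baseline: CP-conserving vs maximally CP-violating delta_CP
for d in (0.0, np.pi / 2, -np.pi / 2):
    print(f"delta={d:+.2f}  P(nu)={p_mue(2.5, 1300.0, d):.4f}")
```

Scanning delta_cp at fixed E and L shows the CP-phase dependence that the appearance channel exploits; the antinu flag exhibits the matter-induced neutrino/antineutrino difference.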

For completeness, we show in Figure 2(b) the reconstruction of δCP given a fixed true value of δCP. The black line represents the current best fit and the gray area shows the 1σ region. We do not show the plots for NOνA or T2K, as they cannot reconstruct the CP phase at 3σ. The sensitivity is slightly worse around maximal CP violation, δCP = ±90°, but in general it does not change much when one varies δCP.

5. δCP–θ23 Correlation and Probing Models

In spite of their relatively low energies (a few GeV at most), neutrino experiments can be a tool to probe high-energy physics. Many neutrino mass models predict relations such as neutrino mass sum rules [41, 100–106], which can be probed in neutrinoless double beta decay [107], and relations among the neutrino mixing parameters; to name but a few examples we cite [42–44, 108]. They can be put to the test by a scan of the parameter space, much like what is done at the LHC in searches for new physics. Thus, inspired by the precision power of future long-baseline neutrino experiments, it was shown in [54] that a sharp correlation predicted between the atmospheric angle and the CP phase can be used to put stringent bounds on the parameters of such models.

In general, a predictive neutrino mass model is constructed by imposing a symmetry on the Lagrangian and can be parametrized by a set of free parameters x_1, …, x_n, which can be translated into the usual neutrino mixing parameters through the neutrino mass matrix; that is,

m_ν = m_ν(x_1, …, x_n)  ⟹  θ_ij = θ_ij(x_1, …, x_n),  δCP = δCP(x_1, …, x_n).

Because of the symmetry of the Lagrangian, not all possible mass matrices can be generated, and the free parameters may not span the entire space of the mixing parameters θ23 and δCP. Thus, in principle, it is possible to probe or even exclude a model if the real best fit falls into a region that the model cannot predict. As an example, in Figure 3 we plot the allowed parameter space of two discrete-symmetry-based models, the Warped Flavor Symmetry (WFS) model [45] (a) and the Revamped Babu-Ma-Valle (BMV) model [109] (b). The black curves represent the currently unconstrained (Standard-3ν) 90% CL regions for the neutrino parameters and the black point shows the best fit value, while the blue regions represent the allowed parameter space of the two models. Notice that even at the 3σ level each model can only accommodate a much smaller region than the unconstrained one. This is a reflection of the symmetries imposed upon those models by construction: in the WFS model a maximal CP phase implies maximal atmospheric mixing, and the smaller the CP violation is, the farther the atmospheric angle is from maximal, while in the BMV model a maximal CP phase implies a lower-octant atmospheric mixing, and it cannot fit a maximal atmospheric angle.
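Schematically, probing such a model amounts to mapping its free parameters onto the (θ23, δCP) plane and asking whether the experimental best fit can be reached. A toy sketch (the one-parameter correlation below is invented for illustration; it is not the WFS or BMV prediction):

```python
import numpy as np

# Hypothetical one-parameter model: x in [0, 1] fixes both theta23 and
# delta_CP, tracing a curve in the (theta23, delta) plane.
x = np.linspace(0.0, 1.0, 1001)                        # model free parameter
theta23 = np.degrees(np.pi / 4 + 0.15 * (x - 0.5))     # predicted angle [deg]
delta = np.degrees(np.pi / 2 * (1.0 - np.abs(2 * x - 1.0)))  # predicted CP phase

# A best-fit point far from the predicted curve disfavors the model.
fit = (47.0, -90.0)                                    # toy (theta23, delta) fit
dist = np.min(np.hypot(theta23 - fit[0], delta - fit[1]))
print(f"minimum distance of best fit to model curve: {dist:.1f} deg")
```

A real analysis replaces the Euclidean distance with the Δχ² of the experiment minimized along the model curve, which is what produces the exclusion regions discussed below.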

By using this approach, a full scan of the parameter space was performed for those two models, in [55] for the WFS model and in [56] for the Revamped model.

We show in Figure 4 an updated version of their results. The colored regions represent regions of the parameter space in which the model cannot be excluded with more than 3σ by the DUNE (red) and T2HK (cyan) experiments; both T2K and NOνA cannot probe the CP phase at more than 3σ and thus cannot exclude the models on their own.

This means that if future long-baseline experiments measure a specific combination of δCP and θ23 as the best fit that does not fall into the colored regions, they may be able to exclude the model. Therefore, these kinds of analyses serve as guidelines to decide which models can or cannot be tested given the future results of DUNE and T2HK, and they are worth performing for any model that contains predictive correlations between the CP phase and the atmospheric mixing, like [42–44, 110] and many others. It is also worth mentioning that the combination of long-baseline measurements with reactor experiments can greatly improve the sensitivity of the analysis.

6. θ13 and the Atmospheric Octant

The analysis in the last section can be extended to include another type of correlation, which arises in models that try to explain the smallness of the reactor angle θ13. A general approach common in many models [46–53] imposes a given symmetry on the mass matrix that predicts θ13 = 0, which is later spontaneously broken to give a small correction to the reactor angle. It turns out that, in order to generate a nonzero θ13, one automatically generates corrections to the other mixing angles.

This can be easily observed by considering a toy model that predicts the tri-bimaximal mixing matrix:

U_TBM = [  √(2/3)   1/√3    0
          −1/√6    1/√3   −1/√2
          −1/√6    1/√3    1/√2 ].

Any consistent small correction to the mixing matrix should maintain its unitary character. Particularly, we can set a correction in the 13 plane via the matrix

U_13(ε) = [  cos ε   0   sin ε
             0       1   0
            −sin ε   0   cos ε ].

Notice that U_13(ε) U_13(ε)† = 1. If we change the mixing matrix to U = U_TBM U_13(ε) (notice that a correction in the 12 plane cannot produce a nonzero θ13), then sin θ13 = √(2/3) sin ε. The general case can be described by an initial mixing matrix U_0 that is later corrected by a rotation matrix U_corr: U = U_0 U_corr. All the possible combinations of corrections from tri-bimaximal, bimaximal, and democratic mixing were considered in [111]. Particularly, one can investigate a general correlation of θ13 with the nonmaximality of the atmospheric angle,

sin²θ23 = 1/2 + γ(ε) sin θ13,

where γ(ε) is a function of the correction ε. Long-baseline experiments alone are not very sensitive to changes in the reactor angle; nevertheless, it was shown in [112] that it is possible to use such a correlation to probe the parameter space of these models by combining long-baseline and reactor experiments.
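The toy construction above can be checked numerically. The sketch below builds U = U_TBM U_13(ε), verifies that unitarity is preserved, and recovers both sin θ13 = √(2/3) sin ε and the induced shift of the atmospheric angle away from maximal:

```python
import numpy as np

# Tri-bimaximal mixing matrix and a 13-plane correction, as in the text.
U_TBM = np.array([[np.sqrt(2 / 3), 1 / np.sqrt(3), 0.0],
                  [-1 / np.sqrt(6), 1 / np.sqrt(3), -1 / np.sqrt(2)],
                  [-1 / np.sqrt(6), 1 / np.sqrt(3), 1 / np.sqrt(2)]])

def U13(eps):
    c, s = np.cos(eps), np.sin(eps)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def angles(eps):
    U = U_TBM @ U13(eps)
    s13 = abs(U[0, 2])                     # sin(theta13) = |Ue3|
    s23sq = U[1, 2] ** 2 / (1.0 - s13 ** 2)  # sin^2(theta23) = |Umu3|^2/(1-|Ue3|^2)
    return s13, s23sq

eps = 0.2                                  # illustrative correction size
U = U_TBM @ U13(eps)
assert np.allclose(U @ U.T, np.eye(3))     # correction preserves unitarity

s13, s23sq = angles(eps)
print(f"sin(theta13) = {s13:.3f}, sin^2(theta23) = {s23sq:.3f}")
# small-eps correlation for this correction: sin^2(theta23) ~ 1/2 + s13/sqrt(2)
print(f"1/2 + s13/sqrt(2) = {0.5 + s13 / np.sqrt(2):.3f}")
```

For this particular correction the induced correlation coefficient is γ ≈ 1/√2; other initial matrices and correction planes give different γ, which is what the model-independent parametrization below exploits.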

This can be accomplished in a model-independent approach by series expanding the correlation:

sin²θ23 = s_0 + γ sin θ13 + O(sin²θ13).

This encompasses both the uncorrelated (Standard-3ν) case, if one sets γ = 0 and assumes s_0 as a free parameter, and the small-correction case, by setting s_0 = 1/2 and treating γ as the model's prediction. In Table 4 we present many models that contain this kind of correlation and their possible values of the parameters s_0 and γ.

In Figure 5 we update the potential exclusion regions where models of the form sin²θ23 = 1/2 + γ sin θ13 can be excluded for each value of γ at 3σ by DUNE + reactors and T2HK + reactors. The true value of θ13 is set to the central value of Table 3 and its error is taken at the precision expected from future reactor measurements. The colored regions represent the regions that cannot be excluded with more than 3σ. There we can see that models with sufficiently strong correlations (large |γ|) or sufficiently weak ones (small |γ|) can be excluded for any set of atmospheric angles.

The general case for any γ is presented in Figure 6(a) for DUNE and in Figure 6(b) for T2HK, for three true values of sin²θ23: 0.43 (green), 0.5 (cyan), and 0.6 (red). The region shrinks greatly as the true value of the atmospheric angle moves away from maximal mixing, sin²θ23 = 1/2.

7. Going Beyond Flavor Models

Although flavor symmetry models are very common in the literature, mixing-angle correlations are by no means exclusive to this class. Since long-baseline experiments are sensitive to the least known leptonic parameters, the possibility of using such correlations has been studied not only in long-baseline but also in other neutrino experiments. The most common class is that of high-energy models containing Nonstandard Interactions [1–14]; in particular, any model that produces a nonstandard 4-point Fermi interaction between electrons and neutrinos can, in principle, be probed by experiments with matter interactions, as can special mixing-matrix ansätze such as the Golden Ratio and other symmetries [41, 47, 91, 111, 113–118]. Moreover, one can find testable assumptions on neutrino mass sum rules [104–106, 119] and generalized CP symmetry schemes [120–125]. General classes of models such as grand unified theories (GUTs) and large extra dimensions (LED) were studied in [126–128]. Cosmology can also provide ways of testing predictive neutrino mass models through leptogenesis [129] and even baryogenesis [130].

8. Summary

The state of the art in long-baseline neutrino oscillation experiments is represented by T2(H)K, NOνA, and DUNE. They will be capable of reaching very good precision on the reactor and atmospheric mixing angles and will measure for the first time the CP violation phase. This will create an opportunity to put to the test a plethora of neutrino mass models that predict values of and correlations among the parameters of the PMNS matrix [54–56, 110, 131, 132].

Here we briefly discussed the fitting approach that quantifies the ability of long-baseline experiments to exclude predictive high-energy models. Two types of correlations can be used. The δCP–θ23 correlation is found in many models containing a variety of symmetries [42–45]; nevertheless, each model on the market may contain a different correlation, and most models still need to be analyzed. On the other hand, the θ13–θ23 correlation can only be probed by combining long-baseline with reactor experiments, as the former are not sensitive enough to θ13 variations. However, one can take a model-independent approach [112] that covers most models that try to explain the smallness of the θ13 angle through a spontaneous symmetry breaking [46–53]. We present a set of Figures 4, 5, and 6 containing the potential exclusion regions of each model analyzed here, which can be used as a benchmark when the future experiments start to run.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this article.


Acknowledgments

Pedro Pasquini was supported by FAPESP Grants 2014/05133-1, 2015/16809-9, and 2014/19164-6 and FAEPEX Grant no. 2391/17 and, also, by the APS-SBF collaboration scholarship.