Abstract

The discovery of neutrino masses through the observation of oscillations boosted the importance of neutrinoless double beta decay (0νββ). In this paper, we review the main features of this process, underlining its key role from both the experimental and theoretical point of view. In particular, we contextualize the 0νββ in the panorama of lepton number violating processes, also assessing some possible particle physics mechanisms mediating the process. Since the existence of the 0νββ is correlated with neutrino masses, we also review the state of the art of the theoretical understanding of neutrino masses. In the final part, the status of current experiments is presented and the prospects for the future hunt for 0νββ are discussed. Also, experimental data coming from cosmological surveys are considered and their impact on the 0νββ expectations is examined.

1. Introduction

In 1937, almost ten years after Paul Dirac’s “The quantum theory of the electron” [1, 2], Majorana proposed a new way to represent fermions in a relativistic quantum field theory [3] and remarked that this could be especially useful for neutral particles. A single Majorana quantum field characterizes the situation in which particles and antiparticles coincide, as it happens for the photon. Racah stressed that such a field could fully describe massive neutrinos, noting that the theory by Majorana leads to physical predictions essentially different from those coming from the Dirac theory [4]. Two years later, Furry [5] studied within this scenario a new process similar to the “double beta disintegration,” introduced by Goeppert-Mayer in 1935 [6]. It is the double beta decay without neutrino emission, or neutrinoless double beta decay (0νββ). This process assumes a simple form; namely,

$(A, Z) \to (A, Z + 2) + 2e^{-}.$

The Feynman diagram of the process, written in terms of the particles we know today and of massive Majorana neutrinos, is given in Figure 1.

The main and evident feature of the 0νββ transition is the explicit violation of the number of leptons and, more precisely, the creation of a pair of electrons. The discovery of 0νββ would therefore demonstrate that lepton number is not a symmetry of nature. This, in turn, would support the exciting theoretical picture that leptons played a part in the creation of the matter-antimatter asymmetry in the Universe.

In the attempt to investigate the nature of the process, various other theoretical possibilities were considered, beginning by postulating new superweak interactions [7, 8]. However, the general interest has always remained focused on the neutrino mass mechanism. In fact, this scenario is supported by two important facts:

(1) On the theoretical side, the triumph of the Standard Model (SM) of electroweak interactions in the 1970s [9–11] led to formulating the discussion of new physics signals using the language of effective operators, suppressed by powers of the new physics mass scale. There is only one operator that is suppressed by just one power of the new mass scale and violates the global symmetries of the SM or, more precisely, the lepton number: it is the one that gives rise to Majorana neutrino masses [12] (see also [13–16]).

(2) On the experimental side, some anomalies in neutrino physics, which emerged throughout 30 years, found their natural explanation in terms of oscillations of massive neutrinos [17]. This explanation was confirmed by several experiments (see [18, 19] for reviews). Thus, although oscillation phenomena are not sensitive to the Majorana nature of neutrinos [20], the concept of neutrino mass has changed its status in physics, from the one of hypothesis to the one of fact. This, of course, strengthened the case for light massive neutrinos to play a major role in the 0νββ transition.

For these reasons, besides being an interesting nuclear process, the 0νββ is also a key tool for studying neutrinos, probing whether their nature is the one of Majorana particles and providing us with precious information on the neutrino mass scale and ordering. Even though the predictions of the 0νββ lifetime still suffer from numerous uncertainties, great progress in assessing the expectations for this process has been and is being made. This will be discussed later in this review.

About the Present Review. In recent years, several review papers concerning neutrinoless double beta decay have been written. They certainly witness the vivid interest of the scientific community in this topic. Each work emphasizes one or more relevant aspects such as the experimental part [21–25], the nuclear physics [26, 27], the connection with neutrino masses [28, 29], and other particle physics mechanisms [30–33]. The present work is not an exception. We mostly focus on the first three aspects. This choice is motivated by our intention to follow the theoretical ideas that describe the most plausible expectations for the experiments. In particular, after a general theoretical introduction (Sections 2 and 3), we examine the present knowledge on neutrino masses in Section 4 and the status of expectations from nuclear physics in Section 5. Then we review the experimental situation (Section 6) and emphasize the link between neutrinoless double beta decay and cosmology (Section 7).

A more peculiar aspect of this review is the effort to follow the historical arguments, without worrying too much about covering once more well-known material or about presenting an exhaustive coverage of the huge recent literature on the subject. Another specific characteristic is the way the information on the neutrino Majorana mass is dealt with. In order to pass from this quantity to the (potentially measurable) decay rate, we need quantitative information on the neutrino masses and on the matrix elements of the transition, which in turn requires the description of the nuclear wave functions and of the operators that are implied. Therefore, our approach is to consider the entire available information on neutrino masses and, in particular, the one coming from cosmology. We argue that the recent progress (especially that coming from the Planck satellite data [34]) plays a central role in the present discussion. On the other hand, the matrix elements have to be calculated (rather than measured) and are thus subject to uncertainties which are difficult to assess reliably. Moreover, the adopted methods of calculation do not precisely reproduce other measurable quantities (single beta decay, two-neutrino double beta decay, etc.). We thus prefer to adopt a cautious/conservative assessment of the theoretical ranges of these matrix elements.

We would like to warn the reader that other attitudes in the discussion are surely possible, and this is indeed the case for some of the mentioned review works. Using less stringent limits from cosmology and disregarding the uncertainties from nuclear physics are equivalent to assuming the most favorable situation for the experiments. This could be considered beneficial for the people involved in the experimental search for the neutrinoless double beta decay. However, we prefer to adhere to a more problematic view in the present work, simply because we think that it more closely reflects the present status of facts. Considering the numerous experiments involved in the field, we deem that an updated discussion on these two issues has now become quite urgent. This will help us to assess and appreciate better the progress expected in the near future, concerning the cosmological measurements of neutrino masses and perhaps also the theoretical calculations of the relevant nuclear matrix elements.

2. The Total Lepton Number

No elementary process where the number of leptons or the number of hadrons varies has been observed yet. This suggests the hypothesis that the lepton number $L$ and the baryon number $B$ are subject to conservation laws. However, we do not have any deep justification of why these laws should be exact. In fact, it is possible to suspect that their validity is just approximate or circumstantial, since it is related to the range of energies that we can explore in laboratories. (Notice also that the fact that neutral leptons (i.e., neutrinos or antineutrinos) are very difficult to observe restricts the experimental possibilities to test the total lepton number.)

In this section, we discuss the status of the investigations on the total lepton number in the SM and in a number of minimal extensions, focusing on theoretical considerations. In particular, we introduce the possibility that neutrinos are endowed with Majorana mass and consider a few possible manifestations of lepton number violating phenomena. The case of the 0νββ will instead be addressed in the rest of this work.

2.1. B and L Symmetries in the SM

The SM in its minimal formulation has various global symmetries, including $B$ and $L$, which are called “accidental.” This is due to the specific particle content of the model and to the hypothesis of renormalizability. Some combinations of these symmetries, like, for example, “$B - L$,” are conserved also nonperturbatively. This is sufficient to forbid the 0νββ transition completely in the SM. In other words, a hypothetical evidence for such a transition would directly point to physics beyond the SM. At the same time, the minimal formulation of the SM implies that neutrinos are massless, and this contradicts the experimental findings. Therefore, the question of how to modify the SM arises, and this in turn poses the related burning question concerning the nature of neutrino masses.

2.2. Majorana Neutrinos

In 1937, Majorana proposed a theory of massive and “real” fermions [3]. This theory contains fewer fields than the one used by Dirac for the description of the electron [1, 2] and, in this sense, it is simpler. Following the formalism introduced in 1933 by Fermi when describing the β decay [36], the condition of reality for a quantized fermionic field can be written as

$\chi = \chi^{c} \equiv C\,\bar{\chi}^{T},$

where $C$ is the charge conjugation matrix, while $\bar{\chi} = \chi^{\dagger}\gamma^{0}$ is the Dirac conjugate of the field. In particular, Majorana advocated a specific choice of the Dirac $\gamma$-matrices, such that the condition becomes simply $\chi = \chi^{*}$, which simplifies various equations. The free particle Lagrangian density formally coincides with the usual one:

$\mathcal{L} = \bar{\chi}\,\left(i\slashed{\partial} - m\right)\chi.$

Following Majorana’s notations, the decomposition of the quantized field into oscillators is

$\chi(x) = \sum_{\lambda = \pm} \int d^{3}p \left[ a_{p\lambda}\, u_{p\lambda}\, e^{-ipx} + a^{\dagger}_{p\lambda}\, C\bar{u}^{T}_{p\lambda}\, e^{ipx} \right],$

where $\lambda = \pm$ is the relative orientation between the spin and the momentum (helicity). We adopt the normalization for the wave functions, $u^{\dagger}_{p\lambda} u_{p\lambda'} = \delta_{\lambda\lambda'}$, and for the oscillators, $\{a_{p\lambda}, a^{\dagger}_{p'\lambda'}\} = \delta_{\lambda\lambda'}\,\delta^{3}(p - p')$. For any value of the momentum, there are 2 spin (or helicity) states, $\lambda = \pm$. Figure 2 illustrates the comparison between the particle content of both a Dirac and a Majorana field in the case $\vec{p} = 0$ (rest frame).

Evidently, a Majorana neutrino is incompatible with any $U(1)$ transformation $\chi \to e^{i\alpha}\chi$, for example, the one associated with the electric charge or the weak hypercharge (i.e., $U(1)_{Y}$, however broken in the vacuum). In general, the lepton number $L$ will be violated by the presence of a Majorana mass.

In the SM, the neutrino field appears only in the combination $\nu_{L} \equiv P_{L}\,\nu$, where $P_{L} = (1 - \gamma_{5})/2$ is the so-called chiral projector (Table 1). It is then possible to implement the hypothesis of Majorana in the most direct way by defining the real field

$\chi = \nu_{L} + \left(\nu_{L}\right)^{c}.$

In fact, we can conversely obtain the SM field by a projection:

$\nu_{L} = P_{L}\,\chi.$

2.3. Ultrarelativistic Limit and Massive Neutrinos

The discovery that parity is a violated symmetry in weak interactions [37, 38] was soon followed by the understanding that the charged current (which contains the neutrino field) always includes the left chiral projector [39–41] (see Sections 2.2 and 3.1).

It is interesting to note the following implication. Within the hypothesis that neutrinos are massless, the Dirac equation becomes equivalent to two Weyl equations [42] corresponding to the Hamiltonian functions

$H = \mp\,\vec{\sigma}\cdot\vec{p},$

where $\sigma_{i}$ are the three Pauli matrices and the two signs apply to the neutral leptons that, thanks to the interaction, produce charged leptons of charge $\mp 1$, respectively. In other words, we can define these states as neutrinos and antineutrinos, respectively. Moreover, by looking at (9), one can see that the energy eigenstates are also helicity eigenstates. More precisely, the spin of the neutrino (antineutrino) is antiparallel (parallel) to its momentum. See Figure 3 for illustration.

The one-to-one connection between chirality and helicity holds only in the ultrarelativistic limit, when the mass of the neutrinos is negligible. This is typically the case that applies for detectable neutrinos, since the weak interaction cross sections are bigger at larger energies. However, these remarks do not imply in any way that neutrinos are massless. On the contrary, we know that neutrinos are massive.

A consequence of the chiral nature of weak interactions is that if we assume that neutrinos have the type of mass introduced by Dirac, we have a pair of states that are sterile under weak interactions in the ultrarelativistic limit. Conversely, the fact that only the left chiral state is needed can be considered a motivation in favor of the hypothesis of Majorana. In fact, this hypothesis does not require the introduction of the right chiral state, as the Dirac hypothesis instead does. Most importantly, it should be noticed that in the case of Majorana mass it is not possible to define the difference between a neutrino and an antineutrino in a Lorentz invariant way.

2.4. Right-Handed Neutrinos and Unified Groups

The similarity between leptons and quarks is perceivable already within the SM. The connection is even deeper within the so-called Grand Unified Theories (GUTs), that is, gauge theories with a single gauge coupling at a certain high energy scale. The standard prototypes are $SU(5)$ [47] and $SO(10)$ [48, 49]. GUTs undergo a series of symmetry-breaking stages at lower energies, eventually reproducing the SM. They lead to predictions on the couplings of the model and suggest the existence of new particles, even if theoretical uncertainties make it difficult to obtain reliable predictions. The possibilities to test these theories are limited, and major manifestations could be violations of $B$ and $L$.

The matter content of GUTs is particularly relevant to the discussion. In fact, the organization of each family of the SM suggests the question whether right-handed (RH) neutrinos exist along with the other 7 RH particles (Figure 4). This question is answered affirmatively in some extensions of the SM. For example, this is true for gauge groups that also include a $U(1)_{B-L}$ factor, on top of the usual $U(1)_{Y}$ factor. In the $SO(10)$ gauge group, which belongs to this class of models, each family of matter includes the 15 SM particles plus 1 RH neutrino.

It should be noted that RH neutrinos do not participate in SM interactions and can therefore be endowed with a Majorana mass $M$, still respecting the SM gauge symmetries. However, they do participate in the new interactions, and, more importantly for the discussion, they can mix with the ordinary neutrinos via the Dirac mass terms, $m_{D}$. Therefore, in the presence of RH neutrinos, the SM Lagrangian (after spontaneous symmetry breaking) will include the terms

$\delta\mathcal{L} = -\,\bar{\nu}_{L}\, m_{D}\, N \;-\; \frac{1}{2}\,\bar{N}^{c}\, M\, N \;+\; \text{h.c.},$

where $m_{D} = Y\,\langle H \rangle$ and $N^{c} = C\bar{N}^{T}$. It is easy to understand that, at least generically, this framework implies that the lepton number is broken.

Let us assume the existence of RH neutrinos, either embedded in a unified group or not, and let us suppose that they are heavy (this happens, e.g., if the scale of the new gauge bosons is large and the couplings of the RH neutrinos to the scalar bosons implementing spontaneous symmetry breaking of the new gauge group are not small). In this case, upon integrating away the heavy neutrinos from the theory, the light neutrinos will receive Majorana mass, with size inversely proportional to the mass of the RH ones, $m_{\nu} \simeq m_{D}^{2}/M$ [13–16]. This is the celebrated Type I Seesaw Model. In other words, the hypothesis of heavy RH neutrinos allows us to account for the observed small mass of the neutrinos. Unfortunately, we cannot predict the size of the light neutrino mass precisely, unless we know both $m_{D}$ and $M$.
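To get a feeling for the scales involved, here is a rough numerical sketch of the seesaw scaling (our illustration; the input values are assumptions of typical electroweak and GUT-like orders of magnitude, not quantities given in the text):

# Type I Seesaw estimate: m_nu ~ m_D^2 / M (all input values are illustrative).
m_D = 100e9        # Dirac mass in eV, of the order of the electroweak scale (~100 GeV)
M = 1e24           # RH neutrino Majorana mass in eV (~10^15 GeV, a GUT-like scale)
m_nu = m_D ** 2 / M
print(f"m_nu ~ {m_nu} eV")   # ~0.01 eV, the ballpark of the observed mass splittings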

In principle, RH neutrinos could also be quite light. An extreme possibility is that some of them have masses of the order of eV or less and give rise to new flavor oscillations observable in terrestrial laboratories [50–52]. This could help to address some experimental anomalies [53, 54]. However, it has been known for a long time [55, 56] that the presence of eV neutrinos would also imply large effects in cosmology, both in the number of relativistic species and in the value of the neutrino mass. These effects are not in agreement with the existing information from cosmology (see Section 4.3) and, for this reason, we will not investigate this hypothesis further (we refer the interested reader to the various discussions on the impact of eV neutrinos on the 0νββ; see, e.g., [57–59]).

In view of the evidence for neutrino masses, theories like $SO(10)$ are particularly appealing, since they offer a natural explanation of light Majorana neutrino masses. However, a complete theory able to link in a convincing way fermion masses (including those of neutrinos) and to provide us with reliable predictions of new phenomena, such as the 0νββ, does not exist yet. Despite the fact that many attempts were made in the past, it seems that this enterprise is still in its initial stages.

2.5. Leptogenesis

Although particles and antiparticles have the same importance in our understanding of particle physics, we know that the Universe contains mostly baryons rather than antibaryons (the lepton number in the Universe is probed much less precisely; while we know that cosmic neutrinos and antineutrinos are abundant, it is not easy to measure their asymmetry which, according to standard cosmology, should be very small; however, we expect to have the same number of electrons and protons to guarantee the overall charge neutrality). In 1967, Sakharov proposed a set of necessary conditions to generate the cosmic baryon asymmetry [60]. This has been the beginning of many theoretical attempts to “explain” these observations in terms of new physics.

In the SM, although $B$ and $L$ are not conserved separately at the nonperturbative level [61–63], the observed value of the Higgs mass is too large for the electroweak phase transition to be strongly first order, so that the SM alone cannot account for the observed baryon asymmetry [64, 65]. New violations of the global $B$ or $L$ symmetries are needed.

An attractive theoretical possibility is that RH neutrinos not only extend the SM by endowing neutrinos with Majorana mass, but also produce a certain amount of leptonic asymmetry in the Universe. This is subsequently converted into a baryonic asymmetry thanks to $B + L$ violating effects, which are built into the SM. It is the so-called Leptogenesis mechanism, and it can be wittily described by asking the following question: do we all descend from neutrinos? The initial proposal of Leptogenesis dates back to the 1980s [66], and there is a large consensus that this type of idea is viable and attractive. Subsequent investigators showed that the number of alternative theoretical possibilities is very large and, in particular, that there are other possible sources of $L$ violation besides RH neutrinos. Conversely, the number of testable possibilities is quite limited [67].

We believe that it is important to stay aware of the possibility of explaining the baryon number excess through Leptogenesis theories. However, at the same time, one should not overestimate the heuristic power of this theoretical scheme, at least within the presently available information.

2.6. Neutrino Nature and Cosmic Neutrino Background

The Big Bang theory predicts that the present Universe is left with a residual population of ~56 nonrelativistic neutrinos and antineutrinos per cm³ and per species. It constitutes a Cosmic Neutrino Background (CνB). Due to their very low energy, (9) does not hold for these neutrinos. This happens because at least two species of neutrinos are nonrelativistic. The detection of this CνB could therefore allow understanding which hypothesis (Majorana or Dirac) applies to the neutrino description.

Let us assume having a target of 100 g of ³H. Electron neutrinos can be detected through the reaction [69, 70]

$\nu_{e} + {}^{3}\text{H} \to {}^{3}\text{He} + e^{-}.$

In the standard assumption of a homogeneous Fermi-Dirac distribution of the CνB, we expect ~8 events per year if neutrinos are Majorana particles and about half if the Dirac hypothesis applies [71]. Indeed, in the former case, the states with positive helicity (by definition, antineutrinos) will act just as neutrinos, since they are almost at rest. Instead, in the latter case, they will remain antineutrinos and thus they will not react.
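A back-of-the-envelope version of this estimate can be written in a few lines (a sketch under stated assumptions: the capture cross section value is the one commonly quoted in the literature on tritium-based CνB detection, and the factor 2 encodes the Majorana case discussed above):

# Order-of-magnitude rate of CnuB capture on 100 g of tritium.
N_A = 6.022e23                  # Avogadro's number
n_nu = 56.0                     # relic neutrinos per cm^3 per species
sigma_v = 3.8e-45 * 3.0e10      # (cross section x velocity) in cm^3/s, assumed value
N_T = 100.0 / 3.016 * N_A       # tritium atoms in 100 g
rate = 2.0 * n_nu * sigma_v * N_T   # factor 2: both helicities act as neutrinos (Majorana)
print(f"~{rate * 3.156e7:.0f} events per year")   # ~8/yr; about half in the Dirac case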

It can be noticed that the signal rate is not prohibitively small, but the major difficulty consists in attaining an energy resolution sufficient to keep the background from ordinary beta decay at a manageable level. We will not discuss further the feasibility of such an experiment and refer to [70, 71] for more details.

3. Particle Physics Mechanisms for 0νββ

In this section, we focus on one of the most appealing lepton number violating processes, the 0νββ. The exchange of light Majorana neutrinos is up to now the most appealing mechanism to eventually explain the 0νββ. Some reasons justifying this statement were already mentioned, but here a more elaborate discussion is proposed. In particular, we review the basic aspects of the light neutrino exchange mechanism for 0νββ and compare it to other ones. Moreover, the possibilities of inferring the size of neutrino masses from a hypothetical observation of 0νββ and of constraining (or proving the correctness of) some alternative mechanisms with searches at accelerators are also discussed.

3.1. The Neutrino Exchange Mechanism

The definition of a key quantity for the description of the neutrino exchange mechanism needs to be introduced. It is the propagator of virtual Majorana neutrinos. Due to the reality condition, (3) can lead to new types of propagators that do not exist within the Dirac theory. In fact, in this case, we can use the antisymmetry of the charge conjugation matrix and get

$\langle 0 |\, T\!\left[\chi(x)\,\chi^{T}(y)\right] | 0 \rangle = -\,S(x - y)\, C,$

where $S(x - y) = \langle 0 |\, T[\chi(x)\,\bar{\chi}(y)]\, | 0 \rangle$ denotes the usual propagator, and

$S(x) = \int \frac{d^{4}p}{(2\pi)^{4}}\; e^{-ipx}\; \frac{\slashed{p} + m}{p^{2} - m^{2} + i\varepsilon}.$

In the low energy limit (relevant to decay processes) the interaction of neutrinos is well described by the current-current four-fermion interaction, corresponding to the Hamiltonian density

$\mathcal{H} = \frac{G_{F}}{\sqrt{2}}\, J^{\mu}\, J_{\mu}^{\dagger} + \text{h.c.},$

where $G_{F}$ is the Fermi coupling, and we introduced the current $J_{\mu}$, that decreases the charge of the system (its conjugate, $J_{\mu}^{\dagger}$, does the contrary). In particular, the leptonic current

$j_{\mu} = \bar{e}\,\gamma_{\mu}\left(1 - \gamma_{5}\right)\nu_{e}$

defines the ordinary neutrino with “flavor” $e$. In order to implement the Majorana hypothesis, one can use (7) and introduce the field $\chi = \nu_{eL} + (\nu_{eL})^{c}$. Nothing changes in the interactions if one substitutes the field $\nu_{e}$ with the corresponding field $\chi$, since the chiral projector selects only the first piece, $P_{L}\chi = \nu_{eL}$.

Let us assume that the field $\chi$ is a mass eigenstate. A contribution to the 0νββ transition arises at the second order of the Fermi interaction. Let us begin from the operator

$\left[\bar{e}\,\gamma^{\mu} P_{L}\,\chi(x)\right]\left[\bar{e}\,\gamma^{\nu} P_{L}\,\chi(y)\right].$

By contracting the neutrino fields, the leptonic part of this operator becomes

$\bar{e}(x)\,\gamma^{\mu} P_{L}\;\langle 0 |\, T\!\left[\chi(x)\,\chi^{T}(y)\right] | 0 \rangle\; P_{L}^{T}\,\gamma^{\nu\,T}\,\bar{e}^{T}(y),$

while the ordinary propagator, sandwiched between two chiral projectors, reduces to

$P_{L}\;\frac{\slashed{p} + m}{p^{2} - m^{2}}\; P_{L} = \frac{m}{p^{2} - m^{2}}\; P_{L},$

since $P_{L}\,\slashed{p}\,P_{L} = 0$: the amplitude is thus proportional to the Majorana mass. The momentum $p$ represents the virtuality of the neutrino, whose value is connected to the momenta of the final state electrons and to those of the intermediate virtual nucleons. In particular, since the latter are confined in the nucleus, the typical 3-momenta are of the order of the inverse of the nucleonic size, namely,

$|\vec{p}\,| \sim \frac{\hbar}{r_{n}} \approx 100\ \text{MeV},$

whereas the energy ($p^{0}$) is small. The comparison of this scale with the one of the neutrino mass identifies and separates “light” from “heavy” neutrinos for what concerns the 0νββ.

The most interesting mechanism for 0νββ is the one that sees light neutrinos as mediators. It is the one originally considered in [5] and it will be discussed in great detail in the subsequent sections. In the rest of this section, instead, we examine various alternative possibilities.

We have some hints, mostly of theoretical nature, that the light neutrinos might have Majorana mass. However, the main reason for the hypothesis that the 0νββ receives its main contribution from light Majorana neutrinos is the fact that experiments point out the existence of 3 light massive neutrinos.

3.2. Alternative Mechanisms to the Light Neutrino Exchange
3.2.1. Historical Proposals

A few years after the understanding of the $K^{0}\bar{K}^{0}$ oscillation [97–99], which led Pontecorvo to conjecture that also neutrino oscillations could exist [17], alternative theoretical mechanisms for the 0νββ other than the neutrino exchange were first advocated. In 1959, Feinberg and Goldhaber [7] proposed the addition of the following term in the effective Lagrangian density:

$\delta\mathcal{L} \sim g\,\frac{G_{F}^{2}}{m_{e}}\,\left(\bar{p}\,n\right)\left(\bar{p}\,n\right)\left(\bar{e}\,e^{c}\right) + \text{h.c.},$

where $m_{e}$ is the electron mass and $g$ an unspecified dimensionless coupling. Similarly, after the hypothesis of superweak interactions in weak decays [100, 101], the importance for the 0νββ of operators like the one of (20) was stressed by Pontecorvo [8]. He also emphasized that the size and the origin of these operators could be quite independent from the neutrino masses.

3.2.2. Higher Dimensional Operators

The SM offers a very convenient language to order the interesting operators leading to violation of $B$ and $L$. It is possible to consider effective (nonrenormalizable) operators that respect the gauge symmetry but that violate $B$ and/or $L$ [12, 102]. Here, we consider a few representative cases (a more complete list can be found in [103, 104]), corresponding to the following terms of the Lagrangian and Hamiltonian densities:

$\delta\mathcal{L} = \frac{\left(L H\right)^{2}}{\Lambda_{5}}, \qquad \delta\mathcal{H} = \frac{Q\,Q\,Q\,L}{\Lambda_{6}^{2}}, \qquad \delta\mathcal{H} = \frac{\bar{u}\,d\,\bar{u}\,d\,\bar{e}\,e^{c}}{\Lambda_{9}^{5}}.$

The matter fields (fermions) in the equation are written in the standard notation of Table 1; $H$ is the Higgs field, while the constraints on the mass scales are of the order of $\Lambda_{5} \gtrsim 10^{11}$ TeV, $\Lambda_{6} \gtrsim 10^{12}$ TeV, and $\Lambda_{9} \gtrsim$ a few TeV. In particular:

(i) the first (dimension-5) operator generates Majorana neutrino masses, and the bound on $\Lambda_{5}$ derives from neutrino masses $m_{\nu} \lesssim 0.1$ eV (see the numerical sketch below);

(ii) the dimension-6 operator leads to proton decay and this implies the tight bound on the mass $\Lambda_{6}$;

(iii) the dimension-9 operator contributes to the 0νββ; its role in the transition can be relevant if the scale of lepton number violation is low.
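The scale quoted for the dimension-5 operator can be checked with elementary arithmetic (a sketch; the assumed neutrino mass is a placeholder of the right order):

# Scale of the Weinberg operator from m_nu ~ <H>^2 / Lambda_5.
v = 174e9                 # Higgs vev in eV (~174 GeV)
m_nu = 0.1                # assumed light neutrino mass in eV
Lambda_5 = v ** 2 / m_nu  # in eV
print(f"Lambda_5 ~ {Lambda_5 / 1e12:.1e} TeV")   # ~3e11 TeV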

Summarizing, if one assumes that the scale of new physics is much higher than the electroweak scale, it is natural to expect that the leading mechanism behind the 0νββ is the exchange of light neutrinos endowed with Majorana masses. It is also worthy to note that if light sterile neutrinos, dark matter, or, generally, other light states are added, more operators may be required. A large effective mass scale could also come from small dimensionless couplings $y$; for example, the scale of the dimension-5 operator should more precisely be written as $\Lambda_{5} = M/y^{2}$, so that a large $\Lambda_{5}$ may simply reflect a small $y$ rather than a huge mass $M$.

The number of possible mechanisms that eventually can lead to the above effective operators is also very large. One possible (plausible) origin of the dimension-5 operator is discussed in Section 2.4. However, other cases are possible and the same is true for the other operators.

3.2.3. Heavy Neutrino Exchange

Let us now consider the case of the heavy RH neutrino exchange mechanism. The corresponding operator gives rise to the effective Hamiltonian density (for heavy neutrinos, the propagator of (18) is proportional to $1/M$):

$\mathcal{H}_{\text{eff}} \sim \frac{G_{F}^{2}}{M}\,\left(\bar{u}\,\gamma^{\mu} P_{L}\, d\right)\left(\bar{u}\,\gamma^{\nu} P_{L}\, d\right)\left(\bar{e}\,\gamma_{\mu}\gamma_{\nu} P_{R}\, e^{c}\right).$

It is evident that this is a dimension-9 operator and it has in front a constant with mass dimension $-5$, since $M$ indicates the relevant heavy neutrino mass. It has to be noted that such a definition can be used in an effective formula, but a gauge model requires expressing $1/M$ in terms of the single RH neutrino masses $M_{h}$ and of the mixings $U_{eh}$ between left-handed neutrinos and heavy neutrinos:

$\frac{1}{M} = \sum_{h} \frac{U_{eh}^{2}}{M_{h}}.$

In particular, the mixings are small if $M_{h}$ is large, since $U_{eh} \sim m_{D}/M_{h}$. This suggests a suppression of the above effective operator with the cube of $1/M_{h}$, whereas the light neutrino exchange mechanism leads to a milder suppression, linear in $1/M_{h}$ (if the mixing matrices have specific flavor structures, deviations from this generic expectation are possible). However, it is still possible that RH neutrinos are heavy, but not “very” heavy. Actually, this was the first case to be considered [13], and it could be of interest both for direct searches at accelerators (see Section 3.4) and for the 0νββ. In fact, in this case, the mixing is not strongly suppressed and RH neutrinos can give an important contribution to the transition [105]. However, two remarks on this case are in order. As it was argued in [106], in order to avoid fine tunings on the light neutrino masses, the masses of RH neutrinos should not be much larger than about 10 GeV. Moreover, in the extreme limit in which the mass $M_{h}$ becomes light (i.e., it goes below the value in (19)) and the Type I Seesaw applies, the contribution of RH neutrinos cancels the one of ordinary neutrinos [107, 108].

3.2.4. Models with RH Currents

Another class of models of great interest are those that include RH currents and intermediate bosons. In the language of the SM, the neutrino exchange leads to a core operator

$\mathcal{O} \sim \frac{1}{\Lambda}\,\left(\bar{e}\, P_{R}\, e^{c}\right) W^{-\mu}\, W^{-}_{\mu},$

where $\Lambda$ is a mass scale and $W^{\pm}$ identify the fields of the usual bosons. When we consider virtual bosons, this may eventually lead to the usual case. In principle, it is possible to replace the usual bosons with the corresponding bosons $W_{R}^{\pm}$ of a new gauge group. In this hypothesis, the RH neutrinos play a more important role and are no longer subject to restrictions of the mixing matrix, as those of (23). However, the resulting dimension-9 operator is suppressed by 4 powers of the masses of the new gauge bosons.

Evidently, new RH gauge bosons with masses accessible to direct experimental investigation are of special interest (see Section 3.4). Since to date we do not have any experimental evidence for them, this possibility will not be emphasized in the following discussion. Anyway, investigations at the LHC are currently in progress and the interpretation of some anomalous events (among the collected data) as a hint in favor of relatively light $W_{R}$ bosons has already been proposed [109–111].

3.3. From 0νββ to Majorana Mass: A Remark on “Natural” Gauge Theories

In a well-known work, Schechter and Valle [112] employ the basic concepts of gauge theories to derive some important considerations on the 0νββ. In particular, their argument proceeds as follows:

(1) If the 0νββ is observed, there will be some process (among elementary particles) where the electron, up, and down fields are taken twice. This “black box” process in [112] (Figure 5) effectively resembles the one caused by the dimension-9 operator in (21).

(2) Using $W$ bosons, it is possible to contract the two quark pairs and obtain something like the operator in (24).

(3) Finally, the electron and the $W$-fields can be converted into neutrino fields. A contribution to the Majorana neutrino mass is therefore obtained.

(4) The possibility that this contribution could be canceled by others is ruled out as “unnatural.”

This argument works in the “opposite direction” with respect to the ones presented so far. Instead of starting from the Majorana mass to derive a contribution to the 0νββ, it shows that, from the observation of the 0νββ, it is possible to conclude the existence of the Majorana mass. The result could be seen as an application (or a generalization) of Symanzik’s rule as given by Coleman [113]: if a theory predicts $L$-violation, it will not be possible to screen it in order to forbid only a Majorana neutrino mass.

The size of the neutrino masses is not indicated in the original work, but a straightforward estimation of the diagram of Figure 5 shows that they are so small that they have no physical interest, being of the order of $10^{-24}$ eV [114]. However, what can be seen as a weak point of the argumentation is the concept of “natural theory,” whose definition is not discussed in [112] but simply proclaimed. In fact, it is possible to find examples of models where the 0νββ exists but the Majorana neutrino mass contribution is zero [106], in accordance with the claim of Pontecorvo [8] but clashing with the expectations deriving from that of [112].

We think that the (important) point made in [112] is valid not quite as a theorem (a word that, anyway, the authors never use to indicate their work). We rather believe that it acts mostly as a reminder that any specific theory that includes Majorana neutrino masses will have various specific links between these masses, the 0νββ, and possibly other manifestations of $L$-violation. We see as a risk the fact that, due to the impossibility of avoiding the issue of model dependence, we will end up accepting petitiones principii (i.e., circular arguments).

3.4. Role of the Search at Accelerators

There is the hope that the search for new particles at accelerators might reveal new physics relevant to the interpretation of, or in some way connected to, the 0νββ. This is a statement of wide validity. For example, the minimal supersymmetric extension of the SM is compatible with new $L$-violating phenomena taking place already at the level of renormalizable operators [115]. Also the hypothesized extra dimensions at the TeV scale might be connected to new $L$-violating operators [116]. Or even, models where the smallness of the neutrino mass is explained through loop effects typically imply new particles that are not ultraheavy [117]. Notice that these are just a few among the many theoretical possibilities and, unfortunately, we lack clear principles to select among them.

The recent scientific literature tried at least to exploit some minimality criteria, and the theoretical models that received the largest attention are indeed those discussed above. A specific subclass, named νMSM [118], is found interesting enough to propose a dedicated search at the CERN SPS [118, 119], aiming to find rare decays of the ordinary mesons into heavy neutrinos. Other models that foresee a new layer of gauge symmetry at accessible energies and, more specifically, those connected to left-right gauge symmetry [120] might instead lead to impressive $L$-violation at accelerators [121–123]. These manifestations would be quite analogous to the 0νββ process itself and could be seen as manifestations of operators similar to those in (24).

We would like just to point out that, in both cases, in order to explain the smallness of neutrino masses, very small dimensionless couplings are required. Although this position is completely legitimate, given the present understanding of particle physics, it seems fair to say that this leaves us with some theoretical questions to ponder.

4. Present Knowledge of Neutrino Masses

In this section we discuss $m_{\beta\beta}$, the crucial parameter describing the 0νββ if the process is mediated by light Majorana neutrinos (as defined in Section 3.1). We take into account the present information coming from the oscillation parameters, cosmology, and other data. On the theoretical side, we motivate the interest in a minimal interpretation of the results.

4.1. The $m_{\beta\beta}$ Parameter

We know three light neutrinos. They are identified by their charged current interactions; that is, they have “flavor” $\ell = e, \mu, \tau$. The Majorana mass terms in the Lagrangian density are described by a symmetric matrix:

$\delta\mathcal{L} = -\frac{1}{2} \sum_{\ell,\ell'} M_{\ell\ell'}\;\bar{\nu}^{c}_{\ell}\,\nu_{\ell'} + \text{h.c.}$

The only term that violates the electronic number by two units is $M_{ee}$, and this simple consideration motivates the fact that the amplitude of the 0νββ decay has to be proportional to this parameter, while the width has to be proportional to its squared modulus. We can diagonalize the neutrino mass matrix by means of a unitary matrix $U$,

$M = U^{*}\,\text{diag}\left(m_{1}, m_{2}, m_{3}\right) U^{\dagger},$

where the neutrino masses $m_{i}$ are real and nonnegative. Thus, we can define

$m_{\beta\beta} \equiv \left|M_{ee}\right| = \left|\,\sum_{i=1}^{3} U_{ei}^{2}\, m_{i}\right|,$

where the index $i$ runs over the 3 light neutrinos with given mass. This parameter is often called “effective Majorana mass” (it can be thought of as the “electron neutrino mass” that rules the 0νββ transition, but keeping in mind that it is different from the “electron neutrino mass” that rules the β decay transition).

The previous intuitive argument in favor of this definition is corroborated by calculating the Feynman diagram of Figure 1. Firstly, it has to be noted that the electronic neutrino is not a mass eigenstate in general. Then, substituting (26) into (25), we see that we go from the flavor basis to the mass basis by setting

$\nu_{\ell} = \sum_{i} U_{\ell i}\,\nu_{i}.$

Therefore, in the neutrino propagators of Figure 1, we will refer to the masses $m_{i}$ (that in our case are “light”) while, in the two leptonic vertices, we will have the mixing elements $U_{ei}$. Taking the product of these factors, we get the expression given in (27).

It should be noted that the leptonic mixing matrix as introduced above differs from the ordinary one used in neutrino oscillation analyses. Indeed, the latter is given after rotating away the phases of the neutrino fields and observing that oscillations depend only upon the combinations $U_{\ell i}\, U^{*}_{\ell' i}$. This matrix contains only one complex phase which plays a role in oscillations (the “CP-violating phase”). Instead, in the case of 0νββ, the observable is different. It is just $m_{\beta\beta}$. Here, there are new phases that cannot be rotated away and that play a physical role. These are sometimes called “Majorana phases.” Their contribution can be made explicit by rewriting (27) as follows:

$m_{\beta\beta} = \left|\,\sum_{i=1}^{3} \left|U_{ei}\right|^{2} e^{i\alpha_{i}}\, m_{i}\right|.$

We can now identify $U$ of (29) with the mixing matrix used in neutrino oscillation analyses (note that the specific choice and the symbols for these phases may differ among authors).
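For illustration, the definition in (29) translates directly into code (a sketch; the numerical inputs in the example are placeholders of a realistic order, not the fitted values of [35]):

import numpy as np

def m_bb(m, U_e2, alpha):
    """|sum_i |U_ei|^2 exp(i alpha_i) m_i|: masses m_i in eV, U_e2 = |U_ei|^2, alpha_i Majorana phases."""
    return abs(np.sum(np.asarray(U_e2) * np.exp(1j * np.asarray(alpha)) * np.asarray(m)))

# Example: NH-like spectrum with vanishing lightest mass (all numbers illustrative).
m = [0.0, 0.0087, 0.0495]         # eV
U_e2 = [0.67, 0.30, 0.02]         # |U_e1|^2, |U_e2|^2, |U_e3|^2
print(m_bb(m, U_e2, [0.0, np.pi, 0.0]))   # ~0.0016 eV for this choice of phases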

Before proceeding in the discussion, some remarks are in order:

(i) It is possible to adopt a convention for the neutrino mixing matrix such that the 3 mixing elements $U_{ei}$ are real and positive. However, in the most common convention, $U_{e3}$ is defined to be complex.

(ii) Only two Majorana phases play a physical role, the third one just being matter of convention.

(iii) It is not possible even in principle to reconstruct the Majorana mass matrix simply on experimental bases, unless we find another observable which depends on the Majorana phases.

Furthermore, a specific observation on the Type I Seesaw Model is useful. Let us consider the simplest case with only $\nu_{e}$ and one heavy neutrino $N$ that mixes with this state. The Majorana mass matrix is of the form

$\mathcal{M} = \begin{pmatrix} 0 & m_{D} \\ m_{D} & M \end{pmatrix}.$

One should not be misled, concluding that in this case (and, generally, in the Type I Seesaw) $m_{\beta\beta}$ is zero. In fact, as it is well known, the masses of the light neutrinos (in this case, of $\nu_{e}$) arise when one integrates away the heavy neutrino state, getting

$m_{\nu} = \frac{m_{D}^{2}}{M}.$

As discussed in [106], we obtain in this one-flavor case the nonzero contribution

$m_{\beta\beta} \simeq \frac{m_{D}^{2}}{M}\left(1 - \frac{\langle p^{2}\rangle}{M^{2}}\right).$

The second term in the parentheses is the direct contribution of the heavy neutrino (this formula agrees with the naive scaling expected from the heavy neutrino contribution; but in specific three-flavor models it is possible, at least in principle, that heavy neutrinos give a large and even dominating contribution to the decay rate [106]). The quantity $\langle p^{2}\rangle$ depends on the nuclear structure and it is of the order of (100 MeV)², and thus (32) is valid if we assume $M \gg 100$ MeV.

In the above discussion, we have emphasized the three-flavor case. The main reason for this is evidently that we know about the existence of only 3 light neutrinos. It is possible to test this hypothesis by searching for new oscillation phenomena, by testing the universality of the weak leptonic couplings and/or the unitarity of the matrix $U$ in (28), by searching directly at accelerators for new and (not too) light neutrino states, and so forth. However, we believe that it is fair to state that, to date, we have no conclusive experimental evidence or strong theoretical reason to deviate from this minimal theoretical scheme. We will adopt it in the rest of the discussion. In this way, we can take advantage of the precious information that was collected on the neutrino masses to constrain the $m_{\beta\beta}$ parameter and to clarify the various expectations.

4.2. Oscillations

In [35], a complete analysis of the current knowledge of the oscillation parameters and of neutrino masses can be found. Although the absolute neutrino mass scale is still unknown, it has been possible to measure, through oscillation experiments, the squared mass splittings between the three active neutrinos. In Table 2, the parameters relevant to our analysis are reported. The mass splittings are labeled by $\delta m^{2}$ and $\Delta m^{2}$. The former is measured through the observation of solar neutrino oscillations, while the latter comes from atmospheric neutrino data. The definitions of these two parameters are the following:

$\delta m^{2} = m_{2}^{2} - m_{1}^{2}, \qquad \Delta m^{2} = m_{3}^{2} - \frac{m_{1}^{2} + m_{2}^{2}}{2}.$

Practically, $\delta m^{2}$ regards the splitting between $m_{1}$ and $m_{2}$, while $\Delta m^{2}$ refers to the distance between the $m_{3}$ mass and the midpoint of the $m_{1}$ and $m_{2}$ masses.
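In code, the two spectra follow directly from these definitions (a sketch; the splitting values are illustrative central values of the right order, not the entries of Table 2):

import numpy as np

dm2 = 7.5e-5   # delta m^2 in eV^2 (solar), illustrative
Dm2 = 2.4e-3   # |Delta m^2| in eV^2 (atmospheric), illustrative

def masses_NH(m1):
    """Normal Hierarchy: m1 is the lightest state."""
    return m1, np.sqrt(m1**2 + dm2), np.sqrt(m1**2 + dm2 / 2 + Dm2)

def masses_IH(m3):
    """Inverted Hierarchy: m3 is the lightest state."""
    return np.sqrt(m3**2 + Dm2 - dm2 / 2), np.sqrt(m3**2 + Dm2 + dm2 / 2), m3

print(masses_NH(0.0))   # (0, ~0.0087, ~0.0494) eV
print(masses_IH(0.0))   # (~0.0486, ~0.0494, 0) eV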

The sign of $\delta m^{2}$ can be determined by observing matter enhanced oscillations, as explained within the MSW theory [125, 126]. It turns out, after comparing with experimental data, that $\delta m^{2} > 0$ [127]. Unfortunately, the sign of $\Delta m^{2}$ is still unknown and it is not simple to measure it. However, it has been argued (see, e.g., [128]) that, by carefully measuring the oscillation pattern, it could be possible to distinguish between the two possibilities, $\Delta m^{2} > 0$ and $\Delta m^{2} < 0$. This is a very promising perspective in order to solve this ambiguity, which is sometimes called the “mass hierarchy problem.” In fact, standard names for the two mentioned possibilities for the neutrino mass spectra are “Normal Hierarchy” (NH) for $\Delta m^{2} > 0$ and “Inverted Hierarchy” (IH) for $\Delta m^{2} < 0$.

The oscillation data are analyzed in [35] by writing the leptonic (PMNS) mixing matrix in terms of the mixing angles $\theta_{12}$, $\theta_{23}$, and $\theta_{13}$ and of the CP-violating phase $\delta$ according to the (usual) representation

$U = \begin{pmatrix} c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta} \\ -s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta} & s_{23} c_{13} \\ s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta} & -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta} & c_{23} c_{13} \end{pmatrix},$

where $c_{ij} = \cos\theta_{ij}$ and $s_{ij} = \sin\theta_{ij}$. Note the usage of the same phase convention and parameterization of the quark (CKM) mixing matrix even if, of course, the values of the parameters are different. With this convention, it is possible to obtain (29) by defining $\alpha_{1} = 0$, $\alpha_{2} = \xi_{2}$, and $\alpha_{3} = \xi_{3} - 2\delta$, where $\xi_{2,3}$ are the Majorana phases of the neutrino fields.

Table 2 shows the result of the best fit and of the allowed range for the different oscillation parameters. It can be noted that the values are slightly different depending on the mass hierarchy. This comes from the different analysis procedures used during the evaluation, as explained in [35]. Therefore, throughout this work, the two neutrino mass spectra are treated differently from one another, since we used these hierarchy-dependent parameters. The uncertainties are not completely symmetric around the best fit point, but the deviations are quite small, as claimed by the authors themselves in the reference. In particular, the plots in the paper show Gaussian likelihoods for the parameters determining $m_{\beta\beta}$. In order to later propagate the errors, we decided to neglect the asymmetry, which has no relevant effects on the presented results. We computed the maximum between the distances of the best fit values and the borders of the allowed range (fourth column of Table 2) and we assumed that the parameters fluctuate according to a Gaussian distribution around the best fit value, with a standard deviation given by that maximum, as sketched in the snippet below.
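The error propagation just described admits a compact Monte Carlo implementation (a sketch of the procedure; the best fit values and symmetrized errors below are placeholders, to be replaced by the entries of Table 2):

import numpy as np

rng = np.random.default_rng(1)

best = {"dm2": 7.5e-5, "Dm2": 2.4e-3, "sin2_th12": 0.30, "sin2_th13": 0.023}
sigma = {"dm2": 2e-6, "Dm2": 6e-5, "sin2_th12": 0.013, "sin2_th13": 0.002}

# Each parameter fluctuates as a Gaussian around its best fit with the
# symmetrized standard deviation; the samples are then pushed through the
# m_bb formula to obtain the shaded bands of Figure 6.
samples = {k: rng.normal(best[k], sigma[k], size=100_000) for k in best}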

Thanks to the knowledge of the oscillation parameters, it is possible to put a first series of constraints on $m_{\beta\beta}$. However, as already recalled, since the complex phases of the mixing parameters in (29) cannot be probed by oscillations, the allowed region for $m_{\beta\beta}$ is obtained letting them vary freely. The expressions for the resulting extremes (i.e., the maximum and minimum values due to the phase variation) can be found in Appendix A. We adopt the graphical representation of $m_{\beta\beta}$ introduced in [129] and refined in [18, 130]. It consists in plotting $m_{\beta\beta}$ in bilogarithmic scale as a function of the mass of the lightest neutrino, for both the NH and IH cases. The resulting plot is shown in Figure 6(a). The uncertainties on the various parameters are propagated using the procedures described in Appendix B. This results in a wider allowed region, which corresponds to the shaded parts in the picture.
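The extremes over the free Majorana phases have a simple closed form, sketched here under the triangle-inequality logic (our illustration of the kind of expressions collected in Appendix A; the inputs are placeholders):

import numpy as np

def m_bb_range(m, U_e2):
    """Min/max of m_bb when the phases vary freely.
    The three terms t_i = |U_ei|^2 m_i add like vectors in the complex plane:
    the maximum is their sum; the minimum is zero unless one term exceeds
    the sum of the other two."""
    t = np.asarray(U_e2) * np.asarray(m)
    return max(2 * t.max() - t.sum(), 0.0), t.sum()

print(m_bb_range([0.0, 0.0087, 0.0495], [0.67, 0.30, 0.02]))   # (min, max) in eV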

4.2.1. Mass Eigenstates Composition

The standard three-flavor oscillations involve three massive states that, consistently with (28), are given by the following (note that in this case we are in the ultrarelativistic limit; see Section 2.3):

$\nu_{i} = \sum_{\ell = e, \mu, \tau} U^{*}_{\ell i}\,\nu_{\ell}.$

Thus, it is possible to estimate the probability of finding the $\ell$-flavor component of each mass eigenstate $\nu_{i}$. This probability is just the squared modulus of the matrix element $U_{\ell i}$, since the matrix is unitary. The result is graphically shown in Figure 7. As already mentioned, since hierarchy-dependent parameters were used, the flavor composition of the various eigenstates slightly depends on the mass hierarchy. It is worth noting that the results also depend on the possible choices of the CP phase $\delta$, while they do not depend on the eventual Majorana phases. Table 3 reports the calculation for the cases $\delta = 0$ and $\delta = \pi$, using the best fit values of the oscillation parameters (for NH and IH) according to [35].
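The flavor decomposition can be obtained numerically from the parameterization given above (a sketch; the angles are round numbers of the right size rather than the fit of [35]):

import numpy as np

t12, t23, t13, delta = 0.59, 0.79, 0.15, 0.0   # radians, illustrative
c12, s12 = np.cos(t12), np.sin(t12)
c23, s23 = np.cos(t23), np.sin(t23)
c13, s13 = np.cos(t13), np.sin(t13)
ed = np.exp(1j * delta)
U = np.array([
    [c12 * c13,                         s12 * c13,                         s13 / ed],
    [-s12 * c23 - c12 * s23 * s13 * ed,  c12 * c23 - s12 * s23 * s13 * ed,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ed,  -c12 * s23 - s12 * c23 * s13 * ed,  c23 * c13],
])
# Column i gives the flavor content of the mass eigenstate nu_i; each column sums to 1.
print(np.round(np.abs(U) ** 2, 3))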

4.3. Cosmology and Neutrino Masses
4.3.1. The $\Sigma$ Parameter

The three-light-neutrino scenario is consistent with all known facts in particle physics, including the new measurements by Planck [34]. In this assumption, the physical quantity probed by cosmological surveys, $\Sigma$, is the sum of the masses of the three light neutrinos:

$\Sigma = m_{1} + m_{2} + m_{3}.$

Depending on the mass hierarchy, it is possible to express $\Sigma$ as a function of the lightest neutrino mass and of the oscillation mass splittings. In particular, in the case of NH, one gets

$\Sigma_{\text{NH}} = m_{1} + \sqrt{m_{1}^{2} + \delta m^{2}} + \sqrt{m_{1}^{2} + \frac{\delta m^{2}}{2} + \Delta m^{2}},$

while, in the case of IH,

$\Sigma_{\text{IH}} = m_{3} + \sqrt{m_{3}^{2} + \left|\Delta m^{2}\right| - \frac{\delta m^{2}}{2}} + \sqrt{m_{3}^{2} + \left|\Delta m^{2}\right| + \frac{\delta m^{2}}{2}}.$
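The corresponding code is immediate (a sketch with illustrative splittings; the minimal values it prints reproduce the well-known lower limits of about 0.06 eV for NH and 0.10 eV for IH):

import numpy as np

dm2, Dm2 = 7.5e-5, 2.4e-3   # eV^2, illustrative central values

def Sigma_NH(m1):
    return m1 + np.sqrt(m1**2 + dm2) + np.sqrt(m1**2 + dm2 / 2 + Dm2)

def Sigma_IH(m3):
    return m3 + np.sqrt(m3**2 + Dm2 - dm2 / 2) + np.sqrt(m3**2 + Dm2 + dm2 / 2)

print(Sigma_NH(0.0), Sigma_IH(0.0))   # ~0.058 eV and ~0.098 eV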

It can be useful to compute the mass of the lightest neutrino, given a value of $\Sigma$. This can be convenient in order to compute $m_{\beta\beta}$ as a function of $\Sigma$ instead of the lightest neutrino mass (in Appendix C, an approximate (but accurate) alternative method for the numerical calculation needed to make this conversion is given). In this way, $m_{\beta\beta}$ is expressed as a function of a directly observable parameter.
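Since $\Sigma$ is a monotonic function of the lightest mass, the inversion can also be done by simple bisection (our sketch, an alternative to the approximate method of Appendix C; splitting values are again illustrative):

import numpy as np

dm2, Dm2 = 7.5e-5, 2.4e-3   # eV^2, illustrative

def Sigma_NH(m1):
    return m1 + np.sqrt(m1**2 + dm2) + np.sqrt(m1**2 + dm2 / 2 + Dm2)

def lightest_from_Sigma(Sigma, Sigma_of_m=Sigma_NH, m_max=5.0, tol=1e-9):
    """Bisection on the monotonic relation Sigma(m_lightest)."""
    if Sigma < Sigma_of_m(0.0):
        raise ValueError("Sigma below the minimum allowed by oscillations")
    lo, hi = 0.0, m_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Sigma_of_m(mid) < Sigma else (lo, mid)
    return 0.5 * (lo + hi)

print(lightest_from_Sigma(0.1))   # ~0.022 eV for NH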

The close connection between the neutrino mass measurements obtained in the laboratory and those probed by cosmological observations was outlined long ago [131]. Furthermore, the measurements of $\Sigma$ have recently reached important sensitivities, as discussed in Section 7.

In Figure 6(b), an updated version of the plot ($m_{\beta\beta}$ versus $\Sigma$) originally introduced in [132] is shown. Concerning the treatment of the uncertainties, we again use the assumption of Gaussian fluctuations and the prescription reported in Appendix B.

4.3.2. Constraints from Cosmological Surveys

The indications for neutrino masses from cosmology have kept changing for the last 20 years. A comprehensive review on the topic can be found in [137]. In Figure 8, the values for $\Sigma$ given in [133–136] are shown. The scientific literature contains several authoritative claims for a nonzero value of $\Sigma$ but, since they differ from one another, these values cannot all be correct, and this calls for a cautious attitude in the interpretation. Referring to the most recent years, two different positions emerge: on one side, we find claims that cosmology provides us with a hint for nonzero neutrino masses; on the other, we have very tight limits on $\Sigma$.

In the former case, it has been suggested [135, 138] that a total nonzero neutrino mass around 0.3 eV could alleviate some tensions present between cluster number counts (selected both in X-ray and by Sunyaev-Zeldovich effect) and weak lensing data. A sterile neutrino particle with mass in a similar range is sometimes also advocated [139, 140]. However, evidence for nonzero neutrino masses in either the active or sterile sectors seems to be claimed in order to fix the significant tensions between different data sets (cosmic microwave background (CMB) and baryonic acoustic oscillations (BAOs) on one side and weak lensing, cluster number counts, and high values of the Hubble parameter on the other).

In the latter case, the limit on $\Sigma$ is so stringent that it better agrees with the NH spectrum, rather than with the IH one (see the discussion in Section 7.1). (Actually, it has been shown in [141] that the presence in the nuclear medium of $L$-violating four-fermion interactions of neutrinos with quarks from a decaying nucleus could account for an apparent incompatibility between the searches in the laboratory and the cosmological data; in fact, the net effect of these interactions (not present in the latter case) would be the generation of an effective “in-medium” Majorana neutrino mass matrix with a corresponding enhancement of the 0νββ rate.) The tightest experimental limits on $\Sigma$ are usually obtained by combining CMB data with the ones probing smaller scales. In this way, their combination allows a more effective investigation of the neutrino induced suppression of the matter power spectrum, both in scale and in redshift. Quite recently, a very stringent limit, $\Sigma < 120$ meV (95% CL), was set by Palanque-Delabrouille and collaborators [136]. New tight limits were presented after the data release by the Planck Collaboration in 2015 [34]. Some of the most significant results are reported in Table 4. The bounds on $\Sigma$ indicated by these post-Planck studies are quite small, but they are still larger than the final sensitivities expected, especially thanks to the inclusion of other cosmological data sets probing smaller scales (see, e.g., [142, 143] for review works). Therefore, these small values cannot be considered surprising and, conversely, margins of further progress are present.

In our view, this situation should be considered as favorable, since the proponents are forced to carefully examine and discuss all the available hypotheses. In view of this discussion, in Section 7, we consider two possible scenarios and discuss the implications of the cosmological investigations for the 0νββ in both cases.

4.4. Other Nonoscillation Data

For the sake of completeness, we mention two other potential sources of information on neutrino masses. They are

(i) the study of kinematic effects (in particular of supernova neutrinos);

(ii) the investigation of the effect of the mass in single beta decay processes.

The first type of investigation, applied to SN1987A, produced a limit of about 6 eV on the electron antineutrino mass [144, 145]. The perspectives for the future are connected to new detectors, or to the existence of antineutrino pulses in the first instants of a supernova emission. The second approach, instead, is presently limited to about 2 eV [146, 147], even if it has the advantage of being obtained in controlled conditions, that is, in the laboratory. Its future is currently in the hands of new experiments based on a ³H source [148] and on the electron capture of ¹⁶³Ho [149–151], which have the potential to go below the eV in sensitivity.

4.5. Theoretical Understanding

Theorists have not been very successful in anticipating the discoveries on neutrino masses obtained by means of oscillations. The discussion within gauge models clarified that it is possible or even likely to have neutrino masses in gauge models (compare with Section 2.4). However, a large part of the theoretical community focused for a long time on models such as “minimal $SU(5)$,” where the neutrino masses are zero, emphasizing the interest in proton decay searches rather than in neutrino mass searches. On top of that, we had many models that aimed to predict, for example, the correct solar neutrino solution or the size of $\theta_{13}$ before the measurements, but none of them was particularly convincing. More specifically, a lot of attention was given to the “small mixing angle solution” and to the “very small $\theta_{13}$ scenario” that are now excluded by the data.

Moreover, it is not easy to justify the theoretical position where neutrino masses are not considered along with the masses of the other fermions. This remark alone explains the difficulty of the theoretical enterprise that theorists have to face. For the reasons mentioned in Section 2.4, the $SO(10)$ models are quite attractive for a discussion of neutrino masses. However, even considering this specific class of well-motivated Grand Unified groups, it remains difficult to claim that we have a complete and convincing formulation of the theory. In particular, this holds for the arbitrariness in the choice of the representations (especially that of the Higgs bosons), for the large number of unknown parameters (especially in the scalar potential), for the possible role of nonrenormalizable operators, for the uncertainties in the assumption concerning low scale supersymmetry, for the lack of experimental tests, and so forth. Note that, incidentally, preliminary investigations on the size of $m_{\beta\beta}$ in $SO(10)$ did not provide a clear evidence for a significant lower bound [152]. Anyway, even the case of an exactly null effective Majorana mass does not increase the symmetry of the Lagrangian and thus does not forbid the 0νββ, as remarked in [153].

Here, we just consider one specific theoretical scheme, for illustration purposes. This should not be considered a full-fledged theory; rather, it attempts to account for the theoretical uncertainties in the predictions. The hierarchy of the masses and of the mixing angles has suggested the hypothesis that the elements of the Yukawa couplings, and thus of the mass matrices, are subject to some $U(1)$ selection rule. This possibility has been proposed in [154] and, since then, it has become very popular.

Immediately after the first strong evidences of atmospheric neutrino oscillations (1998), specific realizations for neutrinos have been discussed in various works (see [155] for references). These correspond to the neutrino mass matrix

$M = m\; X_{\epsilon}\; C\; X_{\epsilon}, \qquad X_{\epsilon} = \text{diag}\left(\epsilon, 1, 1\right),$

where the flavor structure is dictated by the diagonal matrix $X_{\epsilon}$ that acts only on the electronic flavor and suppresses the matrix elements $M_{e\mu}$, $M_{e\tau}$, and $M_{ee}$ (twice). The dimensionful parameter $m$ (the overall mass scale) is given by $m \approx \sqrt{\Delta m^{2}} \approx 50$ meV. We thus have a matrix of coefficients $C$ with elements that are usually treated as random numbers of the order of 1 in the absence of a theory. Suitable choices of $\epsilon$ suggested values of $\theta_{13}$ and of the ratio $\delta m^{2}/\Delta m^{2}$ in the correct region (before their measurement) [155]. Within these assumptions, the matrix element in which we are interested is

$m_{\beta\beta} = m\;\epsilon^{2}\,\left|C_{ee}\right|.$

Finally, we note that the SM renormalization of the elements of the neutrino mass matrix is multiplicative. The effect of renormalization is therefore particularly small for $M_{ee}$ (see, e.g., equation (17) of [156] and the discussion therein). In other words, the value $m_{\beta\beta} = 0$ (or values close to this one) should be regarded as a stable point of the renormalization flow.

Let us conclude repeating that, anyway, there are many reasons to consider the theoretical expectations with detachment, and the above theoretical scheme is not an exception to this rule. It is very important to keep in mind this fact in order to properly assess the value of the search for the and to proceed accordingly in the investigations.

5. The Role of Nuclear Physics

0νββ is first of all a nuclear process. Therefore, the transition has to be described properly, taking into account the relevant aspects that concern nuclear structure and dynamics. In particular, it is a second-order nuclear weak process and it corresponds to the transition from a nucleus $(A, Z)$ to its isobar $(A, Z+2)$ with the emission of two electrons. In principle, a nucleus can decay via double beta decay as long as the $(A, Z+2)$ nucleus is lighter. However, if the nucleus can also decay by single beta decay to $(A, Z+1)$, the branching ratio for the double beta decay will be very small, and the signal will be too difficult to observe due to the overwhelming background rate from the single beta decay. Therefore, candidate isotopes for detecting the 0νββ are even-even nuclei that, due to the nuclear pairing force, are lighter than the odd-odd $(A, Z+1)$ nucleus, making single beta decay kinematically forbidden (Figure 9). It is worth noting that, since the candidates are even-even nuclei, it follows immediately that their spin is always zero.

The theoretical expression of the half-life of the process in a certain nuclear species can be factorized as

$\left[T_{1/2}^{0\nu}\right]^{-1} = G_{0\nu}\,\left|\mathcal{M}\right|^{2}\,\left|f\left(m_{i}, U_{ei}\right)\right|^{2},$

where $G_{0\nu}$ is the phase space factor (PSF), $\mathcal{M}$ is the nuclear matrix element (NME), and $f$ is a dimensionless function containing the particle physics beyond the SM that could explain the decay through the neutrino masses $m_{i}$ and the mixing matrix elements $U_{ei}$.
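For the light neutrino exchange, where $f = m_{\beta\beta}/m_{e}$ (see below), the factorization can be evaluated numerically (a sketch; the PSF and NME values are placeholders of a realistic order of magnitude, not values from [95] or [96]):

# Half-life from the factorized rate, assuming light neutrino exchange.
G = 1.0e-14      # PSF in 1/yr, placeholder
M = 4.0          # dimensionless NME, placeholder
m_bb = 0.05      # effective Majorana mass in eV
m_e = 0.511e6    # electron mass in eV

T12 = 1.0 / (G * M**2 * (m_bb / m_e) ** 2)
print(f"T_1/2 ~ {T12:.1e} yr")   # ~6e26 yr for these inputs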

In this section, we review the crucial role of nuclear physics in the expectations, predictions, and eventual understanding of the 0νββ, also assessing the present knowledge and uncertainties. We mainly restrict the discussion to the light neutrino exchange as the candidate process mediating the transition, but the mechanism of heavy neutrino exchange is also considered.

In the former case ($m_{i} \ll 100$ MeV, see (19)), the factor $f$ is proportional to $m_{\beta\beta}$:

$f = \frac{m_{\beta\beta}}{m_{e}},$

where the electron mass is taken as a reference value. In the scheme of the heavy neutrino exchange ($M_{h} \gg 100$ MeV), the effective parameter is instead

$f = m_{p}\,\left|\,\sum_{h} \frac{U_{eh}^{2}}{M_{h}}\right|,$

where the proton mass is now used, according to the tradition, as the reference value.

5.1. Recent Developments on the Phase Space Factor Calculations

The first calculations of PSFs date back to the late 1950s [157] and used a simplified description of the wave functions. The improvements in the evaluation of the PSFs are due to more and more accurate descriptions and fewer approximations [158–160].

Recent developments in the numerical evaluation of Dirac wave functions and in the solution of the Thomas-Fermi equation have allowed accurate calculations of the PSFs for both single and double beta decay. The key ingredients are the scattering electron wave functions. The new calculations take into account relativistic corrections, the finite nuclear size, and the effect of the atomic screening on the emitted electrons. The main difference between these calculations and the older ones is of the order of a few percent for light nuclei, about 30% for ¹⁵⁰Nd ($Z = 60$), and a rather large one, 90%, for ²³⁸U ($Z = 92$).

In [95, 161, 162], the most up-to-date calculations of the PSFs for 0νββ can be found. The results obtained in these works are quite similar. Throughout this paper, we use the values from the first reference.

5.2. Models for the NMEs

Let us suppose that the decay proceeds through an $s$-wave. Since we have just two electrons in the final state, we cannot form an angular momentum greater than one. Therefore, usually only matrix elements to $0^{+}$ final states are considered. These can be the ground state, $0^{+}_{1}$, or the first excited $0^{+}$ state, $0^{+}_{2}$. Of course, the starting state is a $0^{+}$ state as well, since the double beta decay is possible only for even-even isobar nuclei.

The calculation of the NMEs for the 0νββ is a difficult task because the ground state and many excited states of open-shell nuclei with complicated nuclear structure have to be considered. The problem is faced by using different approaches and, especially in the last few years, the reliability of the calculations has improved considerably. Here, a list of the main theoretical models is presented. The most relevant features of each of them are highlighted.

(i) Interacting Shell Model (ISM) [164, 165]. In the ISM only a limited number of orbits around the Fermi level is considered, but all the possible correlations within the space are included and the pairing correlations in the valence space are treated exactly. Proton and neutron numbers are conserved and angular momentum is conserved as well. A good spectroscopy for parent and daughter nuclei is achieved.

(ii) Quasiparticle Random Phase Approximation (QRPA) [163, 166]. The QRPA uses a large valence space, but it cannot comprise all the possible configurations. Typically, single particle states in a Woods-Saxon potential are considered. The proton-proton and neutron-neutron pairings are taken into account and treated in the BCS approximation (proton and neutron numbers are not exactly conserved).

(iii) Interacting Boson Model (IBM-2) [96]. In the IBM, the low-lying states of the nucleus are modeled in terms of bosons. The bosons are in either s (J = 0) or d (J = 2) states. Therefore, one is restricted to 0+ and 2+ neutron pairs transferring into two protons. The bosons interact through one- and two-body forces giving rise to bosonic wave functions.

(iv) Projected Hartree-Fock Bogoliubov Method (PHFB) [167]. In the PHFB, the NMEs are calculated using the projected Hartree-Fock-Bogoliubov wave functions, which are eigenvectors of four different parameterizations of a Hamiltonian with pairing plus multipolar effective two-body interaction. In practical applications, the nuclear Hamiltonian is restricted to quadrupole interactions only.

(v) Energy Density Functional Method (EDF) [168]. The EDF is considered to be an improvement with respect to the PHFB. The state-of-the-art density functional methods based on the well-established Gogny D1S functional and a large single particle basis are used.

The most common methods are the ISM, QRPA, and IBM-2. In Figure 10, a comparison among the most recent NME calculations computed with these three models is shown. It can be seen that the disagreement can generally be quantified in a few tens of percent, instead of the factors of 2–4 of the past. This can be considered quite satisfactory. As will be discussed in Section 5.3, the main source of uncertainty in the inference no longer lies in the NME calculations, but in the determination of the quenching of the axial vector coupling constant. For this reason, in the subsequent discussion, we will restrict ourselves to one of the considered models, namely, the IBM-2 [96], without significant loss of generality.

5.3. Theoretical Uncertainties
5.3.1. Generality

Following (42), an experimental limit on the half-life translates into a limit on the effective Majorana mass:

  m_ββ < m_e / (|M_{0ν}| √(G_{0ν} T^{0ν}_{1/2}))  (45)

From the theoretical point of view, in order to constrain m_ββ, the estimation of the uncertainties on both G_{0ν} and M_{0ν} is crucial. Actually, the PSFs can be assumed to be quite well known, the error in their most recent calculations being around 7% [95].

A convenient parametrization for the NMEs is the following [169]:

  M_{0ν} = g_A² [ M_GT − (g_V/g_A)² M_F + M_T ]  (46)

where g_A and g_V are the axial and vector coupling constants of the nucleon, M_GT is the Gamow-Teller (GT) operator matrix element between initial and final states (spin-spin interaction), M_F is the Fermi contribution (spin independent interaction), and M_T is the tensor operator matrix element. The form of (46) emphasizes the role of g_A. Indeed, the combination in square brackets mildly depends on g_A and can be evaluated by modeling the nucleus theoretically. Actually, it is independent of g_A if the same quenching is assumed for both the vector and axial coupling constants, as we do here for definiteness, following [170].
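As a simple illustration of how (45) and (46) combine, the following sketch (with placeholder nuclear inputs, not tabulated values) converts a half-life limit into a bound on m_ββ and makes the g_A² dependence of the bound (i.e., the g_A⁴ dependence of the rate) explicit:

import math

# Placeholder inputs (illustrative only).
T12_limit = 1.0e26   # experimental half-life limit, yr
G0v = 1.0e-14        # PSF, yr^-1
M_red = 3.0          # the bracket of (46), roughly gA-independent (see text)
m_e = 0.511e6        # electron mass, eV

def m_bb_bound_eV(gA):
    M = gA**2 * M_red                       # NME following (46)
    return m_e / (M * math.sqrt(G0v * T12_limit))

for gA in (1.269, 1.0):
    print(gA, round(m_bb_bound_eV(gA), 3))  # 0.106 eV vs 0.170 eV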

5.3.2. Is the Uncertainty Large or Small?

The main sources of uncertainty in the inference on m_ββ are the NMEs. A comparison of the calculations from 1984 to 1998 revealed an uncertainty of more than a factor of 4 [130]. A similar point of view emerges from the investigation of [171], where the results of the various calculations were used to attempt a statistical inference.

An important step forward was made with the first calculations of the NMEs that also estimated the errors; see [172, 173]. These works, based on the QRPA model, assessed a relatively small intrinsic error of ~20%. The validity of these conclusions has been recently supported by the (independent) calculation based on the IBM-2 description of the nuclei [95, 96], which assesses an intrinsic error of 15% on the NME. However, the problem of assessing the uncertainties in the NMEs is far from being solved. Each scheme of calculation can estimate its own uncertainty, but it is still hard to understand the differences in the results among the models (Figure 10) and thus to give an overall error. Notice also that when a process "similar" to the 0νββ is considered (single beta decay, electron capture, and 2νββ) and the calculations are compared with the measured rates, the actual differences are much larger than 20% [170]. This suggests that it is not cautious to assume that the uncertainties on the 0νββ are instead subject to such a level of theoretical control.

Recently, there has been a lively interest in a specific and important source of uncertainty, namely, the value of the axial coupling constant g_A. This has a direct implication for the issue that we are discussing, since any uncertainty on the value of g_A reflects itself in a (larger) uncertainty factor on the value of the matrix element, which scales as g_A². We will examine these arguments in greater detail in the rest of this section.

It is important to appreciate the relevance of these considerations for the experimental searches. If the value of the axial coupling in the nuclear medium is decreased by a factor q, namely, g_A → q g_A, the expected decay rate and therefore the number of signal events S will also decrease, approximately as q⁴. This change can be compensated by increasing the time of data taking or the mass of the experiment. However, the figure of merit S/√B, which quantifies the statistical significance of the measurement, changes only with the square root of the time or of the mass, in the typical case in which there are also background events B. Thus, the same measurement is obtained only after an exposure that is larger by a factor of 1/q⁸; for instance, a 20% decrease of the axial coupling (q = 0.8) costs a factor of about 6 in exposure (see the sketch below). In other words, an effect that could be naively considered small has instead a big impact on the experimental search for the 0νββ.
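The scaling just described is pure arithmetic and is easy to quantify; a minimal sketch:

# Exposure penalty from quenching: signal S ~ gA^4 * M * t, background
# B ~ M * t, so S/sqrt(B) ~ gA^4 * sqrt(M * t) and the exposure must grow
# as 1/q^8 to keep the same significance.
for q in (0.9, 0.8, 0.7):
    print(f"gA reduced by factor q = {q}: exposure grows by {q**-8:.1f}x")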

5.3.3. The Size of the Axial Coupling

It is commonly expected that the value of g_A measured in the weak interactions and decays of free nucleons is "renormalized" in the nuclear medium towards the value g_A ≈ 1 appropriate for quarks [172, 173, 175]. It was argued in [170] that a further modification (reduction) is rather plausible. This is in agreement with what was stated some years before in [176], where the possibility of a "strong quenching" of g_A (i.e., an effective value below 1) is actually favored. The same was also confirmed by a recent study on single beta decay and 2νββ [177]. It has to be noticed that, within the QRPA framework, the dependence of the calculated rates upon g_A is actually milder than quadratic, because the model is calibrated through the experimental 2νββ decay rates using also another parameter, the particle-particle strength g_pp [178].

There could be different causes for the quenching of g_A. It was found that it can be attributed mainly to the following issues [170, 179]:

(i) The limited model space (i.e., the size of the basis of the eigenstates) in which the calculation is done. This problem is by definition model dependent and it was extensively investigated in light nuclei in the 1970s [180–183], when it was argued that the effective coupling is quenched toward g_A ≈ 1. In heavy nuclei, the question of quenching was first discussed in [180]. In this case, the effective coupling was found to be even lower, thus stimulating the statement that a massive renormalization of g_A occurs.

(ii) The contribution of nonnucleonic degrees of freedom. This effect does not depend much on the nuclear model adopted, but rather on the mechanism of coupling to nonnucleonic degrees of freedom. It was extensively investigated theoretically in the 1970s [184–186]. Recently, it has been investigated again within the framework of the chiral Effective Field Theory (EFT) [187]. It turns out that it may depend on the momentum transfer and that it may lead in some cases to an enhancement rather than a quenching.

(iii) The renormalization of the GT operator due to two-body currents. The first calculations for GT transitions and for the 0νββ operator based on the chiral EFT [187] showed the importance of two-body currents for the effective quenching of g_A. This was later confirmed in independent works [188, 189] and, more recently, by the use of a no-core-configuration-interaction formalism within the density functional theory [179].

It is still not clear if the quenching in the two transitions (2νββ and 0νββ) is the same. One argument which suggests that a difference is not unreasonable consists in noting that the 2νββ can occur only through a GT (1+) transition. Instead, the 0νββ can proceed through all the possible intermediate states, so it is possible to argue that the transitions through states with spin parity different from 1+ can be unquenched or even enhanced. Incidentally, it turns out that the dominant multipole in the 0νββ transition is the GT one, thus making the hypothesis that the quenching in 2νββ and 0νββ is the same quite solid. Following [96], we adopt this as a working hypothesis in our discussion, keeping in mind, however, that some indications that the quenching might be different in the 2νββ and 0νββ transitions are present in other models [164, 189].

It would be extremely valuable if these theoretical questions could be answered by some experimental data. It has been argued that the experimental study of nuclear transitions where the nuclear charge is changed by two units while the mass number is left unvaried, in analogy to the ββ decay, could give important information. Despite the fact that the Double Charge Exchange reactions and the ββ processes are mediated by different interactions, some similarities between the two cases are present. These could be exploited to effectively assess the NME for the 0νββ (and, more specifically, the size of the quenching of g_A). In the near future, a new project will be started at the Laboratori Nazionali del Sud (Italy) [190] with the aim of getting some inputs to deepen our theoretical understanding of this nuclear process.

5.3.4. Quenching as a Major Cause of Uncertainty

In view of the above considerations, we think that currently the value of g_A in the nuclear medium cannot be regarded as a quantity that is known reliably. It is rather an important source of uncertainty in the predictions. In a conservative treatment, we should consider at least the following three cases:

  g_A = 1.269 (free nucleon);  g_A = 1 (quark level);  g_A = 1.269 A^{-0.18} (maximal quenching),

where the last formula includes phenomenologically the effect of the mass number A. It represents the worst possible scenario for the 0νββ search. The parametrization as a function of A comes directly from the comparison between the theoretical half-life for 2νββ and its observation in different nuclei, as reported in [170]. From the comparison between the theoretical half-life for the 2νββ process and the experimental value it was possible to extract an effective value for g_A, thus determining its quenching. The assumption that the quenching depends only upon the mass number is rather convenient for a cursory exploration of the potential impact of unaccounted nuclear physics effects on the 0νββ, but most likely it is also an oversimplification of the truth, as suggested by the residual difference between the calculated rates. Surely, it cannot replace an adequate theoretical modeling, which in the light of the following discussion has become rather urgent. Anyway, we stress that this is just a phenomenological description of the quenching, since the specific behavior is different in each nucleus and it somewhat differs from this parametrization [170].
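A small sketch of the three cases, using the phenomenological A-dependence quoted above (the parametrization follows [170]; mass numbers as inputs):

def gA_scenarios(A):
    """The three gA cases considered in the text for mass number A."""
    return {"free nucleon": 1.269,
            "quark level": 1.0,
            "maximal quenching": 1.269 * A**-0.18}

for isotope, A in (("76Ge", 76), ("130Te", 130), ("136Xe", 136)):
    print(isotope, {k: round(v, 3) for k, v in gA_scenarios(A).items()})
# e.g., for 136Xe the maximally quenched value is ~0.52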

The question of the "true value" of g_A is still open and introduces a considerable uncertainty in the inferences concerning massive neutrinos. The implications are discussed in Sections 6.6 and 6.7.

5.4. The Case of Heavy Neutrino Exchange

As already discussed in Section 3, it is possible to attribute the 0νββ decay rate to the same particles that are added to the SM spectrum to explain oscillations, for example, heavy neutrinos. In this context, one can assume that the exchange of neutrinos heavier than about 100 MeV saturates the decay rate, while also reproducing the ordinary neutrino masses. Heavy neutrino masses and mixing angles compatible with the rate of the 0νββ depend on the NMEs of the transition (compare, e.g., [105, 106]). Thus, nuclear physics has an impact also on the limits that are relevant to a direct search for heavy neutrinos with accelerators. Each scheme of nuclear physics calculation can estimate its intrinsic uncertainty. This is usually found to be small in modern computations (about 28% for heavy neutrino exchange [96]). In a conservative treatment, this uncertainty plus the already discussed unknown value of g_A should be taken into account. It has to be noticed that if the 0νββ is due to a point-like (dimension-9) operator, as for heavy neutrino exchange, two nucleons interact at the same point. Therefore, the effect of a hard core repulsion, estimated by modeling the "short-range correlations," plays an important role in the determination of the uncertainties. A significant step forward has been recently made, pushing down this source of theoretical error by about an order of magnitude [96].

The most updated NMEs for the 0νββ via heavy neutrino exchange are evaluated within the frameworks of the IBM-2 [96] and QRPA [174] models. A comparison between these results is shown in Figure 11. It can be seen that the values obtained within the QRPA model are always larger than those obtained with the IBM-2. The difference is quite large for many of the nuclei, 76Ge being a clear example, and it might be due to the different treatment of the intermediate states. Also in this case, we use the NMEs evaluated with the IBM-2 model. This allows us to keep a more conservative approach by getting less stringent limits.

From the experimental point of view, the limits on the 0νββ half-life indicate that the mixings of heavy neutrinos are small. Using the current values for the PSF, the NME, and the sensitivity for the 76Ge isotope [84], we get a stringent bound on the combination Σ_I U_eI² m_p / M_I, where m_p is the proton mass and the heavy neutrino masses M_I are assumed to be at or above the GeV scale.

Figure 12 illustrates the case of a single heavy neutrino mixing with the light ones and mediating the transition. In particular, the plot shows the case of the mixing for 76Ge assuming that a single heavy neutrino dominates the amplitude. The two regimes of heavy and light neutrino exchange are matched as proposed in [191]. The colored bands reflect the different sources of theoretical uncertainty.

As is clear from Figure 12, the bound coming from 0νββ searches is still uncertain. It weakens by one order of magnitude if the axial vector coupling constant is strongly quenched in the nuclear medium.

The sensitivity to heavy neutrinos is therefore weakened and strongly dependent on theoretical nuclear physics uncertainties. For some regions of the parameter space, even the limits obtained more than 15 years ago with accelerators are more restrictive than the current limits coming from the 0νββ search.

6. Experimental Search for the 0νββ

The process described by (1) is actually just one of the forms that the 0νββ can assume. In fact, depending on the relative numbers of protons and neutrons in the nucleus, four different mechanisms are possible:

  0νβ⁻β⁻: (A, Z) → (A, Z + 2) + 2e⁻
  0νβ⁺β⁺: (A, Z) → (A, Z − 2) + 2e⁺
  0νβ⁺EC: e⁻ + (A, Z) → (A, Z − 2) + e⁺
  0νECEC: 2e⁻ + (A, Z) → (A, Z − 2)  (50)

Here, e⁻ (e⁺) indicates the emission of an electron (positron) and EC stands for electron capture (usually a K-shell electron is captured).

The explicit violation of the electronic lepton number by two units appears evident in each of the processes in (50). A large number of experiments has been and is presently involved in the search for these processes, especially the first one.

In this section, we introduce the experimental aspects relevant to the 0νββ searches and we present an overview of the various techniques. We review the status of past and present experiments, highlighting the main features and the sensitivities. The expectations take into account the uncertainties coming from the theoretical side and, in particular, those from nuclear physics. The requirements for future experiments are estimated and, finally, the new constraints from cosmology are used as information complementary to that coming from the 0νββ experiments.

6.1. The Signature

From the experimental point of view, the searches for a 0νββ signal rely on the detection of the two emitted electrons. In fact, since the energy of the recoiling nucleus is negligible, the sum of the kinetic energies of the two electrons is equal to the Q-value of the transition. Therefore, if we consider these as a single body, we expect to observe a monochromatic peak at the Q-value (Figure 13).

Despite this very clear signature, because of the rarity of the process, the detection of the two electrons is complicated by the presence of background events in the same energy region, which can mask the signal. The main contributions to the background come from the environmental radioactivity, the cosmic rays, and the 2νββ itself. In particular, the last contribution has the problematic feature of being unavoidable in the presence of finite energy resolution, since it originates from the same isotope which is expected to undergo 0νββ.

In principle, any event producing an energy deposition similar to that of the decay increases the background level and hence spoils the experimental sensitivity. The capability of discriminating the background events is thus of great importance for this kind of search.

6.2. The Choice of the Isotope

The choice of the best isotope for the 0νββ search is the first issue to deal with. On one side, the background level and the energy resolution need to be optimized. On the other, since the live-time of the experiment cannot exceed some years, the scalability of the technique, that is, the possibility to build a similar experiment with enlarged mass and higher exposure, is also fundamental. This translates into a series of criteria for the choice of the isotope.

(i) High Q-Value (Q_ββ). This requirement is probably the most important, since it directly influences the background. The 2615 keV line of 208Tl, which represents the end-point of the natural gamma radioactivity, constitutes an important limit in terms of background level. Q_ββ should not be lower than ~2.4 MeV (the only exception is 76Ge, due to the extremely powerful detection technique; see Section 6.4). The ideal condition would be a Q_ββ even larger than 3270 keV, the highest energy beta among the 222Rn daughters (238U chain), coming from 214Bi.

(ii) High Isotopic Abundance. This is a fundamental requirement for experiments with sufficiently large mass. With the only exception of 130Te, all the relevant isotopes have a natural isotopic abundance lower than 10%. This practically means that the condition translates into ease of enrichment for the material.

(iii) Compatibility with a Suitable Detection Technique. It has to be possible to integrate the isotope of interest in a working detector. The source can either be separated from the detector or coincide with it. Furthermore, the detector has to be competitive in providing results and has to guarantee the potential for the mass scalability.

This results in a group of "commonly" studied isotopes among all the possible candidate ββ emitters. It includes 48Ca, 76Ge, 82Se, 96Zr, 100Mo, 116Cd, 130Te, 136Xe, and 150Nd. Table 5 reports the Q-value and the isotopic abundance for the mentioned isotopes.

From the theoretical side, referring to (42), one should also try to maximize both the PSF and the NME in order to get stricter bounds on m_ββ for the same sensitivity in terms of half-life. However, as recently discussed in [197], a uniform inverse correlation between the PSF and the square of the NME emerges in all nuclei (Figure 14). This happens to be more a coincidence than something physically motivated and, as a consequence, no isotope is either favored or disfavored in the search for the 0νββ. It turns out that all isotopes have qualitatively the same decay rate per unit mass for any given value of m_ββ.

Recently, another criterion has been becoming more and more relevant: the availability of the isotope itself in view of the next generations of experiments, which will have very large masses. In fact, once the isotope mass for an experiment is of the order of some tons, a nonnegligible fraction of the annual world production of the isotope of interest could be needed. This is, for example, the case of 136Xe, for which the requests from the 0νββ experiments also "compete" with those from newly proposed dark matter experiments. The consequences, a probable price increase and long procurement and storage times for the isotope, need to be taken into account.

6.3. Sensitivity

In the fortunate event of a peak showing up in the energy spectrum, starting from the law of radioactive decay, the decay half-life can be evaluated as

  T^{0ν}_{1/2} = ln 2 · ε N_ββ t / N_peak  (51)

where t is the measuring time, ε is the detection efficiency, N_ββ is the number of decaying nuclei under observation, and N_peak is the number of observed decays in the region of interest. If we assume to know exactly the detector features (i.e., the number of decaying nuclei, the efficiency, and the time of measurement), the uncertainty on T^{0ν}_{1/2} is only due to the statistical fluctuations of the counts:

  δT^{0ν}_{1/2} / T^{0ν}_{1/2} = δN_peak / N_peak  (52)

It seems reasonable to suppose Poisson fluctuations on N_peak. Since the expected number of events is "small," the Poisson distribution differs in a nonnegligible way from the Gaussian. In order to quantify this discrepancy, we consider two small reference values for N_peak. In Table 6 we show the corresponding confidence intervals for the counts, both considering a purely Poisson distribution (with mean equal to N_peak) and a Gaussian one (with mean N_peak and standard deviation √N_peak). Notice that, even if the number of counts is just 5, the Poisson and Gaussian distributions give almost the same relative uncertainties.
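The comparison described above is easy to reproduce; a minimal sketch using equal-tailed intervals (the table in the text may use a different prescription):

import numpy as np
from scipy import stats

# Equal-tailed 68% intervals for small counts: Poisson vs Gaussian.
for mu in (5, 10):
    lo_p, hi_p = stats.poisson.interval(0.68, mu)
    lo_g, hi_g = stats.norm.interval(0.68, loc=mu, scale=np.sqrt(mu))
    print(f"N = {mu}: Poisson [{lo_p:.0f}, {hi_p:.0f}], "
          f"Gaussian [{lo_g:.1f}, {hi_g:.1f}]")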

If no peak is detected, the sensitivity of a given experiment is usually expressed in terms of the "detector factor of merit," F_{0ν} [25]. This can be defined as the process half-life corresponding to the maximum signal that could be hidden by the background fluctuations (at a given statistical CL). To obtain an estimation of F_{0ν} as a function of the experimental parameters, it is sufficient to require that the signal exceeds the standard deviation of the total detected counts in the energy window of interest. At a confidence level of n_σ standard deviations, this means that we can write

  N_peak = n_σ √N_bkg  (53)

where N_bkg is the number of background events and Poisson statistics for the counts is assumed. If one now states that the background counts scale linearly with the mass of the detector (this is reasonable since, a priori, impurities are uniform inside the detector but, of course, this might not always be the case; e.g., if the main source of background is removed with volume fiducialization), from (51) it is easy to find an expression for F_{0ν}:

  F_{0ν} = ln 2 · (x η ε N_A / W) · (1/n_σ) · √(M t / (b ΔE))  (54)

where b is the background level per unit mass, energy, and time, M is the detector mass, ΔE is the FWHM energy resolution, x is the stoichiometric multiplicity of the element containing the ββ candidate, η is the ββ candidate isotopic abundance, N_A is the Avogadro number and, finally, W is the compound molecular mass. Despite its simplicity, (54) has the advantage of emphasizing the role of the essential experimental parameters.

Of particular interest is the case in which the background level is so low that the expected number of background events in the region of interest over the experiment's life is of order of unity:

  M t b ΔE ≈ O(1)  (55)

This is called the "zero background" experimental condition and it is likely the condition that next generation experiments will face. Practically, it means that the goal is a large mass and a long time of data taking, while keeping the product of background level and energy resolution as small as possible.

In this case, N_peak is a constant, (54) is no longer valid, and the sensitivity is given by

  F^{0bkg}_{0ν} = ln 2 · (x η ε N_A / W) · (M t / N_peak)  (56)

The constant N_peak is now the number of observed events in the region of interest.
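A sketch putting (54), (55), and (56) together; the unit handling and the inputs are illustrative assumptions, not the parameters of any real experiment:

import math

N_A = 6.022e23  # Avogadro number, mol^-1

def sensitivity_yr(M_kg, t_yr, b, dE_keV, W_g_mol,
                   x=1.0, eta=1.0, eff=1.0, n_sigma=1.64, N_peak=1.0):
    """F_0v following (54) in the background-limited regime and (56)
    in the zero-background one. b is in counts/(keV kg yr)."""
    nuclei_per_kg = x * eta * eff * N_A * 1e3 / W_g_mol
    n_bkg = b * M_kg * dE_keV * t_yr          # expected background counts
    if n_bkg > 1.0:                           # background-limited, (54)
        return math.log(2) * nuclei_per_kg / n_sigma \
               * math.sqrt(M_kg * t_yr / (b * dE_keV))
    return math.log(2) * nuclei_per_kg * M_kg * t_yr / N_peak  # (56)

print(sensitivity_yr(100, 5, 1e-3, 5, 136))   # background-limited
print(sensitivity_yr(100, 5, 1e-5, 5, 136))   # zero background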

6.4. Experimental Techniques

The experimental approach to the search for the 0νββ consists in the development of a proper detector, able to detect the two emitted electrons and to collect their sum energy spectrum (see Section 6.1); additional information (e.g., the single electron energy or the initial momentum) can sometimes also be provided. The desirable features for such a detector are thus as follows.

(i) Good Energy Resolution. This is a fundamental requirement to identify the sharp 0νββ peak over an almost flat background, as shown in Figure 15, and it is also the only protection against the (intrinsic) background induced by the tail of the 2νββ spectrum. Indeed, it can be shown that the ratio of counts due to 0νββ and those due to 2νββ in the peak region scales approximately as [199]

  R_{0ν/2ν} ∝ (Q_ββ / ΔE)⁶ · T^{2ν}_{1/2} / T^{0ν}_{1/2}  (57)

This expression clearly indicates that a good energy resolution is critical. But it also shows that the minimum required value actually depends on the chosen isotope, given the strong dependence of (57) upon the 2νββ half-life (a numerical sketch follows this list).

(ii) Very Low Background. Of course, experiments have to be located underground in order to be protected from cosmic rays. Moreover, radio-pure materials for the detector and the surrounding parts, as well as proper passive and/or active shielding, are mandatory to protect against environmental radioactivity. The longest-lived natural radioactive contaminants have half-lives of the order of 10⁹–10¹⁰ yr, to be compared with 0νββ lifetimes in excess of 10²⁵ yr.

(iii) Large Isotope Mass. Present experiments have masses of the order of some tens of kg up to a few hundred kg. Tons will be required for experiments aiming to cover the inverted hierarchy region (see Section 6.7).
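As anticipated under item (i), a small sketch of the (Q_ββ/ΔE)⁶ scaling of (57); constants are dropped, so only ratios between detector configurations are meaningful, and all numbers are illustrative:

def r_0v_2v(Q_keV, dE_keV, T2v_yr, T0v_yr):
    """0vbb-to-2vbb count ratio in the peak region, up to a constant."""
    return (Q_keV / dE_keV) ** 6 * T2v_yr / T0v_yr

# The same hypothetical isotope read out at 0.2% and at 10% FWHM:
sharp = r_0v_2v(2500.0, 5.0, 1e21, 1e26)
broad = r_0v_2v(2500.0, 250.0, 1e21, 1e26)
print(sharp / broad)   # (250/5)^6 ~ 1.6e10: resolution matters enormously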

It has to be noted that it is impossible to optimize the listed features simultaneously in a single detector. Therefore, it is up to the experimentalists to choose which one to privilege in order to get the best sensitivity.

The experiments searching for the 0νββ of a certain isotope can be classified into two main categories: detectors based on a calorimetric technique, in which the source is embedded in the detector itself, and detectors using an external source approach, in which source and detector are two separate systems (Figure 16).

6.4.1. Calorimetric Technique

The calorimetric technique has already been implemented in various types of detectors. The main advantages and limitations for this technique can be summarized as follows [25]:

(+) Large source masses are achievable thanks to the intrinsically high efficiency of the method. Experiments with masses up to ~200 kg have already proved to work and ton-scale detectors seem possible.

(+) Very high energy resolution is achievable with the proper type of detector (~0.1% FWHM with Ge diodes and bolometers).

(−) Severe constraints on the detector material (and thus on the isotope that can be investigated) arise from the requirement that the source material be embedded in the structure of the detector. However, this is not the case for some techniques (e.g., for bolometers and loaded liquid scintillators).

(−) The event topology reconstruction is usually difficult, with the exception of liquid or gaseous Xe TPCs. However, the cost is paid in terms of a lower energy resolution.

Among the most successful examples of detectors using the calorimetric technique, we find the following:

(i) Ge Diodes. The large volume, high purity, and high energy resolution achievable make this kind of detector suitable for the 0νββ search, despite the low Q_ββ of 76Ge.

(ii) Bolometers. Macrocalorimeters with masses close to 1 kg and very good energy resolution (close to that of Ge diodes) are now available for many compounds including ββ emitters. The most significant case is the search for the 0νββ of 130Te with TeO2 bolometers.

(iii) Xe Liquid and Gaseous TPCs. The lower energy resolution is "compensated" by the capability of reconstructing the event topology.

(iv) Liquid Scintillators Loaded with the ββ Isotope. These detectors have a poor energy resolution. However, a huge amount of material can be dissolved and, thanks to the purification processes, very low backgrounds are achievable. They are ideal detectors to set very stringent limits on the decay half-life.

6.4.2. External Source Approach

Also in the case of the external source approach, different detection techniques have been adopted, namely, scintillators, solid state detectors, and gas chambers. The main advantages and limitations for this technique can be summarized as follows:

(+) The reconstruction of the event topology is possible, thus making in principle the achievement of the zero background condition easier. However, the poor energy resolution does not allow distinguishing between 0νββ events and 2νββ events with total electron energy around Q_ββ. Therefore, the 2νββ represents an important background source.

(−) The energy resolutions are low (of the order of 10%). The limit is intrinsic and it is mainly due to the electron energy deposition in the source itself.

(−) Large isotope masses are hardly achievable due to self-absorption in the source. Up to now, only masses of the order of some tens of kg have been possible, but an increase to a target of about 100 kg seems feasible.

(−) The detection efficiencies are low (of the order of 30%).

So far, the most stringent bounds come from the calorimetric approach which, anyway, remains the one promising the best sensitivities and is therefore the technique chosen for most of the future projects. However, the external source detector type has provided excellent results in the study of the 2νββ. Moreover, in case of discovery of a 0νββ signal, the event topology reconstruction could represent a fundamental tool for understanding the mechanism behind the transition.

6.5. Experiments: A Brief Review

The first attempt to observe the ββ process dates back to 1948 [200, 201]. Actually, the old experiments aiming to set a limit on the double beta decay half-lives did not distinguish between 2νββ and 0νββ. In the case of indirect investigations through geochemical observations, this was not possible even in principle.

However, the importance that the 0νββ was acquiring in particle physics provided a valid motivation to continuously enhance the efforts in the search for this decay. On the experimental side, considerable technological improvements allowed increasing the half-life sensitivity by several orders of magnitude (the 2νββ was first observed in the laboratory in 82Se in 1987 [202] and in many other isotopes in the subsequent years; see [68] for a review on the 2νββ). The long history of measurements up to about the year 2000 can be found in [203–205]. Here, we concentrate only on a few experiments starting from the late 1990s.

Table 7 summarizes the main characteristics and performances of the selected experiments. It has to be noticed that, due to their different specific features, the actual comparison among all the values is not always possible. We tried to overcome this problem by choosing a common set of units of measurement.

6.5.1. The Claimed Observation

In 2001, after the publication of the experiment's final results [74], a fraction of the Heidelberg-Moscow Collaboration claimed the observation of a peak in the spectrum, whose energy corresponded to the Q-value of the 76Ge transition [206]. After successive reanalyses (by fewer and fewer people), a final value for the half-life was presented in [207]. This claim and the subsequent papers by the same authors aroused a number of critical replies (see, e.g., [24, 130, 208, 209]). Many of the questions and doubts still remain unanswered. To summarize, caution suggests that we disregard the claim, made in [74, 206, 207], that the 0νββ transition was observed.

Anyway, to date, the limit on the 76Ge 0νββ half-life is more stringent than the reported value [84].

6.6. Present Sensitivity on m_ββ

Once the experimental sensitivities are known in terms of T^{0ν}_{1/2}, by using (45) it is possible to correspondingly find the upper bounds on m_ββ.

Figure 17 shows the most stringent limits to date. They come from 76Ge [84], 130Te [73], and 136Xe [81]. In particular, the combined sensitivity from the single experimental limits is taken from the corresponding references.

In Figure 17(a), the unquenched value g_A = 1.269 is assumed. The uncertainties on the NME and PSF are taken into account according to the procedure shown in Appendix B, and they result in the broadening of the lines describing the limits. As the plot shows, the current generation of experiments is probing the quasi-degenerate part of the neutrino mass spectrum.

The effect of the quenching of g_A appears evident in Figure 17(b): the sensitivity for the same combined 136Xe experiments in the two extreme cases for g_A differs by a factor of 5. It is clear from the figure that this is the biggest uncertainty with respect to all the other theoretical ones.

The single values for the examined cases are reported in Table 8.

6.7. Near and Far Future Experiments

It is also possible to extract the bounds on m_ββ coming from the near future experiments starting from the expected sensitivities and using (45). The results are shown in Table 9. It can be seen that the mass region below 100 meV will begin to be probed in the case of an unquenched value for g_A, but we will still not enter the inverted hierarchy region. In case g_A is maximally quenched, instead, the situation is much worse. Indeed, the expected sensitivity would correspond to values of m_ββ which we already consider probed by the past experiments.

Let us now consider a next generation experiment (call it a “mega” experiment) and a next-to-next generation one (an “ultimate” experiment) with enhanced sensitivity. To define the physics goal we want to achieve, we refer to [124].

The most honest way to talk of the sensitivity is in terms of exposure or of half-life that can be probed. From the point of view of the physical interest, however, besides the hope of discovering the 0νββ, the most exciting investigation that can be imagined at present is the exclusion of the inverted hierarchy case. This is the goal that most of the experimentalists are trying to reach with future experiments (see, e.g., [210]). For this reason, we require a sensitivity of m_ββ ≈ 8 meV. The mega experiment is the one that satisfies this requirement in the most favorable case, namely, when the quenching of g_A is absent. Instead, the ultimate experiment assumes that g_A is maximally quenched. We chose the 8 meV value because, even taking into account the residual uncertainties on the NME and on the PSF, the overlap with the allowed band for m_ββ in the inverted hierarchy is excluded at high confidence. Notice that we are assuming that at some point the issue of the quenching will be sorted out. Through (45), we obtain the corresponding value of T^{0ν}_{1/2} and thus we calculate the exposure needed to accomplish the task.

Referring to (56), if we suppose ε = 1 (detector efficiency of 100% and no fiducial volume cuts) and x η = 1 (all the mass is given by the ββ candidate nuclei) and we assume one observed event (i.e., N_peak = 1) in the region of interest, we get the simplified equation:

  F_{0ν} = ln 2 · (N_A / W) · M t

This is the equation we used to estimate the product M t (exposure), and thus to assess the sensitivity of the mega and ultimate scenarios. The key input is, of course, the theoretical expression of m_ββ. The calculated values of the exposure are shown in Table 10 for the three considered nuclei: 76Ge, 130Te, and 136Xe. The last column of the table gives the maximum allowed value of the product b ΔE that satisfies (55).
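A sketch of this estimate (the isotope molar mass is real; the target half-life is an arbitrary example, not a value from the table):

import math

N_A = 6.022e23  # mol^-1

def exposure_kg_yr(T12_yr, W_g_mol):
    """Exposure M*t giving one expected event at eff = 1 and x*eta = 1,
    by inverting the simplified zero-background relation above."""
    return T12_yr * W_g_mol / (math.log(2) * N_A * 1e3)

# Example: a target half-life of 1e28 yr in 136Xe requires ~3 t yr.
print(exposure_kg_yr(1e28, 136.0))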

Figure 18 compares (in a schematic view) the masses of 76Ge and 136Xe corresponding to the present sensitivity [81, 84] with those of the "mega" and "ultimate" experiments assuming, for all three cases, the zero background condition and 5 years of data taking.

7. Interplay with Cosmology

Here, we want to assess the possibility of taking advantage of the knowledge of the cosmological neutrino mass Σ to make inferences on some 0νββ experimental results (actual or expected). In particular, we follow [211]. As already discussed in Section 4.3.2, we consider two possible scenarios. Firstly, we assume only upper limits on both Σ and m_ββ, without any observation of 0νββ. Later, we imagine an observation of 0νββ together with a nonzero measurement of Σ (in both cases, we consider the unquenched value for the axial vector coupling constant).

7.1. Upper Bounds Scenario

The tight limit on Σ in [136] was obtained by combining the Planck 2013 results [140] with the one-dimensional flux power spectrum measurement of the Lyman-α forest extracted from the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey [212]. In particular, the data from a new sample of quasar spectra were analyzed and a novel theoretical framework which incorporates neutrino nonlinearities self-consistently was employed.

The authors of [136] computed a probability distribution for Σ that can be summarized to a very good approximation by a Gaussian, hereafter referred to as (59). Starting from the likelihood function derived from the corresponding figure in that reference, one can obtain limits at various confidence levels, which are very close to those predicted by the Gaussian of (59). In particular, it is worth noting that, even if this measurement is compatible with zero, the best fit value is different from zero, as expected from the oscillation data and as evidenced by (59). We want to remark that, although the relative impact of systematic versus statistical errors on the estimated flux power is considered and discussed in [212], it is anyway advisable to take these results from cosmology with due caution.

The plot showing m_ββ as a function of Σ, which was already shown in Figure 6(b), is again useful for the discussion. A zoomed version of that plot (with linear instead of logarithmic scales on the axes) is presented in Figure 19(a). As already mentioned, the extreme values of m_ββ after variation of the Majorana phases can be easily calculated (see Appendix A). This variation, together with the uncertainties on the oscillation parameters, results in a widening of the allowed regions. It is also worth noting that the error on Σ contributes to the total uncertainty. Its effect is a broadening of the light shaded area on the left side of the minimum allowed value of Σ for each hierarchy. In order to compute this uncertainty, we considered Gaussian errors on the oscillation parameters.

It is possible to include the new cosmological constraints on Σ from [136] by considering the inequality (62): the Majorana effective mass m_ββ(Σ), computed as a function of Σ together with its associated error σ as discussed in [124], is required to be compatible with Σ_nσ, the limit on Σ derived from (59) for the CL of nσ. By solving (62) for Σ, it is thus possible to get the allowed contour for m_ββ considering both the constraints from oscillations and from cosmology. In particular, the Majorana phases are taken into account by computing m_ββ along its two extremes, namely, the maximum and the minimum of Appendix A, and then connecting the two contours. The resulting plot is shown in Figure 19(b).

The most evident feature of Figure 19 is the clear difference in terms of expectations for both Σ and m_ββ in the two hierarchy cases. The relevant oscillation parameters (mixing angles and mass splittings) are well known and they induce only minor uncertainties on the expected value of m_ββ. These uncertainties widen the allowed contours on the upper, lower, and left sides of the picture. The boundaries in the rightmost regions are due to the new information from cosmology and are cut at various confidence levels. It is notable that, due to the exclusion of the largest values of Σ, the set of plausible values of m_ββ is highly restricted.

The impact of the new constraints on Σ appears even more evident by plotting m_ββ as a function of the mass of the lightest neutrino. In this case, (62) is rewritten in terms of the lightest mass. The plot in Figure 20 globally shows that the next generation of experiments will have little chance of detecting a signal of 0νββ due to light Majorana neutrino exchange. Therefore, if the new results from cosmology are confirmed or improved, ton- or even multi-ton-scale detectors will be needed [124].

On the other hand, a 0νββ signal in the near future could either disprove some assumptions of the present cosmological models or suggest that a mechanism other than the light neutrino exchange mediates the transition. New experiments are interested in testing the latter possibility by probing scenarios beyond the SM [118, 122, 213].

7.2. Measurements Scenario

Here we consider the implications of the nonzero value of Σ reported in [135]. We focus on the light neutrino exchange scenario and assume that the 0νββ is observed with a rate compatible with:

(1) the present sensitivity on T^{0ν}_{1/2}; in particular, we use the limit coming from the combined 136Xe-based experiments [81]; we refer to this as the "present" case;

(2) a value of T^{0ν}_{1/2} that will likely be probed in the next few years; in particular, we use the CUORE experiment sensitivity [83], as an example of the next generation of experiments; we refer to this as the "near future" case.

For the sake of completeness, it is useful to recall a few definitions and relations. The likelihood of a simultaneous observation of some values for Σ and m_ββ (with uncertainties σ_Σ and σ_m, resp., both distributed according to Gaussian distributions) can be written as follows:

  L(Σ, m_ββ) ∝ exp[−(Σ − Σ̄)² / (2σ_Σ²)] · exp[−(m_ββ − m̄_ββ)² / (2σ_m²)]

Recalling the relation between the χ² and the likelihood, namely, χ² = −2 ln L, we obtain

  χ²(Σ, m_ββ) = (Σ − Σ̄)² / σ_Σ² + (m_ββ − m̄_ββ)² / σ_m²

which represents an elliptic paraboloid. Since we are dealing with a two-parameter χ², we need to find the appropriate prescription to define the confidence intervals. At the desired confidence level, we get

  CL = 1 − exp(−Δχ²/2)

and thus

  Δχ² = −2 ln(1 − CL)

This defines the value of Δχ² corresponding to the confidence level CL.
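The two-parameter prescription can be checked numerically; the Δχ² threshold at a given CL is just the quantile of a χ² distribution with 2 degrees of freedom:

import math
from scipy.stats import chi2

# For 2 dof, chi2.ppf(CL, 2) and -2 ln(1 - CL) coincide.
for cl in (0.68, 0.90, 0.95):
    print(cl, round(chi2.ppf(cl, df=2), 2), round(-2 * math.log(1 - cl), 2))
# 0.68 -> 2.28, 0.90 -> 4.61, 0.95 -> 5.99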

In order to write down the likelihood, we need to evaluate the standard deviations of both Σ and m_ββ. While the error on Σ comes directly from the cosmological measurement, the one on m_ββ has to be determined. It has two different contributions: one is statistical and comes from the Poisson fluctuations of the observed number of events (see Section 6.3), while the other comes from the uncertainties on the nuclear physics (see Section 5.3). Actually, a greater effect would arise if we took into account the error on g_A, but here we assume that the quenching is absent.

For a few observed events, let us say fewer than 10, the global error is dominated by the statistical fluctuations. The error from the nuclear physics becomes the main contribution only if many events (more than a few tens) are detected. Using the described procedure, for the present case we find an uncertainty on m_ββ of about 31 meV for 5 observed events, which reduces to 24 meV for 10 events. If we neglect the statistical uncertainty, that is, in the limit of a very large number of events, the uncertainty becomes 14 meV. This means that the effect of the Poisson fluctuations is not negligible at all. Similarly, repeating the same exercise for the near future case, we obtain an uncertainty of 17 meV for 5 events, 13 meV for 10 events, and 8 meV when the statistical uncertainty is neglected.
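A sketch of the error combination behind these numbers: since T ∝ 1/m_ββ² and T ∝ 1/N, one has m_ββ ∝ N^(−1/2), so the statistical part is δm/m = 1/(2√N), to be combined in quadrature with the nuclear error. The central value of m_ββ and the nuclear error below are hypothetical, chosen only to roughly reproduce the numbers quoted above:

import math

def sigma_m_bb_meV(m_bb_meV, n_events, sigma_nucl_meV):
    """Statistical (Poisson) and nuclear errors combined in quadrature."""
    stat = m_bb_meV / (2.0 * math.sqrt(n_events))
    return math.sqrt(stat**2 + sigma_nucl_meV**2)

for n in (5, 10, 1e9):                    # 1e9 mimics "no statistics"
    print(n, round(sigma_m_bb_meV(125.0, n, 14.0), 1))  # ~31, ~24, ~14 meV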

Let us now concentrate on the case of 5 observed events. If we cut the χ² at the 90% CL and consider the data previously mentioned, we obtain the bigger, solid ellipses drawn in Figure 21. This shows that, in the near future case, a detection of 0νββ would allow saying nothing about the mass hierarchy or about the Majorana phases. Interestingly, if the 0νββ were actually discovered with a m_ββ a little lower than the one probed in the present case, some conclusions about the Majorana phases could be drawn. In any case, in order to state anything precise about the hierarchy and the Majorana phases, even assuming the discovery of the 0νββ, the uncertainty on the quenching of the axial vector coupling constant has to be dramatically decreased.

If we repeat the same exercise assuming 20 observed events, we obtain the smaller, dashed ellipses of Figure 21. In this case, a hypothetical observation in the present case is highly disfavored while, in the near future case, even if nothing could be said about the hierarchy, some conclusions could be drawn regarding the Majorana phases.

This simple analysis shows that, thanks to the great efforts made in the NME and PSF calculations, it is most likely that the biggest contribution to the error will come from the statistical fluctuations of the counts. However, the theoretical uncertainty from the nuclear physics could make the picture really hard to interpret because, up to now, the quenching of g_A is a source of uncertainty of a factor 4–8 on m_ββ.

7.3. Considerations on the Information from Cosmological Surveys

The newest results reported in Table 4 confirm and strengthen the cosmological indications of upper limits on Σ, and it is likely that we will soon have other substantial progress. Moreover, the present theoretical understanding of neutrino masses does not contradict these cosmological indications. These considerations emphasize the importance of exploring the issue of the mass hierarchy in laboratory experiments and with cosmological surveys. However, as already stated, a cautious approach in dealing with the results from cosmological surveys is highly advisable.

From the point of view of the 0νββ, these results show that ton- or multi-ton-scale detectors will be needed in order to probe the range of m_ββ now allowed by cosmology. Nevertheless, if next generation experiments see a signal, it will likely be a signal of new physics different from the light Majorana neutrino exchange.

8. Summary

In this review, we analyzed the 0νββ process under many different aspects. We assessed its importance to test lepton number conservation, to determine the nature of the neutrino mass, and to probe its value. Various particle physics mechanisms that could contribute to the 0νββ were examined, although with the conclusion that from the theoretical point of view the most interesting and promising remains the light Majorana neutrino exchange. We studied the current experimental sensitivity, focusing on the critical point of determining the uncertainties in the theoretical calculations and predictions. In view of all these considerations, the prospects for the near future experimental sensitivity were presented and the main features of present, past, and future experiments were discussed. Finally, we stressed the huge power of cosmological surveys in constraining neutrino masses and consequently the 0νββ process.

Appendix

A. Extremal Values of m_ββ

Recalling the definition (27) of the Majorana effective mass,

  m_ββ = |Σ_i U_ei² m_i|,

it is possible to demonstrate that the extreme values assumed by this parameter due to free variations of the phases are (the proof shown here is based on the work reported in [129]) as follows:

  max over phases of m_ββ = Σ_i |U_ei|² m_i
  min over phases of m_ββ = max{ 2 max_i (|U_ei|² m_i) − Σ_i |U_ei|² m_i, 0 }

A.1. Formal Proof

Regarding the first assertion, it is obvious that the sum of complex numbers has the biggest allowed modulus when those numbers have aligned phases. Since the physical quantities depend only on the modulus of m_ββ, without any loss of generality it is possible to choose the first term (U_e1² m_1) to be real. It thus follows that also the other two terms must be real: this is equivalent to considering the sum of the moduli of the single terms.

To prove the second statement, let us consider the general case z = z_1 + z_2 + z_3, where the z_i are complex numbers. We want to minimize |z| by keeping the moduli |z_i| fixed. Let us define

  ζ_i = |z_i| − |z_j| − |z_k|,  with i, j, k all different.

It is worth noting that only one of the ζ_i can be positive, at most. Therefore, it is possible to distinguish 4 cases: (i) ζ_1 > 0; (ii) ζ_2 > 0; (iii) ζ_3 > 0; (iv) ζ_i ≤ 0 for all i. In the first one, it is possible to show that |z| ≥ ζ_1. In fact, we can write

  |z| ≥ |z_1| − |z_2 + z_3|

and, since

  |z_2 + z_3| ≤ |z_2| + |z_3|,

we obtain

  |z| ≥ |z_1| − |z_2| − |z_3| = ζ_1.

Similarly, |z| ≥ ζ_2 and |z| ≥ ζ_3 in the second and in the third cases, respectively. In the last case, it is necessary to observe that, if one of the z_i vanishes, the minimum |z| = 0 is reached by choosing the remaining two terms with opposite phases. Therefore, only the case in which all z_i ≠ 0 must be considered. In this case, the quantity |z_1 + e^{iφ} z_2| − |z_3| goes from negative when φ is chosen so that the first two terms nearly cancel to positive when their phases are aligned. By continuity, this implies that a proper phase choice such that |z_1 + e^{iφ} z_2| = |z_3| must exist. Thus, one can conclude also in this case that the minimum of |z| is zero (by choosing the phase of z_3 opposite to that of the sum of the first two terms).

In synthesis, the single case analysis leads to

  min over phases of |z| = max{ ζ_1, ζ_2, ζ_3, 0 }.

This proves the original statement: since ζ_i = 2|z_i| − (|z_1| + |z_2| + |z_3|), for z_i = U_ei² m_i the minimum is max{ 2 max_i (|U_ei|² m_i) − Σ_i |U_ei|² m_i, 0 }.
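The extremal formulas just proved are straightforward to implement; the oscillation inputs below are illustrative:

import numpy as np

def m_bb_extrema(masses, Ue_sq):
    """Min and max of |sum U_ei^2 m_i| over free Majorana phases,
    with t_i = |U_ei|^2 m_i as in the proof above."""
    t = np.asarray(Ue_sq) * np.asarray(masses)
    m_max = t.sum()
    m_min = max(2.0 * t.max() - t.sum(), 0.0)
    return m_min, m_max

# Illustrative normal-hierarchy-like point (masses in eV):
print(m_bb_extrema([0.0, 0.0086, 0.0506], [0.68, 0.30, 0.02]))
# -> (about 0.0016, about 0.0036)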

A.2. Remarks on the m_ββ = 0 Case

The three mixing elements |U_ei|² are constrained by unitarity: Σ_i |U_ei|² = 1. This condition can be graphically pictured by using the inner region of an equilateral triangle with unit height, where the distance from the i-th side corresponds to the value of |U_ei|² (see [129] for details). The result is displayed in Figure 22.

The experimental constraints on the oscillation parameters make it possible to evaluate the |U_ei|² elements and, therefore, to identify a point inside the triangle, which is placed at the center of the colored bar in Figure 22. The different colors of the bar correspond to regions of increasing confidence level.

At each vertex of the triangle, one has |U_ei|² = 1 and the value of m_ββ coincides with the corresponding mass eigenvalue m_i. Then, the value of m_ββ decreases moving from one vertex towards the inner part of the triangle, until it becomes zero inside the region delimited by the conditions

  |U_ei|² m_i = |U_ej|² m_j + |U_ek|² m_k,  with i, j, k all different.

In fact, considering, for example, the first condition, from (A.9) it corresponds to the vanishing of the quantity ζ_1 defined in the proof above and, recalling that the condition to get m_ββ = 0 is expressed by (A.3), we obtain that m_ββ can vanish inside the region so delimited. The same argument can be applied also to the other two conditions. It is therefore possible to identify a region inside the triangle where m_ββ is zero. The experimental constraints on the oscillation parameters limit the possibility of m_ββ = 0 only to the case of normal hierarchy. Of course, the position and the extension of this region depend on the lightest neutrino mass.

Instead of choosing one particular value for the lightest neutrino mass, it is more convenient to plot the superposition of the regions obtained for increasing values of this parameter. In Figure 22, in orange we show the region obtained varying the lightest neutrino mass from 0 eV up to the 90% CL maximum value it can have considering the limit on Σ from [136], according to (59). The gray region shows the superposition obtained when Σ is close to its lower limit (0.06 eV for the normal hierarchy case); namely, we show what happens if it turns out that the cosmological mass is close to its minimum.

The existence of a m_ββ = 0 region implies that, in principle, the 0νββ could be forbidden just by particular combinations of the phases, even if the neutrino is a Majorana particle.

B. Error Propagation

It is convenient and usually appropriate to adopt statistical procedures that are as direct and as practical as possible. We are interested in the following situation. For any choice of the Majorana phases, the massive parameter that regulates the 0νββ can be thought of as a function m_ββ = m_ββ(p, m) of the parameters p that are determined by oscillation experiments up to their experimental errors δp, and of another massive parameter m. Here a remark is necessary. When in the literature we found maximal or systematic uncertainties, in order to propagate their effects in our calculations, we decided to interpret them as the semiwidths of flat distributions; thus, dividing these numbers by √3, we could get the standard deviations of those distributions. Then, we considered those values as standard deviations for Gaussian fluctuations of the parameters around the given values.

For any fixed value of m, and with the other parameters set to their best fit values, we can attach the following error to m_ββ:

  σ²(m_ββ) = Σ_i (∂m_ββ/∂p_i)² δp_i²

When we want to consider the prediction and the error for a fixed value of another massive parameter m', we have to vary also m, keeping m' fixed. Therefore, in this case, we find

  σ²(m_ββ) = Σ_i (∂m_ββ/∂p_i + (∂m_ββ/∂m)(∂m/∂p_i))² δp_i²

Of course, we will calculate ∂m/∂p_i by inverting the relation between m' and m (here, the same symbol denotes the function and also its value; however, this abuse of notation is harmless in practice).
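For completeness, a numerical version of the fixed-m propagation above (finite differences; the toy model f and its inputs are hypothetical, not the parametrization used in the text):

import numpy as np

def sigma_m_bb(f, p_best, dp, m, rel_step=1e-6):
    """Propagate Gaussian errors dp on the oscillation parameters p
    onto m_bb = f(p, m) at fixed m, via numerical partial derivatives."""
    var = 0.0
    for i, (p_i, dp_i) in enumerate(zip(p_best, dp)):
        q = list(p_best)
        h = rel_step * p_i
        q[i] = p_i + h
        var += ((f(q, m) - f(p_best, m)) / h * dp_i) ** 2
    return np.sqrt(var)

# Toy model: two parameters, |U_e3|^2 and a mass splitting (eV^2).
def f(p, m):
    ue3_sq, dm2 = p
    return (1.0 - ue3_sq) * m + ue3_sq * np.sqrt(m * m + dm2)

print(sigma_m_bb(f, [0.022, 2.5e-3], [0.002, 1.0e-4], 0.05))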

C. The Lightest Mass as a Function of Σ, Analytical Solution

Let us write in full generality the three-flavor relation for the mass probed in cosmology as

  Σ = m + √(m² + a²) + √(m² + b²)  (C.1)

where Σ, m, a, and b are masses, that is, nonnegative parameters; m is the lightest neutrino mass and a², b² are the appropriate combinations of the measured mass splittings. It is possible to obtain m as a function of Σ in the physical range simply by solving a quartic equation. Since we are interested in certain specific cases (normal or inverted hierarchy), we specify the discussion further.

When a ≪ b, corresponding to the normal hierarchy case (the two lightest states quasi-degenerate), it is convenient to solve the quartic equation obtained by squaring (C.1) twice. This quartic equation has spurious solutions in this limit, which must be discarded. Instead, we are interested in the one that (for a = 0) reads

  m = [2Σ − √(Σ² + 3b²)] / 3  (C.5)

with b of the order of the square root of the larger splitting. In the case when a ≈ b, instead, which corresponds to the inverted hierarchy case (the two heaviest states quasi-degenerate), the relevant solution is the one that (for a = b) reads

  m = [−Σ + 2√(Σ² − 3b²)] / 3  (C.8)

Again, the corresponding quartic equation has spurious solutions in this limit, which must be discarded.

Finally, we discuss useful approximate formulae for the specific parameterization of the mass splittings suggested in [35], both for the normal hierarchy case and for the inverted one.

In the latter case, the approximation obtained by (C.8) is already excellent, being better than 3 μeV in the whole range of masses. Instead, (C.5) implies a maximum error that can reach 5 meV. Although this is quite adequate for the present and near future sensitivity, it is possible to improve the approximation also in the normal hierarchy case by linearly expanding the relation that links Σ and m, (C.1), in the smaller splitting around a = 0. The resulting error is remarkably small and more than adequate for the present sensitivity: less than 0.2 meV.
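In practice, instead of handling the quartic, one can also invert (C.1) numerically and use the closed forms above as cross-checks; the splitting values below are illustrative:

import numpy as np
from scipy.optimize import brentq

def Sigma(m, a2, b2):
    """Relation (C.1): cosmological mass as a function of the lightest."""
    return m + np.sqrt(m * m + a2) + np.sqrt(m * m + b2)

def m_lightest(S, a2, b2):
    """Numerical inversion of (C.1) on the physical branch."""
    return brentq(lambda m: Sigma(m, a2, b2) - S, 0.0, S)

a2, b2 = 7.5e-5, 2.5e-3            # illustrative splittings, eV^2
S = 0.12                           # an example value of Sigma, eV
exact = m_lightest(S, a2, b2)
approx = (2 * S - np.sqrt(S * S + 3 * b2)) / 3.0   # the a = 0 form (C.5)
print(exact, approx)               # ~0.030 eV, in sub-meV agreement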

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors wish to acknowledge extensive discussions with Professor F. Iachello and thank him for stimulating this study. Francesco Vissani also thanks E. Lisi.