Abstract

This paper introduces neutrinoless double-beta decay (the rarest nuclear weak process) and describes the status of the research on this transition, both from the point of view of theoretical nuclear physics and in terms of the present and future experimental scenarios. Implications of this phenomenon for crucial aspects of particle physics are briefly discussed. The calculations of the nuclear matrix elements in the case of the mass mechanism are reviewed, and a range for these quantities is proposed for the most appealing candidates. After introducing general experimental concepts—such as the choice of the best candidates, the different proposed technological approaches, and the sensitivity—we survey the experimental situation. Searches that are running or in preparation are described in an organic presentation which highlights their similarities and differences. A critical comparison of the adopted technologies and of their physics reach (in terms of sensitivity to the effective Majorana neutrino mass) is performed. In conclusion, we try to envisage what we expect around the corner and on a longer time scale.

1. Introduction

The double-beta decay is the rarest nuclear weak process. It takes place between two even-even isobars when the decay to the intermediate nucleus is energetically forbidden, owing to the pairing interaction, which shifts the even-even and the odd-odd mass parabolas of a given isobaric chain; it is therefore only thanks to the pairing interaction that double-beta decay can be observed. This is seen clearly in Figure 1. The two-neutrino decay (2νββ) conserves the lepton number and was originally proposed by Goeppert-Mayer in 1935 [1]. It is a second-order weak process—this is the reason for its low rate—and the first direct laboratory detection was achieved only as recently as 1987 [2]. Since then, it has been measured in about a dozen nuclei [3], with lifetimes in the range 10^18–10^22 y. The alternative is the neutrinoless double-beta decay (0νββ), proposed by Furry [4] after the Majorana theory of the neutrino [5]. The neutrinoless decay can take place only if the neutrino is a massive Majorana particle, and it demands an extension of the standard model of the electroweak interactions, because it violates lepton number conservation. Therefore, the observation of double-beta decay without the emission of neutrinos would establish the Majorana character of the neutrino. The corresponding nuclear reactions are the following:

(A, Z) → (A, Z+2) + 2e⁻ + 2ν̄_e   (2νββ),
(A, Z) → (A, Z+2) + 2e⁻   (0νββ).

Currently, there are a number of experiments either running or planned for the near future—see, for example, [6, 7] and Section 7.3—devoted to detecting this process and to establishing firmly the nature of neutrinos. The most stringent limits on the lifetime are of the order of 10^25 y. A much debated claim of the existence of 0νββ decay in the isotope 76Ge (see Section 7.1) quotes a half-life of about 2.2 × 10^25 y [8]. Furthermore, the 0νββ decay is also sensitive to the absolute scale of the neutrino masses (if the process is mediated by the so-called mass mechanism), and hence to the mass hierarchy (see Section 2). Since the half-life of the 0νββ decay is determined, together with the effective Majorana neutrino mass (defined in Section 2), by the nuclear matrix elements (NMEs) for the process, knowledge of the NMEs is essential to predict the most favorable decays and, once detection is achieved, to settle the neutrino mass scale and hierarchy.

Another process of interest is the resonant double-electron capture, which could have lifetimes competitive with those of the neutrinoless double-beta decay only if there is a degeneracy of the atomic masses of the initial and final states at the eV level [9]. For the moment, high-precision mass measurements have discarded all the proposed candidates (see [10] for a recent update on the subject). As in the neutrinoless double-beta decay, the decay rate depends on the effective Majorana neutrino mass and on the NMEs defined in Section 3.

2. Neutrinoless Double-Beta Decay and New Physics

The main feature of 0νββ decay is precisely the violation of the lepton number. In the modern (standard model) perspective, this is as important as the violation of the baryon number. In a very general context, we can imagine this process as a mechanism capable of creating electrons in a nuclear transition. It is well known that this transition is not necessarily due to the exchange of Majorana neutrinos (mass mechanism) as the leading contribution, although its observation would prove that neutrinos are self-conjugate particles [11]. Many extensions of the standard model generate Majorana neutrino masses and offer a plethora of 0νββ decay mechanisms, such as the exchange of right-handed W bosons, R-parity-violating SUSY superpartners, leptoquarks, or Kaluza-Klein excitations, among others discussed in the literature. Possibilities to disentangle at least some of the possible mechanisms (e.g., those related to the existence of right-handed currents) rely on the analysis of angular correlations between the emitted electrons (possible in only one of the proposed future searches), the study of the branching ratios of decays to ground and excited states, a comparative study of the 0νββ decay and of neutrinoless electron capture with the emission of a positron, and the analysis of possible links with other lepton-flavor-violating processes.

However, after the discovery of neutrino flavor oscillations (which prove that neutrinos are massive particles), the mass mechanism occupies a special place. It relates the 0νββ decay neatly to important parameters of neutrino physics, fixes clear experimental targets, and provides a basis for comparing on an equal footing experiments which present considerable methodological and technological differences. In fact, as extensively discussed in Section 3, the lifetime of the 0νββ decay is related to the so-called effective Majorana neutrino mass, defined by the following equation:

⟨m_ββ⟩ = |Σ_k U_ek² m_k|.    (2.1)

This crucial parameter contains the three neutrino masses m_k, the elements U_ek of the first row of the neutrino mixing matrix, and the unknown CP-violating Majorana phases (only two of which have a physical meaning), which make cancellations among the terms possible: ⟨m_ββ⟩ could be smaller than any of the m_k. Thanks to the information we have from oscillations, it is useful to express ⟨m_ββ⟩ in terms of three unknown quantities: the mass scale, represented by the mass of the lightest neutrino m_lightest, and the two Majorana phases. It is then common to distinguish three mass patterns: normal hierarchy, where m_1 < m_2 ≪ m_3; inverted hierarchy, where m_3 ≪ m_1 < m_2; and the quasidegenerate spectrum, where the differences between the masses are small with respect to their absolute values. We ignore at the moment Nature's choice for the neutrino mass ordering, and the 0νββ decay has the potential to provide this essential information. In fact, if it can be experimentally established that ⟨m_ββ⟩ > 50 meV, one can conclude that the quasidegenerate pattern is the correct one and extract an allowed range of m_lightest values. On the other hand, if ⟨m_ββ⟩ lies in the range 20–50 meV, the pattern is likely the inverted hierarchy, although the normal hierarchy cannot be excluded if the lightest neutrino mass sits at the far right of the allowed band. Eventually, if one could determine that ⟨m_ββ⟩ < 20 meV but nonvanishing, the conclusion would be that the normal-hierarchy pattern holds. It turns out therefore that 0νββ decay is important on two fronts: the comprehension of fundamental aspects of elementary particle physics and the contribution to the solution of hot astroparticle and cosmological problems related to the neutrino mass scale and nature.
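As a concrete illustration of how these mass patterns map onto ⟨m_ββ⟩, the following short Python sketch scans the two physical Majorana phases for a given lightest-neutrino mass; the oscillation parameters are typical best-fit values assumed here purely for illustration, not numbers taken from this paper.

```python
# Illustrative sketch (not from the paper): evaluate <m_bb> = |sum_k U_ek^2 m_k|
# for a given lightest-neutrino mass, scanning the two physical Majorana phases.
import numpy as np

S12, S13 = 0.31, 0.022               # assumed sin^2(theta_12), sin^2(theta_13)
DM2_SOL, DM2_ATM = 7.5e-5, 2.4e-3    # assumed mass-squared splittings [eV^2]

def m_bb_range(m_lightest, inverted=False):
    """Return (min, max) of <m_bb> in eV over the two Majorana phases."""
    if not inverted:                 # normal hierarchy: m1 < m2 << m3
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + DM2_SOL)
        m3 = np.sqrt(m1**2 + DM2_ATM)
    else:                            # inverted hierarchy: m3 << m1 < m2
        m3 = m_lightest
        m1 = np.sqrt(m3**2 + DM2_ATM)
        m2 = np.sqrt(m1**2 + DM2_SOL)
    u2 = np.array([(1 - S12) * (1 - S13), S12 * (1 - S13), S13])  # |U_ek|^2
    m = np.array([m1, m2, m3])
    ph = np.linspace(0, 2 * np.pi, 201)
    a1, a2 = np.meshgrid(ph, ph)
    mbb = np.abs(u2[0] * m[0]
                 + u2[1] * m[1] * np.exp(1j * a1)
                 + u2[2] * m[2] * np.exp(1j * a2))
    return mbb.min(), mbb.max()

print(m_bb_range(1e-3, inverted=True))   # -> roughly (0.02, 0.05) eV
```

Running it with m_lightest = 1 meV in the inverted hierarchy returns approximately the 20–50 meV band quoted above.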

3. Formalism

The starting point for the description of the 0νββ decay in the mass mode is the weak Hamiltonian

H_W = (G_F cos θ_C / √2) j_μ J^μ† + h.c.,

where j_μ = ē γ_μ (1 − γ_5) ν_e is the leptonic current, and the hadronic—nuclear—counterpart is given in the impulse approximation by

J^μ†(x) = Ψ̄ τ⁺ [ g_V(q²) γ^μ + i g_M(q²) (σ^{μν}/2m_p) q_ν − g_A(q²) γ^μ γ_5 − g_P(q²) q^μ γ_5 ] Ψ,

with q^μ the momentum transferred from the hadrons to the leptons, that is, q^μ = p^μ − p′^μ.

In the nonrelativistic limit, and discarding energy transfers between nucleons, the current reduces to

J^μ†(x) = Σ_n τ_n⁺ [ g^{μ0} J⁰(q²) + g^{μk} J_n^k(q²) ] δ(x − r_n),

where the time component is J⁰ = g_V(q²) and the space components contain the Gamow-Teller, pseudoscalar, and weak-magnetism terms built from the nucleon spins σ_n and the momentum transfer q.

The parameterization of the couplings by the standard dipole form factor—to take into account the finite nuclear size (FNS)—and the use of the CVC and PCAC hypotheses—for the magnetic and pseudoscalar couplings g_M and g_P—are as described in [12]. We take as values of the bare couplings g_V = 1 and g_A = 1.25.

Due to the high momentum of the virtual neutrino in the nucleus—of the order of 100 MeV—we can replace the intermediate-state energies by an average value and then use the closure relation to sum over all the intermediate states. This approximation is correct to better than 90% [13]. We also limit our study to transitions to 0⁺ final states and assume the electrons to be emitted in s wave. Corrections to these approximations are of the order of 1% at most, because in the other cases effective nuclear operators of higher order are needed to couple the initial and final states. With these considerations, the expression for the half-life of the 0νββ decay can be written as [14, 15]

[T_1/2^{0ν}]⁻¹ = G^{0ν}(Q, Z) |M^{0ν}|² (⟨m_ββ⟩/m_e)²,    (3.5)

where ⟨m_ββ⟩, the effective Majorana neutrino mass, was introduced in (2.1), and G^{0ν}(Q, Z) is a kinematic factor (known also as the phase-space factor), dependent on the charge, mass, and available energy of the process (in the following denoted also as the Q-value or simply Q). M^{0ν} is the NME, the object of study in this section. As already discussed, the neutrino mass scale is directly related to the decay rate. The kinematic factor depends on the value of the coupling constant g_A. Therefore, NMEs obtained with different g_A values cannot be directly compared. If we redefine the NME as

M′^{0ν} = (g_A/1.25)² M^{0ν},

the new NMEs M′^{0ν} are directly comparable no matter which value of g_A was employed in their calculation, since they share a common kinematic factor—the one calculated with g_A = 1.25. In this sense, the translation of M′^{0ν} values into half-lives is transparent.
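The practical use of (3.5) can be sketched in a few lines of Python; the phase-space value and the NME below are placeholders for an "average" emitter, chosen only to show the orders of magnitude involved and not taken from this paper's tables.

```python
# Minimal numeric sketch of (3.5): [T_1/2]^-1 = G0nu * |M'|^2 * (<m_bb>/m_e)^2.
# G0nu_per_year and M_prime are assumed placeholder values.
M_E_EV = 0.511e6  # electron mass [eV]

def half_life_years(m_bb_eV, G0nu_per_year=1e-14, M_prime=3.0):
    """0nubb half-life [y] for a given effective Majorana mass [eV]."""
    rate = G0nu_per_year * M_prime**2 * (m_bb_eV / M_E_EV) ** 2
    return 1.0 / rate

for mbb in (0.3, 0.05, 0.02):
    print(f"<m_bb> = {mbb*1e3:>3.0f} meV -> T_1/2 ~ {half_life_years(mbb):.1e} y")
```

With these assumed inputs, ⟨m_ββ⟩ = 50 meV corresponds to a half-life of order 10^27 y, which sets the scale of the experimental challenge discussed in Section 6.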

The NME is obtained from the effective transition operator resulting from the product of the nuclear currents,

Ω = h_F(q²) − h_GT(q²) σ_n·σ_m + h_T(q²) S_nm,

where S_nm is the tensor operator. The functions h_K can be labeled according to the current terms from which they originate,

h_GT = h_GT^{AA} + h_GT^{AP} + h_GT^{PP} + h_GT^{MM},   h_T = h_T^{AP} + h_T^{PP} + h_T^{MM},

whose explicit forms can be found in [12].

Until recently, only the leading vector and axial terms were considered. However, rough estimates of the size of the higher-order terms, taking a typical momentum transfer of about 100 MeV, show that they amount to a non-negligible fraction of the leading contributions and, according to these figures, certainly cannot be neglected. Since the Gamow-Teller contribution is the dominant one, and the correction terms partially cancel among themselves, it seems sensible to keep all these terms in the calculation.

Integrating over q, we get the corresponding operators in position space, which are called the neutrino potentials. Before the radial integration, they read

H_K(r) = (2R/π) ∫₀^∞ j_λ(qr) [h_K(q²)/(q + Ē)] q dq,

where j_λ(qr) are the spherical Bessel functions, r is the distance between the nucleons, Ē is the average intermediate-state energy introduced above, and R, which makes the result dimensionless, is taken as R = r₀ A^{1/3}, with r₀ = 1.2 fm.
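For readers who want to reproduce the shape of these potentials, the following sketch evaluates H(r) numerically for the Gamow-Teller channel; the dipole cutoff and the average energy Ē are assumed, generic values, and the form factor is schematic rather than the full expression of [12].

```python
# Sketch of the radial neutrino potential
#   H(r) = (2R/pi) * Int_0^inf j0(q r) h(q^2)/(q + E_avg) q dq,
# with a schematic dipole-regularized form factor. Cutoff and E_avg are
# assumed illustration values, not the paper's.
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

HBARC = 197.327          # MeV fm
LAMBDA_A = 1086.0        # assumed axial dipole cutoff [MeV]
E_AVG = 10.0             # assumed average intermediate-state energy [MeV]

def h_GT(q):
    """Dipole-regularized Gamow-Teller form factor (schematic)."""
    return 1.0 / (1.0 + q**2 / LAMBDA_A**2) ** 4

def neutrino_potential(r_fm, R_fm):
    """Dimensionless H(r); q in MeV, distances in fm."""
    def integrand(q):
        return spherical_jn(0, q * r_fm / HBARC) * h_GT(q) * q / (q + E_AVG)
    val, _ = quad(integrand, 0.0, 5000.0, limit=200)
    return (2.0 * R_fm / np.pi) * val / HBARC  # divide by hbar*c for a pure number

print(neutrino_potential(2.0, 1.2 * 76 ** (1 / 3)))  # e.g., two nucleons 2 fm apart in A = 76
```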

Finally, the NME reads

M^{0ν} = M_GT^{0ν} − (g_V/g_A)² M_F^{0ν} + M_T^{0ν},

with each piece obtained by sandwiching the corresponding neutrino potential and spin-isospin operator between the initial and final ground states.

Until very recently, the short-range correlations were taken into account in the calculation of the NME using the Jastrow prescription of [16, 17], that is, by replacing the bare two-body matrix elements with correlated ones,

M̃^{0ν} = ⟨0_f⁺| f(r) Ω f(r) |0_i⁺⟩, with f(r) = 1 − e^{−ar²}(1 − br²),

where a = 1.1 fm⁻² and b = 0.68 fm⁻².
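The Jastrow function quoted above is easy to inspect numerically; the parameter values in this sketch are the commonly used Miller-Spencer ones, quoted here as an assumption since the paper itself refers to [16, 17] for them.

```python
# Jastrow-type short-range correlation factor f(r) = 1 - c*exp(-a r^2)*(1 - b r^2),
# with commonly quoted (assumed) Miller-Spencer parameters.
import numpy as np

def jastrow(r_fm, a=1.1, b=0.68, c=1.0):
    """Short-range correlation factor; r in fm, a and b in fm^-2."""
    return 1.0 - c * np.exp(-a * r_fm**2) * (1.0 - b * r_fm**2)

r = np.linspace(0.0, 3.0, 7)
print(np.round(jastrow(r), 3))   # -> 0 at r = 0, back to ~1 beyond ~1.5 fm
```

The function vanishes at r = 0 and returns to unity beyond roughly 1.5 fm, which is why this prescription suppresses the short-range part of the NME integrand.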

However, there have been recent proposals [18] suggesting the use of a more microscopic method—namely, the unitary correlation operator method (UCOM) [19]—to estimate the SRCs, which leads to a much softer correction. A fully consistent calculation of the short-range effects, made in [20], which regularizes the operator using the same prescription as that used for the bare interaction, concludes that the effect of the short-range correlations is negligible if the nucleon dipole form factors are taken into account properly.

In summary, there is a broad consensus in the community about the form of the 0νββ transition operator in the mass mode, which must include the higher-order terms in the nuclear current that we have discussed and the proper nucleon form factors. The consensus extends to the validity of the closure approximation for the calculation of the NMEs and to the use of soft (or no) short-range corrections. The situation is less clear concerning the use of bare or quenched values of g_A, and we will discuss this specific point later on.

4. The Nuclear Part of the NMEs

Once the main issues related to the transition operator are settled, we are left with the purely nuclear ingredient of the neutrinoless double-beta decay NMEs: the wave functions of the initial and final states of the process. Two different methods have traditionally been used to calculate the NMEs for 0νββ decays: the quasiparticle random-phase approximation (QRPA) and the interacting shell model in large valence spaces (ISM). The QRPA has long produced results for most of the possible emitters [21–23]. In this method, the pairing correlations are treated in the BCS approximation and the multipole ones at the RPA level. This is an important aspect because, as we will show in what follows, the pairing structure of the nuclear wave functions plays a prominent role in the size of the NMEs. The ISM, which until recently was applied to only a few cases, can nowadays describe (or will shortly describe) all the experimentally relevant decays but one, the decay of 150Nd [24]. Other approaches sharing a common prescription for the transition operator (including higher-order corrections to the nuclear current), for the treatment of the short-range correlations (SRCs), and for the finite-size effects are the Interacting Boson Model (IBM) [25], the Generator Coordinate Method (GCM) with the Gogny force [26], and the Projected Hartree-Fock-Bogoliubov method [27].

The ISM calculations are performed in different valence spaces and utilize well-tuned effective interactions which make it possible to describe with great accuracy many different observables in many different nuclei. All the details of the modern ISM approach can be found in the review of [28]. For instance, in the decay of 48Ca, we employ the KB3 interaction in the pf major shell. For the cases of 76Ge and 82Se, the valence space consists of the 1p3/2, 0f5/2, 1p1/2, and 0g9/2 orbits, and the interaction is GCN28.50. Finally, the 0g7/2, 1d5/2, 1d3/2, 2s1/2, and 0h11/2 valence space and the GCN50.82 interaction are used for the decays of 124Sn, 128Te, 130Te, and 136Xe. Notice that in these calculations all the possible configurations of the valence particles within the valence space are taken into account, which leads to very large bases of Slater determinants. QRPA valence spaces typically comprise two major oscillator shells, but only a minor fraction of the possible configurations is taken into account. The effect of the orbits excluded in the ISM calculations, in comparison with the QRPA spaces, was evaluated in [29] for particular cases, and it was found to increase the NMEs by less than 25%.

Figure 2 shows the most recent results of the different methods. We can see that in most cases the results of the ISM calculations are the smallest ones, while the largest ones may come from the IBM, QRPA, or GCM.

The difficulty is to judge the merits of the different approaches, because of our limited understanding of the physical content of the two-body transition operator (and, indeed, the absence of any experimental anchor). The situation is very different in the 2νββ mode; there, the decay proceeds via the sum of virtual Gamow-Teller transitions from the initial nucleus to the 1⁺ states of the intermediate odd-odd nucleus, followed by further Gamow-Teller transitions to the final nucleus. The matrix element is the sum over all the intermediate states of the products of the two Gamow-Teller amplitudes with an energy denominator (see (4.1) below):

M_GT^{2ν} = Σ_m ⟨0_f⁺‖στ⁺‖1_m⁺⟩ ⟨1_m⁺‖στ⁺‖0_i⁺⟩ / [E_m − (E_i + E_f)/2].    (4.1)

Even without any experimental result, one could judge the validity of the predictions of the different nuclear models by comparing their predictions for the strength functions measured in charge-exchange reactions [32], the excitation energies of the states of the intermediate nucleus, and so forth. Indeed, the ISM predictions of these observables are quite successful (see [33]), and we will come back to this issue later. In the 0νββ decay, we lack direct referents of this sort, and the evaluation of the adequacy of the different methods is inevitably more ambiguous. A key point is therefore to understand better the peculiarities of the 0νββ operator, in order to learn which properties of the initial and final nuclei it is most sensitive to.

4.1. The Role of the Pair Structure of Wave Functions in the NMEs

The two-body 0νββ transition operator can be written in Fock-space representation as

Ô = Σ_{ijkl} ⟨ij|Ô|kl⟩ a_i† a_j† a_l a_k,

where the indices i, j, k, and l run over the single-particle orbits of the spherical nuclear mean field. Applying the techniques of [34], we can factorize the operator into a sum of terms labeled by the angular momentum and parity J^π of the exchanged pair. The corresponding operators annihilate pairs of neutrons coupled to J^π in the parent nucleus and substitute them by pairs of protons coupled to the same J^π. The overlap of the resulting state with the ground state of the granddaughter nucleus gives the J^π contribution to the NME. The—a priori complicated—internal structure of these exchanged pairs is dictated by the double-beta decay operators.

In order to explore the structure of the 0νββ two-body transition operators, we have plotted in Figure 3 the contributions to the GT matrix element as a function of the angular momentum J of the decaying pair in two representative cases. The results are very suggestive: the dominant contribution corresponds to the decay of J = 0 pairs, whereas the contributions of the pairs with J > 0 are either negligible or of opposite sign to the leading one. This behavior is common to all the cases that we have studied and is also present in the QRPA calculations, in whose context it had been discussed in [23, 35]. To grasp this mechanism better, we shall work in a basis of generalized seniority s (s counts the number of unpaired nucleons in the nucleus). If the two nuclei in the process had generalized seniority zero, only the J = 0 pairs would contribute to the NME, which therefore would have a large value. This is better seen in Figure 4, where we plot the evolution of the values of the NMEs as a function of the maximum seniority allowed in the wave functions of the decaying and stable nuclei.

It is clearly seen that truncations in seniority tend to overestimate the value of the NMEs. This gives us a handle to evaluate the different descriptions in terms of their ability to describe properly the correlations which tend to break the nuclear Cooper pairs. High-seniority components are strongly connected to quadrupole correlations and indeed to nuclear deformation. As an example, we show in Table 1 the decomposition of the wave function of the nucleus 66Ge—which would exhibit a fictitious double-beta decay to its mirror 66Se—for different deformations, obtained by adding a variable extra quadrupole-quadrupole term to the interaction. It turns out that as the nucleus becomes more deformed, the high-seniority components become more important.

The next finding of this exercise is even more interesting, because it gives us another clue as to what is relevant in the nuclear wave functions from the point of view of the NMEs. We have plotted in Figure 5 the value of the NME as a function of the difference in deformation induced by adding the extra quadrupole-quadrupole term to the interaction only for the final nucleus 66Se. Notice in the first place that, with the initial interaction, both nuclei are mildly deformed (and their wave functions are identical after the exchange of neutrons and protons). In spite of that, the NME is a factor of two larger than the values obtained for the realistic decays computed in the same valence space and with the same interaction. Hence, even if the two partners are deformed, the fact that their wave functions are identical enhances the decay (the fact that they are mirror nuclei also contributes to this enhancement, mainly through the Fermi contribution, which is enhanced by the isospin selection rules). Nevertheless, the NME is still far from its expected value in the superfluid limit (NME ~ 7). The figure shows that the reduction of the NME as the difference in deformation increases is very pronounced; for the largest difference considered, the NME is one-third of the initial one. If we increase the deformation of the two mirror nuclei by the same amount, the NME decreases as well, but less rapidly: for instance, if we deform both nuclei equally, the value of the NME is reduced by just 25%.

This behavior of the NMEs with respect to the difference in deformation between parent and granddaughter is common to all the transitions between mirror nuclei that we have studied, and to more realistic cases, like the decays that we have examined in detail in [36]. Therefore, we submit that this is a robust result, which can be of importance for the only case which is for the moment out of reach of the ISM description, the decay of 150Nd, which SNO+ will try to measure soon, because 150Nd is much more deformed than 150Sm. We have also shown in [36] that the reason for this quenching of the NME is the mismatch in seniority between the initial and final nuclei. Therefore, all the models which tend to smooth out these differences and/or to overestimate the low-seniority components of the wave functions are bound to predict too-large NMEs.

In Figure 6, the QRPA NMEs are compared with the ISM ones, both without truncation and truncated in seniority. The comparison is very telling, because the agreement of the truncated ISM results with the QRPA is surprisingly good. Hence, it is apparent that the QRPA results (and the IBM and GCM ones) fall short of capturing in full the multipole correlations in the cases where they are important, and because of this they produce NMEs which are larger than the ISM ones.

4.2. Other Benchmarks of the Nuclear Wave Functions

Even if we do not have access to observables that are unambiguously related to the neutrinoless NME, there is a plethora of experimental data which can be used to benchmark the wave functions of the participant nuclei produced by the different nuclear models. We shall discuss the single-beta decays (and the charge-exchange data), together with the 2νββ results, in the next section, in the context of the value of g_A to be used in the calculations. We are aware of the fact that the different benchmarks are not independent.

(i) Shell and subshell closures: these are very prominent features of nuclear dynamics which should manifest themselves in the NMEs. Indeed they do, because in this case the variations in the seniority structure between the initial and final nuclei are very abrupt, leading to very large cancellations in the NME. This is particularly acute in the decay of 48Ca, which is the only doubly magic candidate for neutrinoless double-beta decay and the one with the smallest NME.

In Table 2, we show the seniority structures of 48Ca and 48Ti, and we can see that they are very different. We then compute the Gamow-Teller matrix elements between components of fixed seniority, finding the values listed in the same table. There are only two large matrix elements, one diagonal and one off-diagonal, of the same size and opposite sign. If the two nuclei were dominated by the seniority-zero components, one would obtain a very large M_GT. If 48Ti were a bit more deformed, M_GT would be essentially zero. The value produced by the KB3 interaction is 0.75, which represents more than a factor-of-five reduction with respect to the seniority-zero limit. Earlier work on double-beta decays in a basis of generalized seniority (limited to the lowest-seniority components), showing this kind of cancellation, can be found in [35].

Among the favored potential emitters, we also have a few cases of semimagic nuclei in which these effects are less dramatic; however, one should be aware that if a calculation overemphasizes a subshell closure, its NMEs are bound to be too small. This is possibly the situation in some calculations of the decay of 96Zr. Thus, all these spectroscopic issues should be verified with extreme care before trusting an NME.

(ii) Occupation numbers: another very relevant piece of information is provided by the analysis of the experimental spectroscopic factors of stripping and pick-up reactions, which leads to the extraction of the occupation numbers of the orbits close to the Fermi level. This has recently been done for neutrons and protons in 76Ge and 76Se in a series of very careful experiments [37, 38]. Its impact on the different calculations has been uneven: the experimental occupancies were in reasonable agreement with the ISM ones [39], while completely at odds with the QRPA ones [40, 41]. When the QRPA calculations were modified to reproduce these data, their NMEs came closer to the ISM ones. There are experiments in progress for 130Te and 130Xe, but for the moment the information is limited to the neutron occupancies [42] (which, incidentally, are not very different from the ISM ones).

(iii) Pair-transfer amplitudes: in view of the important cancellations between the contributions to the NMEs coming from the transmutation of pairs of neutrons with J = 0 and with J > 0, the knowledge of the pair-transfer amplitudes from and to the neighboring nuclei can be a very strict test of the nuclear wave functions. Reference [42] contains a review of the subject and a list of planned experiments.

(iv) Energy spectra and electromagnetic transitions: these are the data which traditionally label the nuclear shapes and reflect the degree of multipole collectivity, superfluidity, shell closures, and so forth. We have seen that the difference in structure between the initial and final nuclei is the major cause of the depletion of the NMEs; hence the importance of describing these properties accurately.

4.3. The Gamow-Teller Operator: To Quench or Not to Quench

It is a well-known fact that, in order to explain the experimental transition probabilities of Gamow-Teller decays, the predictions of any model which does not explicitly take into account the short-range correlations must be affected by a reduction factor. Quenching factors of 0.77 in the sd-shell [43] and 0.74 in the pf-shell [44] have been extracted from fits to the experimental data in the ISM framework. The value tends asymptotically to 0.7. These results are consistent with those of a large series of charge-exchange reactions, in which only about one half of the strength predicted by the Ikeda sum rule [45] was actually measured. The quenching factor can be interpreted as a kind of effective charge for the Gamow-Teller operator, due to the highly repulsive core of the bare nucleon-nucleon interaction [46]. In principle, "ab initio" calculations should be free of these limitations, but the results of the first attempts are not conclusive yet [47]. All the nuclear models that we discuss in this paper share the need for an effective Gamow-Teller operator in the description of the single-beta decays. Once this is taken into account, they should be able to reproduce the experimental data, which thus provide another important benchmark. Indeed, the ISM calculations perform quite well in this respect.

The main QRPA practitioners have had their Scylla and Charybdis with this issue: when adjusting one of the key parameters of their calculations, the strength of the interaction in the particle-particle channel, g_pp, they had to sacrifice either the reproduction of the single-beta decays or that of the two-neutrino double-beta decay transition probabilities. Finally, they have given up the single-beta decays and fixed their g_pp values to the experimental half-lives of the 2νββ decays. In some cases, the calculations were made both with quenched and with bare operators. In our opinion, the only consistent way of proceeding is with the effective operator. In any case, as they fix the interaction case by case to the experimental 2νββ data, we cannot judge the merit of their approach in this respect.

The ISM description of the two-neutrino double-beta mode started with the 48Ca decay in the full pf-shell, several years in advance of the experimental measurement [48]. The prediction turned out to be quite accurate. For the other decays, the situation is less favorable, because the ISM valence spaces are not complete, in the sense of comprising all the spin-orbit partners. In these spaces, we have made local fits to the single-beta decays, extracted the local quenching factors, and used them in the calculation of the 2νββ decays, with rather satisfactory results. We have recently gathered all our results in [33].

The important question is what to do in the neutrinoless case. Contrary to the 2νββ mode, all the multipoles now contribute to the NME, and, in fact, the channel with the Gamow-Teller quantum numbers is never dominant and quite often has a sign opposite to the others. It is therefore not guaranteed that the right choice would be to apply to all the channels the quenching derived from the pure Gamow-Teller decays in the long-wavelength limit. A very interesting effort to disentangle this problem was made by Hagen and Engel, who renormalized the two-body transition operator of the neutrinoless double-beta decay in the closure approximation in parallel with the renormalization of the bare nucleon-nucleon interaction [49]. Their preliminary conclusion was that no renormalization was necessary. Another attempt along similar lines, using chiral perturbation theory [47], has not given a definite answer to this question either; it is probably the major remaining source of uncertainty in the NMEs of the neutrinoless double-beta decays.

5. A Modest Proposal for the Ranges of Values of the NMEs

The question often posed to theorists working in this field is: what are the error bars of your NMEs? Obviously, the error bar cannot be of statistical origin, because we do not produce models at random. And if we could control the systematic errors, we would have done so already, thereby improving our descriptions. That is why we speak of a range of values in a very loose sense. What would be nonsensical is to average the results of the different approaches blindly, without analyzing their respective merits and trends. Each of the major methods has advantages and drawbacks, whose effects on the values of the NME can sometimes be explored. The clear advantage of the ISM calculations is their full treatment of the nuclear correlations, while their drawback is that they may underestimate the NMEs due to the limited number of orbits in the affordable valence spaces. It has been estimated [29] that this effect can be of the order of 25%. On the contrary, the QRPA variants, the GCM in its present form, and the IBM are bound to underestimate the multipole correlations in one way or another. As it is well established that the action of these correlations is to diminish the NMEs, these methods should tend to overestimate their values. With these considerations in mind, we propose here an educated range of NME values which somehow takes into account the limitations of the different approaches, very much in the mood of [50]. In what follows, we select the results of the major nuclear-structure approaches which share the following common ingredients: (a) nucleon form factors of dipole shape; (b) soft short-range correlations computed with the UCOM method; (c) unquenched axial coupling constant g_A = 1.25; (d) higher-order corrections to the nuclear current; (e) nuclear radius R = r₀ A^{1/3}, with r₀ = 1.2 fm. The IBM results are multiplied by 1.18 to account for the difference between Jastrow and UCOM, and the RQRPA ones are multiplied by 1.1/1.2 so as to align them with the others in their choice of r₀. Therefore, the remaining discrepancies between the diverse approaches are solely due to the different nuclear wave functions which they employ.

Let us start with the 150Nd case, for which no ISM value is available. The GCM calculation [26] is clearly the most sophisticated on the market from the nuclear-structure point of view, and it gives the smallest NME. The two other approaches, QRPA [51] and IBM [25], give larger and similar results; therefore, we weight the GCM value more and propose a range [2.03–2.63], even if, in view of the preceding discussion on the effect of the missing correlations in these approaches, we could somewhat overestimate it. For 136Xe, we have the ISM value, which defines the lower end of the range, but we increase it by 25% to account for the limitations of the valence space (we apply this correction to all the ISM NMEs except that of 48Ca, for which the ISM calculation includes a full harmonic-oscillator major shell). For the upper end, we average the NMEs from the RQRPA calculation of the Tübingen group [30], the GCM, the IBM, and the more recent pnQRPA result of the Jyväskylä–La Plata collaboration [40]. The resulting interval is [2.74–3.45]. With the same ingredients, we obtain a range [3.31–4.61] for 130Te and [3.60–4.69] for 128Te. For 100Mo, the ISM results are still preliminary, and we do not dare to offer an interval, so we propose only an upper bound of 4.23. In the 96Zr case, the NME depends critically on the degree of neutron subshell closure given by the calculation. The anomalously low value proposed by the QRPA calculation of the Tübingen group is surely due to this overclosure (we have checked this effect in our ISM calculation). Discarding this value, the range is [3.06–3.71] (but this time the ISM value is larger than the average of the QRPA and IBM). For 82Se, the interval is [3.30–4.54], using the latest SRQRPA [41]. In the case of the NME of the 76Ge decay, we can use an extra filter, namely, the demand that the calculations be consistent with the occupation numbers measured by Schiffer and collaborators [37, 38]. This leaves us with the ISM [39], the SRQRPA, and the pnQRPA. Averaging again the two QRPA values, we obtain the interval [4.07–4.87]. Finally, for the decay of 48Ca, we fully trust the ISM value. The GCM description of doubly magic nuclei is known to have serious drawbacks. Therefore, we keep the ISM value, 0.85, which can be taken as a lower bound not far from the exact value. We have gathered all these values in Figure 7. It is evident that there are two cases where the NMEs are clearly smaller than the average, 48Ca and 150Nd. For the rest of the decays, the differences in NMEs are within the uncertainty of the calculated values.

6. Experimental Challenge and Strategies

In the standard interpretation of neutrinoless double-beta decay in terms of the mass mechanism, experimentalists designing a 0νββ experiment have three hurdles to clear. The first consists in scrutinizing the much debated 76Ge claim [8]: recent experimental results and present developments are very close to accomplishing this task. The second consists in approaching and then covering the inverted-hierarchy region of the neutrino mass pattern. The third and ultimate goal is to explore the normal-hierarchy region. In this section, we discuss the main guidelines to achieve these targets.

6.1. Size of the Challenge

First, we have to quantify, in terms of signal and background rates, the challenge that the experimentalists have to cope with. Since we aim here only at orders of magnitude, we will make crude approximations in the rate formula (3.5). We will take M′^{0ν} ≈ 3 for the nuclear matrix elements (this choice is motivated by the results discussed in Section 5 and shown in Figure 7). We observe then that, for most of the experimentally relevant isotopes, the phase-space term (including the factor with the axial coupling constant set equal to 1.25) is of the order of 10⁻¹⁴ y⁻¹ (with significant exceptions discussed in Section 6.2). We will therefore consider a sort of "average" candidate isotope with these values of the matrix element and of the phase space. In Table 3, we report the rates for 1 kmol of isotope of this standard candidate, for the reference values of ⟨m_ββ⟩.

Considering that 1 kmol corresponds typically to several tens to one hundred kilograms of isotope, and that it is meaningful to operate a well-designed experiment for ~5 y, we immediately see that, while scrutinizing the 76Ge claim may in principle be done with only ~10 kg of isotope, one typically needs 1 ton of isotope in order to explore the inverted-hierarchy region, just to accumulate a few signal counts. The normal-hierarchy region seems for the moment out of reach of the present technologies, since one would need sources of the order of 1 Mmol (typically 100 tons).
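The arithmetic behind these statements is compact enough to be checked directly; in the sketch below, the phase-space factor and NME are the same placeholder values adopted above, so the output should be read as orders of magnitude only.

```python
# Order-of-magnitude sketch of the logic behind Table 3: expected 0nubb
# signal counts per kmol of an "average" emitter, with assumed G0nu and M'.
import math

N_A = 6.022e23          # Avogadro number [1/mol]
M_E_EV = 0.511e6        # electron mass [eV]

def signal_counts(m_bb_eV, kmol=1.0, years=5.0,
                  G0nu_per_year=1e-14, M_prime=3.0):
    """Expected decays: N * ln2 * t / T_1/2, valid for t << T_1/2."""
    inv_T = G0nu_per_year * M_prime**2 * (m_bb_eV / M_E_EV) ** 2
    return kmol * 1e3 * N_A * math.log(2) * years * inv_T

for mbb in (0.3, 0.05, 0.01):   # ~ claim / inverted / normal hierarchy scales
    print(f"{mbb*1e3:>3.0f} meV -> {signal_counts(mbb):6.2f} counts per kmol in 5 y")
```

With these assumptions, ⟨m_ββ⟩ ≈ 50 meV yields only a couple of counts per kmol in 5 y, which is why ton-scale isotope masses are needed for the inverted-hierarchy region.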

In addition, in order to appreciate such tiny signal rates, the background needs to be extremely low. The experimentalists are obliged to operate in conditions of almost zero background, given the constraints imposed by the size of the source. Acceptable background rates are of the order of 1–10 counts/(y kmol) if the goal is just to approach or touch the inverted-hierarchy region, whereas values at least one order of magnitude lower, around or even below 1 count/(y ton), are needed to explore it fully.

6.2. Choice of the Double-Beta Decay Isotope

Which are the best isotopes for the search for neutrinoless double-beta decay? Experimental practice shows that the following three factors weigh the most in the design of an experiment: (i) the Q-value, (ii) the isotopic abundance, together with the ease of enrichment, and (iii) the compatibility with an appropriate detection technique.

The Q-value is probably the most important criterion. It influences both the phase space and the background. It is essentially a Q-value-based selection which determines the fact that at the moment there are only 9 experimentally relevant isotopes (listed in Table 4, which also reports other parameters and notes relevant for the discussion in the present section). The Q-values of all these isotopes are larger than 2.4 MeV, with the important exception of 76Ge (Q-value = 2.039 MeV), which remains in the elite mainly thanks to factor (iii). One can get a grasp of the Q-value situation in Figure 8, where all 35 double-beta-unstable nuclei are reported with their transition energies. The "magnificent nine" are highlighted. Two markers indicate two important energy limits in terms of background: the 2615 keV line represents the end point of the natural gamma radioactivity, while the 3270 keV line represents the Q-value of the 214Bi beta decay, which, among the 222Rn daughters, is the one releasing the highest-energy betas and gammas. These two markers divide the 9 candidates into three groups of three isotopes. The first group (76Ge, 130Te, and 136Xe) has to cope with some gamma background and with the radon-induced one; the second group (82Se, 100Mo, and 116Cd) is out of reach of the bulk of the environmental gamma background, but radon may be a problem; the candidates of the third group (48Ca, 96Zr, and 150Nd) are in the best position for realizing a background-free experiment. As for the phase space, the situation is depicted in Figure 9. No great differences are observable among the various candidates, with the significant exceptions of 76Ge, which presents a particularly small value due to its low Q-value, and, on the other side, of 150Nd, characterized by a particularly high value.

As for the second criterion, the natural isotopic abundances are reported in Table 4. Most of the abundances are in the few-percent range, with two significant exceptions: the positive case of 130Te, which with its 33.8% abundance can be studied with high sensitivity even with natural samples, and the negative case of 48Ca, well below 1%. Given the considerations presented in Section 6.1, an ambitious experiment (aiming at exploring the inverted-hierarchy region of the neutrino mass pattern) needs at least 100 kg of isotope. In order to keep the detector size reasonable (and recalling that the background scales roughly with the total source mass, not with the isotope mass), it is clear that isotopic enrichment is a necessary step for almost all high-sensitivity searches. The generally available enrichment techniques are reported in Table 5. For reasons of cost, element mass, and production capacity, the only technique used extensively so far for double-beta decay experiments is gas centrifugation. Unfortunately, it can be applied only to gases. Therefore, only those elements which admit a stable gaseous compound can be enriched in this way. This is the case for 76Ge, 130Te, 82Se, 100Mo, and 116Cd (normally the gaseous compound is a fluoride). Of course, 136Xe is a gas by itself. The enrichment cost is of the order of 50–100 $/g for germanium. For the other nuclides, the approximate scaling factor is reported in Table 4. By a sort of conspiracy of Nature, the three gold-plated isotopes 48Ca, 96Zr, and 150Nd are not on this list. For these nuclides, other technologies have to be used, such as ion cyclotron resonance (ICR), molecular laser isotope separation (MLIS), and atomic vapor laser isotope separation (AVLIS), which, unlike gas centrifugation, are not exploited at the industrial level. For several years, the last technique has been at the center of a project in France aiming at reconverting a facility designed for uranium enrichment to the production of ~100 kg of 150Nd. Recently [54], the possibility showed up of enriching Nd by centrifugation. This requires, however, the design of special centrifuges operating at the high temperatures at which a gaseous compound of neodymium is available.

The role of the third criterion will become clearer in the following sections, where specific detection technologies are described. We would like, however, to discuss here three emblematic cases in which the detector principle matches favorably the isotope to be studied. (i) 76Ge: large-volume, high-purity, high-energy-resolution Ge diodes are currently employed in gamma spectroscopy. A detector of this type, made of germanium enriched in 76Ge, is almost ideal for a double-beta decay search. This explains why past (Heidelberg-Moscow and IGEX) and present (GERDA and Majorana) experiments were and are at the forefront of the field, in spite of the relatively low Q-value of this isotope. (ii) 130Te: large crystals (up to 1 kg) of the compound TeO2 can be grown with high radiopurity. They can be used for the realization of bolometers with excellent performance. Given also the high natural isotopic abundance of 130Te, it is understandable why a past experiment like Cuoricino led the field for several years, and why CUORE is one of the most promising future searches (both are based on arrays of TeO2 bolometers). (iii) 136Xe: xenon, liquid or gaseous, is an ideal medium for particle detection. It can be used to equip TPCs with tracking/topology capability, and scintillation and ionization can provide reasonable energy resolution. This approach is exploited in experiments like EXO (now leading the field) and NEXT. In addition, xenon can easily be dissolved in organic liquid scintillators, allowing very large masses to be reached with existing facilities (this is the case of KamLAND-Zen). Last but not least, xenon is the element that can be isotopically enriched at the lowest price and with the highest production capacity.

By the usual conspiracy of Nature, the three isotopes just mentioned are the least favorable among the "magnificent nine" in terms of Q-value; nevertheless, they provide at the moment the most stringent limits on neutrinoless double-beta decay. This fact explains better than any digression how crucial the detection technique remains for a highly sensitive search.

6.3. Experimental Approaches and Methods

From the experimental point of view, the shape of the two-electron summed energy spectrum makes it possible to distinguish between the two decay modes discussed. In the case of 2νββ decay, this spectrum is expected to be a continuum between 0 and Q, with a maximum around Q/3. For 0νββ decay, the spectrum is just a peak at the energy Q, broadened only by the finite energy resolution of the detector. The two distinctive energy distributions are shown in Figure 10(a). Additional signatures of the various processes are the single-electron energy distribution and the angular correlation between the two emitted electrons. As previously discussed, Q ranges from 2 to 3 MeV for the most promising candidates.

The experimental strategy pursued to investigate the 0νββ decay consists of the development of a proper nuclear detector, with the purpose of revealing the two emitted electrons in real time and of collecting, as minimal information, their summed energy spectrum. Additional pieces of information can be provided in some cases, like the single-electron energies and initial momenta or, in one proposed approach, the species of the daughter nucleus. The desirable features of this nuclear detector are as follows. (i) High energy resolution, since in the 0νββ case a peak must be identified over an almost flat background. In particular, this feature is crucial to keep under control the background induced by the tail of the 2νββ spectrum. It can be shown that the ratio of counts due to 0νββ decay over those due to 2νββ decay in a narrow window around the Q-value (of the order of the detector energy resolution) scales as [55]

R_{0ν/2ν} ∝ (T^{2ν}_{1/2} / T^{0ν}_{1/2}) (1/δ)⁶,

where δ is the fractional energy resolution at the Q-value. It is worth noting the strong dependence of this expression on the energy resolution. Candidates with a slow 2νββ decay rate (like 136Xe) are of course more favorable than those with a fast one (like 100Mo). For the latter, an excellent energy resolution (<1%) is mandatory. (ii) Low background, which requires underground detector operation (to shield cosmic rays), very radiopure materials (the competing natural radioactivity decays have typical half-lives of the order of 10^10 y, versus lifetimes longer than 10^25 y for 0νββ), and well-designed passive and/or active shielding against the local environmental radioactivity. (iii) A large source, in order to monitor many candidate nuclei. Present sources are of the order of 10–100 kg in the most sensitive detectors, while experiments capable of covering the inverted-hierarchy region need sources on the 100–1000 kg scale. (iv) Tracking and topology capability for the nuclear events, useful to reject background and to provide additional kinematical information on the emitted electrons.
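The sixth-power law in the expression above follows from the (Q − E)^5 fall-off of the 2νββ summed spectrum near its end point; the toy functions below encode just this scaling (all normalization constants are dropped), showing what an improvement in fractional resolution buys.

```python
# Toy scaling of the 2nubb background under the 0nubb peak: near the
# endpoint the summed spectrum falls as (Q - E)^5, so the fraction of
# 2nubb events within a window Delta below Q scales as (Delta/Q)^6.
# Normalizations are dropped; only the scaling matters here.
def background_fraction(delta_over_Q):
    """Fraction of 2nubb decays falling within Delta of the Q-value."""
    return delta_over_Q ** 6

def zero_to_two_nu_ratio(T2nu, T0nu, delta_over_Q):
    """Relative 0nu/2nu counts in the analysis window (up to a constant)."""
    return (T2nu / T0nu) / background_fraction(delta_over_Q)

# Improving the fractional resolution from 10% to 1% buys six orders of
# magnitude in 2nubb rejection:
print(background_fraction(0.10) / background_fraction(0.01))  # -> 1e6
```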

Normally, the listed features cannot all be met simultaneously by a single detection method. It is up to the experimentalists to choose the philosophy of the experiment and to select the detector characteristics accordingly, privileging some properties with respect to others, with the final sensitivity of the setup to the half-life and to ⟨m_ββ⟩ in mind, of course.

The searches for 0νββ decay can be further classified into two main categories: the so-called calorimetric technique, in which the source is embedded in the detector itself, and the external-source approach, in which source and detector are two separate systems.

The calorimetric technique has been proposed and implemented with various types of detectors, such as scintillators, bolometers [56], solid-state devices [57], and gaseous chambers. The advantages and limitations of this technique can be summarized as follows: (i) thanks to the intrinsically high efficiency of the method, large source masses are possible: ~100 kg has been demonstrated, and ~1000 kg is possible; (ii) with a proper choice of the detector type, a very high energy resolution (of the order of 0.1%) is achievable, as in Ge diodes or in bolometers; (iii) there are severe constraints on the detector material and therefore on the nuclides that can be investigated; (iv) it is difficult to reconstruct the event topology, with the exception of liquid or gaseous Xe TPCs, and then only at the price of a lower energy resolution.

For the external-source approach, many different detection techniques have been tried as well: scintillation, gaseous TPCs, gaseous drift chambers, magnetic fields for momentum and charge-sign measurement, and time of flight. The main features, with their pros and cons, are the following. (i) A neat event reconstruction is possible, making it easier to achieve virtually zero background; however, 2νββ events cannot be distinguished event by event from 0νββ ones if the total electron energy is around Q; therefore, because of the low energy resolution, the 2νββ decay constitutes a severe background source for the 0νββ search. (ii) Large source masses are not easy to achieve because of self-absorption in the source, so that the present limit is around 10 kg; 100 kg is possible with an extraordinary effort, while 1000 kg looks out of reach for this approach. (iii) Normally, the energy resolution is low (of the order of 10%), intrinsically limited by the fluctuations of the energy deposited by the electrons in the source itself. (iv) The efficiency is also low (in prospect, of the order of 30%).

6.4. The Experimental Sensitivity

In order to compare different experiments, it is useful to give an expression for the sensitivity of an experimental setup to the lifetime of the investigated candidate, and hence to determine the sensitivity to ⟨m_ββ⟩ in the case of the mass mechanism. The first step involves only detector and setup parameters, while for the second step one needs reliable calculations of the NMEs, extensively discussed in Section 4. The sensitivity to the lifetime can be defined as the lifetime corresponding to the minimum detectable number of events over background at a 1σ confidence level. For the case of a source embedded in the detector and nonzero background, it holds that

S^{0ν} = ln 2 · (x ε N_A / A) √(M T / (b ΔE)),    (6.2)

where N_A is the Avogadro number, M is the detector mass (or the source mass, in the case of the external-source approach), T is the measurement time, ε is the detector efficiency, x is the ratio between the total mass of the candidate nuclides and the detector (source) mass, A is the molar mass of the candidate, ΔE is the energy resolution, and b is the specific background, that is, the number of spurious counts per unit mass, time, and energy.
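A numeric version of (6.2) is handy for quick comparisons between setups; all parameter values in the example call below are arbitrary placeholders, not those of any specific experiment.

```python
# Numeric sketch of the factor of merit (6.2). Units assumed: M in kg,
# t in y, b in counts/(keV kg y), dE (FWHM) in keV, A in g/mol.
import math

N_A = 6.022e23

def t_half_sensitivity(M_kg, t_y, b, dE_keV, A, eff=0.8, x=0.9, n_sigma=1.0):
    """Background-limited half-life sensitivity [y] at n_sigma c.l."""
    nuclei_per_kg = x * 1e3 * N_A / A          # candidate nuclei per kg
    return (math.log(2) * eff * nuclei_per_kg / n_sigma
            * math.sqrt(M_kg * t_y / (b * dE_keV)))

print(f"{t_half_sensitivity(100, 5, 0.01, 5, 130):.1e} y")   # placeholder inputs
```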

From this formula, one can see that, in order to improve the performance of a given setup, one can use either brute force (e.g., increasing the exposure MT) or better technology, improving the detector performance (ε, ΔE) and the background control (b). Next-generation experiments require work on both fronts.

In order to derive the sensitivity to ⟨m_ββ⟩, one must combine (6.2) with (3.5), obtaining

S(⟨m_ββ⟩) ∝ [1 / (M′^{0ν} √G^{0ν})] (A / (x ε))^{1/2} (b ΔE / (M T))^{1/4},    (6.3)

which shows how the choice of the nuclide is more relevant than the setup parameters, on which the sensitivity depends rather weakly. The weak dependence on the exposure causes a rather fast saturation of the sensitivity: since the sensitivity to ⟨m_ββ⟩ improves only as (MT)^{1/4}, a factor-of-2 gain requires 16 times the exposure. If an experiment has run for 5 years and has established a given limit on ⟨m_ββ⟩, it must run for a further 75 years in order to improve that limit by a factor of 2.

The formula reported in (6.2) assumes a Gaussian approximation for the distribution of the number of observed background counts. For a small number of counts (<24), the sensitivity should be computed assuming a Poisson distribution of the background counts. However, (6.2) is extremely useful in evaluating the expected performance of prospective experiments, as it analytically links the experimental sensitivity to the detector parameters. It is a sort of "factor of merit" used extensively within the 0νββ experimental community.

Nowadays, several experimental techniques promise to achieve zero-background investigations in the near future. In this circumstance, (6.2) and (6.3) no longer hold. The observation of 0 counts excludes a given number of signal counts at a given confidence level: for instance, 3 counts are excluded at the 95% c.l. in Poisson statistics. Therefore, the sensitivity of a zero-background experiment is given by

S^{0ν}_{b=0} = ln 2 · (x ε N_A / A) · M T / n_L,

with n_L ≈ 3 at the 95% c.l., and (6.3) is modified accordingly.
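The zero-background limit can be sketched in the same style; here n_excluded ≈ 3 encodes the 95% c.l. Poisson exclusion mentioned above (e^{−3} ≈ 0.05), and the example values are again placeholders.

```python
# Zero-background sketch: observing 0 counts excludes a mean signal of
# about 3 events at 95% c.l., so T_1/2 > ln2 * N * eff * t / 3.
# Same placeholder units and values as in the previous sketch.
import math

N_A = 6.022e23

def t_half_zero_bkg(M_kg, t_y, A, eff=0.8, x=0.9, n_excluded=3.0):
    """95% c.l. half-life sensitivity [y] of a background-free experiment."""
    n_nuclei = x * M_kg * 1e3 * N_A / A
    return math.log(2) * n_nuclei * eff * t_y / n_excluded

print(f"{t_half_zero_bkg(100, 5, 130):.1e} y")
```

Note that in this regime the sensitivity grows linearly with the exposure MT, instead of as its square root, which is what makes the zero-background strategy so attractive.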

Uncertainties in the NMEs prevent the determination of precise ⟨m_ββ⟩ values in correspondence with a given lifetime: normally a range is indicated, which takes into account the different models used for the calculation of the NMEs.

7. Experimental Situation

We are now (July 2012) at a turning point in the experimental search for 0νββ decay. In the last decade, two experiments (Cuoricino and NEMO3), now stopped, have reached sensitivities to ⟨m_ββ⟩ close to the value claimed by a part of the Heidelberg-Moscow collaboration, in the range 0.2–1 eV. However, they were not able to confirm or disprove this claim, in part as a consequence of the uncertainties in the NMEs. In the meantime, several groups were preparing more ambitious searches capable of going well beyond the Heidelberg-Moscow sensitivity. In the last year, some of these searches (EXO-200, KamLAND-Zen, and GERDA) have started taking data and have released their first results, while others are in an advanced construction phase. In this section, we review the past experiments and describe the present experimental scenario, which is exciting and fast moving.

7.1. Past Experiments

In the nineteen-nineties, the double-beta decay scene was dominated by the Heidelberg-Moscow (HM) experiment [58]. This search was based on a set of five Ge diodes, enriched to 86% in the candidate isotope 76Ge, and operated underground with high energy resolution (typically 4 keV FWHM) in the Laboratori Nazionali del Gran Sasso (LNGS), Italy. This search can be considered, even from the historical point of view, the paradigm of the calorimetric approach discussed in Section 6.3. The total mass of the detectors was 10.9 kg, corresponding to a source strength of about 7 × 10^25 76Ge nuclei. The raw background, impressively low, is 0.17 counts/(keV kg y) around the Q-value (2039 keV). It can be reduced by a further factor 5 using pulse-shape analysis to reject multisite events. The limits on the half-life and on ⟨m_ββ⟩ are, respectively, of the order of 10^25 y and 0.3–0.6 eV (depending on the NMEs chosen for the analysis). A subset of the HM collaboration, however, claimed the discovery of 0νββ decay in 2001, with a best value for ⟨m_ββ⟩ of 0.39 eV (including the nuclear matrix element uncertainty) [59]. This claim is based on the identification of tiny peaks in the region of the 0νββ decay, one of which occurs at the 76Ge Q-value. However, this announcement raised skepticism in the double-beta decay community [60], including part of the HM collaboration itself [61], because not all the claimed peaks could be identified and because the statistical significance of the peak looked weaker than the claimed 2.2σ and dependent on the spectral window chosen for the analysis [62, 63]. However, new papers [8, 64] published later gave more convincing support to the claim. The quality of the data treatment improved, and the exposure increased to 71.7 kg·y. In addition, a detailed study based on pulse-shape analysis suggests that the peak at the 76Ge Q-value is mainly formed by single-site events, as expected for double-beta decay, while the nearby peaks are compatible with multisite events, as expected for gamma interactions in that energy region and for detectors of that volume. A 4.2σ effect is claimed, with a half-life of about 2.2 × 10^25 y [8]. The HM experiment is now over, and the final word on this crucial result will be given by other searches.

The state of the art of the external-source technique is represented by the NEMO3 experiment [65]. The NEMO3 detector, installed underground in the Laboratoire Souterrain de Modane (LSM), France, is based on technologies well established in experimental particle physics: the electrons emitted by the sources cross a magnetized tracking volume instrumented with Geiger cells and deliver their energy to a calorimeter based on plastic scintillators. Thanks to the division of the setup into 20 sectors, many nuclides can be studied simultaneously: 100Mo, 82Se, 150Nd, 116Cd, 130Te, 96Zr, and 48Ca. The strongest source was 100Mo, with almost 7 kg of isotope. The energy resolution ranged from 11% to 14.5%. The results achieved with 100Mo correspond to limits of 0.8–1.3 eV on ⟨m_ββ⟩. In the NEMO3 experiment, all the strengths and all the limitations of the external-source approach are apparent. On the one hand, the NEMO3 detector produces beautiful reconstructions of the summed and single-electron energy spectra, together with precious information about the angular distribution. Double-beta decay events can be neatly reconstructed with excellent background rejection. Thanks to the multisource approach, 2νββ decay has been detected in all seven candidates under observation, a superb physical and technical achievement which makes the NEMO3 setup a real "double-beta factory." On the other hand, the low energy resolution and the unavoidable two-dimensional structure of the sources make a further improvement of the sensitivity to ⟨m_ββ⟩ quite difficult, because of the background from 2νββ decay and the intrinsic limits on the source strength.

Bolometric detection of particles [66] is a technique particularly suited to the 0νββ search, providing high energy resolution and large flexibility in the choice of the sensitive material [56]. It can be considered the most advanced and promising application of the calorimetric technique in its high-energy-resolution approach. In bolometers, the energy deposited in the detector by a nuclear event is measured by recording the temperature increase of the detector as a whole. In order to make this tiny heating appreciable, and to reduce all the intrinsic noise sources, the detector must be operated at very low temperatures, of the order of 10 mK for large masses. Several interesting bolometric candidates have been proposed and tested. The choice has fallen on natural TeO2 (tellurite), which has reasonable mechanical and thermal properties together with a very large (27% in mass) content of the ββ candidate 130Te. A large international collaboration ran an experiment based on this approach for five years, named Cuoricino (which means "small CUORE—heart—" in Italian), now stopped, installed underground in the Laboratori Nazionali del Gran Sasso [67]. Cuoricino consisted of a tower of 13 modules containing 62 crystals, for a total mass of ~41 kg of TeO2. Cuoricino's results are at the level of the HM experiment in terms of sensitivity to ⟨m_ββ⟩, covering a range of limits of 0.2–0.7 eV, depending on the choice of the nuclear matrix elements. A very low background (of the order of 0.18 counts/(keV kg y)) was obtained in the 0νββ decay region, similar to the one achieved in the HM setup. The energy resolution is about 8 keV FWHM, quite reproducible across the crystals. Unfortunately, despite a sensitivity comparable to that of the HM experiment, Cuoricino cannot disprove the 76Ge claim, owing to the discrepancies among the nuclear matrix element calculations.

7.2. Features of the Present Generation Searches

In Section 6.1, we have seen that the background target for highly sensitive searches is around, or even below, 1 count/(y ton), with the purpose of scrutinizing without ambiguity the 76Ge claim and then attacking the inverted hierarchy region. In a high energy resolution experiment (with ΔE of a few keV), this request translates into a specific background coefficient of the order of 1 count/(keV y ton), while the target is even more ambitious for low energy resolution searches, where however the most critical role is played by 2νββ decay. When designing a modern double-beta decay experiment and selecting a detector technology for it, the experimentalist should therefore ask himself or herself three basic questions, the answer to which must be "yes" if that technology is viable and timely: (1) Is the selected technology able to deal with 100 kg, or better 1 ton, of isotope, at least in prospect? (2) Is the choice of the detector and of the related materials compatible with a background of at most 1 count/(y ton) in the region of interest? (3) Can the experiment be designed and constructed in a few years, and can the chosen technique provide at least 80% live time for several years?
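To make the unit bookkeeping behind these targets explicit, the short Python sketch below converts a total background target, in counts/(y ton) within the region of interest, into a specific background coefficient in counts/(keV kg y). The relation b = B/ΔE and the example inputs are our own illustrative assumptions, not figures taken from any of the experiments discussed here.

```python
# Illustrative unit bookkeeping between a total background target B,
# in counts/(y ton) within the region of interest, and a specific
# background coefficient b, in counts/(keV kg y). Inputs are assumptions.

def specific_background(total_counts_per_ton_y, roi_width_kev):
    """Return b in counts/(keV kg y), assuming the region of interest
    is roi_width_kev wide and the background is flat across it."""
    per_kev_ton_y = total_counts_per_ton_y / roi_width_kev
    return per_kev_ton_y / 1000.0  # 1 ton = 1000 kg

# The resulting order of magnitude depends on the assumed ROI width:
print(specific_background(1.0, 1.0))  # 1e-3 counts/(keV kg y)
print(specific_background(1.0, 5.0))  # 2e-4 counts/(keV kg y)
```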

The first question needs to be considered also from the economic point of view. As Table 4 shows, practically all the nuclei of interest, with the significant exception of 130Te, require isotopic enrichment. The cost of this process, when technically feasible, is in the range 10–300 $/g. Therefore, a next-generation experiment has a cost in the range of several tens of millions of dollars just to procure the basic material. Let us now see which solutions are under test worldwide to get a positive answer to the three questions listed above.

7.3. Classification and Overview of the Experiments

As already discussed in Section 6.3, two approaches are normally followed in 0νββ decay experiments (calorimetric technique and external source), and two classes of searches can be singled out depending on which experimental parameter is mostly emphasized: high energy resolution or tracking/topology capability. We will schematically review ten projects, grouped in five categories according to the approaches and the performance mentioned above (see Figure 11). For the half-life sensitivity, we will use the values declared by the authors, and we will translate them into a range of limits on ⟨mββ⟩ using the results presented in Section 5 and displayed in Figure 7. For the phase space factors, we have used the values reported in Table 4. The limits on the effective Majorana neutrino mass may therefore differ from those reported by the various collaborations, since we have tried to make an educated guess of the NME range rather than indiscriminately taking all the available calculations.
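As a guide to how this translation works in practice, the following sketch applies the standard mass-mechanism rate formula 1/T1/2 = G0ν |M0ν|² (⟨mββ⟩/me)². The half-life, phase space factor, and NME range in the example are hypothetical placeholders, not the values actually adopted in this review.

```python
import math

M_E_EV = 0.511e6  # electron mass in eV

def mbb_limit_ev(half_life_limit_y, g0nu_per_y, nme):
    """Effective Majorana mass limit from a half-life limit, using the
    mass-mechanism rate formula 1/T = G |M|^2 (m_bb / m_e)^2."""
    return M_E_EV / (nme * math.sqrt(g0nu_per_y * half_life_limit_y))

# Hypothetical inputs: a 1e25 y limit, a phase space factor of 1e-14 /y,
# and an NME range of 2.5-4.5 (placeholders, not this paper's values).
for nme in (2.5, 4.5):
    print(f"NME = {nme}: m_bb < {mbb_limit_ev(1e25, 1e-14, nme):.3f} eV")
```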

This list of ten projects does not cover the full range of existing searches but, according to our judgment, includes the experiments with the highest chances of giving important contributions to the field under discussion. Among these projects, more space will be given to those searches and techniques which have a special relevance, either for the results that they are providing at the moment or for the excellent prospects offered by the related technology.

The first category is characterized by the calorimetric approach with high energy resolution and includes four planned projects.

GERDA [68] is an array of enriched Ge diodes immersed in liquid argon (rather than cooled down in a conventional cryostat) investigating the isotope 76Ge. The experiment is located at LNGS, Italy. The proved energy resolution is 0.25% FWHM. The first phase (data taking started in November 2011) consists of 14.6 kg of isotope mass; the second phase foresees 35 kg. For the first phase, the predicted 1 y sensitivity to the half-life at 95% C.L. corresponds to a limit range on ⟨mββ⟩ of 252–302 meV; the first phase will therefore allow the 76Ge claim to be scrutinized. The second-phase sensitivity, after an exposure of 100 kg y, translates into limits on the Majorana mass of 98–117 meV. The target background for the first phase was 10^-2 counts/(keV kg y); the experimental results showed a background higher by a factor of two with respect to the expectations. The philosophy of the experiment is to work always in the zero background regime. Therefore, the background goal for the second phase is 10^-3 counts/(keV kg y), one order of magnitude lower than in the first phase, given that the exposure will be higher by the same factor.
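The zero-background logic can be made quantitative with a back-of-the-envelope count estimate: the sketch below computes the expected background counts in a region of interest one FWHM wide. All numerical inputs are illustrative assumptions, not official GERDA figures.

```python
def expected_counts(b_per_kev_kg_y, fwhm_kev, exposure_kg_y):
    """Expected background counts in a region of interest one FWHM wide,
    assuming a flat specific background b across that window."""
    return b_per_kev_kg_y * fwhm_kev * exposure_kg_y

# With a ~4 keV window, a tenfold exposure increase requires a tenfold
# lower specific background to keep the expected counts of order one.
print(expected_counts(1e-2, 4.0, 15.0))   # phase-I-like assumption: ~0.6 counts
print(expected_counts(1e-3, 4.0, 150.0))  # phase-II-like assumption: ~0.6 counts
```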

MAJORANA [69] is an array of enriched Ge diodes operated in conventional Cu cryostats, also investigating the isotope 76Ge. Located in the SURF underground facility in the USA, it has a modular structure; the first step envisages the construction of a demonstrator containing 40 kg of germanium, up to 30 kg of which will be enriched at 86%. The proved energy resolution is 0.16% FWHM. The purpose of the demonstrator is to show that the specific background level required by a 1 ton experiment can be reached. Merging with GERDA is foreseen in view of a 1 ton set-up; this corresponds to the so-called third phase of GERDA.

CUORE [70], a natural expansion of Cuoricino, will be an array of 988 natural TeO2 bolometers arranged in 19 towers and operated at 10 mK in a specially designed dilution cryostat. The total sensitive mass will be 741 kg, while the source will correspond to 200 kg of the isotope 130Te. CUORE will take advantage of the Cuoricino experience and will be located at LNGS, Italy. The proved energy resolution is 0.25% FWHM. The 90% C.L. 5 y sensitivity to the half-life is of the order of 10^26 y, corresponding to a limit range on ⟨mββ⟩ of 60–84 meV. CUORE is in the construction phase, and data taking is foreseen to start in 2014. A general test of the CUORE detector, comprising a single tower and named CUORE-0, will take data in fall 2012.

LUCIFER [71] will consist of an array of ZnSe scintillating bolometers operated at 20 mK for the study of the isotope 82Se. A proof of principle with ~10 kg of enriched Se is foreseen in 2014. The proved energy resolution is better than 1% FWHM. LUCIFER is in the R&D phase, but it has nevertheless a considerable sensitivity by itself (of the order of ~100 meV for the effective Majorana mass). Given the high potential of the scintillating bolometers, capable of rejecting the harmful alpha background, other searches following this approach have recently started. In France, a project named LUMINEU will operate scintillating bolometers of the compound ZnMoO4 for the study of the isotope 100Mo. In preliminary tests on this compound, resolutions better than 0.5% FWHM look feasible, and an excellent alpha discrimination power was demonstrated [72–74]. The first step of the project, which has the purpose of testing the concept and measuring the ultimate background, will be a pilot experiment consisting of an array of four crystals containing 0.6 kg of 100Mo. Thanks to the foreseen zero background, this small set-up has nevertheless a remarkable sensitivity to the 0νββ half-life of 100Mo [72]. It was also shown that the relatively short half-life of the 2νββ decay of 100Mo does not produce a dangerous background in this context [75]. In Korea, an experiment named AMoRE is developing scintillating bolometers of CaMoO4, investigating once again the isotope 100Mo [76]. The AMoRE collaboration will employ crystals depleted in 48Ca (a source of background in this case) and enriched in 100Mo.

Even though these experiments do not have tracking capability, some spatial information and other tools help in reducing the background. An important asset is granularity, which is a major point for CUORE (an array of 988 closely packed individual bolometers), MAJORANA (in prospect, a set of modules with 57 closely packed individual Ge diodes per module), and the lower energy resolution experiment COBRA [77], discussed later (64000 individual semiconductor detectors in the final design). Closely packed arrays are foreseen also in the final stage of the experiments based on scintillating bolometers. Granularity provides a substantial background suppression thanks to the rejection of simultaneous events in different detector elements, which cannot be ascribed to a ββ process.
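To illustrate the logic of an anticoincidence cut exploiting granularity, here is a minimal toy filter in Python. The event structure, the coincidence window, and the data are invented for the example and do not correspond to any specific experiment's analysis.

```python
def anticoincidence(hits, window_s=0.01):
    """Toy anticoincidence cut. 'hits' is a time-sorted list of tuples
    (time_s, channel, energy_kev). Hits closer in time than window_s are
    grouped into one event; only events confined to a single detector
    element are kept, since a double-beta decay deposits energy locally."""
    events, current = [], []
    for hit in hits:
        if current and hit[0] - current[-1][0] > window_s:
            events.append(current)
            current = []
        current.append(hit)
    if current:
        events.append(current)
    return [ev for ev in events if len({h[1] for h in ev}) == 1]

hits = [(0.000, 3, 2528.0),                     # isolated hit: kept
        (1.000, 5, 1460.0), (1.002, 6, 583.0)]  # two detectors fire: rejected
print(anticoincidence(hits))  # [[(0.0, 3, 2528.0)]]
```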

Another tool which can improve the sensitivity of Ge-based calorimetric searches is pulse shape analysis, already used in the HM experiment with remarkable results. It is well known that in ionization detectors spatial information can be obtained by looking at the shape of the current pulse. In particular, this fact will be exploited in GERDA using the so-called BEGe detectors [78], consisting of p-type HPGe devices with an n+ contact covering the whole outer surface and a small p+ contact located on the bottom. These detectors exhibit enhanced pulse shape discrimination properties, which can be exploited for background reduction purposes.

Other techniques to suppress the background in calorimetric detectors are sophisticated forms of active shielding. For instance, the operation of the GERDA Ge diodes in liquid argon opens the way, in the second phase of the experiment, to the use of the cryogenic liquid as a scintillating active shield. In bolometers, it was clearly shown that additional bolometric elements, thermally connected to the main detector in the form of thin slabs, can identify events due to surface contamination [79, 80]. This is a particularly dangerous background source, presently the most limiting factor in the predicted CUORE performance, since surface α's, degraded in energy, populate the spectral region of interest for 0νββ decay. This shows that several refinements are possible in the high energy resolution calorimetric experiments and that an important R&D activity is mandatory to improve the sensitivity of next-generation experiments.

A very promising development of the calorimetric approach with low-temperature detectors is the realization of scintillating bolometers [81], which are at the basis of the LUCIFER, LUMINEU, and AMoRE projects. The simultaneous detection of heat and scintillation light for the same event allows α particles to be rejected with an efficiency close to 100%, since the ratio between the photon and phonon yields is different for α and for β/γ interactions. In addition, α rejection by pulse shape analysis looks possible in some cases, both in the heat and in the light channel. The rejection capability becomes formidably promising when applied to candidates with a Qββ-value higher than 2.6 MeV, that is, outside the natural γ radioactivity range, since in this case α's are the only really disturbing background source. This is the case for 82Se and 100Mo, which are the isotopes investigated in the present searches. A complete elimination of the α's for these candidates could lead to specific background levels of the order of 10^-4 counts/(keV kg y) [72]. A research program in this field, partially already accomplished, has identified promising scintillating compounds of 48Ca, 100Mo, 116Cd, and 82Se, such as PbMoO4, CdWO4, CaMoO4, SrMoO4, ZnMoO4, CaF2, and ZnSe. The choice of LUCIFER has fallen on ZnSe because of the favorable mass fraction of the candidate, the availability of large radio-pure crystals, and the well-established enrichment/purification technology for Se. The compounds ZnMoO4 and CaMoO4 are equally promising, and this explains their use in the LUMINEU and AMoRE experiments. In nonscintillating materials like the TeO2 employed in CUORE, the α rejection can be achieved by exploiting the much weaker Cerenkov light: the two electrons emitted in the 0νββ decay are above threshold and produce a flash of light with a total energy of approximately 140 eV, whereas α particles are far below threshold and give rise to dark events. The detection of the Cerenkov light would dramatically improve the sensitivity of CUORE, providing the possibility of bringing the specific background from the present ~10^-2 counts/(keV kg y) down to ~10^-3 counts/(keV kg y). The detection of the Cerenkov light in a bolometric context, with a sensitivity allowing a full rejection of the α events, requires exceptionally sensitive light detectors, which however appear to be within the reach of recently developed technologies.
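A minimal sketch of the heat-light discrimination described above: events are classified by their light-to-heat ratio, with α's falling in a band of much lower light yield. The yields and tolerance below are made-up numbers; real analyses calibrate these bands with α and γ sources.

```python
def classify(heat_kev, light_kev, ly_beta_gamma=1.0, ly_alpha=0.2,
             tolerance=0.3):
    """Toy particle identification for a scintillating bolometer.
    Light yields (keV of light per MeV of heat) are illustrative:
    alpha particles scintillate much less than beta/gamma events."""
    ratio = light_kev / (heat_kev / 1000.0)  # keV light per MeV heat
    if abs(ratio - ly_beta_gamma) < tolerance * ly_beta_gamma:
        return "beta/gamma"
    if abs(ratio - ly_alpha) < tolerance * ly_alpha:
        return "alpha"
    return "unclassified"

print(classify(3000.0, 3.0))  # ratio 1.0 -> beta/gamma (signal-like)
print(classify(3000.0, 0.6))  # ratio 0.2 -> alpha (rejected)
```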

The second category of future experiments (calorimetric searches with low energy resolution and no tracking capability) is represented by two examples, which exploit different techniques and address the low energy resolution problem with different measures.

KamLAND-Zen [82] is a follow-up of the KamLAND experiment, used for the detection of reactor neutrinos and located in the Kamioka mine in Japan. It was converted into an apparatus capable of studying 0νββ decay by dissolving Xe gas in an organic liquid scintillator contained in a nylon balloon which, being immersed in the KamLAND set-up, is surrounded by 1 kton of liquid scintillator. The mass of the Xe-loaded scintillator is 13 tons, and the Xe weight fraction is about 2.5%, resulting in 300 kg of enriched 136Xe. The external scintillator works as a powerful active shield. A reasonable space resolution for the interaction vertices allows a fiducial volume to be defined in the Xe-loaded scintillator, corresponding to 129 kg of 136Xe. The energy resolution at the Qββ-value is 10% FWHM. The purpose of the experiment was to scrutinize the 76Ge claim. After the first 77.6 days of data taking, the experimental data showed an unexpected bump in the background rather close to the 0νββ region of interest, which prevented the primary goal of the experiment from being achieved. The background level was of the order of 10 counts/(50 keV) in 77.6 days, about 30 times worse than initially expected. Some interpretations were proposed for this peak. The most accredited one refers to a contamination by the isotope 110mAg, whose decay releases a total energy about 200 keV higher than the Qββ-value of 136Xe. This isotope could be of cosmogenic origin and could be present either in the balloon walls or in the Xe itself. In the latter case, Xe purification should reduce this background contribution, restoring the initially foreseen 5 y sensitivity of the experiment. The 110mAg affair is a good example of the limitations of the low energy resolution experiments. In spite of this unexpected background source, the collaboration was able to set a significant limit on the half-life of the 0νββ process, equal to 5.7×10^24 y at 90% C.L. (corresponding to 329–414 meV for the effective Majorana mass). This result, however, is obtained through a fit of the background spectrum without a really convincing background model. KamLAND-Zen has also provided a superb measurement of the 2νββ half-life of 136Xe, set at 2.38×10^21 y [82]. This was previously the only missing measurement among the "magnificent nine." The measured value is about 5 times shorter than a previous experimental limit on this process [83] and has confirmed a fully compatible result obtained by the EXO-200 experiment several months before [84] (see below).

SNO+ [85] is an upgrade of the solar neutrino experiment SNO, located at SNOLAB in Canada. The basic idea consists of filling the SNO detector (which contained heavy water in the solar neutrino mode) with Nd-loaded liquid scintillator in order to investigate the isotope 150Nd. A crucial point is of course the possibility of enriching neodymium, discussed in Section 6.2. The SNO+ plan is to use 780 tons of liquid scintillator loaded with natural neodymium. If the Nd fraction is 0.1% w/w, as quoted in [85], the ββ source amounts to 43.7 kg of 150Nd. The expected energy resolution in this configuration is 6.4% FWHM at the Qββ-value of 150Nd. There are however recent plans to increase the Nd concentration [86] up to 0.3% w/w, which gives a slightly poorer energy resolution but a better sensitivity. As for the background rate, about 100 background events per kton of liquid scintillator and per year are expected from simulations in a 200 keV energy window around Qββ. The foreseen 3 y sensitivity to the half-life at 90% C.L. [87] corresponds to a limit of 137–178 meV on the effective Majorana mass. Data taking with Nd-loaded scintillator is foreseen in 2014.

The third category includes an ambitious calorimetric experiment aiming at joining high energy resolution with tracking/topology capability.

NEXT [88] is a proposed 10 bar gaseous-xenon TPC, to be located in the Canfranc underground laboratory, Spain, and containing 89 kg of the isotope 136Xe. A clear two-track signature is achievable thanks to the use of gaseous rather than liquid Xe. The estimated energy resolution is of the order of 1% FWHM, achieved thanks to the electroluminescence signal associated with the ionization electrons produced by the events. This is the only calorimetric experiment which is in principle capable of achieving a reasonably high energy resolution in addition to topology capability. The experiment is in the R&D phase; recent results on small prototypes have shown that the high-resolution target is indeed within reach. The expected sensitivity at 90% C.L., based on a simulation which foresees a specific background of the order of a few 10^-4 counts/(keV kg y), corresponds to the range 102–129 meV for the limits on ⟨mββ⟩.

The fourth category comprises calorimetric experiments based on detectors which compensate the low energy resolution with tracking or some form of event-topology capability. There are two examples in this group.

EXO [89] is a Xe TPC experiment whose first phase, known as EXO-200, is now taking data. The second phase foresees a much higher isotope mass, in the 1–10 ton range, and considers a possibility which is unique among direct-detection experiments: tagging the single barium ion (the decay daughter) by means of optical spectroscopy methods, in particular laser fluorescence, so that the final state of the decay would be totally identified. If successful, this approach would eliminate any form of background, with the exception of that due to the 2νββ decay. The EXO-200 TPC contains 200 kg of enriched liquid xenon and is located in the WIPP facility in the USA. The detector measures both the scintillation light (which provides the start signal for the TPC) and the ionization. The apparatus is capable of obtaining topological information and of distinguishing between single-site events (potential signal) and multisite events (certain background). The simultaneous exploitation of the correlated scintillation and charge signals improves the energy resolution, which is 3.9% FWHM in the region of interest. No signal was observed after an exposure of 32.5 kg y, with a background of about 1.5×10^-3 counts/(keV kg y) in the region of interest. This sets a lower limit on the half-life of the 0νββ decay of 136Xe of 1.6×10^25 y at 90% C.L. [90], corresponding to effective Majorana masses of less than 196–247 meV, depending on the matrix element calculation. Even if obtained with another isotope, this limit is so stringent as to be in considerable tension with the 76Ge claim. EXO-200 has also provided the first remarkable measurement of the 2νββ half-life of 136Xe [84], which turned out to be 2.11×10^21 y, in excellent agreement with the result of KamLAND-Zen [82]. Possible improvements in the radon-induced background and in the data analysis could further increase the EXO-200 sensitivity in 4 y of live time. A practical realization of the second phase, which is under study, consists of scaling up the successful EXO-200 set-up to a sensitive mass of 4 tons of enriched xenon. This project, called nEXO [91], could reach in a few years a sensitivity of the order of 10^27 y, allowing a deep exploration of the inverted hierarchy region.

COBRA [77] is a proposed array of 116Cd-enriched CdZnTe semiconductor detectors operated at room temperature. Nine isotopes are in principle under test, but 116Cd is the only competitive candidate. The final aim of the project is to deploy 117 kg of 116Cd with high granularity. Small-scale prototypes have been realized at LNGS, Italy. The proved energy resolution is 1.9% FWHM. The project is in the R&D phase. Recent results on pixelization show that the COBRA approach may allow an excellent tracking capability, making possible, for example, a quite effective discrimination between α and β/γ events.

The fifth category is represented by set-ups with an external source (which necessarily leads to low energy resolution) and a sophisticated tracking capability, allowing virtually zero background to be reached in the relevant energy region (with the exception of the contribution from the 2νββ tail). We will discuss one project belonging to this class.

SuperNEMO [92] is a proposed set-up composed of several modules containing source foils, a tracking section (drift chamber in Geiger mode), and a calorimetric section (low-Z scintillator). A magnetic field is present for charge-sign identification. SuperNEMO will take advantage of the NEMO3 experience and will investigate 82Se, although the use of the "golden" isotopes 150Nd, 96Zr, and 48Ca is not excluded, if their enrichment is technically feasible. Like NEMO3, SuperNEMO is the only experiment of the next generation having access to the energy distribution of the single electron and to the two-electron angular distribution. This information can lead to the identification of the leading 0νββ mechanism (see Section 2), if the process is observed with high enough statistics. Important improvements are foreseen with respect to NEMO3, among which we mention the much larger source, the better energy resolution (from 10.5% to 7.5% FWHM), the higher efficiency (from 18% to 30%), and the much better radiopurity of the source (208Tl and 214Bi contaminations to be improved by a factor of 10 and a factor of 30, resp.). The use of 82Se, whose 2νββ half-life is a factor of 10 longer than that of 100Mo, proportionally reduces the 2νββ contribution to the background. The radiopurity of the source is chosen so as to keep the background due to 2νββ equal to that coming from the residual radioactive contamination: both are anticipated to be of the order of 1 count/(100 kg y). A possible configuration foresees 20 modules with a 5 kg source each, providing 100 kg of isotope mass. The predicted 5 y sensitivity at 90% C.L. corresponds for 82Se to a limit range of 71–98 meV on ⟨mββ⟩. The project is in an advanced R&D phase: the first module, operating as a demonstrator containing 7 kg of 82Se, will take data in 2013.

7.4. The Technology and the Physics Race

As is clear from the above discussion and from the experiment descriptions, the three essential ingredients for a sensitive experiment are (i) a low background level in the region of interest, (ii) a correspondingly high number of nuclides under observation, and (iii) the use of an intrinsically favorable candidate. We will focus now on the first point, referring in particular to (6.2), in which the product b·ΔE appears. This combination is also crucial to define a zero-background experiment (for which M·T·b·ΔE ≲ 1, T being the experiment duration and M the detector/source mass), whose sensitivity is given by (6.4). The two parameters b and ΔE never appear separated, and their product is a sensible figure of merit for a given technology in terms of total background. However, if we want to use this figure of merit to compare different experiments coherently, we should express the specific background in terms of the number of candidate nuclides (or equivalently of a multiple of the number of moles) rather than of the detector mass. We will then redefine the specific background as b = N/(ΔE·T·n) (measured for instance in counts/(keV kmol y)), where N is the number of counts registered in the energy interval ΔE, containing the region of interest, over which a constant background can be assumed, T is the duration of the measurement aiming at fixing the background level, and n is the number of moles of the candidate isotope which can potentially give a signal in the observed spectrum.
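The redefinition above amounts to a unit conversion from detector mass to moles of candidate isotope; the sketch below performs it for a hypothetical TeO2-like compound (the molar mass, isotopic abundance, and input background are illustrative placeholders, not values adopted in Figure 12).

```python
def per_kmol(b_per_kev_kg_y, molar_mass_g, isotope_fraction, atoms_per_unit=1):
    """Convert a specific background from counts/(keV kg y) to
    counts/(keV kmol y), where the kmol refers to the candidate isotope.
    molar_mass_g: grams per mole of the compound; isotope_fraction: the
    isotopic abundance of the candidate on its lattice site."""
    # kmol of candidate per kg of detector; the factors 1000 g/kg and
    # 1000 mol/kmol cancel each other.
    kmol_per_kg = isotope_fraction * atoms_per_unit / molar_mass_g
    return b_per_kev_kg_y / kmol_per_kg

# Placeholder example: a TeO2-like compound (~159.6 g/mol) with ~34%
# candidate abundance on one Te site per formula unit.
print(per_kmol(1e-2, 159.6, 0.34))  # ~4.7 counts/(keV kmol y)
```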

Figure 12 shows the plot of the energy resolution ΔE (FWHM) versus the specific background b. The diagonal lines correspond to constant values of the product b·ΔE. The technologies exploited by the experiments examined in Section 7.3 are represented as points on this plot, and the position of these points with respect to the diagonal lines allows a direct comparison between the various figures of merit. We have to stress that there are two types of experiments in this comparison: on one hand, past and running experiments, for which the background has already been measured; on the other hand, future searches, for which only projections and simulations are available. When possible, we have used the evaluation of the background provided by the collaborations themselves. We have to notice, however, that an experiment named CUORE-2 with a well-defined structure does not officially exist yet. We have hypothesized here that the background level for CUORE-2 is 10^-3 counts/(keV kg y), possible if the ongoing R&D activities to suppress the alpha background are successful. We have also assumed that CUORE-2 will be enriched. As for LUCIFER, since a precise quantitative evaluation of the background does not exist in the literature for the moment, we have used the results of simulations made for the very similar scintillating bolometers of ZnMoO4, where a background level of 10^-4 counts/(keV kg y) looks within the reach of this technology. For KamLAND-Zen, we have used the observed background level rather than that anticipated before running the experiment.

The points in Figure 12 are distributed in two clusters: a group of experiments with energy resolutions below 10 keV FWHM and another with resolutions in the 100–300 keV range; the experiment NEXT lies in between. The lowest background level was achieved by NEMO3, although EXO-200 is now challenging this primacy. Recalling the considerations made in Section 6.1, one immediately sees that many experiments use technologies capable of attaining the background level (10–100 counts/(y kmol)) required to scrutinize the 76Ge claim (in fact, this task has almost been accomplished by EXO-200, and it will soon be accomplished by GERDA-1). In order to fully cover the quasidegenerate pattern of the neutrino mass and start to attack the inverted hierarchy region, evolved forms of the calorimetric approach seem to be in the best position, even though NEXT and SuperNEMO are in the game.

Of course, referring only to the specific background, the plot in Figure 12, while instructive, misses crucial aspects. The role played by the phase space of a given isotope does not appear; that is why, for example, 76Ge-based experiments are not at all better than 130Te-based ones in terms of sensitivity. Another crucial point that does not emerge is the scalability of the technique. Lower energy resolution approaches, like the ones pursued by the Xe-based experiments or by searches using hundreds of tons of liquid scintillator as the isotope solvent (KamLAND-Zen and SNO+), are much more suitable for ton or even multiton experiments.

Every approach has its good reasons, as one can see in Figure 13. Here, one can clearly see that the sensitivity reached by the presently running experiments (in blue) is in the range 200–400 meV, barely at the level needed to scrutinize the 76Ge claim. Important margins of improvement are expected for EXO-200, which is continuing its data taking, and for KamLAND-Zen, if the purification of Xe is successful and no other unexpected background sources appear. Several future searches, using a variety of technologies, should be able to cover the full range of the claim and to approach the inverted hierarchy region.

8. Looking into the Crystal Ball

We discuss here the future prospects for the 0νββ search, concentrating on the few projects that now seem in the position to have a substantial impact on the field: GERDA (and MAJORANA), CUORE (and the scintillating bolometers), EXO-200, SNO+, KamLAND-Zen, SuperNEMO, and possibly NEXT, if the achievements of the R&D phase are confirmed for the final detector. However, rapid developments of the present R&D programs towards real experiments cannot be excluded. The continuation of the R&D activity is crucial, since the future of the 0νββ search depends critically on the richness and variety of the technologies under development, which can lead to further increases of the sensitivities and to the possibility of studying many isotopes with different approaches, essential elements in the medium- and long-term prospects for 0νββ decay.

The future scenario of 0νββ decay depends on the choice made by Nature for the neutrino mass pattern. In the case of the quasidegenerate pattern, that is, ⟨mββ⟩ in the range 100–500 meV (which would be in agreement with the 76Ge claim), we expect the following developments. (i) GERDA will detect 0νββ decay in 76Ge, marginally in the first phase and with high statistics in the second one. (ii) EXO-200 will detect 0νββ decay in 136Xe, and so would KamLAND-Zen, if its background problems are solved; NEXT also has the chance to see it in the same isotope. These three 136Xe experiments could cross-check each other. (iii) CUORE will detect 0νββ decay in 130Te. (iv) SNO+ will detect 0νββ decay in 150Nd. (v) LUCIFER could detect 0νββ decay in 82Se if the present R&D phase leads to a significant pilot experiment, and a major role could also be played by the 100Mo-based scintillating bolometers. (vi) SuperNEMO may investigate the 0νββ mechanism by looking at the single-electron energy spectrum and at the electron angular distribution in 82Se or in 150Nd.

The redundancy of candidates with a positive observation will help in reducing the uncertainties coming from the nuclear matrix element calculations: we would enter the precision measurement era of 0νββ decay! We have however to stress that this optimistic scenario is already in tension with the present EXO-200 results.

In the case of the inverted hierarchy pattern, that is, ⟨mββ⟩ in the range 20–50 meV, detection is still possible in the medium term, under the condition that the projects under development achieve the planned sensitivity in their "aggressive" versions or with substantial upgrades. (i) CUORE could detect 0νββ decay, more likely if enriched in 130Te and equipped with some method to get rid of the alpha background, or if upgraded to the scintillating-bolometer mode. (ii) nEXO, the extension of EXO-200 under discussion, could detect 0νββ decay in 136Xe. (iii) Extensions of KamLAND-Zen (of course after the solution of the present background problems) and of NEXT (if the first phase is successful) also have the chance to observe 0νββ decay in 136Xe. (iv) GERDA phase III, after merging with MAJORANA, could detect it in 76Ge. (v) SuperNEMO could marginally detect it if the 150Nd mode turns out to be possible. (vi) SNO+ could detect it in 150Nd if Nd enrichment is viable.

The discovery in three or four isotopes is necessary for a convincing evidence, and it would still be possible thanks to the variety of projects and techniques under development. A nonobservation could be very important for neutrino physics as well: if experiments were able to exclude completely the inverted hierarchy region (putting, say, a limit on the effective Majorana mass at the level of 10–15 meV) while future long-baseline neutrino oscillation experiments discovered that the hierarchy is indeed inverted, this would be a strong indication of a Dirac nature of the neutrino.

In the case of the direct hierarchy pattern, that is, ⟨mββ⟩ in the range 2–5 meV, new strategies have to be developed. At the moment, no viable solution is conceivable. However, given the importance of the subject, educated speculations on experiments with such a sensitivity are useful, and the running searches, along with the R&D activities, are very important to stimulate new ideas in view of this extreme challenge.

Acknowledgments

This work was partially supported by the MICINN (Spain) (FPA2011-29854), by the Comunidad de Madrid (Spain) (HEPHACOS S2009-ESP-1473), and by the Spanish Consolider-Ingenio 2010 Program CPAN (CSD2007-00042).