Abstract

In the past ten years, neutrino oscillation experiments have provided incontrovertible evidence that neutrinos mix and have finite masses. These results represent the strongest demonstration that the electroweak Standard Model is incomplete and that new physics beyond it must exist. In this scenario, a unique role is played by neutrinoless double beta decay searches, which can probe lepton number conservation and investigate the Dirac/Majorana nature of neutrinos and their absolute mass scale (hierarchy problem) with unprecedented sensitivity. Today neutrinoless double beta decay faces a new era in which large-scale experiments with a sensitivity approaching the so-called degenerate-hierarchy region are nearly ready to start, and in which the challenge for the near future is the construction of detectors characterized by a tonne-scale size and an extremely low background. A number of newly proposed projects have taken up this challenge. These are based either on large expansions of the present experiments or on new ideas to improve the technical performance and/or reduce the background contributions. In this paper, a review of the most relevant ongoing experiments is given. The most relevant parameters contributing to the experimental sensitivity are discussed and a critical comparison of the future projects is proposed.

1. Introduction

First suggested by M. Goeppert-Mayer in 1935, double beta decay (DBD or ββ) is a rare spontaneous nuclear transition in which an initial nucleus (A, Z) decays to a member (A, Z+2) of the same isobaric multiplet with the simultaneous emission of two electrons. Unfortunately, the equivalent sequence of two single beta decays can also produce the same result and—in experimental investigations—the choice of the parent nuclei is therefore generally restricted to the nuclei which are more bound than the intermediate ones. Because of the pairing term, such a condition is fulfilled in nature for a number of even-even nuclei. The decay can then proceed either to the ground state or to the first excited states of the daughter nucleus. Double beta transitions accompanied by positron emission or electron capture are also possible. However, they are usually characterized by lower transition energies and poorer experimental sensitivities. (The neutrinos emitted in all ββ decays are electron neutrinos. It is generally understood that, where not explicitly indicated, "ν" indicates an electron neutrino. We will follow this convention throughout the text.) Different decay modes are possible. Among them, two are of particular interest: the two-neutrino mode (2νββ), which obeys lepton number conservation and is allowed in the framework of the standard model (SM) of electroweak interactions, and the neutrinoless mode (0νββ), which violates the lepton number by two units and occurs if neutrinos are their own antiparticles (i.e., the neutrino is a Majorana particle). A third decay mode (0νββχ), in which one or more neutral bosons χ (Majorons) are emitted, is also often considered. The interest in this decay is mainly related to the existence of Majorons, massless Goldstone bosons that arise upon a global breakdown of B−L symmetry [1].

From the point of view of particle physics, 0νββ is of course the most interesting of the decay modes for its important theoretical implications. In fact, 80 years after its introduction [2, 3], it is still the only practical way to probe experimentally missing neutrino properties like the absolute mass and the Dirac/Majorana nature. Indeed, it can exist only if neutrinos are Majorana particles and it can provide unique constraints on the neutrino mass scale. Furthermore, its observation would prove that total lepton number is not conserved in physical phenomena, an observation that could be linked to the cosmic asymmetry between matter and antimatter (baryogenesis via leptogenesis [4–7]).

In addition to a theoretical prejudice in favor of Majorana neutrinos, there are other reasons to hope that the experimental observation of 0νββ is at hand, in particular the results of oscillation experiments, which have demonstrated that neutrinos are massive particles. Although these results cannot provide a firm prediction for 0νββ rates, they suggest that favorable conditions for its observation may be realized in nature, and they have enormously increased the interest toward the experimental search for this decay. It should also be stressed that 0νββ could have been already observed. Indeed, an extremely intriguing and debated claim of 0νββ observation in 76Ge is awaiting unambiguous confirmation by upcoming experiments.

The important implications of massive Majorana neutrinos and the possible experimental observation of 0νββ have triggered a new generation of experiments spanning a variety of candidate isotopes with different experimental techniques, all aiming at reaching a sensitivity that allows testing the region of neutrino masses indicated by neutrino oscillation experiments. Experimental techniques range from the well-established germanium calorimeters to xenon time projection chambers and low temperature calorimeters. Some of the experiments are already running or will run very soon. Others are still in their R&D phase, trying to reach the limits of their experimental technique.

In all cases, the common claim is of being sensitive to very light neutrino masses by assuming an improvement of one to three orders of magnitude in terms of background suppression, detector performance, or increase of the target mass.

In this paper we review the state of the art of this rapidly changing field. In Section 2 we summarize the general status of neutrino phenomenology, while in Section 3 we analyze the case of 0νββ. Section 3.1 is devoted to the nuclear part of the problem, the calculation of the transition probabilities (or nuclear matrix elements (NME)). In Sections 4 and 5 the most important experimental aspects are described. In Section 5.1 we summarize the results of previous experiments. In Section 6 we introduce the challenging aspects of present and future projects, while in the following sections we review and compare them. Our conclusions are summarized in Section 12.

2. Neutrinos

Today, we know that there are three generations of neutrinos, distinguished by their leptonic flavor. These are the only known neutrinos with mass lower than half the Z boson mass which interact with matter via the exchange of W or Z bosons ("active" neutrinos). A number of experiments in the past 20 years have monitored intense neutrino sources (solar, atmospheric, reactor, and accelerator neutrinos) and have reported the observation of neutrino flavor conversion during propagation (neutrino oscillations and the Mikheyev-Smirnov-Wolfenstein (MSW) effect), either in terms of neutrino disappearance or in terms of the appearance of a wrong neutrino flavor. This phenomenon finds its natural explanation in the assumption that neutrinos are massive particles and that mixing among the mass eigenstates occurs, which implies the need to modify, or better extend, the standard electroweak model to include massive neutrinos.

Massive neutrino phenomenology (see, e.g., [8–11]) is described in the framework of three distinguishable particles provided with their own leptonic number, flavor, and mass eigenvalue. As for the quark sector, a nondiagonal matrix—the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix—describes the mixing of neutrinos. The PMNS matrix, in its most general case, is parametrized by 3 angles (θ12, θ23, and θ13) and 3 CP-violating phases (the Dirac phase δ and the two Majorana phases α21 and α31), for a total of 6 parameters to be added to the 3 unknown values of the neutrino mass eigenstates (m1, m2, m3). The PMNS matrix can be expressed as

\[ U = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix} \cdot \mathrm{diag}\bigl(1,\, e^{i\alpha_{21}/2},\, e^{i\alpha_{31}/2}\bigr) \]

where c_ij = cos θ_ij and s_ij = sin θ_ij. When neutrinos are Dirac particles, the two Majorana phases can be reabsorbed by a rephasing of the neutrino fields and the PMNS matrix therefore has only 4 free parameters.

Neutrino oscillation probabilities are described in terms of the PMNS angles and of the square mass differences of the three mass eigenstates. The results from oscillation experiments (see, e.g., [12] and the references therein) constrain the neutrino square mass differences and most of the PMNS mixing parameters within rather narrow bands (Table 1). In particular, the measured square mass differences show that one neutrino state is much more split from the other two than these are from each other. This allows three different mass orderings: direct hierarchy (m1 < m2 ≪ m3), inverted hierarchy (m3 ≪ m1 < m2), and degenerate hierarchy (m1 ≈ m2 ≈ m3) [13–20].

Only two of the three possible square mass differences are independent and presently constrained. These are Δm²_sol, generally labeled as the solar term, and Δm²_atm, the atmospheric one (see Table 1 for their definition). The only parameters irrelevant for oscillations are the Majorana phases α21 and α31. In fact, as pointed out above, they are strictly related to the possible Majorana nature of the neutrinos and appear only in phenomena where such a condition is essential. Table 1 summarizes the present status of our knowledge of the PMNS matrix elements and of the neutrino mass splittings.
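For illustration (these are standard relations, not taken from this paper; the exact definition of Δm²_atm depends on the convention adopted), the heavier masses follow from the lightest one through the measured splittings. In the direct ordering, where m1 is the lightest state,

\[ m_2 = \sqrt{m_1^2 + \Delta m^2_{\rm sol}}, \qquad m_3 \simeq \sqrt{m_1^2 + \Delta m^2_{\rm atm}}, \]

while in the inverted ordering m3 is the lightest state and m1 ≃ m2 ≃ (m3² + Δm²_atm)^{1/2}; the degenerate regime corresponds to a common mass scale much larger than both splittings.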

A few experimental results cannot be accommodated in this framework: the LSND anomaly [21] (further investigated by MiniBooNE [22]) as well as a possible neutrino deficit observed in reactor [23] and gallium measurements with very intense (MCi) radioactive neutrino sources [24]. If confirmed, these could prove the existence of sterile neutrinos, which interact with ordinary matter only through gravitation and can be observed only indirectly in oscillation experiments if they mix with the active neutrinos.

The challenge of next generation oscillation experiments is to measure the sign of Δm²_atm and therefore settle the neutrino mass hierarchy problem [25].

Although the hierarchy is accessible to oscillation experiments, they will not be able to provide information on the absolute scale of neutrino masses, which is presently only constrained by experimental measurements of the following three parameters:
(1) Σ = m1 + m2 + m3 (cosmology);
(2) m_β = [|U_e1|² m1² + |U_e2|² m2² + |U_e3|² m3²]^{1/2} (beta decay);
(3) m_ββ = |U_e1² m1 + U_e2² m2 + U_e3² m3| (neutrinoless double beta decay).

These three parameters are strictly correlated with each other and bounded by oscillation results within the well defined regions shown in Figure 1. In particular, in the case of Σ and m_β, lower bounds of ~0.04 and ~0.008 eV, respectively, are obtained. In the case of m_ββ (also called the neutrino Majorana mass), cancellations among the complex terms of the mass combination are always possible and consequently m_ββ has no lower bound.
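As a minimal numerical sketch (not part of the original analysis: the oscillation parameters, the choice of the lightest mass, and the vanishing Majorana phases below are illustrative assumptions), the three observables can be computed from a given mass spectrum as follows:

```python
import numpy as np

# Illustrative oscillation parameters (indicative values, not endorsed by this paper)
s12, s13 = np.sqrt(0.31), np.sqrt(0.022)      # sin(theta_12), sin(theta_13)
c12, c13 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2)
dm2_sol, dm2_atm = 7.5e-5, 2.4e-3             # square mass splittings [eV^2]

# Direct ordering, lightest mass chosen arbitrarily
m1 = 0.01                                      # eV
m2 = np.sqrt(m1**2 + dm2_sol)
m3 = np.sqrt(m1**2 + dm2_atm)
m = np.array([m1, m2, m3])

# |U_ei|^2 in the standard parametrization
Ue2 = np.array([(c12 * c13)**2, (s12 * c13)**2, s13**2])

sigma  = m.sum()                               # cosmological sum of masses
m_beta = np.sqrt((Ue2 * m**2).sum())           # effective beta-decay mass
alpha21, alpha31 = 0.0, 0.0                    # Majorana phases (unknown; set to 0 here)
phases = np.exp(1j * np.array([0.0, alpha21, alpha31]))
m_bb   = abs((Ue2 * m * phases).sum())         # effective Majorana mass

print(f"Sum = {sigma:.3f} eV, m_beta = {m_beta:.4f} eV, m_bb = {m_bb:.4f} eV")
```

Varying the Majorana phases between 0 and π shows explicitly how cancellations can drive m_ββ well below the other two observables.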

Upper limits on Σ are derived from astronomical observations by fitting the experimental data to complex cosmological and astrophysical models. Actually, cosmological neutrinos (i.e., neutrinos produced just after the Big Bang) influence the evolution of the Universe and the formation of large scale structures (LSS) in a way that strictly depends on the size of Σ, with effects on astrophysical observables such as the anisotropies of the cosmic microwave background (CMB) or the power spectrum of mass-density fluctuations. Despite their increasing sensitivity, cosmological bounds on neutrino masses are considered with caution since they are (strongly) model dependent. The most recent result in this field comes from the Planck collaboration [26] and yields an upper limit on Σ ranging from about 1 eV to 0.23 eV, depending on the set of data and models used in the computation.

The study of the end point of the beta decay Kurie plot provides a straightforward and direct technique to measure m_β. Present experimental results come from tritium experiments providing an upper bound on m_β of 2 eV at 95% C.L. [27, 28]. This bound will be improved in the near future by the KATRIN spectrometer [29], which aims at reaching a sensitivity of the order of ~0.2 eV. KATRIN is considered as the final step in the use of spectrometers for beta decay measurements, while new ideas and projects are emerging in the case of calorimetric measurements of the beta spectrum [30, 31].

3. Neutrinoless Double Beta Decay

The neutrinoless mode of nuclear double beta decay (0νββ) is a hypothetical, very rare transition in which two neutrons undergo beta decay simultaneously without the emission of neutrinos. It was immediately recognized as a powerful method to test Majorana's theory of the neutrino. Indeed, it can be derived from the 2νββ mode by assuming a Racah sequence of two single beta decays in which the (anti-)neutrino emitted at one vertex is absorbed at the other. This is only possible if neutrino and antineutrino coincide; that is, if they are Majorana particles. In contrast to the two-neutrino mode, 0νββ violates total lepton number conservation and is therefore forbidden in the Standard Model. Its existence is linked to that of Majorana neutrinos, even though a variety of exotic models can account for it. So far, no convincing experimental evidence of this decay has been found.

When mediated by the exchange of a light virtual neutrino, the 0νββ rate is expressed as

\[ \bigl[T_{1/2}^{0\nu}\bigr]^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\left(\frac{m_{\beta\beta}}{m_e}\right)^{2} \qquad (5) \]

where G^{0ν} is the phase space integral (exactly calculable, but affected by the uncertainties on the axial coupling constant g_A, as discussed in the next section), M^{0ν} is the nuclear matrix element, and m_e is the electron mass. Finally, m_ββ—introduced in the previous section—is the so-called Majorana mass of the neutrino, which can be expressed in terms of the PMNS matrix elements as

\[ m_{\beta\beta} = \Bigl|\sum_{k} U_{ek}^{2}\, m_{k}\Bigr|. \]

As evident from Figure 1, oscillation results constrain m_ββ to lie between roughly 20 and 50 meV in the case of inverted hierarchy (above ~50 meV the bands representing the two hierarchies merge into the same degenerate band). This is more or less the sensitivity range of forthcoming experiments. If these do not observe any 0νββ decay (and assuming that neutrinos are Majorana particles), the inverted ordering could finally be excluded, thus settling the problem of the neutrino absolute mass scale [10, 32]. If, on the other hand, other experiments were to demonstrate that the neutrino mass ordering is inverted, then nonobservation of 0νββ would demonstrate that neutrinos are Dirac particles.

m_ββ is the only experimental observable presently studied in which the Majorana phases appear explicitly; these phases measure CP violation for Majorana neutrinos (if CP is conserved they are integer multiples of π). Their presence implies that cancellations are possible (see Figure 1). In principle the Majorana phases can have measurable consequences, even if in practice their determination is very difficult. Many authors have examined the potential of combining 0νββ measurements with single beta decay and cosmology results to determine their values [17, 18, 33, 34]. The general conclusion is that at least two experiments that depend on the phases are required to determine both unambiguously. Moreover, a significant improvement in the precision of the nuclear matrix elements (Section 3.1) is also required.

m_ββ is also the only measurable parameter containing direct information on the neutrino mass scale. Unfortunately, its derivation from the experimental results on half-lives requires precise knowledge of the nuclear matrix elements of the transition appearing in (5). Many evaluations are available in the literature, but they are often in considerable disagreement, leading to large uncertainty ranges for m_ββ. This has been recognized as a critical problem by the community.

Neutrinoless double beta decay is presently the only practical way to discover whether the neutrino is its own antiparticle. Its observation would have dramatic consequences for nuclear and particle physics as well as for astrophysics and cosmology. Indeed, one of the most intriguing problems in accommodating massive neutrinos in a Standard Model extension is explaining the smallness of neutrino masses. The see-saw mechanism—which predicts the existence of Majorana neutrinos—is a very attractive solution which could also provide an explanation for one of the biggest cosmological puzzles, the observed matter-antimatter asymmetry of the Universe (via the leptogenesis mechanism [4–7]).

Lepton number violation and Majorana neutrinos are the distinctive features of 0νββ, and they represent the primary mission of the upcoming experiments. However, the exchange of a light massive Majorana neutrino is not the only mechanism able to account for 0νββ. Actually, many extensions of the Standard Model include mechanisms that can explain it. This is the case, for example, of left-right symmetric GUTs with the exchange of right-handed W bosons, or of SUSY models with R-parity violation. In all cases, however, the observation of 0νββ is unavoidably linked to the Majorana nature of neutrinos [35].

A possibility to distinguish between the different mechanisms could be the analysis of the energy and angular distributions of the emitted electrons and the study of the transitions to the ground and excited states. Unfortunately, the study of the single-electron distributions is possible only for a very limited number of experimental techniques. Moreover, in most cases the decay is mediated by the exchange of heavy particles which give rise to similar terms and produce, in particular, the same single-electron distributions.

The measurement of the transitions to different final states of the same nucleus then seems the only viable solution [36], taking advantage of the different nuclear matrix elements that enter the decay amplitudes. This requires an accurate calculation of all the nuclear matrix elements, a goal still far from being reached.

Constraints coming from other experiments that study extensions of the Standard Model can of course provide some help. This is the case, for example, of the LHC measurements on supersymmetric particles which will limit the parameter space reducing the number of possible contributions.

3.1. Nuclear Matrix Elements

The most relevant parameter available from 0νββ is the effective neutrino mass m_ββ. According to (5) it can be obtained from the measured half-life once all the other terms appearing in the equation are known. This requires precise knowledge of the phase space factor and of the nuclear matrix elements (NME), which cannot be separately measured and therefore can only be evaluated theoretically.

While precise calculations of the phase space factors have been carried out by many authors [35, 37, 38], only approximate estimates of the NMEs have so far been obtained, due to the many-body nature of the nuclear problem. NMEs include all the nuclear structure effects of the decay and are indispensable not only to extract the value of m_ββ but also to compare the sensitivities and the results of experiments based on different nuclei.

In this respect, it should be stressed that the uncertainties on the NMEs and on the experimental value of the decay half-life contribute in the same way to the uncertainty on m_ββ. Comparable efforts should therefore be devoted to both aspects of the problem.

A great deal of work has actually been devoted in the last decade to developing a proper many-body technique. Indeed, the calculation of NMEs has been carried out by many authors using different methods: the quasiparticle random phase approximation [39–41] (QRPA, RQRPA, pn-QRPA, etc.), the nuclear shell model [42, 43] (NSM), the interacting boson model [44] (IBM), the generating coordinate method [45] (GCM), and others. These models have complementary virtues and flaws. The true problem is that it is not always easy, if not impossible, to establish which one is providing the correct answer, so that the spread in the theoretical calculations is generally considered as an estimate of the uncertainty.

At first, really large discrepancies (by orders of magnitude) were observed. After discarding some evidently pathological calculations, the discrepancies shrank to about one order of magnitude. However, despite the significant improvements obtained in the past years, the QRPA matrix elements still exceed those of the shell model by factors of up to about two in the lighter isotopes (e.g., 76Ge and 82Se) and somewhat less in the heavier isotopes (see Table 2). On the other hand, IBM results are in reasonable agreement with QRPA calculations [46].

The origin of the discrepancies is still unclear, and attempts to constrain the models by referring to additional observables have been pursued. Actually, the more observables a calculation can reproduce, the more trustworthy it probably is. This is the case, for example, of the Gamow-Teller distributions, which enter indirectly into the ββ rates and can be measured through charge-exchange (e.g., (p, n)) reactions [47]. The nuclear process closest to 0νββ is, however, 2νββ, which has now been measured in 10 different nuclei. 2νββ results have been used to calibrate QRPA calculations [48]. In particular, when the QRPA strengths are renormalized to reproduce the measured 2νββ rates, no dependence on the model-space size, on the form of the nucleon-nucleon interaction, or on the QRPA flavor is observed. This is an astonishing result which has been interpreted as an indication of the correctness of the method.

A number of common approximations characterize all the calculation methods, while the most significant differences relate to the details of the nuclear part. In all cases, the reaction amplitude is factorized into the product of a leptonic and a hadronic part. As already mentioned above, in the case of a decay mediated by the exchange of a light neutrino, the leptonic part is proportional to the Majorana mass m_ββ and to a potential describing the effects of the neutrino propagator (the so-called neutrino potential). This potential has two relevant consequences in the calculation of the nuclear matrix elements: it introduces a dependence on the excitation energies of the virtual states in the odd-odd intermediate nucleus, as well as a dependence of the transition operator on the coordinates of the two nucleons. Given the relatively high momentum of the exchanged virtual neutrino (of the order of 100 MeV, corresponding to distances of the order of the nuclear radius R), a closure approximation is then applied when integrating over the virtual neutrino energies. This consists in neglecting the energy variation of the intermediate nuclear states and adding coherently the contributions of the two electrons. The impulse and long-wavelength approximations are then used to get rid of the hadronic current and lead to (5) for the decay rate.
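Schematically, under the closure approximation the neutrino potential can be written as (a textbook-style sketch, not a result of this paper; ⟨E⟩ denotes an average excitation energy of the intermediate states and conventions vary among authors)

\[ H(r) \simeq \frac{2R}{\pi}\int_{0}^{\infty} dq\, \frac{q\, j_{0}(qr)}{q + \langle E \rangle} \;\xrightarrow{\;q \gg \langle E\rangle\;}\; \frac{R}{r}, \]

which makes explicit both the dependence on the intermediate-state energies through ⟨E⟩ and the dependence of the transition operator on the internucleon distance r.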

The different nuclear models are then used to estimate the purely nuclear term M^{0ν}. All models agree that only nucleons which are close to each other (within a few fm) contribute (somehow justifying the closure approximation), although none of them treats the short-range repulsive core of the nucleon-nucleon interaction explicitly, introducing on the contrary further approximations to get rid of it.

The basic assumption of the nuclear shell model is that the nucleons move independently in a suitable mean field. A strongly attractive spin-orbit term is then introduced to describe the correct level separation and explain the magic numbers. As the number of protons and neutrons departs from the magic numbers, the introduction of a residual two-body interaction among the nucleons is needed to move particles through the orbits while respecting angular momentum conservation and the Pauli principle. The calculation problem then consists in the diagonalization of a matrix over a sufficiently large (valence) basis. The use of a limited valence space represents the most relevant limitation. On the other hand, all the configurations of the valence nucleons are included and the NSM describes well the properties of low-lying nuclear states.

In the quasiparticle random phase approximation, the residual interaction among nucleons is dominated by the pairing force. As is well known, this force accounts for the tendency of nucleons to couple pairwise to form particularly stable configurations in even-even nuclei. As a result of the strong coupling between homologous nucleons, the orbital angular momentum and spin of each pair add to zero, giving J^π = 0+ nuclear ground states. The nucleon pairing is introduced via a BCS approach applied to a quasiparticle basis obtained after a unitary (Bogoliubov) transformation. Quasiparticles are thus generalized fermions with a finite probability of being either particles or holes, and the net effect of the transformation is to smear out the nuclear Fermi surface for both protons and neutrons. Quasiparticles are, to first order, independent, while nevertheless allowing a simple description of the pairing force between neutrons. Once the quasiparticle vacuum of the even-even nucleus has been fixed, the QRPA problem consists in evaluating the transition amplitudes to arbitrary excited states in the neighboring odd-odd nuclei through a proper charge-changing one-body operator.

The main advantage of QRPA is the inclusion of correlations in a ground state otherwise characterized by purely independent quasiparticles. As a consequence, the vacuum state can accommodate two-particle two-hole excitations, so that new processes can be taken into account. The corresponding transition amplitudes can be written in terms of particle-hole (p-h) and particle-particle (p-p) matrix elements, which are usually parametrized in terms of two adjustable coupling constants, g_ph and g_pp, respectively. The realistic nucleon-nucleon interaction is then recovered for g_ph ≈ g_pp ≈ 1, a condition which is unfortunately often unstable. Many different variants of the QRPA method have been considered to get rid of this undesired behavior and to produce a more realistic description.

The generating coordinate method relies on the so-called aligned coupling scheme to describe the nucleon pairing and fix the equilibrium shape of a nucleus. In this scheme, each nucleon tends to align its orbit with the average field produced by all the other nucleons, thus giving rise to nuclei with deformed equilibrium shapes and collective rotational motion. A common representation of the shape of these nuclei is that of an ellipsoid. A self-consistent field approach is then used to reduce the many-body problem to one of noninteracting particles in a mean field (including deformation effects). In this way, a set of Hartree-Fock-Bogoliubov (HFB) wave functions is obtained, whose eigenstates can be found by projecting out the components having well defined proton/neutron number and angular momentum (PHFB [49]).

The Gogny interaction [50, 51] is used as the underlying nucleon-nucleon interaction. Different deformations are allowed, leading to a superposition of wavefunctions whose coefficients can be found by solving the so-called Hill-Wheeler-Griffin (HWG) equation [52].

The interacting boson model [44, 53] can be considered somehow halfway between the microscopic view of the NSM and the collective ones of QRPA and GCM. The shell-model description of the nuclear states is assumed, while collective excitations are described by bosons. However, as the number of valence nucleons increases, the direct application of the shell model becomes prohibitively difficult, and it is usually assumed that the closed shells are inert. Furthermore, it is also assumed that the dominant configurations in even-even nuclei are those in which identical particles are paired together in states with total angular momentum and parity 0+ or 2+. Particle pairs are then treated as bosons, like Cooper pairs in a gas of electrons. The result is a system of interacting bosons of two types, protons and neutrons. The number of shells is reduced to the simple s-shell (L = 0) and d-shell (L = 2), and the number of proton and neutron bosons is counted from the nearest closed shell in terms of particles or holes, depending on whether the shell is less or more than half-filled. All fermionic operators are mapped onto bosonic operators [54], and the matrix elements between fermionic states in the collective subspace are identical to the matrix elements in the bosonic space [44]. A realistic set of wavefunctions for even-even nuclei in the mass range of interest is provided by the IBM-2 extension [53], which gives an accurate description of many properties (energies, electromagnetic transition rates, quadrupole and magnetic moments, etc.) of the initial and final nuclei and allows one to calculate the NME through proper bosonic operators [44]. A peculiar feature of IBM-2 is its independence of the details of nuclear deformation, which allows the calculation of the NME also for heavily deformed nuclei (e.g., 150Nd), a task almost prohibitive with other methods.

The different methods provide an important cross-check of the NME calculations, although the effect of the different approximations still needs to be explored. The clear advantage of the NSM calculations is the full treatment of the nuclear correlations. On the other hand, the limitations in the valence spaces can lead to an underestimate of the NMEs [55]. On the contrary, all the other methods tend to underestimate the correlations, thus overestimating the NMEs [56, 57].

Unfortunately, as already mentioned above, the NME results are still in significant disagreement and, despite a better relative agreement (Figure 2 and Table 2), they have not yet provided an answer to the question of which method is closest to the truth, nor to the origin of the observed disagreement.

A careful check of the models, in order to account for the omitted physics or the important missing information, seems the only way out of the problem. A systematic analysis of the calculation methods and of their basic hypotheses has therefore been started. However, the inclusion of the missing correlations in the QRPA looks like a very difficult task (because of the several uncontrolled approximations of the method), while for the shell model, at least in principle, a systematic procedure for adding the effects of the missing states exists.

The ultimate limitation of the QRPA method seems to be its perturbative approach, which is implemented with a renormalized nuclear interaction and always requires some adjustment to the data. Reasonably good results are usually obtained by a proper parametrization of the short-range correlations or by a reduction of the axial-vector coupling constant g_A. This corresponds to a phenomenological correction of the transition operator, whose reliability is not easy to assess. A better approach could consist in obtaining an effective double-beta-decay operator [58].

A statistical analysis of the different NME calculations (comparison of different methods and model parameters) has also been recently considered [59]. Besides providing useful recipes for the comparison of the experimental results on different isotopes, this approach can help in identifying systematic effects in the different calculations.

Particular attention should be paid to the attitude, adopted on many occasions in the past, of considering the disagreement between different calculations as a measure of the theoretical error. This is a very dangerous approach which creates a lot of confusion, especially when comparing the experimental sensitivities. Indeed, it does not take into account the above-mentioned correlations between different calculations (for the same isotope) and suggests an improper use of the error intervals. Although characterized by good common sense, the proposed physics-motivated intervals [60] or educated ranges [61] do not add any clarification and limit themselves to proposing better intervals (uncertainties at the level of 20–30%).

A possible (and provocative) solution consists in the (arbitrary) choice of a single calculation [62]. This could be, somehow, justified by the recently recognized trend of NME calculations which show only small differences among different nuclei (Figure 2), generally within the uncertainty interval. Known as the no super-element conjecture, such an observation has been very recently strengthened by the astonishing discovery of a possible anticorrelation between phase space factors and NMEs [63].

This is easily realized when plotting (Figure 4) the available NMEs versus the respective specific phase space, defined as G^{0ν} N_A / A, where N_A and A are the Avogadro number and the atomic mass number, respectively (Table 3), for ββ emitters with Q-values larger than 2 MeV (the most relevant ones from the experimental point of view).

The general conclusion is that, within a factor of 2-3 (i.e., of the same order as the present NME discrepancies), the decay rate per unit (isotope) mass does not depend on the nucleus or, equivalently, that there are no especially favored or disfavored isotopes. This also means that (within the same approximation) experimental sensitivities on the half-life would translate directly (apart from a common scaling factor) into sensitivities on m_ββ.
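In formulas (a standard rearrangement of (5), not an additional result of this paper), the number of 0νββ decays per unit time and per unit mass of isotope reads

\[ \frac{1}{M_{\rm iso}}\frac{dN}{dt} = \ln 2\, \frac{N_A}{A}\, \frac{1}{T^{0\nu}_{1/2}} = \ln 2\, \frac{N_A}{A}\, G^{0\nu}\, \bigl|M^{0\nu}\bigr|^{2} \left(\frac{m_{\beta\beta}}{m_e}\right)^{2}, \]

so an anticorrelation between (N_A/A)·G^{0ν} and |M^{0ν}|² tends to make this rate, for a fixed m_ββ, roughly the same for all candidate isotopes.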

The phase space factors reported in Table 3 are taken from the recent extensive calculations of Kotila and Iachello [38]. As recognized by the authors, uncertainties in G^{0ν} arise from the possible choices for the renormalization of the axial-vector coupling g_A. In order to decouple this problem from the other sources of uncertainty, an explicit g_A^4 factor is suggested in the expression of the rate. Indeed, calculated phase space factors for neutrinoless decay are generally presented for different free-nucleon g_A values in the range 1–1.269. The difference between these values and the minimum reported value of 0.6 (renormalized to fit experimental lifetimes) is significantly large in terms of rates (a factor of ~20, since the rate scales as g_A^4 and (1.269/0.6)^4 ≈ 20). The renormalization of g_A is therefore another critical item in neutrinoless double beta decay, and still a topic of debate among theorists.

4. Experimental Overview

The observation of neutrinoless double beta decay would unambiguously prove that neutrinos are Majorana particles and that lepton number is violated. This ambitious goal has been challenging experimental physicists for about fifty years, justifying the enormous efforts in searching for such an evanescent decay. The most suitable and best performing experimental techniques have been designed to build massive detectors operating in the most extreme conditions of low radioactivity. However, the discovery of neutrino oscillations and the measurement of the oscillation parameters have dramatically changed the experimental situation, fixing a clear target for next generation experiments, whose primary goal is to reach the sensitivity needed to probe the inverted hierarchy of neutrino masses. The intriguing claim of 0νββ observation in 76Ge has further rocked the boat with a new, unexpected milestone.

The size of the challenge stems essentially from the rarity of the decay, which calls for increasingly large masses while maintaining excellent performance and ultralow background environments. According to Figure 3, a sensitivity to half-lives in the range of 10^26–10^27 yr is required to enter the inverted hierarchy region (m_ββ ≲ 50 meV). This is equivalent, on average, to about a count per year in 10^4 moles of isotope or in one tonne of isotopically enriched material. Consequently, to record a sizable number of events over its operation time, an experiment needs an isotope mass of at least 100 kg if m_ββ lies near the top of the inverted hierarchy region, and of a few tonnes if m_ββ is as low as the lower bound of the inverted hierarchy (i.e., about 10 meV).
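A back-of-the-envelope check of this counting rate (a sketch only; the half-lives are simply the benchmark values quoted above):

```python
import math

N_A = 6.022e23                     # Avogadro number [1/mol]
moles = 1e4                        # ~1 tonne of an isotope with A ~ 100 g/mol
nuclei = moles * N_A

for half_life in (1e26, 1e27):     # benchmark half-lives [yr]
    rate = nuclei * math.log(2) / half_life     # expected decays per year
    print(f"T1/2 = {half_life:.0e} yr  ->  {rate:.1f} decays/yr")
# -> ~40 decays/yr at 1e26 yr and ~4 decays/yr at 1e27 yr,
#    i.e., of the order of a few counts per year in 1e4 moles of isotope
```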

On the other hand, the decay signature exploited by most experiments is simply based on the monochromatic sum energy of the two emitted electrons (the sum of the electron kinetic energies is equal to the transition energy, since the nuclear recoil is negligible). Unfortunately, as discussed later, there are several sources that can produce background counts in this same energy region. Their fluctuations can easily hide very faint peaks like the 0νββ one, spoiling the effectiveness of the signature. A better signature is often synonymous with a lower background and, ultimately, with a better sensitivity. In principle, the reconstruction of the single-electron energies, the angular correlations, and the identification and/or counting of the daughter nuclei could result in a large improvement of the signal to background ratio of an experiment. However, exploiting these complementary signatures is not simple and in general it has a price. All experiments therefore tend to find a compromise between the desire to collect the maximum information and the best way in which such a goal can be accomplished.

4.1. The Experimental Sensitivity

The performance of the different experiments is usually expressed in terms of an experimental sensitivity or detector factor of merit, defined as the process half-life (T^{0ν}_{1/2}) corresponding to the maximum signal that can be hidden by the background fluctuations at a given statistical confidence level (C.L.).

The sensitivity expresses the capacity of a detector to maximize the signal while minimizing the background and is given, at the n_σ sigma level, by

\[ S^{0\nu} = \ln 2\, \frac{N_{\beta\beta}\, \varepsilon\, T}{n_B} \]

where N_ββ is the number of decaying nuclei under observation, ε is the detection efficiency, T is the measuring time, and n_B is the maximum number of counts hidden by fluctuations of the background.

In the raw (but often well motivated) assumption that the background rate scales with the mass of the source, one can obtain the expected total number of background counts by integrating over a proper energy interval (customarily chosen equal to the FWHM resolution of the detector): B·M·T·Δ, where B is the specific background rate per unit mass, time, and energy (counts/(keV·kg·yr)), M is the detector mass, and Δ is the FWHM energy resolution. On the other hand, N_ββ can be rewritten as N_ββ = x·η·N_A·M/W, where x is the number of candidate atoms per molecule, η is the isotopic abundance, N_A is the Avogadro number, and W is the molecular weight. Assuming then Poisson statistics for the background, one gets n_B = n_σ·√(B·M·T·Δ) (at the n_σ sigma level) and the sensitivity formula can be rewritten as

\[ S^{0\nu} = \ln 2\, \frac{x\,\eta\,\varepsilon\, N_A}{n_\sigma\, W}\, \sqrt{\frac{M\, T}{B\, \Delta}}. \qquad (8) \]

A slightly different version of this formula can be obtained by introducing a new specific background rate b, normalized to the mass of the isotope (counts/(keV·kg_iso·yr)), and the isotope mass M_ββ = M·x·η·A/W, where A is the atomic weight of the isotope. The new background rate is then related to B by b = B·W/(x·η·A), while N_ββ = N_A·M_ββ/A. Then the sensitivity becomes

\[ S^{0\nu} = \ln 2\, \frac{\varepsilon\, N_A}{n_\sigma\, A}\, \sqrt{\frac{M_{\beta\beta}\, T}{b\, \Delta}}. \qquad (9) \]

Despite their simplicity, (8) and (9) have the unique advantage of emphasizing the role of the essential experimental parameters: mass, measuring time, isotopic abundance, background level, and detection efficiency.

Of particular interest is the case when the background rate is so low that the expected number of background events in the region of interest over the experiment lifetime is close to zero. In such cases, one generally speaks of zero background (ZB) experiments, a condition sought by a number of future projects. In such conditions (9) is no longer valid. Indeed, n_B is given by a constant term n_L (the maximum number of counts compatible, at a given C.L., with no observed counts [64]) and the sensitivity reads as follows:

\[ S^{0\nu}_{ZB} = \ln 2\, \frac{\varepsilon\, N_A}{n_L\, A}\, M_{\beta\beta}\, T. \qquad (10) \]

The most relevant feature of (10) is that it does not depend on the background level or the energy resolution and that it scales linearly with the sensitive mass M and the measuring time T. On the contrary, in the finite background case of (9) the sensitivity depends only on the square root of M and T. The dramatic effect of the background is therefore not only to limit the sensitivity but even to change its dependence on the other experimental parameters.

The intermediate situation, in which the expected number of background counts is close to unity, marks the transition between the two regimes: B·M·T·Δ ≈ 1. No equation exists that can properly describe this condition, and one has to rely here on numerical estimates of the sensitivity.
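The following sketch (illustrative only: the statistical factors n_sigma and S0, as well as all the numerical inputs, are assumptions made here for definiteness rather than values prescribed by this paper) shows how (9) and (10) could be evaluated, with the switch between the two regimes driven by the expected background counts B·M·T·Δ:

```python
import math

N_A = 6.022e23   # Avogadro number [1/mol]

def halflife_sensitivity(M, T, eff, abundance, W, B, dE,
                         n_sigma=1.64, S0=2.44, x=1):
    """Illustrative implementation of Eqs. (9)-(10).
    M: detector mass [kg], T: live time [yr], eff: efficiency,
    abundance: isotopic abundance, W: molecular weight [g/mol],
    B: background [counts/(keV kg yr)], dE: FWHM window [keV],
    n_sigma, S0: statistical factors (assumed values, for illustration),
    x: candidate atoms per molecule."""
    n_nuclei = x * abundance * (M * 1e3 / W) * N_A   # candidate nuclei in the detector
    bkg_counts = B * M * T * dE                       # expected background in the ROI
    if bkg_counts < 1:
        # zero background regime: sensitivity grows linearly with M*T
        return math.log(2) * n_nuclei * eff * T / S0
    # finite background regime: sensitivity grows as sqrt(M*T)
    return math.log(2) * n_nuclei * eff * T / (n_sigma * math.sqrt(bkg_counts))

# hypothetical 100 kg detector, 90% enriched, 5 yr, B = 1e-3 counts/(keV kg yr)
print(f"{halflife_sensitivity(100, 5, 0.8, 0.9, 100, 1e-3, 5):.1e} yr")
```

With these numbers the expected background in the region of interest is 2.5 counts, just above the transition; reducing the product B·Δ by a factor of a few would push the experiment into the zero background regime, where the linear dependence on M·T takes over.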

Since T is usually limited to a few years and Δ is usually fixed for a given experimental technique, there is little room to improve these terms, and the transition to the ZB condition is ruled by the (M·B) term only. This means that the ZB condition can be reached either thanks to a very good background level or because of an insufficient mass of the source.

On the other hand, (10) indicates that, in the ZB regime, the sensitivity does not depend any more on the background rate but only on the exposure M·T, and further improvements in the background are useless without corresponding increases of the experimental mass.

Similar considerations apply to the discovery potential, usually defined in terms of the ratio of the observed effect and the background events. Also in this case, in the ZB regime the background contribution is constant and the discovery potential scales linearly with the exposure M·T.

We conclude this section with the following note: there are sometimes ambiguities in the sensitivity numbers reported in the literature, often because the parameters, confidence level, or technique used for the sensitivity computation are not clearly stated. In this paper, we adopt the following convention: we provide our own evaluation of a 68% C.L. sensitivity, computed according to (9) in the finite background case or according to (10) whenever the expected number of background counts is below about one (making an approximation for the grey zone where the background is only nearly zero). When we instead quote sensitivity estimates provided by the authors of an experiment, we either specify the hypotheses under which they have been evaluated or report a reference where that sensitivity estimate is discussed.

4.2. Experimental Parameters

Most of the criteria that need to be considered when optimizing the design of a new experiment follow directly from (9) and (10):
(i) a well performing detector (e.g., good energy resolution and time stability) giving the maximum information (e.g., electron energies and event topology);
(ii) a reliable and easy to operate detector technology requiring a minimum level of maintenance (long underground running times);
(iii) a very large (possibly isotopically enriched) mass, of the order of one tonne or larger;
(iv) an effective background suppression strategy.

Unfortunately, these simple criteria cannot be satisfied simultaneously, and actual experiments always have to find, for any given technique, the best compromise between conflicting requirements.

Among the experimental parameters entering (9), the background rate is probably the one presently attracting most of the interest of researchers. The main reason behind this is that the background and the mass are the only parameters for which improvements by orders of magnitude still look possible. Moreover, the possibility of reaching the zero background region, with its linear dependence of the sensitivity on M and T, is particularly appealing.

B integrates the contributions from all the physical processes which produce measurable effects that are not distinguishable from a 0νββ decay. Unfortunately, they are many, and only two approaches can be devised: identifying their origin and eliminating their sources, or finding a recipe to recognize and reject each single background event.

The natural radioactivity of the detector components (bulk or surface) is often the main background source. Even traces of nuclides from the natural radioactive chains can become a significant background. The availability of proper diagnostic techniques with the sensitivity required to measure trace levels well below the capability of conventional techniques is becoming a serious problem. The decays of 208Tl and 214Bi (belonging, resp., to the 232Th and 238U chains), with their high Q-values, populate the region above 2 MeV and are therefore particularly pernicious. In some specific cases (e.g., bolometers), surface contaminations by alpha emitters have proven to be a limiting problem. In all cases, a careful selection and purification of the materials is mandatory, and next generation experiments are being built with extremely radiopure components. Radon isotopes, either 222Rn or 220Rn, are released in the natural decay chains and can contaminate all materials with their progeny. Special care is usually required for them.

External backgrounds, which originate outside the detector, also have to be taken into account. An underground location is the usual (and fundamental) recipe to get rid of cosmic rays. Depth requirements vary from case to case and depend on the experimental technique. In many cases, well designed effective shields and/or additional detection signatures can compensate for the benefits of a very deep laboratory. Besides the depth, other important factors characterize the underground sites, like the accessibility, the size and the availability of services in the halls, and, of course, a low environmental radioactivity [65] (starting from the rock itself). In the underground laboratories, muons and neutrinos are the only surviving radiation from cosmic rays. Even if muons can be easily eliminated with proper veto systems, their interactions can produce high-energy secondaries such as neutrons or electromagnetic showers (as well as nuclear activation) that can represent a more serious problem. The effects of this secondary radiation can be particularly dangerous above ground (e.g., during the preparation of detector components), so that when material activation can be a concern (e.g., for germanium or copper), underground fabrication and/or storage of the detector components is essential. Electromagnetic showers and γ-rays from radioactive decays produced in the rock surrounding the underground halls can also produce background. Detectors therefore need to be surrounded by heavy shields to reduce the effects of this radiation. To this end, layers of increasing radiopurity are used as the innermost parts of the detector are approached. Shields against neutrons are also usually implemented, with layers of a moderating (hydrogenous) material followed by materials with a high cross-section for neutron capture. Finally, even solar neutrinos can be an irreducible source of background when very massive detectors (e.g., huge liquid-scintillator calorimeters) are used.

In most cases, detectors are designed to measure only the total energy released in the decay (sum of the electron kinetic energies). Additional information (e.g., topological reconstruction) can be extremely helpful in identifying background contributions. Actually the lowest background rate so far was achieved by the NEMO3 experiment [66], a calorimeter with tracking capabilities (Figure 5).

Given the rarity of 0νββ decays, a high detection efficiency is another important requirement, as (9) and (10) clearly indicate. In general, simple calorimeters have the highest detection efficiency.

Even if it does not appear explicitly in (9), the choice of the isotope is particularly important since it influences all the relevant factors that characterize the design of an experiment:
(i) the isotopic abundance,
(ii) the nuclear details of the decay (i.e., the nuclear factor of merit),
(iii) the Q-value (Q_ββ),
(iv) the 2νββ background,
(v) the choice of the experimental approach or technique.

Of the 35 naturally occurring isotopes that are ββ emitters, none can match simultaneously all the requirements listed here. For each isotope a figure of merit can be drawn considering all the listed factors, and this allows one to identify the best candidates.

As discussed in the introduction to this section, even in ideal conditions of efficiency and background, any experiment aiming at entering the inverted hierarchy region needs at least a mass of 100 kg of isotope. Isotopic abundance is therefore a key ingredient in the choice of the isotope.

The natural isotopic abundances of some of the most relevant ββ emitters are reported in Table 3. In most cases, the listed values are in the few percent range, with two significant exceptions: 130Te and 48Ca. With its 33.8% abundance, 130Te is the only case in which a high sensitivity is possible even with natural samples. On the contrary, the natural abundance of 48Ca is well below 1% and isotopic enrichment is indispensable. In order to limit the detector size, and taking into account that the background level scales roughly with the total mass of the detector (and not simply with the isotope fraction), it is evident that isotopic enrichment is a necessity for almost all next generation experiments.

A further criterion can then affect the choice of the isotope: the availability and the cost of the enrichment techniques. In particular, 48Ca, 96Zr, and 150Nd cannot be enriched with centrifuges, and the enrichment cost becomes a limiting factor.

The nuclear structure of each specific isotope can affect the value of the respective 0νββ amplitude in a peculiar way. Indeed, a favorable value of the NME could single out some specific super-element. This was the case of 150Nd some years ago but, as discussed in Section 3.1, present calculations seem to level out the values of the NMEs, which are therefore becoming a less relevant criterion.

The Q-value is also particularly critical, since it has a double effect on the sensitivity, affecting both the phase space factor (which varies roughly as Q^5) and the background contributions (natural radioactivity populates mainly the energy region below 3 MeV). Isotopes with large Q-values are therefore favored, and the choice is usually restricted to Q-values above 2 MeV (the lowest of them being that of 76Ge). Only a handful of emitters survive this request.

From an experimental point of view, 2νββ and 0νββ decays can be distinguished from the shape of the two-electron sum energy spectrum, which is a continuum between 0 and the Q-value for 2νββ and a sharp line at the transition energy for 0νββ. However, these distributions are smeared by the finite energy resolution of the detector, and the tail of the 2νββ distribution can overlap the 0νββ peak. The 2νββ half-life and the energy resolution of the detector are the critical parameters, although for next generation experiments this is not a concern when the resolution is better than 1% (Figure 6).
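To see why a resolution at the percent level is sufficient, note that near the endpoint the 2νββ sum-energy spectrum vanishes roughly as (Q − E)^5, so the fraction of 2νββ counts falling within an energy window of width δ below Q scales, at leading order (a crude estimate that ignores all other spectral factors), as

\[ f(\delta) \approx \frac{\int_{Q-\delta}^{Q} (Q-E)^{5}\, dE}{\int_{0}^{Q} (Q-E)^{5}\, dE} = \left(\frac{\delta}{Q}\right)^{6}, \]

which for δ/Q ≈ 1% corresponds to a suppression of roughly twelve orders of magnitude; the tail becomes a real concern only for fast 2νββ rates, slow 0νββ rates, or poor resolutions.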

The relation between the choice of the isotope and the experimental approach will become clearer in the following, when specific detection methods are described. In practice, only two general experimental approaches have been devised so far: an external-source (or inhomogeneous, or passive-source) approach, in which the electrons emitted by a very thin source sample (~60 mg/cm2 in NEMO3) are observed by means of (usually very complex) external detectors, and a calorimetric (or homogeneous, or active-source) approach, in which the source sample is active and acts simultaneously as detector of the decay. Calorimetric detectors present serious limitations in the choice of the isotope, since only a few materials can satisfy the request of being at the same time the source and the active material of a detector. Emblematic exceptions are 76Ge (germanium diodes), 136Xe (gas and liquid chambers), and 130Te (bolometers). On the other hand, the calorimetric approach has provided so far the best sensitivities, and this justifies the effort in the quest for a technology able to enlarge the list of isotopes that can be studied with a calorimetric approach. Bolometers have actually provided such an answer, although a few exceptions still exist (e.g., 150Nd).

5. Experimental Methods

Two main general approaches have been followed so far for the experimental investigation of ββ decay: (i) indirect or inclusive methods and (ii) direct or counter methods. Inclusive methods are based on the measurement of anomalous concentrations of the daughter nuclei in properly selected samples, characterized by very long accumulation times. They include geochemical and radiochemical methods which, being completely insensitive to the different decay modes, can only give indirect evaluations of the 2νββ and 0νββ lifetimes. They have played a crucial role in ββ searches, especially in the past.

Counter methods are based instead on the direct observation of the two electrons emitted in the decay. Different experimental parameters (energies, momenta, topology, etc.) can then be registered, according to the different capabilities of the employed detectors. These methods are further classified into inhomogeneous (when the observed electrons originate in an external sample) and homogeneous experiments (when the source of the ββ decays also serves as detector).

Given the limited information coming from the decay, the experimental strategy generally adopted to investigate 0νββ consists in developing a proper detector to measure in real time the properties of the two emitted electrons. The minimal request is to collect the sum energy spectrum of the electrons. However, when possible, additional pieces of information can be useful to lower the background or to constrain theoretical models. They consist usually of the single-electron energies and initial momenta, of the event topology, and, in one specific case, of the species of the daughter nucleus. The next step then consists in the optimization of most of the experimental parameters addressed by the sensitivity equation (9):
(i) Energy Resolution (Δ). A very good energy resolution is maybe the most relevant feature to identify the sharp 0νββ peak over an almost flat background. It is, however, also very useful to keep under control the background induced by the unavoidable tail of the 2νββ spectrum. Although almost negligible when the energy resolution is better than about 2% (Figure 6), this background represents a limiting factor for detectors with poor resolution. In these cases, candidates with a slow 2νββ decay rate (e.g., 136Xe) are of course preferred.
(ii) Background Rate (B). As already discussed above, a very low background requires a proper underground laboratory, extremely radiopure materials, and effective passive and/or active shields against environmental radioactivity.
(iii) Mass of the Isotope (M_ββ). A large number of candidate nuclei is an inalienable constraint. Present experiments are characterized by masses of the order of a few tens of kg (about a hundred in the most sensitive detectors), while experiments aiming at covering the inverted hierarchy region should reach the 100–1000 kg scale.

Normally, these features cannot be met simultaneously in a single detection method, and compromise solutions have to be worked out, privileging some properties with respect to others while keeping in mind, of course, the final sensitivity of the setup. As already mentioned above, the searches for 0νββ can be further classified into two main categories: calorimetric and external-source systems.

Originally proposed for germanium diodes [67], the calorimetric technique has been implemented with many types of detectors, such as scintillators, bolometers, solid-state devices, and gaseous chambers. Advantages and limitations of this technique can be summarized as follows.
(i) The intrinsically high efficiency of the method allows large source masses. The 100 kg scale has already been demonstrated and the tonne scale seems possible.
(ii) With a proper choice of the detector type, a very high energy resolution is achievable (e.g., Ge diodes and bolometers).
(iii) Severe constraints arise from the request that the source material be embedded in the structure of the detector. These constraints have however been weakened by the use of liquid scintillators (e.g., KamLAND-Zen and SNO+) and bolometers.
(iv) Topology reconstruction is usually difficult. Also here, exceptions exist (liquid or gas Xe TPCs).

Different detection techniques have also been adopted for the external-source approach, namely, scintillators, solid-state detectors, and gas chambers. Also here, positive and negative aspects can be listed.
(i) Reconstruction of the event topology is possible, making easier the achievement of the zero background condition. Such a beautiful feature is unfortunately spoiled by the negative effects of a poor energy resolution, which mixes 2νββ and 0νββ events.
(ii) Large masses of the isotope can hardly be gathered. Self-absorption in the source is the limiting factor, and only masses of the order of 10 kg have been possible so far. The target of 100 kg seems possible, even if at the cost of an extraordinary effort, while the tonne scale looks presently unreachable.
(iii) Typical energy resolutions are of the order of 10%, mainly determined by source effects.
(iv) Low detection efficiencies (of the order of 30%) are another typical negative aspect of this approach.

Besides having provided so far the best experimental results on 0νββ, the calorimetric approach still promises the best sensitivities and therefore characterizes most of the future projects. Here, the best performing detectors seem limited by their scalability, while the opposite holds for the very big liquid scintillation detectors. The quest for the zero background condition is common to both; but we should remember that the golden rule is that the best sensitivity is achieved when a large source mass is accompanied by a correspondingly low background. This is easily recognized when reworking (9) as follows [68]:

\[ S^{0\nu} \propto \sqrt{\frac{n_{\beta\beta}\, T}{b_{\beta\beta}\, \Delta}} \]

where n_ββ is the number of moles of isotope rescaled for the efficiency, while b_ββ is the background rate per unit of n_ββ. An equivalent expression holds for (10) in the zero background regime. It is then apparent that n_ββ and b_ββ must proceed hand in hand and that big efforts to reduce the background without a corresponding increase in the source mass risk being a waste of time.

5.1. Past Experiments

Started in the 1940s with the first experimental work of Fireman [69], soon after the theoretical proposal of the neutrinoless mode by Furry in 1939 [3], research in double beta decay has been characterized for about half a century by continuous attempts to improve the limits on lepton number conservation by exploiting the improvements in the available technology. The first direct measurement of 2νββ dates back to 1987 [70], when Elliott and collaborators observed the first tracks of the electrons emitted by a source of 14 g of 97% enriched 82Se deposited on a thin mylar foil inside their Time Projection Chamber (TPC) at Irvine. Until that moment, the only evidence of the existence of double beta decay came from geochemical methods. Then, starting in the 1980s, the scene was dominated for about 20 years by germanium diodes, which proved an excellent technique to search for 0νββ and established the superiority of the calorimetric approach. The discovery of neutrino oscillations at the end of the 1990s marked a true revolution in the field, providing for the first time a clear target for the experimental search. Since then, a rich and varied list of new experiments has been proposed.

Next generation experiments will be reviewed in the next section while here we would like to summarize the most recent results.

Experimental evidence for several 2νββ decays has been provided in recent years (see Table 4), mainly exploiting the external source approach to measure the two-electron sum energy spectra, the single electron energy distributions, and the event topology. Impressive progress has been obtained in the same period also in improving the 0νββ half-life limits for a number of isotopes. The best results are still maintained by the use of isotopically enriched HPGe diodes for the experimental investigation of 76Ge (Heidelberg-Moscow [71] and IGEX [72]), but two other experiments have reached comparable sensitivities: NEMO3 [73, 74] at the Laboratoire Souterrain de Modane (LSM) and Cuoricino [75] at the Laboratori Nazionali del Gran Sasso (LNGS).

NEMO3 was a large inhomogeneous detector aiming at overcoming the intrinsic limits of the external-source technique (relatively small active masses) by expanding the setup dimensions. The big advantage of the NEMO3 technique was the possibility to access single electron information. This made it possible to measure a variety of 2νββ half-lives and to reach an excellent background rate. Cuoricino was, on the other hand, a TeO2 granular calorimeter based on the bolometric technique. Its goal was to exploit the excellent performance of the bolometers (and the possibility they offer to be built with any material of practical interest [76–78]) to scan the most interesting ββ isotopes. Apart from the relevant result on the 0νββ decay of 130Te, Cuoricino has the big merit of having demonstrated the scalability of the technique, paving the way for CUORE. NEMO3 and Cuoricino were stopped in 2010 and 2008, respectively.

Evidence for a 0νββ signal has also been claimed [79, 80] (and confirmed later [81, 82]) by a small subset (KDHK) of the HDM collaboration at LNGS. The latest reported result amounts to a 6σ evidence with a half-life measurement of 2.23 × 10^25 yr. It corresponds to 11 ± 1.8 counts in the peak and agrees with the previously quoted value within the errors [81]. The result is based on a complex reanalysis of the HDM data, leading to the observation of a peak in the sum energy spectrum at 2039 keV. This claim has triggered an intense debate in the community, and no consensus exists yet about its validity. The only certain way to confirm or refute it is with additional sensitive experiments; its verification is actually one of the goals of the next generation experiments. Preliminary results (Section 6) seem to exclude it according to most of the theoretical NME calculations.

6. Goals and Methods of the Next Generation Experiments

The conclusion of Cuoricino and NEMO3 marks in some way the transition toward a new generation of experiments characterized by bigger detectors (100–1000 kg of isotope), designed and constructed by wide international collaborations sharing work and costs. The ultimate goal of these next generation projects is to explore the inverted-hierarchy region of neutrino masses, a very ambitious objective which requires the realization of experiments at the multitonne scale with background levels of the order of 1 count/(keV·tonne·yr). The cost, the risk profile, and the time scale (of the order of ten or more years) that characterize the preparation phase of these big experiments motivate the adoption of a cautious strategy, generally based on the construction of a 100 kg scale experiment that can be expanded at a later time to 1 or more tonnes. Scalability and performance are therefore the key issues on which next generation experiments will select the future technique.

Some of the parameters appearing in (9) (e.g., the energy resolution) only depend on the experimental technique and cannot be improved at will. On the other hand, sizable improvements of the sensitivity can be obtained acting on the following: (1) background level; (2) isotopic enrichment; (3) active mass.

Next generation experiments are therefore facing the challenge of developing detectors characterized by masses of isotopically enriched materials of the order of ~1 tonne, operating underground in conditions of extremely low radioactivity. A further difficulty, certainly not trivial and not always properly mentioned, is the lack of diagnostic methods able to certify that a given background level has actually been reached. Under these conditions, detector prototypes of intermediate mass (the 100 kg scale phase mentioned above) are the only possibility.

So far, the best results have been obtained with the calorimetric approach, which therefore characterizes most of the proposed future projects. They can be classified in three broad classes:
(1) dedicated experiments using a conventional detector technology with improved background suppression methods (e.g., GERDA and MAJORANA);
(2) experiments using unconventional detector (e.g., CUORE) or background suppression (e.g., EXO and SuperNEMO) technologies;
(3) experiments based on suitable modifications of an existing setup designed for a different search (e.g., SNO+ and KamLAND).

Experimental methods and expected sensitivities of the proposed projects are compared in Tables 5 and 15. As discussed above, technical feasibility tests are still required in some cases, but the crucial issue will be the capability of each project to achieve the expected background suppression.

Calorimetric detectors are usually preferred for future experiments since they have produced so far the best results. The calorimetric approach suffered for years from a strong limitation: it was applicable only to a small number of isotopes (e.g., 76Ge, 136Xe, and 48Ca), thus limiting the choice of experimentally accessible candidates. Today, the multiple choices offered by new detectors and techniques (e.g., bolometers) show that a possible way out exists.

7. Time Projection Chambers

Particle tracking is a powerful technique to distinguish a ββ signal from background. A ββ event is characterized by a pair of very short tracks originating from a common vertex at the source position, whereas background events of the same energy (most of the studied isotopes have Q-values of 2-3 MeV) are usually characterized by much longer tracks (as in the case of cosmic ray muons) and/or by multisite energy depositions (as is the case, e.g., for γ interactions).

Tracking is accomplished by the use of gas counters or Time Projection Chambers (TPCs) where the source is introduced in the form of thin foils or—in the special case of 136Xe decay—as the TPC filling gas/liquid. A magnetic field can be used to improve the particle identification capability (as was the case for NEMO3 and also for the pioneering experiment of Moe and collaborators). A segmented detector is used to reconstruct the spatial distribution of the ionization cloud, deriving the event topology with a resolution that strongly depends on the details of the detector implementation: vertex position, number of interaction sites, and track length are among the information that can be obtained. These are used for background rejection and background identification; the latter is of primary importance for background modeling. In the case of a high spatial resolution it also becomes feasible to disentangle 0νββ from 2νββ events and to study the different decay mechanisms (see the SuperNEMO description). Whatever the choice made for the tracking read-out, the energy is measured through a scintillation signal that in the case of xenon TPCs is produced by the Xe itself, while in the other cases it is obtained by the introduction of an array of scintillators in the TPC. The energy resolution is often much worse than in pure calorimetric approaches such as those involving HPGe diodes or bolometers, with two consequences: the increase of the number of sources able to mimic 0νββ events and the need of a background reconstruction to disentangle the signal.

Tracking can provide multiple handles for background rejection, varying according to the specific characteristics of the detector. For example, a powerful and simple way to get rid of some radioactive sources (in particular those emitting short range particles) is the definition of a fiducial volume. By requiring that the interaction vertex be within a volume that is sufficiently far from important sources such as the TPC vessel, most of the α and β events from the natural chains (or from other α or β decaying isotopes) are rejected. Obviously, a compromise has to be reached between the benefit—in terms of background rate—of a small fiducial volume and the corresponding reduction of the active mass; this compromise can change in time according to the changes in intensities and locations of the background sources.
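As a simple illustration of a fiducial volume cut, the following Python sketch (geometry and numbers are purely illustrative, not those of any real TPC) rejects reconstructed vertices closer than a chosen stand-off distance to the walls of a cylindrical vessel.

```python
import math
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float  # cm
    y: float  # cm
    z: float  # cm, measured from the detector midplane

def in_fiducial(v: Vertex, radius_cm: float = 20.0, half_length_cm: float = 20.0,
                standoff_cm: float = 3.0) -> bool:
    """Accept only vertices farther than `standoff_cm` from the vessel walls."""
    r = math.hypot(v.x, v.y)
    return (r < radius_cm - standoff_cm) and (abs(v.z) < half_length_cm - standoff_cm)

# Events near the walls (mostly alphas/betas from vessel contamination) are cut away.
events = [Vertex(1.0, 2.0, 5.0), Vertex(19.5, 0.0, 0.0), Vertex(0.0, 0.0, 19.0)]
accepted = [e for e in events if in_fiducial(e)]
print(f"{len(accepted)} of {len(events)} events pass the fiducial cut")
```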

7.1. 136Xe TPCs

136Xe is an attractive candidate for various reasons:
(i) it has a high Q-value (2457 keV); therefore, the signal lies in a region that is less contaminated by radioactive background events;
(ii) its 2νββ mode is slow (even slower than expected, as proved by EXO-200 and later confirmed by KamLAND-Zen) and hence its contribution in the 0νββ region of interest (ROI) is irrelevant even when the energy resolution is poor;
(iii) xenon can be used for the realization of a homogeneous detector since it provides both scintillation and ionization signals;
(iv) it is a gas and can be easily and cheaply enriched (its natural isotopic abundance (i.a.) is 8.86%) and purified.

The running experiment EXO-200 and the projected NEXT-100 use xenon in an active source approach, while in the KamLAND-Zen experiment the 136Xe passive source is dispersed in a liquid scintillator (see Section 11).

At 2457 keV, multiple sources can mimic a 0νββ decay. The dominant background comes from the high energy γ lines of isotopes in the 238U and 232Th natural chains: the 2448 keV line of 214Bi (a 222Rn progeny) and the 2615 keV line of 208Tl. The former is certainly the most threatening one since it is less than 10 keV away from the signal. The implementation of radon suppression techniques is a mandatory requirement for these experiments, while mitigation of the radon-induced background can be obtained by improving the energy resolution of the calorimeter, the accuracy of the energy calibration, and the ability to identify and subtract the 214Bi contributions from the measured spectrum. In particular cases, short-lived nuclei produced by cosmic ray activation or by fallout can be important background contributors, as proved by the KamLAND-Zen experience (see Section 11). Finally, cosmic rays—although potentially dangerous—can be easily suppressed through the use of optimized veto systems and deep underground locations.

7.2. EXO

The Enriched Xenon Observatory (EXO) Collaboration is planning a series of experiments to search for the 0νββ decay of 136Xe with progressively higher sensitivity using liquid xenon (LXe) TPCs. Within this program, EXO-200 is a 200 kg scale experiment designed to achieve a 2-year sensitivity of 6.5 × 10^25 yr. However, this was computed assuming a fiducial mass of 140 kg of Xe, namely, higher than the actual one, meaning that the same sensitivity will be reached in a longer time. The experiment is located at a depth of 1585 m water equivalent in the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico. The advantage of LXe over a gaseous xenon TPC lies mainly in the reduced volume in which the same mass can be concentrated, at the price of a worse energy resolution. EXO-200 exploits both the scintillation and ionization signals produced by particle interactions in xenon, while the future plans of the collaboration include the implementation of a Ba tagging technique. This aims at the identification (through laser excitation) of the 136Xe decay daughter (136Ba++) as a further and unambiguous signature of a ββ decay. If successful, this technique would impressively improve background discrimination (see Table 6).

The EXO-200 detector consists of a cylindrical TPC filled with LXe (see Figure 7), mounted inside a cryostat and externally shielded from cosmic rays and radioactivity by 25 cm of lead. A further thickness of 5 cm of copper and about 50 cm of cryogenic fluid is provided by the cryostat itself. All components used for the construction of the detector were carefully selected for low radioactive content. The clean room module—housing the cryostat and the TPC—is surrounded on four sides by an array of plastic scintillators acting as a cosmic ray veto. At WIPP, the muon rate is about (3.10 ± 0.07) × 10^−7 μ/(s·cm^2·sr) (~10 times higher than at LNGS); while μ's traversing the TPC are easily rejected, μ's traversing the experimental apparatus but not tracked in the TPC can produce dangerous background events via bremsstrahlung or spallation. A cosmic-ray-induced background rate 10 times higher than the EXO-200 goal (3 events/year in the ROI) was estimated in the absence of the veto. This rate is reduced to negligible levels by the veto [83].

EXO-200 uses about 200 kg of xenon, enriched to (80.6 ± 0.1)% in the isotope 136Xe. The xenon is continuously recirculated; therefore, only a fraction of it (110 kg) is in the liquid phase inside the detector chamber. The cylindrical TPC (44 cm in length and 40 cm in diameter) is divided into two identical volumes (two halves) by a cathode grid held at negative high voltage, located in the midplane of the cylinder. The ionization signal is read out at the two ends of the cylinder by two wire planes held at virtual ground potential (charge collection U-wires). A further plane of wires (induction V-wires), oriented at 60 degrees with respect to the U-wires, is positioned at each end of the TPC, at a distance of 6 mm from each U-wire plane. The induced signal provides a second coordinate, allowing a two-dimensional localization of the ionization cloud.

In order to improve the energy resolution of the detector, the scintillation signal produced by particle interactions in LXe is also read out, using two arrays of large area avalanche photodiodes (preferred to phototubes mainly for their lower radioactivity), one behind each of the two charge collection planes. The scintillation signal provides complementary energy information used to improve the energy resolution, to reject events corresponding to incomplete charge collection or to alpha particles (characterized by a different charge-to-light ratio with respect to electrons), and to achieve a three-dimensional position sensitivity: the z-coordinate is indeed obtained from the difference in the arrival times of the ionization and scintillation signals (electron drift time).
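The drift-time reconstruction can be sketched as follows; the drift velocity used here is only an indicative figure for LXe at moderate drift fields, not the calibrated EXO-200 value.

```python
# Minimal sketch of the z-coordinate reconstruction from the electron drift time.
# All numbers are illustrative assumptions, not EXO-200 calibration constants.

DRIFT_VELOCITY_MM_PER_US = 1.7   # assumed electron drift velocity in LXe [mm/us]

def z_coordinate(t_scintillation_us: float, t_charge_us: float) -> float:
    """Distance of the interaction from the charge-collection plane in mm.

    The prompt scintillation flash gives the event time; the ionization
    electrons arrive later, after drifting from the interaction point.
    """
    drift_time = t_charge_us - t_scintillation_us
    return DRIFT_VELOCITY_MM_PER_US * drift_time

# An event whose charge arrives 100 us after the light flash drifted ~170 mm.
print(f"z = {z_coordinate(0.0, 100.0):.1f} mm")
```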

The spatial information allows one to reject events coming from the chamber walls (by the definition of a fiducial volume) and to classify signals into single-site (SS) and multisite (MS) events. The majority (about 82.5%) of ββ events are SS (a fraction of events is MS because of bremsstrahlung). MS events are mainly used to constrain the background components.

Periodic calibrations of the apparatus are necessary in order to monitor continuously the free electron lifetime and the overall charge-to-energy conversion. Source measurements are also used to verify the SS and MS reconstruction efficiencies through comparison with Monte Carlo simulations.

Data collected between May 21, 2011 and July 9, 2011 were used for the 2νββ analysis [84], with the discovery (later confirmed by KamLAND-Zen) that the 2νββ half-life of 136Xe is shorter than what was previously reported in the literature [85].

In June 2012 the first result on 0νββ was published, using a detector exposure of 32.5 kg (136Xe) × yr (corresponding to a fiducial volume containing 98.5 kg of LXe). Here, the combination of the charge and light signals was used for the first time to improve the energy resolution, with a gain of about a factor 2 with respect to the use of the ionization signal alone. The resolution (σ/E) at the Q-value is 1.67% for SS events and 1.84% for MS events (i.e., the FWHM at the transition energy is 96 keV for SS events and 106 keV for MS events). The calibration error is lower than 1%. The 0νββ and 2νββ signals are extracted by a simultaneous fit of the SS and MS spectra (the fitting region covers the range from 700 keV to 3.5 MeV; see Figure 7) with the spectral shapes predicted by the Monte Carlo simulation for 0νββ and 2νββ decays and for the main radioactive sources responsible for the background counting rate. While the SS spectrum is dominated by 2νββ events (according to the best fit, the ratio of 2νββ events to background ones is 9.4 to 1 [84]), only a small fraction of them contributes to the MS spectrum, which on the other hand is dominated by background sources (in the 2νββ region the MS counting rate is about 10 times higher than the SS one). The contamination levels yielded by the fit for the different background sources are consistent with the material screening measurements, which to some extent proves the reliability of the background model. Indeed, the consistency between the contaminations extrapolated from the data and those measured for the single detector parts before assembly is not trivial: in many cases only upper limits on the contaminant concentrations are available and, moreover, new contributions are often introduced by component handling, machining, and assembly.
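The charge-light combination exploits the anticorrelation between the ionization and scintillation yields in LXe; a minimal sketch of such a linear combination is given below, with an arbitrary mixing angle chosen only for illustration.

```python
import math

# Minimal sketch of an energy estimator combining the (anticorrelated) charge
# and light signals. The mixing angle is an illustrative assumption: in a real
# detector it is tuned on calibration peaks to minimize the combined width.
THETA = math.radians(10.0)
NORM = math.cos(THETA) + math.sin(THETA)

def combined_energy(charge_kev: float, light_kev: float) -> float:
    """Rotated combination of the two energy estimates, kept calibrated in keV."""
    return (charge_kev * math.cos(THETA) + light_kev * math.sin(THETA)) / NORM

# Fluctuations that push the charge estimate down push the light estimate up,
# so the combination is more stable than either channel alone.
print(f"{combined_energy(2440.0, 2480.0):.1f} keV")
```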

The 2νββ half-life already measured in [84] has been recently updated to T_1/2 = (2.172 ± 0.017 (stat) ± 0.060 (syst)) × 10^21 yr [86].

No 0νββ peak is observed in the ROI. The fit yields a background rate in the 0νββ region of (1.1 ± 0.1) × 10^−3 counts/(keV·kg·yr) due to external background sources (i.e., not coming from 136Xe itself). The main contributors are identified as the 2448 keV line of 214Bi, ascribed to 222Rn in the cryostat-lead air gap, 232Th (contributing through Compton scattering of the 2615 keV line), and 238U (again the 2448 keV peak) in the TPC vessel. Actually, the spectral shape of a 222Rn contamination in the air gap cannot be distinguished from that of a 238U contamination in the materials outside the cryostat, but 222Rn measurements confirm the assumed hypothesis and allow for the possibility of a background improvement in the near future.

A lower limit on the 0νββ half-life of 1.6 × 10^25 yr at 90% C.L. is obtained. The future evolution of EXO will go in the direction of a tonne scale experiment aiming at an active mass of 4 tonnes of 136Xe, a slightly improved energy resolution (1.4% at the Q-value), and a background reduction obtained through an improved radon suppression and a more favorable surface-to-volume ratio.

7.3. NEXT

The concept of the NEXT project is very similar to that of EXO: use the ionization and scintillation signals in a xenon TPC. However, in NEXT the xenon is in its gaseous phase, where energy and tracking resolutions are better, an advantage whose price is the larger volume needed for the same xenon mass: LXe has a density of 3 g/cm^3, while in NEXT (which plans to work at a pressure of ~15 bar) the density is 0.075 g/cm^3 (see Table 7).

In NEXT-100, both scintillation and ionization are read out as light signals, with a solution that aims at reaching a very good energy resolution (down to about 12 keV FWHM) and high resolution tracking: in a high pressure Xe chamber the two electrons emitted in a ββ decay produce a characteristic track ~30 cm long (see Figure 8), easily distinguished from most radioactivity-induced events. The detection principle is the following: a particle interacting in the chamber produces excitation and ionization of the Xe atoms. The former mechanism gives rise to the prompt emission of scintillation light (this is the start of the event), while the latter produces charges (distributed along the particle track) that are drifted over a long distance (of the order of 1 m) in an electric field of relatively low intensity. At the end of the drift region, between the gate and the anode, a much more intense electric field induces electroluminescence (EL): the drifted electrons acquire enough energy that, scattering on Xe atoms, they produce excitation followed by scintillation. In this way, the ionization signal is converted into scintillation light, which is used for both the energy measurement and the tracking.

The NEXT-100 detector is a cylindrical, stainless-steel pressure vessel containing a polyethylene field cage (see Figure 8). A 12 cm thick copper shield separates the cage from the vessel and is used to mitigate the possible effect of vessel radioactivity.

Three wire meshes (cathode, gate at ground, and anode) separate the two electric field regions of the detector. The drift region, between cathode and gate, is a cylinder of 107 cm diameter and 130 cm length. The EL region, between gate and anode, is 0.5 cm long. The tracking function is provided by a plane of multipixel photon counters placed behind the anode plane that measures the EL signal. An array of PMTs is located behind the transparent cathode and is used to read out the scintillation light in order to provide a precise measurement of the energy released by the interacting particle. The solution of using two different arrays of optical devices, one dedicated to tracking and the other to energy measurements, allows one to optimize the two measurements separately.

Tests on small-scale prototypes have proved an energy resolution of 1% FWHM at 662 keV, which scales to 0.5% at the Q-value (namely, about 12.5 keV), and a track reconstruction with an uncertainty of the order of 5–10 mm [87].
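The quoted extrapolation is consistent with a statistics-dominated resolution scaling as the inverse square root of the energy; the small check below makes the arithmetic explicit (the √E scaling itself is an assumption of this sketch).

```python
import math

# Resolution scaling check, assuming FWHM/E is dominated by Poisson-like
# fluctuations and therefore scales as 1/sqrt(E).
E_CAL_KEV = 662.0      # calibration line
E_QBB_KEV = 2457.0     # 136Xe transition energy
R_CAL = 0.01           # 1% FWHM measured at the calibration line

r_qbb = R_CAL * math.sqrt(E_CAL_KEV / E_QBB_KEV)
print(f"Extrapolated FWHM at Qbb: {100 * r_qbb:.2f}% ({r_qbb * E_QBB_KEV:.1f} keV)")
# -> about 0.52% FWHM, i.e., ~12.8 keV, in line with the quoted 0.5% / 12.5 keV
```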

According to Monte Carlo simulations, the background rejection efficiency obtained through the combination of cuts based on tracking and energy is impressive, ranging from 3 to 7 orders of magnitude. The latter value is obtained exploiting the full event topology and corresponds to a detection efficiency (i.e., the fraction of 0νββ events that survives the topology cut) of 25%. A background level of 8 × 10^−4 counts/(keV·kg·yr) is predicted for the energy region of interest on the basis of the background budget of the experiment (material radioactive screening) and of the efficiency of the topology cut. The 5-year sensitivity, under these hypotheses, is 1.6 × 10^26 yr. NEXT-100 is approved for operation in the Laboratorio Subterráneo de Canfranc (LSC), in Spain, at a depth of 2450 m.w.e. The assembly and commissioning of the detector are planned for early 2014.

8. Inhomogeneous Tracking Detectors

A completely different approach to 0νββ searches separates the source from the detection device. In this case, the source is a thin foil made of the ββ candidate, while the detector consists of a tracker combined with a calorimeter. This technique was successfully employed, for example, by ELEGANT V, whose planned continuation is MOON [88]. However, the best example of passive-source tracking detectors is certainly the NEMO3 [89] experiment, where tracking was associated with particle charge identification (thanks to the presence of a magnetic field), allowing not only an efficient background rejection but also a precise measurement of the different background sources producing the experimental counting rate.

8.1. SuperNEMO

The SuperNEMO project is an extension of the NEMO3 technique toward the realization of a new apparatus able to overcome the NEMO3 limitations (see Table 8). The increase in sensitivity will be based on a larger isotope mass (i.e., a larger experimental apparatus) and on the reduction of the background. A clear idea of the background sources that need to be controlled in SuperNEMO comes from the NEMO3 experience. NEMO3 was a cylindrical detector combining gas tracking counters and calorimeters. It was divided into 8 sectors, each one dedicated to the specific study of a ββ isotope (100Mo, 82Se, 130Te, 116Cd, 96Zr, 48Ca, and 150Nd). The best results were obtained for the two isotopes present with the highest masses, 100Mo and 82Se, both having a Q-value at about 3 MeV. The latest NEMO3 0νββ limits are [90]:
(i) 100Mo: 1.1 × 10^24 years at 90% C.L.;
(ii) 82Se: 3.6 × 10^23 years at 90% C.L.,
with a background counting rate as low as 0.003 counts/(keV·kg·yr). A ββ decay was identified as two electrons emitted from the source foil. Background sources that can mimic this kind of event are as follows:
(i) the two electrons emitted by 2νββ decay (i.e., the tail of the 2νββ spectrum that falls in the ROI);
(ii) high energy γ's impinging on the foil and producing two electrons through double Compton, Compton + Moller scattering, or pair production (in the case of misidentification of the positron charge). The highest contribution here comes from 214Bi due to the 222Rn contamination in the gas counters;
(iii) internal contaminations of the source foils with β decaying isotopes accompanied by internal conversion (IC), Moller, or Compton scattering. Radioisotopes with an energy high enough to produce such events in the ROI are 214Bi (Qβ = 3.3 MeV) and 208Tl (Qβ = 5.0 MeV), from the 238U and 232Th chains, respectively.

SuperNEMO will have to reach a much better radiopurity in the source foils as well as a stronger Rn suppression. However, this will not be enough to get rid of the 2νββ background, and a reduction of the FWHM is also compulsory. SuperNEMO plans to improve the energy resolution by about a factor of 2 and to choose a ββ candidate with a 2νββ half-life sufficiently long with respect to the expected 0νββ one. This excludes the already studied 100Mo. Favored isotopes are therefore 82Se, 150Nd, and 48Ca, although the possibility of enriching the latter two isotopes is still under study.

SuperNEMO [91] is designed as an experiment made of 20 modules (Figure 9), each containing 5–7 kg of ββ emitter in the form of a thin foil of enriched material. The single module has a planar design (i.e., different from the NEMO3 cylindrical symmetry). The source is a thin (40 mg/cm^2) foil (3 × 4.5 m) mounted in the middle plane of a gas tracking chamber; the 6 walls of the chamber are covered by plastic scintillator blocks (500 to 700, depending on the design, which is not yet finalized) realizing the calorimeter. The tracking volume contains 2000 wire drift cells operated in Geiger mode in a magnetic field of 25 Gauss. These are arranged in nine layers parallel to the foil and will be able to provide particle identification, vertex reconstruction, and the angular correlation between the two electrons emitted in the ββ decay. The expected spatial resolution is 0.7 mm in the direction perpendicular to the source foils and 1 cm in the parallel one. The scintillators provide a calorimetric measurement of the particle energy with an expected energy resolution of 7% FWHM at 1 MeV (i.e., 120 keV at 3 MeV). The angular correlation between the two electrons emitted in the decay can be used to study the decay mechanism [92].

The first module, the SuperNEMO demonstrator (SND), containing 7 kg of 82Se (i.e., more than 7 times the isotope mass contained in NEMO3), is presently under construction and will be installed in the Laboratoire Souterrain de Modane (LSM) within the year 2014. No background count is expected for the demonstrator in 2.5 years, corresponding to a sensitivity of 6.5 × 10^24 yr at 90% C.L. [93]. This is equivalent to a background counting rate of about 5 × 10^−4 counts/(keV·kg·yr); therefore, the 5-year sensitivity evaluated with our criteria is 3.3 × 10^24 yr (we assume a signal efficiency of 30% as quoted in [92]). SuperNEMO, which will require a much larger space, will be installed in the planned extension of the Modane laboratory; its 5-year sensitivity evaluated on the basis of 100 kg [93] of 82Se is 1.3 × 10^26 yr.
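Since the demonstrator is expected to run with essentially no background counts, its reach can be estimated with the zero background sensitivity formula recalled earlier; the sketch below uses the isotope mass, efficiency, and live time quoted above, with the 90% C.L. Poisson factor as the only additional assumption.

```python
import math

N_A = 6.022e23          # Avogadro's number [1/mol]
LN2 = math.log(2)

def zero_background_sensitivity(mass_kg, molar_mass_g, efficiency,
                                live_time_yr, n_limit=2.44):
    """Half-life reach (yr) when no background count is expected.

    n_limit is the 90% C.L. upper limit on the signal counts for zero
    observed events (Feldman-Cousins value, an assumption of this sketch).
    """
    n_nuclei = mass_kg * 1000.0 / molar_mass_g * N_A
    return LN2 * n_nuclei * efficiency * live_time_yr / n_limit

# SuperNEMO demonstrator-like inputs: 7 kg of 82Se, 30% efficiency, 2.5 yr.
print(f"{zero_background_sensitivity(7.0, 82.0, 0.30, 2.5):.1e} yr")
# Order-of-magnitude result (~1e25 yr); the official 6.5e24 yr figure quoted in
# the text follows from a more detailed treatment of efficiencies and statistics.
```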

9. Bolometric Detectors

A thermal detector is a sensitive calorimeter which measures the energy deposited by a single interacting particle through the corresponding temperature rise. This is accomplished by using suitable materials (dielectric crystals, superconductors below the phase transition, etc.) and by running the detector at very low temperatures (usually below 100 mK) in a suitable cryostat (e.g., dilution refrigerators). Indeed, according to the Debye law, the heat capacity of a single dielectric and diamagnetic crystal at low temperature is proportional to (T/T_D)^3 (where T_D is the Debye temperature), so that for extremely low temperatures it can become sufficiently small. Of course, the measurement of the temperature change also requires a proper thermal sensor. A low-temperature detector (LTD or bolometer) consists of three main components: (i) a particle absorber (the sensitive mass of the device where the particles deposit their energy), (ii) a temperature sensor (or transducer), and (iii) a thermal link to the heat sink.
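To give an order of magnitude, the sketch below estimates the temperature rise produced by a 2.5 MeV energy deposition in a Cuoricino/CUORE-like TeO2 crystal; the Debye temperature, crystal mass, and operating temperature used here are indicative values assumed only for the illustration.

```python
# Order-of-magnitude estimate of a bolometric signal: dT = E / C(T),
# with the Debye heat capacity C(T) ~ (12 pi^4 / 5) * N * R * (T / T_D)^3.
# All numerical inputs below are assumptions chosen for illustration.
import math

R = 8.314                               # gas constant [J/(mol K)]
DEBYE_CONST = 12 * math.pi**4 / 5 * R   # ~1944 J/(mol K)

mass_g = 750.0            # TeO2 crystal mass
molar_mass_g = 159.6      # TeO2 molar mass
atoms_per_formula = 3     # one Te and two O atoms
T = 0.010                 # operating temperature [K]
T_debye = 232.0           # assumed Debye temperature of TeO2 [K]
E_joule = 2.5e6 * 1.602e-19   # 2.5 MeV deposition in joules

moles_of_atoms = mass_g / molar_mass_g * atoms_per_formula
heat_capacity = moles_of_atoms * DEBYE_CONST * (T / T_debye) ** 3   # [J/K]
delta_T = E_joule / heat_capacity

print(f"C ~ {heat_capacity:.1e} J/K, dT ~ {delta_T * 1e6:.0f} microkelvin")
# -> of the order of 100 microkelvin: tiny, but measurable with a sensitive thermistor.
```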

The absorber material can be chosen quite freely, the only requirements being, in fact, a low heat capacity and the capability to withstand the cooling in vacuum. The absorber can therefore be easily realized with materials containing any kind of unstable isotope, and many interesting searches are therefore possible (e.g., β decay spectroscopy, neutrinoless double beta decay, and dark matter). So far, absorbers with masses in the range from a few micrograms to almost one kilogram have been developed.

In principle, the intrinsic energy resolution of a bolometer is limited only by the thermodynamic fluctuations of the thermal phonons exchanged through the thermal link, and it can be as small as a few tens of eV even in the case of ~kg bolometers. Besides its exceptionally low value, the intrinsic energy resolution does not depend on the deposited energy. In practical cases, the resolution is dominated by other noise contributions. Dedicated low-noise front-end electronics are therefore usually required in order not to spoil such a wonderful feature of these devices. However, important contributions to the detector noise come from vibrations (through the induced thermal dissipations) and are often referred to as microphonic noise. In TeO2 bolometers (Cuoricino and CUORE experiments) energy resolutions lower than 1 keV at 10 keV (dominated by noise) [94] and of ~5 keV at 2.6 MeV have been demonstrated (at the latter energy an additional contribution to the resolution is observed, in particular for α's, and is ascribed to an incomplete thermalization of the particle energy deposition).

The flexibility in the choice of the material, together with the excellent energy resolution and the sensitivity to low or nonionizing events, is certainly the best feature that makes bolometers an excellent opportunity for rare event searches. On the other hand, the slowness of the response is an unavoidable limitation. Even if not actually a problem for the present generation of experiments, the signal speed could become important in approaching the inverted hierarchy region of neutrino masses, due to the unavoidable pile-up of 2νββ events [95–97]. One of the worst effects of the long thermal integration times is that they tend to wash out any possible difference in the time development of the signals (e.g., those arising from the interaction details of different particles). This is actually an undesired feature in the critical process of background abatement, although hybrid techniques (e.g., the simultaneous detection of scintillation) can represent a practical solution. Very interesting results have already been obtained for a number of different absorbing materials, as discussed in the following.

9.1. Specific Backgrounds in Bolometers

Bolometers can measure with high resolution the total energy deposited by any type of particle interaction. They rely on the observation of excess events above background in the region of the expected signal as the primary (or unique) signature for neutrinoless double beta decay.

The ββ candidates that are presently used or proposed for a bolometric experiment are 130Te (Cuoricino and CUORE), 82Se (LUCIFER), 100Mo, and 116Cd, selected according to their Q-value and to the feasibility of a bolometric detector (with an energy resolution of the order of 10 keV at the Q-value) based on one of their compounds. In the energy region where the 0νββ line of these isotopes should appear (between 2.5 and 3 MeV) a number of sources contribute to the background. Besides the usual sources, such as environmental and cosmogenic radioactivity and the neutron and cosmic muon backgrounds (for which the already discussed mitigation solutions are generally adopted), bolometers are particularly sensitive also to a usually minor source of background signals: surface contaminations. While most other kinds of detectors can rely on the use of topological information to reject surface events or—in other cases—are completely insensitive to them thanks to the existence of a surface dead layer protecting the sensitive volume, in bolometers this is not the case. Surface contaminations can therefore be considered a background specific to bolometers, whose effects represent today the worst limitation to the sensitivity.

Most of the information on the nature and effects of the background sources for bolometric detectors comes from the Cuoricino [98] experiment (the CUORE prototype, which collected data at LNGS from January 2003 until June 2008) and from a series of dedicated measurements carried out in the past years at LNGS on smaller arrays of bolometers prepared under different conditions and with different materials [99, 100]. All these measurements confirm a background model according to which the dominant sources in the ROI (around the 130Te transition energy of 2527 keV) are (with different weights) [101] as follows: (i) unshielded 208Tl γ's from the environment and the setup materials, (ii) U and Th surface contaminations of the detector crystals, and (iii) U and Th surface contaminations of the copper used for the detector supporting structure.

Concerning source (i), it is important to recall that the 208Tl 2.6 MeV line is the highest-energy natural γ line due to environmental contamination having a branching ratio >1%. It appears as the dominant contribution in the 130Te ROI (through Compton events). In the case of 82Se, 100Mo, and 116Cd, whose Q-value is >2.8 MeV, pure γ contributions of natural radioactivity come only from the low branching ratio lines of 214Bi.

The background measured above the 208Tl line in Cuoricino is ascribed mainly to degraded α's coming from the U and Th radioactive chains and due to surface contaminations of the bolometric crystals (absorbers) or of the (inert) detector elements directly facing the bolometers (the copper of the assembly structure, the PTFE stands that are used to secure the crystals in the copper structure, etc.). This continuum clearly extends below the 208Tl line, thus contributing to the background counting rate at lower energies (these are the contributions listed above as (ii) and (iii)). Besides degraded α's, surface contaminations also produce β events from the few isotopes belonging to the U and Th chains whose Q-value is greater than the ββ transition energy (e.g., 208Tl and 214Bi) and which can therefore produce a signal in the 0νββ region. This is generally a smaller contribution with respect to degraded α's; it however becomes the only contribution from surface contaminations in the case of scintillating bolometers, where α events are rejected on the basis of their different scintillation yield.

While well designed heavy shields can ensure a strong reduction of the γ background, for the α (and β) background (which comes only from the very inner part of the detector, i.e., the crystals themselves and the materials directly facing them), only a severe control of the bulk and surface contaminations of the detector materials can guarantee the fulfillment of the sensitivity requirements. To this end, correct identification and localization of the sources are mandatory, which requires a powerful diagnostic method able to detect and identify very small surface contaminations. For the same reason for which surface background is their worst enemy, bolometers are the best tools to study alpha surface contaminations, but such measurements are long, difficult, and very expensive. Diagnostic programs including analyses at different levels of sensitivity (with different techniques) are therefore the best choice [99, 100].

From Figure 9, it is evident that surface contaminations are the worst background contribution in bolometers. Two main approaches can be adopted to mitigate their effects: (i) reduction of the surface contaminations; (ii) identification and rejection of the events originating at the detector surfaces.

The former implies the development of effective techniques for the cleaning of all the surfaces facing the bolometric crystals, and the latter the development of bolometers able to identify surface events or to identify the particle type. Very promising results have been obtained—in this framework—with hybrid detectors exploiting the different scintillation yields of α's and β/γ's. Unfortunately, they apply only to bolometers built with scintillating materials. It should finally be pointed out that the two approaches are not mutually exclusive, and their development should run in parallel, together with further checks of the radioactive contamination of all the detector parts and a complete scan of all the possible background sources.

9.2. CUORE

CUORE (Cryogenic Underground Observatory for Rare Events) [101] is a next generation experiment for the search of the 0νββ decay of 130Te, which brings the concept of large mass bolometric detectors to the extreme. Its design is based on the successful and demonstrated technology of the pilot experiment Cuoricino. It consists of an array of 988 (dielectric and diamagnetic) natural TeO2 cubic crystals grouped in 19 separate towers (13 planes of 4 crystals each) arranged in a rather compact cylindrical structure (Figure 10), designed in order to reduce to a minimum the distance among the crystals and the amount of inert material interposed (mainly copper from the mechanical support structure). Each crystal is 5 cm on a side, has a mass of 750 g, and is expected to operate at a temperature of 10 mK. Neutron transmutation doped (NTD) Ge thermistors are used to detect the small temperature rise resulting from single nuclear decay events (see Table 9).

The array, surrounded by a 6 cm thick lead shield (built with low activity lead from a sunken Roman ship), will be operated at about 10 mK in a He3/He4 dilution refrigerator (see Figure 10). A further thickness of 30 cm of low activity lead will shield the array from the dilution unit of the refrigerator and from the environmental activity. A borated polyethylene shield and an air-tight cage will surround the cryostat externally. The experiment will be installed underground at LNGS, in the same experimental hall where Cuoricino was operated. The design and construction of the cryostat that will be used to maintain the detectors at the necessary cryogenic temperatures are a rather unique undertaking. They are based on the comparatively recently developed technology of cryogen-free dilution refrigerators, which use pulse tube (PT) precooling instead of a liquid helium bath; this should allow an improved stability of the base temperature of the detectors as compared to the traditional He3/He4 refrigerator (used for Cuoricino). It will be the first cryostat of its kind big enough to house and cool the large detector mass represented by the CUORE array (~1 tonne) and its copper/lead shields.

From the point of view of the ββ candidate, tellurium offers the advantage of the high natural abundance (33.8%) of the candidate isotope, which means that enrichment is not necessary to achieve a reasonably large active mass. Also, the Q-value of the decay (2527 keV [102, 103]) falls between the peak and the Compton edge of the 2615 keV gamma line of 208Tl; this leaves a relatively clean window in which to look for the signal.
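The width of this clean window can be checked with elementary Compton kinematics, as in the short sketch below.

```python
# Position of the 208Tl Compton edge, to verify that the 130Te Q-value (2527 keV)
# falls in the gap between the Compton edge and the full-energy peak at 2615 keV.
E_GAMMA_KEV = 2615.0
M_E_KEV = 511.0

compton_edge = E_GAMMA_KEV / (1.0 + M_E_KEV / (2.0 * E_GAMMA_KEV))
print(f"Compton edge at {compton_edge:.0f} keV")   # ~2382 keV
# The window 2382-2615 keV indeed contains the 130Te transition energy.
```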

In addition to the increase in scale from Cuoricino to CUORE and in order for CUORE to reach its anticipated sensitivity, improvements are required in two crucial aspects of detector performance: resolution and background.

The resolution is expected to improve from the 6.3 ± 2.5 keV FWHM measured by Cuoricino (the error measuring the spread among the detectors) to about 5 keV FWHM, which is the goal resolution for CUORE. This will be achieved both by the minimization of the vibrational noise in the new cryostat and by progress (already achieved) in the crystal quality control, in the detector mounting structure design, and in the reproducibility of the thermistor-crystal couplings.

Concerning the background, an improvement of a factor ~20 with respect to Cuoricino is necessary to reach the CUORE goal: from 0.18 counts/(keV·kg·yr), as measured by Cuoricino, to 0.01 counts/(keV·kg·yr), which is the conservative target for CUORE. As previously discussed, in Cuoricino an important contribution to the background counting rate in the ROI is ascribed to irreducible contaminations of the set-up that will be overcome in CUORE thanks to the new cryostat + shield system built with selected ultralow radioactivity materials. On the detector side, large efforts have been spent to carefully select low background materials (starting from crystal production) and to clean their surfaces (focusing on the crystals and on the copper, which represents the largest area facing the detector array). Finally, to prevent any recontamination of the surfaces after their cleaning, the CUORE assembly line allows the construction of the array without exposure of the detector parts to air, minimizing their contact (in space and time) with other materials.

Projections of the CUORE sensitivity generally assume a 5 keV FWHM resolution and a 0.01 counts/(keV·kg·yr) background over 5 years of exposure. Tests of the first batches of crystals produced for CUORE [104] and of copper parts which have undergone special surface treatments prove that a background rate of the order of 0.01 counts/(keV·kg·yr) is feasible [105], but an important answer from this point of view will come from CUORE-0.
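With these inputs the background-limited sensitivity scaling recalled earlier can be evaluated numerically; the sketch below uses CUORE-like numbers (detector mass, natural abundance, containment efficiency) that are illustrative assumptions of this estimate rather than official collaboration figures.

```python
import math

N_A = 6.022e23
LN2 = math.log(2)

def bkg_limited_sensitivity(n_nuclei, efficiency, live_time_yr,
                            bkg_per_kev_kg_yr, mass_kg, fwhm_kev, n_sigma=1.64):
    """Half-life sensitivity (yr) when the expected background counts are >> 1."""
    bkg_counts = bkg_per_kev_kg_yr * mass_kg * live_time_yr * fwhm_kev
    return LN2 * n_nuclei * efficiency * live_time_yr / (n_sigma * math.sqrt(bkg_counts))

# CUORE-like illustrative inputs
mass_teo2_kg = 741.0          # 988 crystals x 750 g
molar_mass_teo2 = 159.6       # g/mol
abundance_130te = 0.338       # natural isotopic abundance quoted in the text
containment_eff = 0.87        # assumed signal containment efficiency

n_130te = mass_teo2_kg * 1000.0 / molar_mass_teo2 * abundance_130te * N_A
print(f"{bkg_limited_sensitivity(n_130te, containment_eff, 5.0, 0.01, mass_teo2_kg, 5.0):.1e} yr")
# -> of the order of 1e26 yr, consistent with the scale targeted by CUORE.
```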

CUORE-0 is the first CUORE tower, now installed as a stand-alone experiment in the Cuoricino cryostat and taking data. Besides being a very important step in the CUORE construction, CUORE-0 will be able to produce a meaningful improvement on the 130Te results of Cuoricino. While the CUORE-0 background rate in the ROI will most probably be dominated by the cryostat contaminations (it will therefore not be able to provide a direct check of the CUORE background, since the cryostat will be different), the information about the degraded α contribution will be extracted from the counting rate recorded in the 3-4 MeV region, with the same technique discussed in [105]. The total TeO2 mass is 39 kg; the expected background in the 0νββ region is higher than 0.06 counts/(keV·kg·yr), this being the irreducible contribution evaluated for the cryostat contamination (the actual background rate of CUORE-0 will depend mainly on the success of the surface background control). Preliminary CUORE-0 data [106] (see Figure 11) prove the achievement of a relevant reduction of the 3-4 MeV counting rate with respect to Cuoricino (by a factor ~6), while—as expected—the counting rate in the 0νββ region is only a factor ~2 better than in Cuoricino. The energy resolution is 5.6 keV FWHM and the background counting rate is (0.074 ± 0.012) counts/(keV·kg·yr) [106], from which the corresponding 5-year sensitivity can be derived. Most probably the CUORE-0 exposure will be of about 2 years, since the experiment will stop as soon as CUORE starts taking data.

9.3. R&D Programs and LUCIFER

A very promising development of low-temperature calorimeters consists in the simultaneous detection of light and heat, that is, in the construction of hybrid scintillating bolometers. Pioneered by the Milano group with CaF2 [107] in the 1990s, this approach [97, 108] represents the basic idea behind the LUCIFER [109], LUMINEU [95], and AMoRE [110] projects. The detector in this case is made of a scintillating crystal containing the ββ candidate. The read-out of the scintillation light escaping the crystal is done with an unconventional technique, since both photomultipliers and photodiodes (commonly used for this purpose) are unsuited to use in vacuum and at very low temperature. The light is detected by a second bolometer, a Si or Ge undoped wafer provided with a temperature sensor. Thanks to the small volume of the wafer, the heat capacity of this bolometer is so low that even optical photons give rise to a sizable temperature increase (see Table 10).

The simultaneous detection of the heat and scintillation components of an event allows one to identify and reject α particles with very high efficiency (close to 100%). The concept is very simple: the ratio between the light and phonon yields is different for α and for β/γ interactions. In addition, it has been shown that discrimination by pulse shape analysis is also possible in some crystals, both in the heat and in the light channels [111]. The rejection capability is particularly appealing when applied to candidates with a large Q-value. In fact, above 2.6 MeV the natural γ contributions from environmental and material radioactivity tend to vanish and α's are the only really disturbing background source. R&D measurements carried out in the past decade have identified a full list of candidates (e.g., 48Ca, 100Mo, 116Cd, and 82Se) for which scintillating compounds exist, such as PbMoO4, CdWO4, CaMoO4, SrMoO4, ZnMoO4, CaF2, and ZnSe [111–115]. In particular, 82Se and 100Mo look like the most promising ones. Scintillating bolometers based on their compounds have been operated successfully, and the complete elimination of α events is expected to lead to specific background levels of the order of 10^−4 counts/(keV·kg·yr) [95]. Therefore, they have been selected as the basic ingredients of the above mentioned projects.
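A toy version of the light/heat discrimination is sketched below: events are classified through their light-to-heat ratio, with yields and a threshold that are invented for the illustration and would in practice be measured on calibration data.

```python
# Toy alpha/beta discrimination for a scintillating bolometer.
# Light yields and the discrimination threshold are illustrative assumptions;
# real values depend on the crystal and are measured with calibration sources.

LIGHT_YIELD_BETA = 5.0    # keV of detected light per MeV of heat (assumed)
LIGHT_YIELD_ALPHA = 1.0   # alphas scintillate less per unit deposited energy (assumed)
THRESHOLD = 3.0           # cut on the light-to-heat ratio [keV/MeV]

def classify(heat_mev: float, light_kev: float) -> str:
    """Label an event as beta/gamma-like or alpha-like from its light yield."""
    ratio = light_kev / heat_mev
    return "beta/gamma" if ratio > THRESHOLD else "alpha"

# A 2.6 MeV beta/gamma event and a surface alpha of equal heat energy
print(classify(2.6, 2.6 * LIGHT_YIELD_BETA))   # -> beta/gamma
print(classify(2.6, 2.6 * LIGHT_YIELD_ALPHA))  # -> alpha
```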

The choice of LUCIFER has fallen on ZnSe, because of the favorable mass fraction of the candidate, the availability of large radiopure crystals, and the well-established enrichment/purification technology for Se.

LUCIFER [116, 117] will consist of an array of ZnSe crystals similar to the Cuoricino one and is designed to fit exactly the experimental volume of the Cuoricino cryostat (since the baseline for the LUCIFER program is to use this cryostat). The array will be realized with ZnSe crystals grown from enriched material. About 15 kg of metallic Se (enriched to 95% in 82Se) will be purchased and used to grow the ZnSe crystals. The chemical process used to produce the ZnSe compound from the enriched material and the following crystal growth procedure imply—as usual—a material loss that in the case of ZnSe is quite relevant. The goal of the LUCIFER collaboration is to achieve a production yield of about 65% (still to be demonstrated); this will result in about 17 kg of ZnSe crystals, corresponding to 9.3 kg of 82Se. Assuming an energy resolution of 13 keV FWHM [116, 117] and a background rate of 10^−3 counts/(keV·kg·yr), the experiment will work in a nearly zero background condition over the 5 years of data taking considered for the sensitivity estimate.

The compounds ZnMoO4 and CaMoO4 are equally promising and have been selected for the LUMINEU and AMoRE experiments. For other very interesting isotopes, like the 130Te employed in CUORE, scintillating materials have not yet been identified. However, also in this case the α rejection could be achieved by exploiting a similar approach based on the much weaker Cerenkov signal [118, 119]. Indeed, the two electrons emitted in the ββ decay are above the Cerenkov threshold and can produce a flash of light with a total energy of approximately 140 eV. This is not the case, however, for α particles, which are far below threshold. The detection of the Cerenkov light would dramatically improve the sensitivity of CUORE, providing the possibility to reduce the present specific background (10^−2 counts/(keV·kg·yr)) by an order of magnitude. However, the detection of the Cerenkov light in bolometers, with the proper sensitivity to discriminate 0νββ events from natural radioactivity, still requires an intense R&D program aiming at exceptionally sensitive light detectors.
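The threshold argument is easy to verify: with the refractive index of TeO2 taken here as an assumption of roughly 2.4, the Cerenkov threshold for electrons sits at a few tens of keV, while for α particles it lies at hundreds of MeV, far above any natural α energy.

```python
import math

# Cerenkov kinetic-energy threshold: a particle radiates only if beta > 1/n.
# The refractive index value is an assumed, indicative figure for TeO2.
N_TEO2 = 2.4
M_ELECTRON_MEV = 0.511
M_ALPHA_MEV = 3727.0

def cherenkov_threshold_mev(mass_mev: float, n: float = N_TEO2) -> float:
    beta_min = 1.0 / n
    gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
    return mass_mev * (gamma_min - 1.0)

print(f"electron threshold: {1e3 * cherenkov_threshold_mev(M_ELECTRON_MEV):.0f} keV")
print(f"alpha threshold:    {cherenkov_threshold_mev(M_ALPHA_MEV):.0f} MeV")
# Electrons from the decay (MeV scale) radiate; natural alphas (4-8 MeV) cannot.
```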

10. High Purity Germanium Detectors Enriched in 76Ge

The use of germanium diodes to search for ββ decay dates back to 1967 [67], when it was realized that the decay of 76Ge could be investigated with a calorimetric approach, using what was at the time—and is still today—the best detector for gamma spectroscopy in the MeV range.

Today, standard HPGe diodes reach energy resolutions of the order of 0.2% FWHM at 2 MeV and masses as high as a few kg. To be efficiently used in 0νββ searches, the germanium crystals have to be grown starting from isotopically enriched material, since the natural isotopic abundance is low (Table 3). This has been done by the HDM [71] and IGEX [72] collaborations, which carried out the reference experiments in the field. They used, respectively, 11 kg and 8 kg of isotopically enriched (86%) germanium and were located in deep underground laboratories (LNGS and LSC, resp.). In both experiments, the set-up consisted of HPGe diodes operated in a low contamination copper cryostat, surrounded by thick lead and/or copper shields. A pulse shape analysis (PSA) technique was used to reject multisite events (typical of non-ββ interactions), although in both experiments this was possible only on a subset of the total exposure. The two experiments concluded their operation with two of the most sensitive results ever reached: 90% C.L. lower limits on the 76Ge 0νββ half-life of 1.9 × 10^25 yr [71] (HDM, exposure = 35.5 kg × yr) and 1.57 × 10^25 yr [72] (IGEX, exposure = 8.9 kg × yr).

Today two large scale projects benefit from the heritage of HDM and IGEX for their ambitious programs: GERDA, a mainly European collaboration, and MAJORANA, a mainly US one. Both experiments have phased programs with time schedules dictated by funding, isotope production, and a continuous update of the project on the basis of the knowledge acquired along the path. The ultimate goal is to merge the two experiments into a single one-tonne, zero background project.

10.1. Specific Backgrounds in Germanium Experiments

The transition energy of 76Ge is considerably lower (2039 keV) than that of most of the isotopes discussed so far. This implies that—in spite of their high resolution—experiments using Ge diodes fight against an unusually large number of dangerous background sources. Both 238U and 232Th can contribute to the ROI through their major γ emissions, while the short-range α and β particles emitted by the same chains can mimic a 0νββ event only in the case of contaminations sufficiently close to the detectors. Furthermore, sizable background contributions can be due to a number of long-lived cosmogenically produced isotopes (e.g., 68Ge with a half-life of 271 d, 60Co with 5.3 yr, and 56Co with 77 d), characteristic of copper and germanium activation, as well as to a number of anthropogenic radioisotopes (i.e., artificially produced radioisotopes such as 207Bi, with a half-life of about 32 yr). Thanks to the high energy resolution, the 2νββ decay yields a completely negligible background.

GERDA and MAJORANA aim at a background reduction of more than one order of magnitude with respect to HDM and IGEX. While both experiments are based on the same technology, the way they plan to achieve their background goal is influenced by the different conclusions of the respective precursors concerning the most relevant background sources.

In HDM the main background sources were identified in the radioactive natural/cosmogenic contaminations of the experimental apparatus (in the lead and copper shields and in the copper of the cryostat), with a negligible contribution coming from the Ge diodes themselves (this contribution was excluded on the basis of the absence of the 238U and 232Th α peaks). This motivated the unconventional design of the GERDA project, aimed at surrounding the detectors only with an ultrapure material acting as a passive or (better) active shield.

In IGEX, on the contrary, the background counting rate was ascribed to radioisotopes produced by cosmic ray neutron spallation reactions, which occurred in the detector and cryostat components while they were above ground. The major contributions were identified in 68Ge, 56Co, and 60Co. This has influenced the choices of the MAJORANA collaboration, which has focused its attention on the control and reduction of cosmogenically generated isotopes through a material preparation completely carried out underground.

As a concluding remark, it is worth underlining in this section how impressive the background levels already obtained by the past generation Ge experiments are, in spite of the low 76Ge transition energy. The extremely low background counting rates characterizing these experiments have been obtained through a careful choice of the setup materials. Indeed, what is today the standard procedure in the field was pioneered precisely by the germanium experiments.

At the present stage of the realization of the next generation experiments, a new ingredient has to be added to maintain the competitiveness of this technology: an active background reduction based on a new detector design. This represents the new frontier and is presently the focus of large experimental efforts.

10.2. Pulse Shape Discrimination in HPGe Diodes

Most of the background sources listed in the previous section produce events in the ROI through multiple Compton scattering of higher energy γ's. This is the only possible contribution coming from radioactive contaminations far from the detectors, while for contaminations in close proximity of the diodes (or in the HPGe itself) also α's and β's can produce relevant energy depositions. The HPGe detectors used both by GERDA and MAJORANA are of p-type, with a large and thick n+ electrode which effectively shields the sensitive volume from impinging α's or β's, and a thin p+ electrode that is the only entrance window for these particles, after an almost negligible energy degradation. In the case of γ energy depositions in the sensitive volume, the topology of the event is characterized by multiple interaction sites inside the crystal (MSE), extending over several centimeters. Single-site events (SSE) extend over volumes of a few cubic millimeters and originate from single Compton scattering, from photoelectric absorption, or from multiple interactions very close to each other. The latter category includes electron induced interactions and double-escape events. Double beta events are SSE.

As discussed below, in germanium diodes SSE and MSE have a different pulse shape which allows one to implement background rejection techniques that can be highly efficient. As an example, the HDM experiment measured a background counting rate of about 0.19  counts/(keV·kg·yr) in the region from 2000 to 2080 keV and—using a PSA technique based on neural network computations—managed to reduce it by a factor of 3 down to 0.06 counts/(keV·kg·yr).

The reason for the different pulse shapes is the lack of uniformity of the electric field over the detector sensitive volume. Indeed, the time structure of the charge signal changes according to the topology of the initial energy deposition: the current pulse is higher when the charges drift through a volume with a large weighting potential gradient [120]. This implies that the number of sites where primary ionization occurs and the differences in the charge trajectories and drift times induce a shaping of the signal that can be used to distinguish single-site events (SSE) from multiple-site events (MSE).

The rejection capability can be optimized with a proper design of the detector. In p-type point contact (PPC) Ge detectors the signal electrode is very small compared to that of a standard coaxial HPGe detector; this results in a completely different field distribution capable of enhancing the differences between SSE and MSE pulses. Examples of this technology are the commercially available Broad Energy Germanium detectors (BEGe) produced by Canberra and used in GERDA. Practically the same design is used in the MAJORANA demonstrator (MJD). These are p-type HPGe diodes with a point-like p+ electrode for the charge collection and a Li-drifted n+ contact (0.5 mm thickness) covering the whole outer surface, including most of the bottom part. Due to their peculiar electric field configuration and to the limited size of the collection implant, they exhibit a superior pulse shape discrimination performance: SSE and MSE can be easily distinguished simply on the basis of the ratio A/E, with A being the pulse amplitude (measured as the maximum of the current pulse) and E being the energy [121].
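A minimal numerical illustration of such an A/E classifier is given below; the waveforms and acceptance band are invented for the example and are not GERDA or MAJORANA analysis parameters.

```python
# Toy A/E pulse shape discrimination for point contact Ge detectors.
# A = maximum of the current pulse, E = event energy; multisite events share
# their energy among several charge clouds, so each current peak is lower
# and A/E falls below the single-site band. All numbers are illustrative.

def a_over_e(current_pulse, energy_kev):
    return max(current_pulse) / energy_kev

def is_single_site(current_pulse, energy_kev, low=0.9e-3, high=1.1e-3):
    """Accept events whose A/E lies inside an assumed single-site band."""
    return low < a_over_e(current_pulse, energy_kev) < high

# Two 2000 keV events: one single energy deposit, one split over two sites.
single_site_pulse = [0.0, 0.5, 2.0, 0.4, 0.0]       # one sharp current peak
multi_site_pulse = [0.0, 1.0, 0.2, 1.0, 0.2, 0.0]   # two smaller peaks

print(is_single_site(single_site_pulse, 2000.0))   # True  -> kept as signal-like
print(is_single_site(multi_site_pulse, 2000.0))    # False -> rejected as MSE
```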

On the contrary, in coaxial HPGe detectors (namely, the kind of detectors employed in the past generation experiments, like HDM and IGEX) the difference in shape is less pronounced and more varied, requiring sophisticated algorithms (such as neural networks) for the event classification.

Finally, alternative detector technologies aiming at very efficient background rejection capabilities have also been proposed (e.g., the Canberra SEGA design [122]). Based on segmented diodes, they are able to achieve a remarkable event discrimination but have so far been superseded by the more practical PPC design.

10.3. The GERDA Experiment

Evolved from the HDM experiment, GERDA [120] implements the concept of Ge diodes immersed in a liquid argon (LAr) bath [123] for a radical background suppression. The experiment, installed at LNGS, looks today as shown in Figure 12. A stainless-steel cryostat filled with liquid argon (~100 tonnes) is surrounded by a water Cherenkov detector. HPGe detectors isotopically enriched to ~86% are mounted in strings (each of about 3–5 detectors) which are suspended from the top in the center of the cryostat. The water tank shields the inner part of the set-up from the radiation due to the rock radioactivity and serves as a muon veto (being completed—at the top of the cryostat—by plastic scintillator panels, realizing a complementary muon coverage where the water Cerenkov detector is thinner). The cryostat has an internal lining of ultrapure copper, used primarily to reduce the radiation from the steel vessel itself (as a rule of thumb, copper is less radioactive than most materials, including steel, which is however preferred for its mechanical qualities and cost). The LAr serves both as a passive shield and as a refrigerant for the HPGe diodes. The motivations for this shielding configuration are various. With respect to a conventional set-up, the naked diodes are far from any cladding material (with its radioactive contaminations) and a liquid can easily be purified to extremely low levels of contaminants (the main worries in the case of LAr are radon and 42Ar, discussed below). Moreover, in LAr the neutron production from muon interactions is much lower—thanks to its low Z—than in the traditionally used high Z shielding materials (such as copper and lead). Finally, LAr offers the future possibility of reading out the Ar scintillation light for additional background rejection (see Table 11).

The preoperation phase of GERDA highlighted two weak points in the project: a different behavior of the HPGe diodes in LAr with respect to liquid nitrogen (the refrigerant considered in the early phase of the project, and the one in which the naked HPGe diodes had been tested) and an unexpectedly high contribution from 42Ar. The former problem consisted in an excess leakage current appearing upon irradiation of the detectors; it was solved by changing the passivation layer on the HPGe surface. The latter was ascribed to an anomalous concentration (20 times higher than expected) of 42K close to the detectors. 42K is the progeny of 42Ar, a known radioactive contaminant of argon. It decays with a Q-value of 3525 keV and a half-life of 12.4 h, with its most intense gamma line at 1524.7 keV (B.R. = 18.1%). When close to the detectors, the emitted beta and gamma particles can produce events in the ROI. The reason for this surprise was eventually clarified: 42K is produced, after 42Ar decay, as a positively charged ion which migrates toward the diodes, attracted by their externally extended weak electric field. The problem was solved by installing a thin (60 μm) copper electrostatic shield (called the minishroud) surrounding the detector array at a very close distance (a few mm) and permeable to LAr. A further thin copper shield protects the detectors from radon emanation (the radon shroud).

GERDA is designed to proceed in two phases.
(i) GERDA-I (presently taking data) aims to verify the KDHK claim [81] using the coaxial enriched HPGe detectors inherited from the HDM and IGEX experiments (~18 kg of ~86% enriched Ge) and a few new detectors (enriched BEGe diodes, deployed only in June 2012, with a total mass of ~3.6 kg of ~88% enriched Ge).
(ii) GERDA-II will see the deployment of additional detector strings to reach (21 + 18) kg of germanium isotopically enriched in 76Ge to 86% (for the old coaxial HPGe's) and 88% (for the BEGe's), aiming at a 5-year sensitivity of the order of 10^26 yr.

Depending on the actual physics results of the two experimental phases, a third phase (GERDA-III) using 500 to 1000 kg of enriched germanium detectors is foreseen, merging GERDA with MAJORANA.

The first result released by the collaboration was the measurement of the 2νββ half-life of 76Ge, confirming previous measurements [124], together with a detailed background study [125]. While this review was being written, the unblinding of the phase I data was presented, with a paper dedicated to the background model [125] (including the identification of the major background sources) and the paper reporting the 0νββ result. The phase I results are summarized here.

The average energy resolution at Qββ is 4.8 keV and 3.2 keV for the coaxial and the BEGe detectors, respectively. The total exposure and the counting rates are as follows:
(i) 17.9 kg·yr (golden data) plus 1.3 kg·yr (silver data) collected with 6 of the 8 coaxial HPGe diodes. Two coaxial diodes had to be switched off because of excess leakage current (one of them after having collected a fraction of the data). The silver data correspond to the deployment of the BEGe detectors, when for a short period a slightly higher than usual counting rate was observed. The corresponding rate is  counts/(keV·kg·yr) (with PSA cuts);
(ii) 2.4 kg·yr collected with 4 of the 5 BEGe diodes. One BEGe is not used in the analysis because of instabilities. The rate is  counts/(keV·kg·yr) (with PSA cuts).

No excess of signal counts over the background is observed in the ROI (Qββ ± 5 keV), where 6 events are observed for the coaxial HPGe diodes and 2 for the BEGe's, reduced to 2 and 1, respectively, after the application of PSA cuts (see right panel of Figure 12). This translates into a 90% C.L. lower limit of 2.1 × 10^25 yr, obtained with a frequentist approach, to be compared with a median sensitivity of 2.4 × 10^25 yr at 90% C.L. (similar results are obtained with a Bayesian analysis). The compatibility of the result with the KDHK claim is studied by comparing the probabilities of two models describing the collected data: H0 is the background-only model and H1 is the model with background plus the signal found by KDHK in [79, 80]. The Bayes factor P(H1)/P(H0) is found to be 0.024. Assuming model H1 to be true, the probability of observing 0 events in GERDA is very small [126]. Extending the GERDA profile likelihood to include the HDM and IGEX spectra (i.e., using the summed exposure of the three experiments), the Bayes factor is further reduced to 2 × 10^-4; that is, model H1 is strongly disfavored. It is worth noting that the GERDA collaboration decided to take into account only the 2004 KDHK publication [79, 80], where a 4.2σ evidence was reported with a half-life of 1.19 × 10^25 yr. Indeed, later papers, again based on reanalyses of the same data, are characterized by an improved statistical significance. For example, the latest reported result [82] amounts to a 6σ evidence with a half-life of 2.2 × 10^25 yr.
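The flavor of such a model comparison can be illustrated with a toy Poisson counting calculation. The sketch below is not the GERDA statistical analysis (which uses a profile likelihood over the full spectrum); it simply contrasts a background-only model with a background-plus-claimed-signal model for a single counting window, and all the numerical inputs are placeholders.

```python
from math import exp, factorial

def poisson_prob(n_obs, mu):
    """Poisson probability of observing n_obs counts given expectation mu."""
    return mu**n_obs * exp(-mu) / factorial(n_obs)

# Placeholder inputs (illustrative only, not the published numbers):
n_obs  = 3      # counts observed in the region of interest
mu_bkg = 3.0    # expected background counts
mu_sig = 6.0    # signal counts predicted by the claim under test

# H0: background only; H1: background plus claimed signal.
# For two fully specified (parameter-free) hypotheses the Bayes factor
# reduces to the likelihood ratio computed here.
p_h0 = poisson_prob(n_obs, mu_bkg)
p_h1 = poisson_prob(n_obs, mu_bkg + mu_sig)
print(f"Bayes factor P(data|H1)/P(data|H0) = {p_h1 / p_h0:.3f}")  # <1 disfavors H1
```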

The major background sources contributing in the ROI are identified as 42K and 222Rn in the LAr, 214Bi and 228Th in the detector assembly, and a contribution from α particles on the p+ electrode surface (i.e., the only portion of the diode surface where the dead layer is thin enough for α's to enter the active volume without being too degraded in energy).

GERDA will conclude phase I (the target exposure has already been reached) as soon as it is ready to start the upgrades required for phase II. 25 new BEGe detectors have been prepared by Canberra, totaling—with the five already installed—30 BEGe's (20.8 kg of Ge), which, once added to the old coaxial HPGe's, will reach the phase II goal of about 21 + 18 kg of enriched Ge detectors.

The background goal of this latter phase is 10^-3 counts/(keV·kg·yr), that is, more than one order of magnitude lower than the BEGe counting rate recorded in phase I. With such a low counting rate, achieved on both coaxial and BEGe detectors, the experiment would reach a nearly zero background condition, corresponding to a 5-year sensitivity of the order of 10^26 yr. This is probably a very optimistic case, since at least for the coaxial detectors the achievement of such a low background looks very difficult (for example, background rejection through PSA is more than 2 times better in BEGe than in coaxial diodes). A more conservative hypothesis is to assume, for all the detectors, the best counting rate recorded in phase I; in that case the 5-year sensitivity is correspondingly reduced.
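The dependence of the sensitivity on the background index can be made concrete with the commonly used scaling laws (not the exact expressions (10)–(12) of this review, which are not reproduced here). The following sketch, with placeholder inputs, contrasts the zero-background regime (sensitivity linear in exposure) with the background-limited regime (sensitivity growing as the square root of exposure).

```python
import math

N_A = 6.022e23  # Avogadro's number [1/mol]

def t_half_sensitivity(mass_kg, years, molar_mass_g, isot_abund, efficiency,
                       bkg_index=None, delta_e_kev=None, n_sigma=1.64):
    """Rough 0nu-beta-beta half-life sensitivity (yr) in two limiting regimes.

    The limit is ln2 * N_nuclei * efficiency * t divided by the number of
    signal counts that can be excluded: ~2.3 at 90% C.L. with zero background,
    or n_sigma * sqrt(expected background counts) otherwise.
    All numbers below are illustrative, not the published projections.
    """
    n_nuclei = mass_kg * 1e3 / molar_mass_g * isot_abund * N_A
    signal_norm = math.log(2) * n_nuclei * years * efficiency
    if bkg_index is None:                       # zero-background limit
        return signal_norm / 2.3
    n_bkg = bkg_index * delta_e_kev * mass_kg * years
    return signal_norm / (n_sigma * math.sqrt(n_bkg))

# Hypothetical example: 35 kg of 86%-enriched Ge, 5 yr live time, 75% efficiency
zb  = t_half_sensitivity(35, 5, 76.0, 0.86, 0.75)
lim = t_half_sensitivity(35, 5, 76.0, 0.86, 0.75, bkg_index=1e-2, delta_e_kev=4.0)
print(f"zero-background ~ {zb:.1e} yr, background-limited ~ {lim:.1e} yr")
```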

The upgrades foreseen for phase II include various modifications of the apparatus to host an increased number of detectors, with improvements in both radiopurity and electronics. The effort to get rid of the 42K background will focus on the detector performance: with a lower noise, the events induced by 42K can be rejected using PSA. The instrumentation of the LAr (i.e., the read-out of the Ar scintillation light) is, on the other hand, the way to mitigate the 214Bi background.

10.4. The MAJORANA Experiment

(See Table 12) MAJORANA is an evolution of the IGEX experiment. The basic ideas behind the project are summarized in the 2003 White Paper [127]:
(i) realize a large-mass Ge experiment (the final goal is a sensitivity of the order of 10^27 yr) based on a well-known technology and design, that is, using an array of hundreds of HPGe detectors operated in a conventional configuration;
(ii) focus the main effort on two goals: the improvement of the HPGe technology (aiming at the use of segmented HPGe with highly improved pulse shape capabilities) and the selection and/or custom production of high-radiopurity materials.

The proposed configuration [128] is based on an evolution of the traditional HPGe set-up: close-packed arrays of HPGe diodes (57 crystals each) are mounted inside ultraclean electroformed conventional cryostats, thus minimizing the amount of structural material between the diodes (see left panel of Figure 13). A number of these 57-crystal arrays are installed in a low-background passive shield provided with an active muon veto. The entire apparatus is installed in a deep underground laboratory. The ultimate goal of the project is the realization of a tonne-scale experiment with a counting rate lower than 1 count/(tonne·yr) in the ROI, that is, a nearly zero background condition. Besides the extremely difficult challenge of achieving such a background rate, both the time and the cost of this project are very high, in particular as far as germanium enrichment is concerned. The present program of the MAJORANA collaboration is to realize a small-scale prototype to demonstrate the viability of the technique (the MAJORANA demonstrator [128, 129]) and to define a one-tonne-scale project in collaboration with GERDA, aiming at a sharing of costs and knowledge and therefore at the opportunity to benefit from the experience and skills acquired during the initial stages of both experiments.

The MAJORANA demonstrator (MJD) will use about 40 kg of germanium diodes (~30 kg of enriched 76Ge). The detector performance is comparable to GERDA's (the baseline for the MJD is the same PTPCGe diodes used by GERDA) and the target background rate is about 10^-3 counts/(keV·kg·yr) in the 4 keV ROI (with PSA) [129], nearly identical to that of GERDA-II. Screening and selection of commercially available materials may not allow one to fulfill the background requirements; therefore, special techniques have been developed not only for the custom production of the MJD enriched detectors (which is quite common in this field) but also for the custom production of the inner shielding material (which today is a standard procedure only for experiments using liquid detectors or shields, not for solids). The cryostat enclosing the HPGe array and the inner shielding layer of the MJD are made of copper. The MJD radiopurity requirements for this material are extreme: 238U and 232Th contaminations below 1 μBq/kg (a contamination level that—by itself—is very hard to measure) and a negligible cosmic-ray activation. The solution was identified in the underground electroforming of copper. The collaboration has realized a facility at 1500 m depth (the Sanford Underground Science and Engineering Laboratory (SUSEL), South Dakota, USA) where copper electroforming is carried out in underground clean rooms, thus purifying the copper from 238U and 232Th as well as from cosmogenically generated radionuclides (60Co is an example), which will not be regenerated thanks to the reduced cosmic-ray flux.

The same facility will host the MJD operation. This will consist (Figure 13) of two electroformed cryostats; the first will be ready in 2013 and will contain both natural and enriched HPGe diodes, surrounded by an onion-like shield made of 5 cm of electroformed copper, 5 cm of oxygen-free high-conductivity (OFHC) copper (the procedure used for the production of this special kind of copper ensures very high radiopurity), 45 cm of lead, and 30 cm of polyethylene with an embedded plastic scintillator used as a cosmic-ray veto. The completion of this phase is expected in 2014. The one-year and five-year sensitivities of the MJD can be estimated according to (10).

11. Loaded Organic Liquid Scintillators

In the last decade a new class of experiments has entered the international scenario. These are based on the conversion to DBD searches of huge liquid-scintillator or water Cherenkov detectors that were first designed and employed for neutrino oscillation measurements. Indeed, the need for a low background counting rate (low intrinsic radioactivity, shielding, and underground location), a high detection efficiency, and an optimized energy resolution is common to the two research fields. Once their campaigns of measurements with solar/reactor neutrinos are completed, these detectors can be dedicated—with minor modifications and therefore at limited expense—to DBD searches. This is what happened with KamLAND-Zen and what is in progress with SNO+, although the original idea dates back to 2001 with the proposal of dissolving Xe in Borexino [130] or of placing an array of CdWO4 crystals inside its core (the CAMEO proposal [131]).

These experiments are characterized by the capability of reconstructing the interaction vertex, which allows one to define a fiducial volume where events have to be located in order to be accepted. This reduces the number of background sources that can mimic a ββ decay. On the other hand, the poor energy resolution achievable in liquid scintillators implies, first, that the 2νββ decay is an irreducible background (i.e., the choice of the candidate has to take into account its 2νββ rate) and, second, that the result can be extracted only after a careful background reconstruction (similarly to what happens in the case of most experiments based on tracking detectors).
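The size of this irreducible 2νββ background can be estimated with a well-known rule of thumb: near the endpoint the summed-electron 2νββ spectrum falls roughly as (Q − E)^5, so the fraction of 2νββ events reconstructed within a window Δ below Q grows roughly as (Δ/Q)^6. The sketch below implements only this crude order-of-magnitude scaling (all numbers are illustrative), not a full spectral simulation with detector response.

```python
def endpoint_fraction(delta_kev, q_kev):
    """Rough fraction of 2nu-beta-beta events with summed energy within
    delta_kev of the Q-value, assuming dN/dE ~ (Q-E)^5 near the endpoint
    and normalizing that power-law form over the full spectrum.
    Order-of-magnitude estimate only; the true spectral shape differs."""
    return (delta_kev / q_kev) ** 6

# Illustrative comparison: a ~240 keV FWHM scintillator vs a ~4 keV FWHM
# germanium diode, both evaluated for a window of half the FWHM below Q.
q = 2458.0  # keV, 136Xe transition energy (2039 keV would apply to 76Ge)
for fwhm in (240.0, 4.0):
    frac = endpoint_fraction(fwhm / 2.0, q)
    print(f"FWHM = {fwhm:6.1f} keV -> 2nu fraction near Q ~ {frac:.1e}")
```

The steep (Δ/Q)^6 dependence is the quantitative reason why energy resolution, more than the absolute 2νββ rate, decides whether this background matters.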

11.1. KamLAND-ZEN

The KamLAND-Zen [132] experiment is based on a modification of the existing KamLAND detector carried out in the summer of 2011: a miniballoon filled with a Xe-loaded liquid scintillator has been added at the very core of the apparatus to search for the ββ decay of 136Xe (for a discussion of 136Xe as a source, see Section 7). KamLAND is located at the site of the former Kamiokande experiment, at a depth of 2700 m.w.e., and has been used since 2002 for neutrino oscillation measurements (see Table 13).

The detector today looks as in Figure 14 (left panel). It comprises the following:
(i) the inner balloon (IB), made of a 25 μm thick transparent nylon film and suspended at the center of the detector; it contains the source in the form of 13 tons of Xe-loaded liquid scintillator (Xe-LS);
(ii) the outer balloon (OB), a 135 μm thick nylon/EVOH film filled with 1 ktonne of liquid scintillator (LS); this is the detector used for the neutrino oscillation measurements in KamLAND, while in KamLAND-Zen it is used as an active shield against external gammas;
(iii) the stainless steel tank (SST), the containment vessel for the two balloons. The gap between the SST and the OB is filled with a buffer of mineral oil that passively shields the LS from external radiation. The inner surface of the SST is covered by an array of 1879 photomultiplier tubes (PMTs) that read out the scintillation signal produced either in the IB (ββ decay candidate events) or in the OB (background events);
(iv) a 3.2 ktonne water Cherenkov detector—read out by 225 PMTs—that surrounds the whole structure. This outer detector (OD) absorbs gamma rays and neutrons from the surrounding rock and provides a tag for cosmic-ray muons.

The LS is a mixture of 80% dodecane and 20% pseudocumene plus PPO. The Xe-LS has a similar composition, to which a (2.52 ± 0.07)% in weight of enriched xenon gas (~300 kg) is added, with isotopic abundances of (90.93 ± 0.05)% for 136Xe and (8.89 ± 0.01)% for 134Xe.

A ββ decay is observed through the detection of the scintillation light from the two coincident electrons emitted in the transition. The two particles cannot be separately identified, and only their summed energy can be measured, which for the 0νββ decay corresponds to the transition energy of 2.458 MeV. Various background sources can hide this signal because of the poor energy resolution of the detector: the resolution estimated with multi-gamma calibrations corresponds to a FWHM of ~240 keV at the Qββ energy.

Data acquisition and analysis aim at the reconstruction of the background spectrum over a wide region (from ~0.5 to 5 MeV), besides using multiple cuts to select the candidate events. These are as follows:
(i) a fiducial volume (FV) cut to select only events originating inside the IB (the source);
(ii) a cut that removes both muons and muon-induced events (i.e., events occurring within 2 ms after a muon);
(iii) a delayed coincidence cut, applied to remove events from the 214Bi-214Po cascade;
(iv) a delayed coincidence cut that removes antineutrino-induced events (mainly from reactors);
(v) a cut based on the time-charge distribution of the reconstructed vertex recorded by the photomultiplier array, which removes poorly reconstructed events.

The FV cut is designed to mitigate the background coming from the radioactivity of the miniballoon. Indeed, the study of the vertex distributions of the 2νββ and 0νββ candidate events shows an increase near the IB boundary, ascribed to 134Cs in the case of the 2νββ region and to 214Bi in the case of the 0νββ region. The FV is therefore smaller than the IB volume, thus reducing the active mass of 136Xe. The presence of 134Cs and 137Cs and the ratio of their activities are compatible with a contamination of the IB balloon related to the Fukushima accident. Other fallout isotopes might therefore be present (although not directly observed).

Background events surviving the cuts are ascribed to three categories: external to the Xe-LS (mainly from the IB material), internal to the Xe-LS, and induced by spallation. A careful study is performed to identify and disentangle the various background sources. The 2νββ and 0νββ half-lives are estimated as the result of a best-fit spectral decomposition; MC simulations are used to represent the spectral shapes of the different sources, whose weights in the fit are in some cases constrained by independent measurements of the source intensity. The result is shown in Figure 14 (right panel): the spectrum shows a peak structure centered slightly above the 0νββ region. To account for this peak, all the isotopes in the ENSDF database [133] have been analyzed and a few candidates (with the correct spectral shape and a lifetime longer than 30 days) have been identified. These are 110mAg (Q = 3.01 MeV, τ = 360 days), 88Y, 60Co, and 208Bi, which can be either Fukushima fallout products or (except 208Bi) the result of cosmogenic activation of Xe. These isotopes are therefore included in the likelihood function with unconstrained weights. The peak structure is found to be compatible with a dominant 110mAg contamination. The results reported in the more recent paper [134] refer to two data sets collected before and after an attempt of Xe-LS purification. The second data set has a smaller FV (125 kg instead of 179 kg of 136Xe) due to additional fiducial volume cuts made around the siphoning hardware left in place after the filtration. Unfortunately, the filtration did not have the desired effect: in the 0νββ window (the interval 2.2–3.0 MeV) the background counting rate due to 110mAg is 0.19 ± 0.02 counts/(tonne·day) in the first data set and 0.14 ± 0.03 counts/(tonne·day) in the second.

The 0νββ and 2νββ results reported so far are as follows:
(i) a 0νββ half-life limit of 1.9 × 10^25 yr at 90% C.L. with an exposure of 89.5 kg·yr of 136Xe (about 210 days) [134];
(ii) a 2νββ half-life of 2.38 × 10^21 yr for an exposure of 30.8 kg·yr of 136Xe (77.6 days) [132], compatible with the EXO result [84, 86].

The first phase of the experiment was terminated in order to start a purification campaign aimed at removing the 110mAg contamination. This is done by extracting the Xe from the LS and distilling the LS to purify it, while the possibility of replacing the miniballoon is also being considered.

11.2. SNO+

The SNO experiment, located in one of the deepest experimental sites (SNOLAB, 6010 m.w.e.), was an imaging Cherenkov detector used in the first decade of the 2000s for a successful campaign of solar neutrino measurements. The SNO detector (Figure 15) consists of a 12 m diameter acrylic sphere filled with heavy water and surrounded by a shield of ultrapure water (1700 tonnes) contained in a 32 m high, 22 m diameter tank. A stainless steel geodesic structure supports ~9500 photomultipliers looking toward the center of the acrylic sphere to read out the Cherenkov light produced by neutrino interactions on deuterium. A smaller number of photomultipliers looking outwards are used to tag any particle producing Cherenkov light in the external water shield (5700 tonnes), acting as a veto for cosmic rays and external background radiation (see Table 14).

In November 2006, the experiment was terminated and both the heavy and the light water were removed. At present, the SNO+ collaboration is modifying the detector by replacing the heavy water with about 780 tonnes of liquid scintillator (linear alkylbenzene with 2 g/L of PPO as wavelength shifter) loaded with a ββ candidate isotope. The lower density of the scintillator with respect to water has required the installation of a rope net over the top of the acrylic sphere to anchor it to the floor. A purification system able to ensure U and Th concentrations in the scintillator similar to those reached in the BOREXINO experiment (at the level of 10^-17 g/g of 238U and 232Th) is under construction [135, 136].

In the first proposal, 150Nd was the isotope to be studied [137], but in April 2013 it was decided to start the first phase of the experiment with natural tellurium. Tellurium contains about 34% of 130Te, which has a high transition energy and a much slower 2νββ decay than 150Nd (by nearly two orders of magnitude). This choice has the advantage of being cheaper than the 150Nd option, mainly because neodymium isotopic enrichment is not straightforward, since it cannot be done by centrifugation. According to preliminary studies, a 0.3% loading of the liquid scintillator will be possible, corresponding to a 130Te mass of 800 kg. The goal of this phase is to reach a sensitivity that touches the IH region. If successful, a further step will consist in increasing the tellurium loading to 3% (8 tonnes of 130Te) with the goal of covering the IH region. The sensitivity of the SNO+ experiment in this first phase can be tentatively inferred from the data and studies presented in the Nd proposals [135, 137]: the FWHM energy resolution is estimated to be ~240 keV (an evaluation based on the scintillator photon yield of 400 photoelectrons/MeV at 1 MeV), and the fiducial volume is assumed to be 20% of the total volume (i.e., a 130Te mass of 163 kg). The main background sources (as discussed in [138]) are expected to be the 130Te 2νββ rate and the elastic scattering of solar neutrinos (8B). From the figure shown in [138] (reported in the right panel of Figure 15) a counting rate integrated over the ROI of ~3 × 10^-4 counts/(keV·kg·yr) can be extrapolated. This corresponds to a 5-year sensitivity of 2.0 × 10^26 yr.

12. Summary and Outlook

We have reviewed the status and perspectives of the search for 0νββ. Neutrinoless double beta decay is still the most promising probe to test lepton number violation and to verify whether neutrinos are Majorana particles.

The features of this challenge have become clearer after the discovery of neutrino oscillations and the measurement of the oscillation parameters. Indeed, 0νββ has turned into a sensitive probe for neutrino masses, capable of providing relevant information on their absolute scale and ordering.

However, precise nuclear physics knowledge is required in order to translate the observed rates into neutrino mass constraints. Several calculations of the nuclear matrix elements exist; they share common ingredients but differ in their treatment of the nuclear structure. Unfortunately, a relevant disagreement still exists between the different calculations. This is of course a serious problem, which has triggered a strong effort over the past decade to improve the situation.

From the experimental point of view, good performance (high energy resolution and very low background), a proper scale (a large number of candidate nuclei and a long measurement time), a favorable candidate, and a proper experimental technique are the essential ingredients of a sensitive experiment. These requirements are often conflicting, and no next-generation proposal has so far succeeded in optimizing all of them (Table 15). Indeed, most of the projects tend to excel in one or the other aspect, still missing the goal of reaching the best sensitivity.

In particular, the high-resolution calorimeters are making an impressive effort to achieve the best performance, but in most cases they cannot guarantee a proper scalability (indeed, some of them have crossed the ZB boundary while maintaining a good energy resolution).

On the contrary, the extremely massive scintillators are very effective in reaching very low (external) background rates but are irreducibly limited on the performance side by their poor energy resolution (which widens the ROI, thus increasing the background).

This situation is pictorially summarized in Figure 16, where it is apparent how the future projects tend to align along the 10^26 yr iso-sensitivity line, though spanning large intervals in performance and scale. These two parameters, defined in Section 5 through (11) and (12), measure, respectively, the number of background events in the ROI per unit exposure (performance) and the exposure itself, expressed in number of moles per year (scale). It is important to point out that the ZB condition is dynamic and depends on the interplay between performance and scale.

The common goal should then be to approach the golden region, where the sensitivity increases in the fastest way [68]. Indeed, by improving the performance, one can succeed in entering the ZB region; the sensitivity can then be improved linearly by increasing the detector mass, until the ZB condition is no longer satisfied.
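This interplay can be illustrated numerically: at fixed performance (expected background counts per unit exposure), the sensitivity first grows linearly with exposure and then rolls over to a square-root growth once the expected background reaches roughly one count. The sketch below only reproduces this generic behavior with placeholder numbers; it is not an implementation of (11) and (12) or of Figure 16.

```python
import math

def relative_sensitivity(exposure_mol_yr, bkg_per_mol_yr,
                         efficiency=0.8, zero_bkg_counts=2.3, n_sigma=1.64):
    """Relative half-life sensitivity versus exposure.

    exposure_mol_yr : 'scale' (moles of candidate isotope times live time)
    bkg_per_mol_yr  : 'performance' (expected ROI background per unit exposure)
    Below ~1 expected background count the limit scales linearly with the
    exposure (zero-background regime); above it, only as its square root.
    """
    n_bkg = bkg_per_mol_yr * exposure_mol_yr
    denominator = max(zero_bkg_counts, n_sigma * math.sqrt(n_bkg))
    return efficiency * exposure_mol_yr / denominator

# Placeholder scan: same detector performance, increasing scale.
for exposure in (1e2, 1e3, 1e4, 1e5):
    s = relative_sensitivity(exposure, bkg_per_mol_yr=1e-3)
    print(f"{exposure:8.0e} mol*yr -> relative sensitivity {s:.2e}")
```

The printed values grow by a factor of ten between the first two exposures and by only about three thereafter, which is exactly the linear-to-square-root transition discussed above.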

This nice picture can, however, quickly turn into a nightmare. In practice, performance improvements cannot be easily maintained (if at all) at larger scales, and intermediate projects (demonstrators) are becoming the rule. Moreover, all the new-generation experiments tend to sit far away (on opposite sides) from the golden region.

Demonstrators (SND, MJD, and LUCIFER) are paving the road for larger future projects, while new ideas are being verified in a number of R&D programs. The future of the experimental searches depends critically on the richness and variety of the technologies under development. The most successful ones will quickly turn into real experiments characterized by improved sensitivities and capabilities.

Let us summarize the situation by considering just the very few projects characterized by the best conditions for impacting the future of 0νββ research: CUORE, GERDA, EXO, SNO+, and KamLAND-Zen. An important impact is expected also from the demonstrators (SND, the scintillating bolometers, MJD, EXO, and NEXT), whose target is to assess the readiness and effectiveness of the respective techniques. Altogether, these experiments represent the most advanced effort to guarantee the highest possible sensitivity in the study of the maximum number of different nuclei with different experimental techniques and approaches.

The future of 0νββ searches depends critically on the actual ordering of the neutrino masses. In case nature has selected the quasi-degenerate hierarchy (i.e., an effective Majorana mass of ~100–500 meV), the 76Ge claim could be confirmed by GERDA. The signal could then be cross-checked in 136Xe by EXO, KamLAND-Zen (if the background problems are solved), and NEXT (if the results obtained with the prototypes are confirmed). CUORE and SNO+ could detect the decay in 130Te, while a large-scale array of scintillating bolometers could have a chance to observe the signal in 82Se or 100Mo. On the other hand, SuperNEMO could get more insight into the decay mechanism by looking at the single-electron energy and angular distributions in 82Se or 150Nd. The redundancy of the candidates under study will reduce the uncertainties coming from the NME calculations.

As mentioned above, this optimistic scenario is already in tension with the results of EXO-200; the GERDA-I results presented above further sharpen this tension.

In the case of the inverted hierarchy (i.e., an effective mass of 20–50 meV), the observation of 0νββ is still possible if the mass is hidden just below the upper part of the error bars or if the projects under development achieve their planned sensitivities. CUORE (most likely enriched in 130Te) or bolometric evolutions with improved rejection of the surface background have good chances to detect the decay, but nEXO, the extension of EXO-200 under discussion, could also succeed in 136Xe. In case of success of their present phases, extensions of SNO+, KamLAND-Zen, and NEXT could have the chance to cross-check the result in 130Te and 136Xe, while GERDA-III, after merging with MAJORANA, could observe a signal in 76Ge.

The discovery of 0νββ in three or four isotopes is necessary for convincing evidence. This should be possible thanks to the variety of projects and techniques under development.

It is worth stressing that also a missed observation of 0νββ could be very important for neutrino physics. Indeed, if the long-baseline neutrino oscillation experiments were to provide evidence for an inverted neutrino mass hierarchy, then a limit on the effective Majorana mass below the inverted-hierarchy band would be a strong indication in favor of a Dirac nature of the neutrino.

No present or future project seems to have any chance to probe the direct hierarchy region. The study of effective masses in the range of a few meV requires new, revolutionary strategies. R&D activities are crucial to stimulate the new ideas needed to face this extreme challenge.

To conclude, 0νββ searches are living a very exciting period, characterized by much enthusiasm for the possibility of finally observing this very rare decay. Many projects have been proposed, either to exploit the capabilities of present technology or to pave the road for next-generation experiments. Their sensitivity to the half-life is in the range of a few 10^26 yr.

Long-term predictions are not easy, but future-generation experiments will unavoidably need a multitonne scale in the isotope mass. It will then become difficult to maintain the present variety of experimental approaches. On the other hand, taking into account the past evolution of the experimental sensitivities, an improvement by an order of magnitude seems a likely frontier for future-generation experiments on a timescale of 10–20 years.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.