Abstract

Recent years have witnessed many exciting breakthroughs in neutrino physics. The detection of neutrino oscillations has proved that neutrinos are massive particles, but the assessment of their absolute mass scale is still an outstanding challenge in today particle physics and cosmology. Since low temperature detectors were first proposed for neutrino physics experiments in 1984, there has been tremendous technical progress: today this technique offers the high energy resolution and scalability required to perform competitive experiments challenging the lowest electron neutrino masses. This paper reviews the thirty-year effort aimed at realizing calorimetric measurements with sub-eV neutrino mass sensitivity using low temperature detectors.

1. Introduction

Almost two decades ago, the discovery of neutrino flavor oscillations firmly demonstrated that neutrinos are massive particles [1]. This was a crucial breach in the Standard Model of fundamental interactions which assumed massless neutrinos. Flavor oscillations show that the three active neutrino flavor states (, , and ) are a superposition of three mass states (, , and ) and allow the measurement of the difference between the squared masses of the neutrino mass states, but they are not at all sensitive to the absolute masses of the neutrinos.

Today, assessing the neutrino mass scale is still an outstanding task for particle physics, as the absolute value of the neutrino mass would provide an important parameter to extend the Standard Model of particle physics and understand the origin of fermion masses beyond the Higgs mechanism. Furthermore, due to their abundance as Big Bang relics, neutrinos strongly affect the large scale structure and dynamics of the universe by means of their gravitational interactions, which hinder the structure clustering with an effect that is dependent on their mass [2, 3]. In the framework of CDM cosmology (the model with Cold Dark Matter and a cosmological constant ), the scale dependence of clustering observed in the universe can be indeed used to set an upper limit on the neutrino mass sum , where is the mass of the state. Depending on the model complexity and the input data used, this limit spans in the range between about 0.3 and 1.3 eV [4]; more recently, by combining cosmic microwave background data with galaxy surveys and data on baryon acoustic oscillations, a significantly lower bound on the neutrino mass sum of 0.23 eV has been published [5], although this value is strongly model dependent.

The oscillations discovery and the accurate cosmological observations revived and boosted the interest in neutrino physics (this is also confirmed by the Nobel Prizes in Physics awarded in the years 2002, 2008, and, very recently, 2015), with the start of many ambitious experiments for different high precision measurements and the rate of publishing papers increased by almost an order of magnitude, but in spite of the enhanced experimental efforts very little is known about neutrinos and their properties. Several crucial pieces are still missing, in particular the absolute neutrino mass scale, the neutrino mass ordering (the so-called mass hierarchy), the neutrino nature (Dirac or Majorana fermion), the magnitude of the CP (charge and parity) violation phases, and the possible existence of sterile neutrinos.

This paper is devoted to the assessment of the absolute neutrino mass scale and in particular to the direct measurement of the electron neutrino mass via calorimetric experiments. After a brief overview of our present picture for massive neutrinos, I will introduce both the theoretical and the experimental issues involved in the direct determination of the neutrino mass and discuss the past and current calorimetric experiments, with a focus on experiments with low temperature detectors.

2. The Neutrino Mass Pattern and Mixing Matrix

Most of the existing experimental data on neutrino oscillations can be explained by assuming a three-neutrino framework, where any flavor state   () is described as a superposition of mass states   () orwhere is the Pontecorvo-Maki-Nakagawa-Sakata unitary mixing matrix (see, e.g., [6]). As a consequence, the neutrino flavor is no longer a conserved quantity and for neutrinos propagating in vacuum the amplitude of the process is not vanishing.

The mixing matrix is parametrized by three angles, conventionally denoted by , , and , one CP violation phase , and two Majorana phases , ; these two have physical consequences only if neutrinos are Majorana particles—that is, identical to their antiparticles—but they do not affect neutrino oscillations. To these six parameters, three angles and three phases, the three mass values must be added also, for a total of nine unknowns altogether. In the years, oscillation experiments measuring the flux of solar, atmospheric, reactor, and accelerator neutrinos have contributed to the precise determination of many of these unknowns.

The oscillation probabilities depend, in general, on the neutrino energy, on the source-detector distance, on the elements of the mixing matrix, and on the neutrino mass squared differences . At present, the three mixing angles and the two mass splittings, conventionally (from solar neutrino oscillations) and (from atmospheric neutrino oscillations), have been determined with reasonable accuracy [1]. However, the available data are not yet able to discriminate the neutrino mass ordering. While the effect of the interactions of solar neutrinos with matter constituents (known as Mikheyev-Smirnov-Wolfenstein effect) allows the establishment of so that , we have and we are left with two possibilities: either (normal ordering, i.e., ) or (inverted ordering, i.e., ) (compare also with Figure 3). In both schemes, there is a Quasi-Degeneracy (QD) of the three neutrino masses when , with  eV. Depending on the value of the lightest mass values, the neutrino mass ordering can also follow a Normal Hierarchy (NH), with (in which and ), or an Inverse Hierarchy (IH), with (in which and are quasi-degenerate); see Figure 1. As a final remark, as shown in Figure 3, independent of the mass scheme, oscillation results state that at least two neutrinos are massive, with masses larger than  eV.

Most of the oscillation data are well described by the three-neutrino schemes. However, there are a few anomalous indications (the so-called reactor neutrino anomaly) [7] that cannot be accommodated within this picture. If confirmed, they would indicate the existence of additional neutrino families, the sterile neutrinos. These neutrinos do not directly participate in the standard weak interactions and would manifest themselves only when mixing with the familiar active neutrinos. Future reactor experiments will test this fascinating possibility.

Assessing the neutrino mass ordering, that is, the sign of , is of fundamental importance not only because it would address the correct theoretical extension of the Standard Model, but also because it can impact many important processes in particle physics (like neutrinoless double beta decay). In addition, the phase governing CP violation in the flavor oscillation experiments remains unknown and a topic of considerable interest [8]. A worldwide research program is underway to address these important open issues in the near future by precise study of the various oscillation patterns.

The oscillation experiments, however, are not able to access the remaining unknown quantities, that is, the absolute mass scale and the two Majorana phases. Their determination is the ultimate goal of nuclear beta decay end-point experiments and neutrinoless double beta decay searches.

The finite neutrino mass manifesting in neutrino oscillations is already an important breach in the Standard Model of fundamental interactions, but the neutrino sector could hold more surprises. In fact, recent reanalysis of existing data from reactor oscillation experiments together with some anomalies observed in short baseline accelerator oscillation experiments (LSND, MiniBOONE) and in solar experiment calibration with neutrino sources (GALLEX) points to the existence of at least a fourth generation of neutrinos [7]. These hypothetical neutrinos would be sterile in the sense that they would feel only gravitational interactions, along with those induced by mixing with the other ordinary neutrinos. Combined analysis of the available data from various sources leads to an additional mass splitting of , with a mixing parameter of about . Sterile right handed neutrinos are indeed introduced naturally when one tries to extend the Standard Model to include the mass of active neutrinos (MSM) [9]. Moreover, sterile neutrinos in the keV mass range are perfect candidates as Warm Dark Matter (WDM) particles [10].

3. Weak Nuclear Decays and Neutrino Mass Scale

Fundamental neutrino properties, in particular its absolute mass and its nature, can be investigated by means of suitable weak decays, where flavor state neutrinos are emitted along with charged leptons and/or pions. There are two complementary approaches for the measurement of the neutrino mass in laboratory experiments: the precise spectroscopy of beta decay at its kinematical end-point and the search for neutrinoless double beta decay. Though the expected effective mass sensitivity for neutrinoless double beta decay search is higher, this process implies a strong model-dependence since it requires the neutrino to be a Majorana particle.

Direct neutrino mass measurement, by analyzing the kinematics of electrons emitted in a beta decay, is the most sensitive model independent method to assess the neutrino mass absolute value (analogue measurements involving pion or tau decays give much weaker limits on or ). The beta decay is a nuclear transition involving two nuclides and :where and are, respectively, the mass and atomic numbers of the involved nuclei. Neglecting the nuclear recoil, the kinetic energy available to the electron and antineutrino in the final state is given bywhere indicates the mass of the atoms in the initial and final state.

In practice, this method exploits only momentum and energy conservation: it measures the minimum energy carried away by the neutrino—that is, its rest mass—by observing the highest energy electrons emitted in this three-body decay. To balance the energy required to create the emitted neutrinos, the highest possible kinetic energy of the electrons is slightly reduced. This energy deficit may be noticeable when measuring with high precision the higher energy end (the so-called end-point) of the emitted electron kinetic energy distribution . If one neglects the nucleus recoil energy, is described in the most general form bywhere is the Coulomb correction (or Fermi function) which accounts for the effect of the nuclear charge on the wave function of the emitted electron, is the form factor which contains the nuclear matrix element of the electroweak interaction and can be calculated using the V-A theory, and is the radiative electromagnetic correction, usually neglected due to its exiguity. is the Heaviside step function, which confines the spectrum in the physical region . The term is the phase space term in a three-body decay, for which the nuclear recoil has been neglected; is the electron momentum. For the sake of completeness, it is worth noting that the particle emitted in the experiments considered here is the electron antineutrino . Since the CPT theorem assures that particle and antiparticle have the same rest mass, from now on, I will speak simply of “neutrino mass” both for and for . Moreover, it must be stressed that since the effect of the neutrino mass on nuclear beta decay is purely due to kinematics, this measurement does not give any information on the Dirac or Majorana origin of the neutrino mass.

From oscillation experiments, we know that any neutrino flavor state is a superposition of mass states. Therefore, (4) can be generalized as [11, 12]where is a term which groups all terms in (4) which do not depend on the neutrino mass, is the electron row of the neutrino mixing matrix, and are the masses of the neutrino mass states. The square root term is the part of the phase space factor sensitive to the neutrino masses. An example of the resulting spectrum is shown in Figure 2.

Since the individual neutrino masses are too close to each other to be resolved experimentally, the measured spectra can still be analyzed with (4), but the quantityshould now be interpreted as an effective electron neutrino mass, where the sum is over all mass values . Therefore, a limit on implies trivially an upper limit on the minimum value of all , independent of the mixing parameters : ; that is, the lightest neutrino cannot be heavier than . By using the currently available information from oscillation data [1], it is possible to formulate the values of the neutrino masses (and the values of as well) as a function of the lightest mass, that is, in the Normal Hierarchy (NH) and in the Inverted Hierarchy (IH). This is done in Figure 3, which shows that, in the case of NH, while the dominant component of is , the numerical value of is equal to over the whole range and also to for larger than few tenths of electronvolt. In the case of IH, has practically the same value of and . Finally, in the case of QD spectrum, in both schemes. From the figure, it is also clear that the allowed values for in the two mass schemes are quite different: in the case of IH, there is a lower limit for of about 0.04 eV, while in the NH this limit is of about 0.01 eV. Therefore, if a future experiment will determine an upper bound for smaller than 0.04 eV, this would be a clear indication in favor of the NH mass pattern. Finally, Figure 3 shows that the ultimate sensitivity needed for a direct neutrino mass measurement is set at about 0.01 eV, the lower bound in case of NH. However, if experiments on neutrino oscillations provide us with the values of all neutrino mass-squared differences (including their signs) and the mixing parameters and the value of has been determined in a future search, then the individual neutrino mass squares can be determined:On the other hand, if only the absolute values are known (but all of them), a limit on from beta decay may be used to define an upper limit on the maximum value of :In other words, knowing , one can use a limit on to constrain the heaviest active neutrino.

At present, the most stringent experimental constraint on is the one obtained by the Troitsk [13] and the Mainz [14] neutrino mass experiments, at 95% CL. This falls in the QD region for both mass schemes.

Another type of weak process sensitive to the neutrino mass scale is the neutrinoless double beta decay (-), a second-order weak decay that violates the total lepton number conservation by two units, whose existence is predicted for many even-even nuclei:The search for - is the only available experimental tool to demonstrate the Majorana character of the neutrino (i.e., ). In fact, the observation of - always requires and implies that neutrinos are massive Majorana particles [15]. However, there are many proposed mechanisms which could contribute to the - transition amplitude, and only when - is mediated by a light mass Majorana neutrino the observed decay is useful for determining the neutrino mass. In this case, the measured decay rate is given bywhere is the - decay half-life, is electron mass, and is the effective Majorana mass, defined below. The nuclear structure factor is given bywhere is the accurately calculable phase space integral and is the nuclear matrix element which is subject to uncertainty [16]. At present, the discrepancies among different nuclear model calculations of amount to a factor of about 2 to 3. These reflect on and are an unavoidable source of systematic uncertainties in the determination of from the experimental data. Measuring the lifetime of different isotopes would allow one to disentangle the model dependency linked to the exact mechanism causing - and to reduce the systematic uncertainties on .

If - decay is observed and the nuclear matrix elements are known, one can deduce the corresponding value, which in turn is related to the oscillation parameters throughDue to the presence of the unknown Majorana phases , cancellation of terms in (12) is possible and could be smaller than any of the masses . Therefore, unlike the direct neutrino mass measurement, a limit on does not allow the constraint of the individual mass values even when the mass differences are known. On the other hand, the observation of the - decay and the accurate determination of the value would not only establish that neutrinos are massive Majorana particles, but also contribute considerably to the determination of the absolute neutrino mass scale. Moreover, if the neutrino mass scale would be known from independent measurements, one could possibly obtain from the measured also some information about the CP violating Majorana phases [17].

Given the present knowledge of the neutrino oscillation parameters, it is possible to derive the relation between the effective Majorana mass and the lightest neutrino mass in the different neutrino mass schemes. This is done in a number of papers (see, e.g., [18]). Figure 4 shows the effective Majorana mass as a function of the effective electron neutrino mass in both NH and IH mass schemes, demonstrating the complementarity of the two methods.

As a final remark, - and decays both depend on different combinations of the neutrino mass values and oscillation parameters. - decay rate is proportional to the square of a coherent sum of the Majorana neutrino masses because the process originates from the exchange of a virtual neutrino. On the other hand, in beta decay one can determine an incoherent sum because a real neutrino is emitted. That shows clearly that a complete neutrino physics program cannot renounce either of these two experimental approaches. The various methods that constrain the neutrino absolute mass scale are not redundant but rather complementary. If, ideally, a positive measurement is reached in all of them (- decay, decay, and cosmology), one can test the results for consistency and with a bit of luck one can determine the Majorana phases.

4. The Direct Neutrino Mass Measurement via Single Nuclear Beta Decay

As already pointed out, the most useful tool to constrain kinematically the neutrino mass is the study of the “visible” energy in single beta decay. The experimental beta spectra are normally analyzed by means of a transformation which produces a quantity generally linear with the kinetic energy of the emitted electron: The graph of this quantity as a function of is named Kurie plot. In a Kurie plot, each bin has the same error bar and therefore the same statistical weight.

Assuming massless neutrinos and infinite energy resolution, the Kurie plot is a straight line intersecting the -axis at the transition energy . In case of massive neutrino, the Kurie plot is distorted close to the end-point and intersects the -axis with vertical tangent at the energy . The two situations are depicted in Figure 5.

Most of the information on the neutrino mass is therefore contained in the final part of the Kurie plot, which is the region where the counting rate is lower. In particular, the relevant energy interval is and the fraction of events occurring here iswhere is a constant of order unity which depends on the details of the beta transition. From this, it is apparent that kinematical mass measurements require beta decaying isotopes with the lowest end-point energy. Tritium is one of the best and most used isotopes thanks to its very low transition energy: ; nonetheless, the fraction of events falling in the last 5 eV of the tritium spectrum is only .

Every instrumental effect such as energy resolution or background will tend to hinder or even wash out this tiny signal. In Figure 5, the effect on the spectral end-point of an energy resolution of 0.5 eV is shown. This distorts the Kurie plot in the opposite way with respect to the neutrino mass effect. It is therefore mandatory to evaluate and/or measure the detector response function, which includes the energy resolution but is not entirely determined by it. Finally, the analysis of the final part of the Kurie plot is complicated by the background due to cosmic rays and environmental radioactivity. Because of the low beta counting rate in the interesting region, spurious background counts may affect the neutrino mass determination.

The possibility to use beta decay to directly measure the neutrino mass was first suggested by Perrin [19] in 1933 and then by Fermi [20] in 1934, but the first sensitive experiments were performed only in the ’70s. The first experiments were the one of Bergkvist [21, 22] and the one of the ITEP group [23], both of which used magnetic spectrometers to analyze the electrons emitted by tritium sources. This experimental approach has clear advantages such as (1) the high specific activity of tritium, (2) the high energy resolution and luminosity of spectrometers, and (3) the possibility to select and analyze only the electrons with kinetic energies close to .

In the ’80s and through the ’90s, experiments with spectrometers using tritium were reporting largely negative [24] (see Figure 6) or even an unlikely finite value of about 35 eV [23]. These were all signs of under- or overcorrected instrumental effects which were causing systematic shifts [25, 26]. In fact, despite the relative conceptual simplicity of the kinematic direct determination of the neutrino mass, it has been soon recognized that there are many subtle effects which threaten the accuracy of these measurements. Some are related to beta decay itself, since the atom or the molecule containing the decaying nucleus can be left in an excited state, leading even in this case to dangerous distortions of the Kurie plot (see Section 5.1). Other effects are due to the scattering and absorption of the electrons in the source itself. And last but not least, systematic effects are also caused by the imperfect characterization of the detector response. In the past 30 years, many experiments using tritium were performed. Starting from the ’90s, magnetic spectrometers were gradually abandoned for electrostatic retarding spectrometers with adiabatic magnetic collimation [27, 28]. Many improvements in the detectors, in the tritium source, and in the data analysis and processing allowed the experiments to constantly improve the statistical sensitivity and to minimize the systematic uncertainties, as it is shown in Figure 6. Today, owing to a continuous and strenuous investigation of all experimental effects and systematic uncertainties, measurements reported by the two most sensitive experiments [13, 14] are compatible with a zero mass, with the systematic errors reduced to the same level of statistical ones.

Nevertheless, today direct neutrino mass measurements remain affected by an intrinsic potential bias. As it already happened in the past, in a sensitive experiment small miscorrections of instrumental effects may again either mimic or cancel the traces of a small positive neutrino mass. A weak unexpected effect not included in the data analysis may compensate and hide the signal of a small mass within the statistical sensitivity of an experiment, which would therefore report values nicely compatible with the null hypothesis and thus quote just an upper limit. On the other hand, in a future experiment with a statistical sensitivity approaching the range predicted by oscillation parameters, a slightly excessive correction for an expected effect could mimic the signal for a tiny mass which would not contradict the community expectations. For these reasons, direct neutrino mass measurements call for a continuous crosscheck from different independent experiments to confirm both positive and negative findings.

Already in the ’80s when the negative squared masses and the positive claim from ITEP were puzzling the neutrino community, De Rújula proposed the use of other beta decaying isotopes with low decay energy. In [30], it was noticed that has an end-point around 2 keV, much more favorable than the one of tritium. This isotope was at that time discarded because of its long half-life around years. The focus of [30] was therefore on the isotope , which decays by Electron Capture (EC) with a very low transition energy. In the EC process [31]the available decay energy iswhere indicates the mass of the atoms in the initial and final states. Neglecting the nuclear recoil, the energy is shared between the neutrino and the radiation emitted in the deexcitation of the daughter atomHere, includes the energy of X-rays, Inner Bremsstrahlung photons, and Auger and Coster-Kronig electrons emitted in the atomic deexcitation of the daughter atom and adds up to the binding energy of the captured electron, allowing for a small indetermination due to the natural width of the atomic energy levels. Because of energy conservation, the end-points of the spectra of these electrons or photons—where the massive neutrino emitted in the EC is at rest—are sensitive to the neutrino mass. It is worth noting here that the kinematics of EC decay probes the mass of the neutrino , whereas the one of regular beta decays probes the mass of the antineutrino : as already recalled above, the CPT invariance implies that the two measured masses are identical.

In particular, two measurements were discussed in 1981 for : the end-point of the IBEC (Inner Bremsstrahlung in EC) spectrum [30]and the end-point of the SEEEC spectrum [32]Even if at that time the value of EC was largely unknown, this decay was already considered very promising for a sensitive neutrino mass measurement, since it was clear that its value is one of the lowest in nature. Both processes (18) and (19) start with a first intermediate atomic vacancy caused by EC, where denotes that the state is not necessarily on-shell. The energy of the vacant state has its own natural width. Because of the low value, this first vacancy H can be created only in one of the M1, M2, N1, N2, O1, O2, or P1 shells of the Dy daughter atom.

In the IBEC process (18) a photon is emitted during the virtual transition of an electron from to the intermediate state , from which the electron was captured. For each possible final vacancy and for , the spectrum of the emitted photons is not made of monoenergetic lines at , where is the ionization energy of H shell in Dy, but it is a continuum with a kinematic limit , and the total photon spectrum is therefore a superposition of several spectra with different end-points. The spectral end-points follow the three-body statistical shapeIn general, since IBEC is a second-order effect, its intensity is very low. However, the photon emission may experience large resonant enhancements for photons with energies equal to the ones of the characteristic X-ray transitions of the daughter atom. In particular, De Rújula has shown for that when is one of the N1, N2, O1, and O2 shells, then the dominant resonance close to the end-point is with the X-ray transitions , that is, when the intermediate vacancy of the virtual transition H corresponds to M1 shell. In this case, the distance between the resonance and the end-point is , which for is equal to a few hundred electronvolts. Unfortunately, calculations [33, 34] showed that, with around 2.8 keV, an IBEC measurement with is not going to be statistically competitive with the tritium experiments, also because of complex destructive interference patterns.

The SEEEC process (19) is analogous to the IBEC with the role of the IB photon played by an Auger (or Coster-Kronig) electron. The spectrum of the ejected electron is a continuum with an end-point at , for . Also, in this case, the kinematics of a 3-body decay process applies, and a phase space term appears in the spectral shape of ejected electrons. The continuous spectra show many resonances for different combinations of H, H1, and H2, but close to the end-point the dominant ones result from the M1 capture and are at . These resonances provide an enhancement of the spectrum close to end-point, thereby increasing the statistical sensitivity to . The inclusive spectrum of all the ejected electrons is quite complicated because of the many possible end-points and resonance peaks: nevertheless, the authors in [32] argue that the end-point region of this spectrum is unaffected by all the atomic details, since it is dominated by the upper tails of few resonances and maintains its usable sensitivity to , although the estimated , depending on the value, may be substantially lower than for tritium.

One stressed advantage of IBEC and SEEEC measurements is that, unlike what happens in tritium beta decay, the probability of atomic excitations in the final state—such as shake-up or shake-off processes—is strongly suppressed and estimated to be <1/ (see also Section 7.4).

More than 30 years later, none of the above suggestions has been successfully exploited to perform an experiment with a competitive sensitivity on . Of the various attempts to perform an IB end-point measurements [3438], only the one of Springer et al. [34] reported a limit on of about 225 eV obtained by fitting the end-point of the X-ray spectrum.

Most of the measurements performed on to directly measure the neutrino mass followed instead another proposal from Bennett et al. [39] in 1981. In [39], it is suggested that and the transition energy can be determined or constrained by measuring the ratios of absolute capture rates (a better treatment includes factors for the nuclear shape factor)where neutrino momentum is given by is the fraction of occupancy of the th atomic shell, is the Coulomb amplitude of the electron radial wave function (essentially, the modulus of the wave function at the origin), and is an atomic correction for electron exchange and overlap. Following this idea, practically all the experimental researches on EC of so far focused on the atomic emissions—photons and electrons contributing to in (17)—following the EC and used the capture ratios to determine [3438, 4042]. Unfortunately, the accuracy achieved for and with this method is adversely affected by the limited knowledge of the atomic parameters in (21).

As repeatedly underlined by De Rújula and Lusignoli [43], there is one experimental approach to the measurement of the neutrino mass from EC which overcomes all the difficulties above: the calorimetric measurement of all the energy released in the EC ( in (17)) except for the energy of the neutrino. This will be discussed in Section 7.1.

Today all expectations for a new direct measurement of the neutrino mass with a substantially improved statistical sensitivity are directed to the KATRIN experiment [44]. KATRIN uses a large electrostatic spectrometer which will analyze the tritium beta decay end-point with an energy resolution of about 1 eV and with an expected statistical sensitivity of about 0.2 eV. KATRIN reaches the maximum size and complexity practically achievable for an experiment of its type and no further improved project can be presently envisaged. As an alternative for the study of tritium end-point, Project 8 proposes a new experimental approach based on the detection of the relativistic cyclotron radiation emitted by the beta electrons [45], which is presently under development [46].

5. Calorimetric Measurements

5.1. General Considerations

In the global effort to cure the weaknesses of direct neutrino mass measurements with spectrometers yielding negative which started to show up since the ’80s, Simpson first proposed the calorimetric approach [47]. In an ideal calorimetric experiment, the source is embedded in the detector and therefore only the neutrino energy escapes detection. The part of the energy spent for the excitation of atomic or molecular levels is measured through the deexcitation of these states, provided that their lifetime is negligible with respect to the detector time response. In other terms, the kinematical parameter which is effectively measured is the neutrino energy (or ), in the form of a missing energy, a common situation in experimental particle physics. The advantages of a calorimetric measurement are (1) the measurement of all the energy temporarily stored in excited states, (2) the absence of source effects, such as self-absorption, and (3) the lack of backscattering from the detector. The effect of final states on the tritium beta spectrum was discussed thoroughly in many works [24, 26, 48]. In the following for simplicity we consider the so-called sudden approximation or first-order perturbation of an atomic tritium beta decay, neglecting the sum over the mass eigenstates . Due to the presence of atomic or molecular excited final states of the beta decay, the measured beta spectrum is a combination of different spectra characterized by different transition energies , where is the energy of the th final excited state of the decaywith describing the transition probability to the final th excited state. The spectral shape induced by the presence of excited final states can be misleading when trying to extract the value of the neutrino mass. In fact, assuming that the neutrino mass is null and summing up over all the final states, from (23), one obtainswhich approximates the single beta spectrum (4) with a negative squared neutrino mass equal to , where is the variance of the final state spectrum given by (Figure 7), and with an end-point shifted by . In case of a tritium atom, the distribution of the electronic final states can be found by solving analytically the Schrödinger equation and one can calculate .

Indeed, tritium experiments use molecular tritium sources and in particular mostly is adopted. To prevent systematic uncertainties which may give rise to a negative squared neutrino mass, the analysis of experimental spectra requires a complete and precise knowledge of the spectrum of the excitations—both atomic and molecular—of the daughter molecule. For molecular tritium, this spectrum can be calculated only numerically with an accuracy that has a direct impact on the experiment systematics.

The situation changes completely in the calorimetric approach. Even in this case the observed spectrum is a combination of different spectra. It can be obtained by operating the following replacements:motivated by the distinguishing feature of the calorimeters to measure simultaneously the beta electron energy and the deexcitation energy of the final state.

By combining (23) and (25), one getsObserving that and expanding in a series of powers of , one obtainsApart from the sum term, for a null neutrino mass, (27) describes a beta spectrum with a linear Kurie plot in the final region (); Figure 7 shows that the influence of the excited final states on the calorimetric beta spectrum is confined at low energy. Therefore, a calorimeter provides a faithful reconstruction of the beta spectral shape over a large energy range below the end-point. This is not true for spectrometers for which the measured spectrum at the end-point presents a deviation of the same size of that caused by a finite neutrino mass. Furthermore, it is apparent from Figure 7 that the presence of an excited state causes the spectrum of a spectrometer to mimic a lower along with a negative . The possibility to observe a substantial undistorted fraction of the spectrum is very useful to check systematic effects and to prove the general reliability of a calorimetric experiment.

As a general drawback, calorimeters present a major inconvenience which may be a serious limitation for the approach. In a calorimeter, the whole beta spectrum is acquired and the detector technology poses important restraints to the source strength. This in turn limits the statistics that can be accumulated. The consequences on the achievable statistical sensitivity are discussed in the next section. First of all the counting rate must be controlled to avoid distortions of the spectral shape due to pile-up pulses. Then, the concentration of the decaying isotope may not be freely adjustable. For example, at the time of Simpson experiments, the only way to make a sensitive calorimetric measurement was to ion-implant tritium in semiconductor ionization detectors such as Si(Li) or High Purity Ge (HPGe). There is however a trade-off between the required tritium implantation dose, that is, the tritium concentration, and the acceptable radiation damage. The tritium activity is then limited by the detector size in relation to its energy resolution.

This first generation of calorimetric experiments exploited Si(Li) or Ge detectors with implanted tritium but suffered for their intrinsic energy resolution which is limited to about 200 eV at 20 keV. With these experiments, a limit on of about 65 eV was set [47]. At the same time, these experiments showed that the calorimetric approach does not cancel all the systematic uncertainties. As it was already recognized by Simpson in [47], one source of systematic uncertainty relates to the precise evaluation of the resolution function of these solid states detectors. The resolution function is obtained through X-ray irradiation from an external source. The response of the detector may be different for X-rays entering the detector from one direction and the betas emitted isotropically within the detector volume. Moreover, the beta emission is localized in the deep region of the detector where an incompletely recovered irradiation damage may lead to incomplete charge collection, while X-ray interactions are distributed in the whole detector volume.

Soon it became clear that calorimeters may also be affected by solid states effect. The “17 keV neutrino saga” [49, 50] started off from an unexpected feature observed first by Simpson in the low energy part of the tritium spectrum measured with the implanted Si(Li) detectors [51]. While a neutrino with a mass of 17 keV was finally deemed inexistent and the observed kink ascribed to a combination of various overlooked instrumental effects in spectrometric experiment [52], the evidence in calorimetric measurements remained unexplained. The invoked explanations include environmental effects in silicon and germanium and remain of interest for future calorimetric experiments. One of these solid state effects was first described by Koonin in 1991 [53]: it is a solid state effect known as Beta Environmental Fine Structure (BEFS), which introduces oscillatory patterns in the energy distribution of the electrons emitted by a beta isotope in a lattice. It is an effect analogous to the Extended X-ray Absorption Fine Structure (EXAFS) and it will be addressed in more detail in Section 6.3.

So far, only tritium beta decay was considered, but all the arguments above apply to other isotopes undergoing nuclear beta decay. In particular, as it will be shown quantitatively in the next section, isotopes with a transition energy lower than that of tritium are better suited for a calorimetric experiment. The rest of the present work will focus on two such isotopes, and , which have values around 2.5 keV. In fact, already in the ’80s many authors realized that low temperature detectors could offer a solution for making calorimetric measurements with high energy resolution and could be used either for tritium or, better, for the lower beta emitters and (Section 5.3).

A final remark from the discussion above is that the spectrometer and the calorimeter methods have both complicated but totally different systematic effects. Therefore, once it is demonstrated that the achievable sensitivities are of the same order of magnitude in the two cases, it is scientifically very sound to develop complementary experiments exploiting these two techniques.

5.2. Sensitivity of Calorimeters: Analytical Evaluation

It is useful to derive an approximate analytic expression for the statistical sensitivity of a calorimetric neutrino mass experiment (see, e.g., [54]). The primary effect of a finite mass on the beta spectrum is to cause the spectrum to turn more sharply down to zero at a distance below the end-point (Figure 8(b)). To rule out such a mass, an experiment must be sensitive to the number of counts expected in this interval. The fraction of the total spectrum within of the end-point is given byFor , this is approximatelyFor a finite mass, it is found also thatIn addition to the counting statistics, the effect must be detected in the presence of an external background and of the background due to undetected pile-up of two events (Figure 8). Decays which occur within a definite time interval cannot be resolved by a calorimetric detector, giving rise to the phenomenon of pile-up. This implies that a certain fraction of the detected events is the sum of two or more single events. In particular, two low energy events can sum up and contribute to a count in the region close to the transition energy, contaminating the spectral shape in the most critical interval. In a first approximation the external background can be neglected. The pile-up spectrum can then be approximated by assuming a constant pulse-pair resolving time, , such that events with greater separation are always detected as being doubles, while those at smaller separations are always interpreted as singles with an apparent energy equal to the sum of the two events. In reality, the resolving time will depend on the amplitude of both events, and the sum amplitude will depend on the separation time and the filter used, so a proper calculation would have to be done through a Monte Carlo applying the actual filters and pulse-pair detection algorithm being used in the experiment. However, this approximation is good enough to get the correct scaling and an approximate answer. In practice, depends on the high frequency signal-to-noise ratio, but it is of the order of the detector rise time.

With these assumptions, for a pulse-pair resolving time of the detector , the fraction of events which suffer from not-identified pile-up of two events is, for a Poisson time distribution,where is the source activity in the detector and is the time separation between the two events. The beta spectrum of the unresolved pile-up events is given by the convolution productThe coincidence probability, in the first approximation, is given by . As shown in Figure 8(b), a fraction of these events will fall in the region within of the end-point and can be approximated byMeasuring a length of time , the signal-to-background ratio in the region within of the end-point can be expressed aswhere is the number of detectors and is the exposure. This ratio must be about 1.7 for a 90% confidence limit. Therefore, in absence of background, an approximated expression for the 90% CL limit on —can be written as [54]The two terms in (35) arise from the statistical fluctuations of the beta and pile-up spectrum, respectively, in (34). Equation (35) shows the importance of improving the detector energy resolution and of minimizing the pile-up by reducing the detector rise time. On the other hand, it shows also that the largest reduction on the limit can only come by substantially increasing the total statistics .

If the pile-up is negligible, that is, when the conditionis met, from (35), one can write the 90% confidence limit sensitivity aswhere energy interval in (37) cannot be taken smaller than about 2 times the detector energy resolution .

It is then apparent that to increase the sensitivity one has both to improve the energy resolution and to augment the statistics; however, there is a technological limit to the resolution improvements; thus, the statistics is in fact the most important factor in (37). For a more complete treatment, also in the presence of a not-negligible pile-up, refer to [54].

A similar approach for assessing the statistical sensitivity of EC decay cannot be pursued with the same simplicity because of the more complex spectrum (see Section 7.1). Nevertheless, it is worth anticipating that with some approximations—discussed in Section 7.1—one can at least easily show thatwhere is the energy of the Lorentzian peak whose high energy tail dominates the end-point region, that is, the M1 peak in (51). Equation (38) is to be compared to (37), which givesFrom (38), it is apparent that for EC experiments in general—and for in particular—not only is it winning to have the lowest possible , but also the end-point energy must be as close as possible to the binding energy of the deepest shell accessible to the EC.

5.3. LTD for Calorimetric Neutrino Mass Measurements

In 1981, De Rújula was already discussing with Fiorini the possibility of performing a calorimetric measurement of the Electron Capture process in , apparently without any useful conclusion. It was only 3 years later—in 1984—that two independent seminal papers proposed for the first time the use of phonon-mediated detectors operated at low temperatures (simply called here low temperature detectors, LTDs) for single particle detection with high energy resolution. Fiorini and Niinikoski [55] proposed to apply these new detectors to various rare events searches in a calorimetric configuration, while McCammon et al. [56, 57] initiated the application to X-ray detection. It was immediately clear to McCammon et al. that this could be extended to the spectroscopy of an internal beta source by realizing high energy resolution calorimeters with implanted tritium [58].

In 1985, few years after De Rújula suggested the use of and for a sensitive neutrino mass measurement in [30], Blasi et al. came up with the first operative proposal for an experiment using LTDs to measure spectrum calorimetrically [59]. In the same year, also Coron et al. started a research program aiming at exploiting LTDs to perform the calorimetry of EC decay [60], which was soon discontinued for what concerns . In the following years, the Genova group pioneered the development of LTDs that aimed at a direct neutrino mass measurement using the beta decay. The experiment was later called MANU and produced its first result in 1992. Some years later, in 1993, the Milano group, that mostly focused on carrying out a - search with LTDs, also opened a research line to develop high energy resolution LTDs for a calorimetric measurement of beta decay. This project was named MIBETA and came to the first measurement in 1999. In 2005, the MANU and MIBETA experiments merged in the international project MARE. In parallel to the work on , starting from 1995, the Genova group was also carrying on a research for a calorimetric measurement of the EC decay. This activity, later on, was first absorbed in MARE and then transferred into the HOLMES project. In 2012, the Heidelberg group, former member of the MARE collaboration, presented its own R&D program for a calorimetric experiment, ECHo. Recently, also the Los Alamos group started a preliminary work for a experiment, with a project named NuMECS. All these experiments and projects will be discussed in the next two sections.

Two other groups that participated in the efforts to develop LTDs for neutrino mass measurements in these three decades are worth mentioning. The Oxford group developed arrays of indium based Superconducting Tunnel Junctions (STJ) to search for the 17 keV neutrino in the beta decay [61] and to measure precisely the exchange effect in the low energy part of the spectrum of the same decay [62]. The Duke University group developed transition edge sensors (TESs) based detectors for measuring calorimetrically the tritium decay [63, 64], but this project was abandoned before obtaining a statistically meaningful sample.

All these activities were triggered in the early ’80s by the lucky coincidence to have the need for a tool to perform calorimetric measurements of new low beta isotopes just at the time when a new promising particle detection technology was appearing on the scene. It took more than 20 years for the LTD technology to actually be mature enough to sustain the ambitions of calorimetric neutrino mass experiments; nowadays, LTDs can indeed deliver to this science case what they have been developed for. In particular, LTDs provide better energy resolution and wider material choice than conventional detectors. The energy resolution of few electronvolts is comparable to that of spectrometers and the restrictions caused by the full spectrum detection are lifted by the parallelization of the measurement with large arrays of detectors. Still, the detectors time constants of the order of microseconds and, correspondingly, the read-out bandwidth remain the most serious technical constraints to the full exploitation of LTDs in this field.

5.3.1. LTD Basic Principles

A complete overview of LTDs can be found in [65], while their state-of-the-art is well summarized in the proceedings of the biyearly international workshop on low temperature detectors [66].

LTDs were initially proposed as perfect calorimeters, that is, as devices able to thermalize thoroughly the energy released by the impinging particle. In this approach, the energy deposited by a single quantum of radiation into an energy absorber (weakly connected to a heat sink) determines an increase of its temperature . This temperature variation corresponds simply to the ratio between the energy released by the impinging particle and the heat capacity of the absorber; that is, . The only requirements are therefore to operate the device at low temperatures (usually <0.1 K) in order to make the heat capacity of the device low enough and to have a sensitive enough thermometer coupled to the energy absorber. Often LTDs with a total mass not exceeding 1 mg and few hundred micron linear dimensions are called low temperature (LT) microcalorimeters.

In the above linear approximation, using simple statistical mechanics arguments, it can be shown that the internal energy of an LTD weakly linked to a heat sink fluctuates according towhere is the equilibrium operating temperature and is the Boltzmann constant, independent of the weak link . Equation (40) is often referred to as the thermodynamical limit to the LTD sensitivity and the internal energy fluctuations as Thermodynamic Fluctuation Noise (TFN). Although, strictly speaking, (40) is not the best energy resolution achievable by an LTD, it turns out that when a sensitive enough thermometer is considered and all sources of broadband noise are included in the calculation, the real thermodynamical limit of the energy resolution of an LTD can be expressed as [56]where now is the heat sink temperature, is the heat capacity at , and is a numerical parameter of order one which is derived from the LTD thermal details and for the optimal operating temperature. A detailed analysis of the optimal energy resolution for various thermometers can be found in [65].

From the above and (41), it is evident that the LTD absorber with its and the thermometer with its sensitivity are the crucial ingredients for obtaining high energy resolution detectors. A sensitive thermometer is the one which allows the transduction of the fluctuations of the TFN to a signal larger than the other noise sources intrinsic to the thermometer itself and to the signal read-out chain. Today, this condition has been met—and (41) is achieved—for LT microcalorimeters using at least three types of optimized thermometers: semiconductor thermistors, transition edge sensors (TESs), and Au:Er metallic magnetic sensors. The thermal sensor of an LTD does not only affect the achievable energy resolution, but also determine the speed of the detector; that is, it determines the time scale of the signal formation with the details of the thermal mechanisms entering in the temperature transduction. Although the detector speed is a crucial parameter in calorimetric neutrino mass experiments, a complete technical treatment for the three sensor technologies is out of the scope of the present work. Here, it is enough to say that the three technologies above are sorted from the slowest to the fastest: the numerical values for the achievable speeds (from hundreds of nanoseconds to hundreds of microseconds) will be given in the following sections. Each sensor technology has its pros and cons which have driven the choice for the various neutrino mass experiments. The traded-off parameters, which include the achievable performances, the ease of fabrication, and the read-out technology, will be discussed in Section 5.3.3.

The next section is dedicated to the other critical component, that is, the absorber.

5.3.2. Energy Absorber and Thermalization Process

Under many respects, the absorber of LTDs plays the most crucial role in calorimetric experiments. First of all, (41) shows that the absorber heat capacity sets the achievable energy resolution. When designing LTDs, usually the absorber is chosen to be made out of a dielectric and diamagnetic material, so that is described only by the Debye term, which is proportional to at low temperatures, and can be extremely small for a good material with large Debye temperature . Insulators and semiconductors are often good examples of suitable dielectric and diamagnetic materials. Metals are instead discarded because of the electron heat capacity, which is proportional to and remains large also at very low temperatures, thereby dominating the total of the absorber. Superconductors are in principle also suitable, since the electronic contribution to the specific heat vanishes exponentially below the critical temperature , and only the Debye term remains. For microcalorimeters, the situation is different because their reduced size allows tolerating also the heat capacity of a metal so that other considerations may be adopted to select the absorber material. Microcalorimeters for calorimetric measurements of the tritium, , or , decay spectra must contain the unstable isotope in their absorbers. As it will be discussed in more detail in the following, while tritium and can be included by various means in materials with no special relation with hydrogen or holmium, is naturally found in physical and chemical forms suitable for making LTDs, that is, superconducting metal and dielectric compounds. In addition to the electronic and phononic heat capacities considered above, other contributions caused by nuclear heat capacity or by impurities may become important in certain conditions [67]. As it will be discussed later, this can be the case for metallic rhenium and embedded .

The above leads to the conclusion that there is a large flexibility in the choice of the material for the absorber of microcalorimeters for calorimetric neutrino mass experiments: dielectrics and normal or superconducting metals have all indeed been used. In spite of this apparent flexibility, in such experiments, it turned out that the ideal energy resolution (41) is quite hard to achieve because of the details of the chain of physical processes which transform the energy deposited as ionization into the altered equilibrium thermal distribution of phonons—that is, above—sensed by the thermometers (see [6770] and references therein). This chain—also called the thermalization process—is responsible for the introduction of a fluctuation in the deposited energy which is finally converted in the measured : the so-called thermalization noise. The chain starts with the hot electron-hole pairs created by the primary ionizing interaction: on a time scale of  s, this energy is degraded and partitioned between colder electronic and phononic excitations by means of electron-electron and electron-phonon scattering. The chain then proceeds with the conversion of the electronic excitations into phonons accompanied by a global cooling of all excitations, and it ends with the new thermal distribution of phonons which corresponds to a temperature increase above the equilibrium operating temperature. The total time scale of this latter process and its details strongly depend on the material. Only when the time elapsed between the primary interaction and the signal formation is long enough to allow the phonon system to relax to the new quasi-equilibrium distribution, the detector works really as a calorimeter. In commonly used thermal sensors, the measured physical quantity is sensitive to the temperature of sensor electrons: therefore, at the end of the thermalization, there must be a last heat flow from the absorber phonons to the sensor electrons through a link which ultimately acts as a throttle for the signal rise. The extra noise shows up every time the deposited energy is not fully converted into heat [56], that is, into the new quasi-equilibrium thermal distribution, and gets trapped in long—compared to thermalization and signal formation time scales—living excitations. In a simplified picture, if is the fraction of deposited energy which actually goes into heat, the achievable energy resolution may be written aswhere is the Fano factor and is the average excitation energy of the long living states. The second term is given by the statistical fluctuation of the number of long living states. The parameters and are peculiar of each type of material. The parameter may depend on the operating temperature and on the signal time scale: often, the thermalization slows down at low temperatures and the signal time scale must be adapted accordingly. Under all these respects, metals are the ideal material because they show fast and complete thermalization at every temperature—that is, is achieved on time scales of the order of nanoseconds or less—thanks to the strong interactions between electrons and phonons. Microcalorimeters with metallic absorbers in electrical contact with the sensor are often called hot-electron microcalorimeters [71, 72]. In hot-electron microcalorimeters the thermalization ultimately warms up the absorber electronic system and the hot absorber electrons can directly warm up the sensor electrons without throttling, therefore showing a very fast response time.

On the contrary, dielectrics often suffer from a large thermalization noise translating in a degraded energy resolution which increases with the deposited energy. In dielectrics, impurities and defects can act as traps which lie energetically inside the forbidden band-gap. Following the primary ionization created by the incident particle, electrons and holes can get trapped before their recombination to phonons. Experimentally, it is found that can be as large as few tens of electronvolts, so that the second term (42) may easily dominate the energy resolution also for values approaching unity. Semiconductors may be better than dielectrics, owing to their smaller band-gap. But only metals, semimetals (such as bismuth), and zero-gap semiconductors (such as HgTe) have been successfully employed in microcalorimeters showing energy resolutions close to the thermodynamical limit (41) [70].

In principle, superconductors should provide a further improvement thanks to their band-gap of few millielectronvolts: unfortunately, the thermalization in superconductors is a complex process in which can be very small. In superconductors, the electronic excitations produced in the thermalization process described above are broken Cooper pairs, also called quasi-particles [7375]. Microscopic calculations from the Bardeen-Cooper-Schrieffer theory predict that, indeed, a large part of the energy released inside the absorber can be trapped in quasi-particles states which can live for many seconds at temperatures below 0.1 K. The energy release inside a superconductor leads to a long living state far from equilibrium in which many Cooper pairs are continuously broken by phonons produced when quasi-particles recombine. A model describing this situation was proposed by Kaplan et al. [76], who found that the time the quasi-particles need to recombine would be somewhere between 1 and 10 seconds. Analogous results were obtained by the analysis of Kozorezov et al. [75]. The global result of these models is that in superconductors is expected to be very small on a time scale useful for an LTD. Despite these theoretical considerations, it is an experimental fact that some superconducting materials perform well as absorbers in cryogenic detectors. Indeed, deviations from the predicted temperature dependence of quasi-particle lifetime have been reported: for example, tin has been used for making LTDs with an energy resolution approaching the thermodynamical limit, thanks to a fast and complete energy thermalization. This is apparently one characteristic shared also by other soft superconductors such as lead and indium. So far, no generally accepted explanation has been given for these apparent discrepancies between experimental results and theory, and the topic of quasi-particles recombination in LTDs remains an active field of research.

There are two other important sources of energy resolution degradation which are often observed in LTDs [56]. The first is the escape from the absorber of high energy phonons during the first stages of the thermalization process, which adds another fluctuation component to the finally thermalized energy. The second is the accidental direct detection of high energy phonons by the thermal sensors, which determines an excess systematic broadening of the energy resolution because its probability varies with the interaction position.

5.3.3. Temperature Sensors, Read-Out, and Signal Processing

The LTDs used for neutrino mass calorimetric measurements fall in the category of the low temperature microcalorimeters and are designed to provide energy resolutions better than about 10 eV, possibly approaching the thermodynamical limit. As shown in Section 5.2, the detector speed—that is, the detector signal bandwidth, or its rise time —is another parameter guiding the design. Furthermore, neutrino mass experiments with LTDs need to use large arrays of detectors. This calls for ease of both fabrication and signal read-out. Along with the selection of the absorber material containing the source, the above points are the main guidelines for the design of an LTD based neutrino mass experiment. The choice of the sensor technology is one of the first steps in the design. To date, only three technologies have been exploited. These are the semiconductor thermistors, the transition edge sensors, and the magnetic metallic sensors, and they will be briefly discussed here (more details can be found in [65]). The possibility of employing other technologies, such as the one of superconducting microwave microresonators, is also investigated, but its perspectives are not clear yet [77]. The application of LTDs to the spectroscopy of and decays fully overlaps the range of use of microcalorimeters developed for soft X-ray spectroscopy; therefore, in the following, the discussion will be restricted to thermal sensors for X-ray detection.

Semiconductor Thermistors. These sensors are resistive elements with a heavy dependence of the resistance on the temperature. Usually, they consist of small crystals of germanium or silicon with a dopant concentration slightly below the metal-to-insulator transition [56]. The sensor low temperature resistivity is governed by variable range hopping (VRH) conduction and it is often well described by the expression , where and are parameters controlled by the doping level [78] (Figure 9). Semiconductor thermistors are high impedance devices—1–100 M—and are usually parameterized the sensitivity , defined as , which typically ranges from 1 to 10. Semiconductor thermistors can be realized also in amorphous film form, like NbSi. Silicon thermistors are fabricated using multiple ion implantation in high purity silicon wafers to introduce the dopants in a thin box-like volume defined by photolithographic techniques. Germanium thermistors are fabricated starting from bulk high purity germanium crystals doped by means of neutron irradiation (nuclear transmutation doping, NTD) [79, 80]. Single NTD germanium sensors are obtained by dicing and further processing using a combination of craftsmanship and thin film techniques. In early times, the weak coupling to the heat sink was provided by the electrical leads used for the read-out; nowadays, microelectronic planar technologies and silicon micromachining are used to suspend the sensors on thin silicon nitride membranes or thin silicon beams. Thermistors are read-out in a constant current biasing configuration which allows converting the thermal signal in a voltage signal (Figure 9). Because of their high impedance, thermistors are best matched to JFETs. Semiconductor thermistor presents few drawbacks. First of all, their high impedance requires the JFET front end to be placed as close as possible—centimeters—to the devices to minimize microphonic noise and bandwidth limitations due to signal integration on parasitic electrical capacitance. Since commonly used silicon JFET must operate at temperatures not lower than about 110 K, this becomes quickly a technical challenge when increasing the number of detectors. Secondly, it has been experimentally observed that conductivity of semiconductor thermistors deviates from linearity at low temperatures [81, 82]. The deviation is understood in terms of a finite thermal coupling between electrons and phonons, whose side effect is to intrinsically limit the signal rise times to hundreds of microseconds for temperatures below 0.1 K. Semiconductors are now an established and robust technology, and arrays of microcalorimeters based on these devices have been widely used for X-ray spectroscopy [65] achieving energy resolutions lower than 5 eV with tin or HgTe absorbers.

Superconducting Transition Edge Sensors (TESs). TESs are also resistive devices made out of thin films of superconducting materials whose resistivity changes sharply from 0 to a finite value in a very narrow temperature interval around the critical temperature (Figure 10). The superconducting material can be an elemental superconductor (such as tungsten or iridium), although it is more often a bilayer made of a normal metal and a superconductor. With bilayers, of the superconductors is reduced by the proximity effect and can be controlled by adjusting the relative thicknesses of the two layers. Common material combinations used to fabricate TES bilayer with between 0.05 and 0.1 K are Mo/Au, Mo/Cu, Ti/Au, or Ir/Au. TES fabrication exploits standard thin film deposition techniques, photolithographic patterning, and micromachining. Sensors can be designed to have, at the operating point, a sensitivity as high as 1000 and a resistance usually less than 1 . The most common ways to isolate TES microcalorimeters from the heat sink are the use of thin silicon nitride membranes or thin silicon beams. TESs are read-out at a constant voltage and their low impedance is ideal to use SQUIDs to amplify the current signal induced by a particle interaction (Figure 10). The constant voltage biasing provides the condition to achieve the extreme electrothermal feedback (ETF) regime [83] which leads to substantial improvements in resolution, linearity, response speed, and dynamic range. This regime also eases the operation of large pixel count arrays because ETF produces a self-biasing effect that causes the temperature of the film to remain in stationary equilibrium within its transition region. With respect to semiconductor thermistors, TESs offer many advantages: (1) large arrays can be fully fabricated with standard micro-fabrication processes, (2) the larger electron-phonon coupling allows signal rising as fast as few microseconds, and (3) the low impedance reduces the sensitivity to environmental mechanical noise. The main drawbacks of TESs are the limited dynamic range, the adverse sensitivity to magnetic fields of TES and SQUID, and the not fully understood physics of superconducting transitions and excess noise sources [66]. TES microcalorimeter arrays are being actively developed as X-ray spectrometers for many applications, which include material analysis and X-ray astrophysics [70]. TES sensors are particularly well suited to be coupled to metallic (gold) or semimetallic (bismuth) absorbers, providing fast response and energy resolutions lower than few electronvolts (Figure 12).

Magnetic Metallic Sensors. These sensors are quite different from the previous two, and their successful development is more recent [84]. They are paramagnetic sensors exposed to a small magnetic field. The temperature rise causes a change in the sensor magnetization, which is sensed by a SQUID magnetometer (Figure 11). The nondissipative read-out scheme avoids the noise sources typical of dissipative systems, such as the Johnson noise of semiconductor thermistors and of TESs. State-of-the-art sensors use paramagnetic ions localized in an Au metallic host (Au:Er sensors). The use of a metallic host ensures a very fast sensor response time, since the spin-electron relaxation time for Au:Er is around 0.1 μs at about 0.05 K. Microcalorimeters with magnetic metallic sensors (Magnetic Metallic Calorimeters, MMCs) are usually fully made out of gold to obtain both a fast and efficient energy thermalization to the absorber electronic system and a quick equilibration with the sensor electrons. Despite its high sensitivity, the paramagnetic sensor has an intrinsically large heat capacity; therefore, the gold absorbers may be relatively large without adversely affecting the MMC performance. These microcalorimeters, in general, do not need special measures for thermally isolating the devices from the heat sink because the signal is predominantly developed in the electronic system and the electron-phonon coupling is rather weak and slow at low temperatures. An interesting feature of MMCs is the availability of a complete and successful modeling, allowing a precise design tailored to each specific application. The microfabrication of MMCs is somewhat more cumbersome than for TES microcalorimeters but, for the large part, can be carried out with a standard microfabrication process [85, 86]. Presently, the most used design for arrays of MMCs has planar sensors on meander shaped pickup coils and achieves record energy resolutions of few electronvolts for soft X-rays (Figure 12) accompanied by large dynamic range and good linearity.

Signal Read-Out. Neutrino mass experiments are necessarily carried out using LTD arrays with a large pixel count, and this calls for the implementation of an efficient multiplexing system for reading out many sensors with the smallest possible number of amplifiers. This, in turn, reduces the number of read-out leads from room temperature to the array and the power dissipation at low temperature. Therefore, in order to be of some use for future experiments, a sensor technology must be compatible with some sort of multiplexed read-out, not causing restrictions on the available signal bandwidth and degradation of the resolving power. This makes the semiconductor thermistors not appealing for sensitive neutrino mass experiments since their inherent high impedance prevents the implementation of an effective multiplexed read-out. The opposite is true for the other two technologies owing to the use of a SQUID read-out.

TES arrays with SQUID read-out can be multiplexed according to three schemes [70]: Time Division Multiplexing (TDM) [87], Frequency Division Multiplexing (FDM) [88], and Code Division Multiplexing (CDM) [89]. The three schemes differ by the set of orthogonal modulation functions used to encode the signals. TDM and FDM (in the MHz band) are the most mature ones, and they have been already applied to the read-out of many multipixel scientific instruments. The more recently developed CDM combines the best features of TDM and FDM and is useful for applications demanding fast response and high resolution.

Recent advancements on Microwave Multiplexing (MUX) suggest that this is the most suitable system for neutrino mass experiments, since it provides a larger bandwidth for the same multiplexing factor (number of multiplexed detector signals). It is based on the use of rf-SQUIDs as input devices, with flux ramp modulation [90] (Figure 13). The modulated rf-SQUID signals can be read out by coupling the rf-SQUID to superconducting quarter-wave coplanar waveguide (CPW) resonators in the GHz range and using the homodyne detection technique. By tuning the CPW resonators at different frequencies, it is straightforward to multiplex many RF carriers. The feasibility of this approach has been demonstrated in [90] only with two channels, but it is making quick progress as shown with the multiplexed arrays of TES bolometers for millimeter astronomy of MUSTANG2 [91].

MUX is suitable for a fully digital approach based on the Software Defined Radio (SDR) technique [92, 93]. The comb of frequency carriers is generated by digital synthesis in the MHz range and upconverted to the GHz range by -mixing. The GHz range comb is sent to the cold MUX chips coupled to the TES array through one semirigid cryogenic coax cable, amplified by a cryogenic low noise High Electron Mobility Transistor (HEMT) amplifier, and sent back to room temperature through another coax cable. The output signal is downconverted by -mixing, sampled with a fast analog-to-digital converter, and digital mixing techniques are used to recover the signals of each TES in the array (channelization).

Because of their excellent energy resolution combined with a very fast response time, the multiplexed read-out of MMCs is more demanding than that of TESs. To date, although some results have been obtained also with TDM, MUX is the most promising approach for multiplexing MMCs [94], even if its development for these devices is still in progress.

Signal Processing. One of the conditions to obtain a thermodynamically limited energy resolution is to process the microcalorimeter signals with the Optimal Filter (OF) [56, 95]. For this purpose, the signal waveforms must be fully digitized and saved to disk for further offline processing. This approach allows one also to apply various specialized signal filters to the same waveform, with the aim of improving time resolution—thereby reducing —rejecting spurious events, and gating background induced events with a coincidence analysis. The storage of the raw data needed for offline signal processing and for building the energy spectrum to be analyzed sets a practical limit to the lower energy limit of the final energy spectrum. While for rare event searches using LTD, such as dark matter or - searches, there is no issue to save digitized waveforms for later offline analysis, this becomes quickly unpractical for sub-eV neutrino mass experiments: in fact the pulses collected over the whole spectrum would amount to about , and their storage could fill up hundreds of petabytes—way more than LHC data! The only viable strategy is then to save just the relevant event parameters calculated by the pulse processing software, such as energy, arrival time, and few more useful shape parameters. In addition, it is likely that only a fraction of the spectrum of the order of 10% will be selected and saved for the neutrino mass analysis. For and , this means that the analysis will be forcibly limited to an energy interval which extends, respectively, about 1200 eV and 750 eV—that is, just to the right of the M1 peak in the spectrum (Section 7.1)—below the spectrum end-point.

5.4. Additional Direct Neutrino Measurements with LTDs

LTD calorimeters can offer the opportunity to perform other interesting investigations on the data collected for the neutrino mass measurement: these include the searches for massive sterile neutrinos (Section 2), when the entire energy spectrum is available for analysis, and for the cosmic relic neutrinos (Cosmic Neutrino Background, CB). These by-products of the neutrino mass measurements are largely out of the scope of the present work and are discussed here briefly only for the sake of completeness.

The calorimetric spectra of or are suitable to investigate the emission of heavy sterile neutrinos with a mixing angle . Assuming the electron neutrino is a mixture of two mass eigenstates and , with masses , then and the measured energy spectrum is . The emission of heavy neutrinos would manifest as a kink in the spectrum at an energy of for heavy neutrinos with masses between about 0 and  keV, where is the experimental low energy threshold. It is worth noting that the strategy is of course very analogous to the one adopted by Simpson [47, 51] which started off the already mentioned saga of the 17 keV neutrino. Moreover, such search may be affected by systematic uncertainties due to the background and due to the ripple observed in the spectrum and caused by the BEFS (Section 6.3). An alternative and possibly more robust approach to the search of sterile neutrino emission in has been proposed in [96].

Cosmology predicts that there are about 55 neutrinos/cm3 in the universe as leftovers of the Big Bang. Their average temperature today is about 1.95 K and therefore their observation is extremely difficult. It has been proposed that the CB could be detected via the induced beta decay on beta decaying isotopes: for example, . This reaction could be detected as a peak at an energy of in beta decay spectra. The expected rate can be calculated starting from the beta decay lifetime, and for 100 g of tritium, it would be of about 10 counts/year [97]. Unfortunately, 100 g is times the amount of tritium contained in KATRIN, and the situation is not more favorable for other isotopes like or . The reactions and are expected to give yearly about [98, 99] and events per gram of target isotope [100, 101], respectively. Recently, a dedicated experiment called PTOLEMY has been proposed [102]: it combines a large area surface-deposition tritium target, the KATRIN magnetic/electrostatic filtering, LTDs, RF tracking, and time-of-flight systems.

The possibility to detect heavy sterile neutrino Warm Dark Matter (WDM) via the above induced beta decays in and has also been investigated, but, again, the expected rates are hopelessly low [103, 104].

6. Past Experiments

6.1. Rhenium Experiments with LTDs

was mentioned in [30] as an interesting alternative to tritium because its transition energy of about 2.5 keV is one of the lowest known. Thanks to this characteristic, the useful fraction of events close to the end-point is 350 times higher for than for tritium. In addition, the half lifetime of about years together with the large natural isotopic abundance (62.8%) of provides useful beta sources without any isotopic separation process. The beta decay rate in natural rhenium is of the order of 1 Bq/mg, almost ideally suited to calorimetric detection with LTDs.

As soon as the idea of developing LTDs for X-ray spectroscopy with energy resolutions caught on, metallic rhenium became one of the most appealing materials also for making X-ray absorbers. First of all metallic rhenium is a superconductor with a critical temperature of about 1.69 K; therefore, ideally, it is a good candidate for photon detection free of thermalization noise. Then, the combination of high , high density ( g/cm3), and high Debye temperature ( K) makes metallic rhenium a unique material for designing X-ray detectors with low heat capacity —that is, high sensitivity—and high photon stopping power. Unfortunately, it soon became clear that metallic rhenium absorbers do not behave as expected and metallic rhenium was abandoned in favor of other more friendly materials as absorber for X-ray microcalorimeters. The results of early efforts on rhenium absorbers for X-ray microcalorimeters are reported in [105, 106]. The long time constants (up to 100 ms) and a significant deficit in the signal amplitude are the distinguishing features of microcalorimeters with metallic rhenium absorbers. In the same years, the Genova group was finding similar results, as discussed below.

Although the explanation of the observed behavior most probably resides in the superconductivity of rhenium, also the heat capacity may contribute to poor and inconsistent performance. In fact, according to [107] the specific heat of rhenium in the normal state is given bywhere the last two terms are the contributions from normal conduction electrons and phonons, respectively. The first two terms are due to the nuclear heat capacity, which arises from the interaction between the large nuclear quadrupole moment of the two natural isotopes of rhenium—both with nuclear spin of 5/2—and the electrical field gradient at the nucleus in the noncubic rhenium lattice. When rhenium is in the superconducting state, the nuclear heat capacity term should vanish since the slow spin-lattice relaxation thermally isolates the nuclear spin system. In the superconducting state, a irreproducible small fraction of the normal-state nuclear heat capacity may still be observed if trapped magnetic flux causes regions in the specimen to remain normal.

In spite of these difficulties, research on LTDs with metallic rhenium went on for the purpose of making detectors for calorimetric neutrino mass experiments, although other dielectric materials were also tested (see Section 6.5).

6.2. Beta Decay Spectrum

beta decayis a unique first forbidden transition. Unlike nonunique transitions, the nuclear matrix element is computable, even if the calculation is not straightforward as in the case of tritium. In the literature, it is possible to find detailed calculations of both the matrix element and the Fermi function for this process [108, 109]. The electron and the neutrino are emitted in and states, respectively, or vice versa. Higher partial waves are strongly suppressed because of the low transition energy. The distribution of the kinetic energies of the emitted electrons, calculated neglecting the neutrino mixing, is as follows (according to [108]):where is the electron momentum, is the neutrino momentum, and and are the relativistic Coulomb factors, which take into account the distortion of the electron wave function due to the electromagnetic interaction of the emitted electron in and states with the atomic nucleus. In general, the Coulomb factor takes the formwithwhere is the gamma function, is the fine structure constant, and is the nuclear radius. It can be found numerically that component of the spectrum is dominant; that is, [108]This has been confirmed experimentally in [110] (see Section 6.5). It can also be shown that (45) can be approximated by the expression where the correction factor is shown in Figure 14.

6.3. Statistical Sensitivity and Systematics

An accurate assessment of calorimetric neutrino mass experimental sensitivity requires the use of Monte Carlo frequentist approach [54]. The parameters describing the experimental configuration are the total number of decays , the FWHM of the Gaussian energy resolution , the fraction of unresolved pile-up events , and the radioactive background . The total number of events is given by , where , , and are the total number of detectors, the beta decay rate in each detector, and the measuring time, respectively. As discussed in Section 5.2, , where is the time resolution of the detectors. function is usually taken as a constant, , where is the average background count rate for unit energy and for a single detector, and is the experimental exposure. A set of experimental spectra are simulated and fitted with , , , , and as free parameters. The 90% CL statistical sensitivity of the simulated experimental configuration can be obtained from the distribution of found by fitting the spectra. The statistical sensitivity is then given by , where is the standard deviation of distribution.

The symbols in Figure 15 are the results of Monte Carlo simulations for various experimental parameters and for compared to the analytic estimate.

As an example, Table 1 reports two experimental configurations which could allow one to achieve statistical sensitivities of about 0.2 and 0.1 eV, respectively. The two sensitivities could be attained measuring for 10 years, respectively, and detectors, while the total mass of natural metallic rhenium in the two cases would be about 400 g and 3.2 kg, respectively.

A flat background remains almost negligible as long as it is much lower than the pile-up contribution at the end-point; that is, . For the two experiments in Table 1, this translates in a constant background, lower than about and  counts/day/eV, respectively, which should be achievable without operating the arrays in the extreme low background conditions of an underground laboratory.

Given the strong dependence of the sensitivity on the total statistics, for a fixed experimental exposure —that is, for a fixed measuring time and a fixed experiment size—and for fixed detector performance and , it always pays out to increase the single detector activity as high as being technically feasible, even at the expenses of an increasing pile-up level. Of course, since the rhenium specific activity is practically fixed, the ultimate limit to is set by the tolerable heat capacity of the absorber.

With the same Monte Carlo approach, it is also possible to investigate the source of systematic uncertainties peculiar to the calorimetric technique. As shown in [54], it appears that the most crucial and worrisome sources of systematics are the uncertainties related to the Beta Environmental Fine Structure (BEFS), the theoretical spectral shape of the beta decay, the energy response function, and the radioactive background. These sources are briefly discussed in the following.

The BEFS is a modulation of the beta emission probability due to the atomic and molecular surrounding of the decaying nuclei [53], which is explained by the electron wave structure in terms of reflection and interference. The BEFS oscillations depend on the interatomic distance, while their amplitude is tied to the electron-atom scattering cross section: although the phenomenon is completely understood, its description is quite complex and the parameters involved are not all known a priori. So far, it could be detected only in the low energy region— keV—of the spectra, where both the beta rate and the BEFS are larger, but as far as the effect on the neutrino mass determination is concerned, the oscillation extends way up to the end-point (Figure 16). For a safe extrapolation up to the end-point and to minimize the systematic uncertainties, the BEFS must be characterized using much higher statistics beta spectra and independent EXAFS analysis of the material containing rhenium.

The theoretical description of the decay spectrum given in Section 6.2 is slightly contradicted by experimental observation, since available high statistics spectra are in fact better interpolated as , that is, with . This deviation from theory has not found a plausible explanation yet (Fedor Šimkovic, private communication) and it will become troublesome when larger statistics experiments will call for more accurate description of the spectrum.

The detector response function is probed by means of X-ray sources which are not exactly monochromatic and which do not replicate the same type of interactions in the absorber as for beta decay. In fact, the X-ray interactions happen at a shallow depth, whereas the beta decays are uniformly distributed in the volume; moreover, in the case of X-rays, the energy is deposited by a primary photoelectron followed by a cascade of secondary X-rays and Auger electrons, whereas in beta decay all the energy is deposited along one single track. It is therefore extremely important—yet challenging—to fully understand the measured response function in order to disentangle the contributions to its shape caused by the external X-rays.

In calorimetric experiments, since the beta source cannot be switched off, the environmental and cosmic background in the energy range of the beta spectrum cannot be directly assessed. Therefore, a constant background is usually included in the fit model as the safest hypothesis. This hypothesis may happen to be not accurate enough for future high statistics measurements.

6.4. MANU

The research program in Genova which leads to the MANU experiment started in 1985 [59] with a focus on the use of metallic rhenium absorbers. At that time, there was absolutely no knowledge about the behavior of this material as absorber for LTDs. Therefore, the first years were devoted to study the heat capacity and the thermalization efficiency of metallic rhenium.

The outcomes of the preliminary phase are summarized in [112]. The thermalization efficiency was studied for many superconductors in form of small single crystals (cubic millimeters) of Al, Pb, In, Ti, Nb, Va, Zn, and Re, and a quasi-universal dependence on the ratio was found, with dropping sharply for lower than about . In particular, rhenium thermalization was investigated for single- and polycrystals between 50 mK and 200 mK. The rise time was limited to about 200 μs, preventing the assessment of the thermalization efficiency at shorter time scales. Also, for rhenium, it was found that full thermalization is attained for an operating temperature above about 83 mK. The effect of magnetic fields was also investigated, and an unexpected and unexplained reduction of for magnetic fields increasing up to 20 Gauss was found.

The first observation of spectrum was reported in [113]. After this, a period was spent to optimize the microcalorimeter performance, also exploiting the gained understanding of metallic rhenium absorbers: energy resolutions as good as about 30 eV were demonstrated with small—about μg—rhenium absorbers. In 2001, the results of the high statistics measurement of MANU were published [114]. The MANU experiment’s microcalorimeter was a NTD germanium thermistor coupled with epoxy resin to a 1.572 mg rhenium single crystal. Two ultrasonically bonded aluminum wires provided both the path for the electrical signal and the thermal contact to the heat sink at 60 mK (Figure 17). The detector had a thin shield against environmental radiation made out of ancient Roman lead. A weak source ( counts/s) allowed the monitoring of the gain stability during the measurement, while the energy calibration was established through a removable fluorescence source emitting the K lines of Cl, Ca, and K. Signals were read out by a cold stage with unitary gain using a JFET at about 150 K, digitized at 12 bits in 1024 long records, and processed with an Optimal Filter. Further processing was used to detect pile-up events [115]. The high statistics measurement lasted for about 3 months and the detector performance is listed in Table 2 [114]. Fe  line had a perfectly Gaussian shape with tails lower than 0.1%. The calibration with the fluorescence source showed that the energy resolution is practically constant with energy, and the deviation from linearity of the energy response was of about 0.16% at spectrum end-point.

The fit of the spectrum (Figure 18) gave a squared neutrino mass of  eV2, which translated in an upper limit at 95% CL or 19 eV at 90% CL [116].

The results reported in [114] were the most precise measurements of the transition energy and of half-life at the time of publishing; half-life in particular is of great interest for geochronology for determining the age of minerals and meteorites.

This high statistics measurement allowed also the first time observation of the BEFS [118] and the setting of a limit for the emission of sterile neutrinos with masses below 1 keV [119] (Figure 19).

6.5. MIBETA

The Milano program for a neutrino mass measurement with started in 1992 with an R&D to fabricate silicon implanted thermistors in collaboration with FBK [121]. The final objective was to make large arrays of high resolution microcalorimeters using micromachining [122]. NTD germanium based microcalorimeters were also tested. In light of the encouraging results obtained at Genova, at first, the program concentrated on metallic rhenium absorbers. Many single- and polycrystalline samples were tested with disappointing results: small signals, long time constants, and inconsistently varying pulse shapes. A possible correlation with the sample purity and with residual magnetic fields was individuated, but this was not enough to improve the results. Better detector responses were seen only at temperatures approaching 200 mK, too high for obtaining the necessary sensitivity as in (40). The research program therefore moved onto the systematic testing of dielectric rhenium compounds as microcalorimeter absorbers. From the beginning, the most suitable compounds looked like those based on the (perrhenate) anion. A nonexhaustive list of tested compounds includes , , , , and . The tests with failed since this compound sublimates in vacuum at room temperature. The second-to-fourth materials, despite the good theoretical expectations and the large signal-to-noise ratio, showed a quite poor energy resolution—exceeding 100 eV at 6 keV—which could be explained to be due to a large thermalization noise.

Silver perrhenate , on the other hand, immediately exhibited good properties with limited thermalization noise. The calibration peaks were sufficiently symmetric, and energies resolutions as good as 18 eV FWHM at 6 keV were achieved. crystals are transparent, crumbly, and slightly hygroscopic, with a specific activity of about  Hz/μg [123].

The MIBETA experiment ran between 2002 and 2003 an array of 10 microcalorimeters for a high statistics measurement, which was preceded by a campaign of measurements dedicated to tuning the set-up and to reducing the background [124].

The array was made of AgReO4 crystals with masses ranging from 250 to 300 μg to limit event pile-up, for a total mass of 2.683 mg. The crystals were attached to silicon implanted thermistors with epoxy resin, and four ultrasonically bonded aluminum wires were used both as signal leads and heat links to the heat bath, stabilized at 25 mK (Figure 20). The 10 microcalorimeters were enclosed in two copper holders without lead shielding, to avoid the background caused by lead fluorescence at 88 keV, which in turn provokes escape peaks in very close to the beta end-point. The stability and performance of all detectors were monitored with a movable multiline fluorescence source at 2 K, which was activated for 25 min every 2 h to emit the K lines of Al, Cl, Ca, Ti, and Mn. When not used for the calibration, the primary source was pulled inside a massive shield of ancient Roman lead [125], in order to minimize the contribution to the radioactive background caused by IB of . The data acquisition program controlled the movements of the source and tagged the events collected during the calibrations. The first stage of the electronic chain used 10 JFETs cooled to about 120 K and placed few centimeters below the detectors. A 16-bit data acquisition system digitalized and saved to disk the signals for an Optimal Filter based offline analysis.

The high statistics measurement of MIBETA lasted for about 7 months. In the final analysis, the data from two detectors, with poorer energy resolution, were not included. The total active mass was therefore 2.174 mg, for a activity of 1.17 Bq. The final beta spectrum obtained from the sum of the 8 working detectors corresponds to about 8745 hours×mg [117]. The performance of the detectors was quite stable during the run and is reported in Table 2.

All X-ray peaks in the calibration spectrum showed tails on the low energy side, and the thermalization noise of caused their width to increase with the energy (Figure 21). The fit of spectrum (Figure 21) gave a squared neutrino mass of , which translates in an upper limit at 90% CL [117]. The systematic error was dominated by the uncertainties on the energy resolution function, on the background, and on the theoretical shape of the spectrum.

Additional lower statistics measurements with the same set-up were carried out to study and reduce the background and to investigate the energy response function. In particular, using the escape peaks caused at about 17 keV by the irradiation with a gamma source (see Figure 22) as comparison, it was possible to partly understand the complex shape of the X-ray calibration peaks and to establish that at least the longest of the observed tails were due to surface effects [126].

Although BEFS (see Section 6.3 and Figure 16) is almost one order of magnitude fainter in compared to metallic rhenium, it was observed also in the high statistics spectra of MIBETA [110] (Figure 22). In particular, the BEFS ripple interpolation returns a to branching ratio in beta emission of about , which is compatible with the expected prevalent emission (see Section 6.2).

6.6. MARE

The MANU and MIBETA results, together with the constant advance in the LTD technology, made it reasonable to propose a larger scale project: the Microcalorimeter Arrays for a Rhenium Experiment (MARE). The ambition of MARE was to establish a sub-eV neutrino mass sensitivity through a gradual deployment approach. The project was started in 2005 by a large international collaboration [127, 128] and it was organized in two phases.

The final objective of a sub-eV statistical sensitivity on the electron neutrino mass was the goal of the second phase. To accomplish this, the program was to gradually deploy several large arrays—about elements each—of detectors, with energy and time resolutions of the order of 1 eV and 1 μs, respectively. Each pixel was planned to have a source activity of about few counts per second in order to collect a total statistics of about beta decays in up to ten years of measurement time (see Figure 23) [127]. Figure 23 shows also the MARE sensitivity to the emission of heavy sterile neutrinos with masses below 2 keV (Section 5.4).

Phase 1—also called MARE-1—had the task to ascertain the most suitable technical approach for the final experimental phase, also with the help of smaller scale experiments. An R&D program was started with the aim of improving the understanding of the superconducting rhenium absorbers and of their optimal coupling to sensors and developing the appropriate array technology and multiplexed read-out scheme [127]. At the same time, two intermediate size experiments carried out with the available technologies aimed to reach a neutrino mass sensitivity of the order of 1 eV and to improve the understanding of all the systematics peculiar of the calorimetric approach with . Furthermore, MARE-1 started to explore the alternative use of for a calorimetric measurement of the neutrino mass. Given the unavoidable competition with the KATRIN experiment, the time schedule for MARE was quite tight.

The physics of metallic rhenium as absorber for the MARE detectors was the focus of the Genova and the Heidelberg groups. The best technologies available for the MARE-2 arrays were (1) the transition edge sensors (TESs) with Frequency Division Multiplexing, investigated by the Genova group and Physikalisch-Technische Bundesanstalt (PTB, Berlin, Germany); (2) the Metallic Magnetic Calorimeters (MMCs) with Microwave SQUID Multiplexing, developed by the Heidelberg group; and (3) Microwave Kinetic Inductance Detectors (MKID) with Microwave Multiplexing, explored by the Milano group. The Genova group, in collaboration with Miami and Lisbon, planned an experiment consisting of an array of 300 TES detectors with 1 mg rhenium single crystals [129]. With energy and time resolutions of about 10 eV and 10 μs, respectively, the sensitivity attainable in 3 years of measuring time was estimated to be around 1.8 eV at 90% CL, for a statistics of about decays. The Milano group, together with the NASA/GSFC and Wisconsin groups, deployed an array of silicon implanted thermistors coupled to absorbers. The experiment used 8 of the 36 pixel arrays that NASA/GSFC had developed for the XRS2 instrument [130]. With 288 pixels attached to about 500 μg crystals, and with energy and time resolutions of about 25 eV and 250 μs, respectively, a sensitivity around 3.3 eV at 90% CL was expected in 3 years of measuring time, with a statistics of about decays.

Unfortunately, the MARE-1 outcomes were quite disappointing, and MARE-2 ended up being cancelled before taking off. The lack of success of the MARE initiative was mostly the consequence of the final acknowledgment of the impossibility to fabricate rhenium microcalorimeters matching the specifications set by the aimed sub-eV sensitivity. The systematic investigations carried out at Heidelberg with rhenium absorbers coupled to MMC, despite some noteworthy progress, arrived to conclusions similar to those of past works: rhenium absorbers behave inconsistently, showing a large deficit in the energy thermalization accompanied by long time constants [131]. Therefore, the challenging idea of improving and scaling up the pioneer experiments using metallic rhenium absorbers turned out to be a dead-end road.

Indeed, also the other experimental efforts of MARE-1 encountered several difficulties [132]. For example, the setting up of arrays of crystals turned out to be more troublesome than expected (Figure 24). The freshly polished surfaces of crystals shaped to cuboids resulted to be incompatible with the sensor coupling methods used successfully in MIBETA with as-grown small crystals. Despite the use of a micromachined array of silicon implanted thermistors, the performance of the pixels was irreproducible and, while gradually populating and testing the XRS2 array with crystals, the performance of the instrumented pixels started to degrade. This made the array finally unusable. Given the sorts of the MARE project, also this branch of the MARE-1 program was thus dropped in 2013.

6.7. Future of Rhenium Experiments

From the MARE experience, it is clear that a large scale neutrino mass experiment based on beta decay is not foreseeable in the near future. It would require a major step forward in the understanding of the superconductivity of rhenium, but, after more than 20 years of efforts, this is not anymore in the priorities of the LTD scientific community. Besides the intrinsic problems of metallic rhenium, there are other considerations which make rhenium microcalorimeters not quite an appealing choice for high statistics measurements. Because of the large half-life of , the specific activity of metallic rhenium is too low to design pixels with both high performance and high intensity beta sources. activity required by a high statistics experiment must be therefore distributed over a large number of pixels—of the order of —while the difficulties inherent with the production of high quality metallic rhenium absorbers contrast with the full microfabrication of the arrays. MARE-1 also demonstrated that is not a viable alternative to metallic rhenium. For these reasons, the new hope for a calorimetric neutrino mass experiment with LTDs is .

7. Current Experiments

7.1. Calorimetric Absorption Spectrum of   EC

De Rújula introduced the idea of a calorimetric measurement of EC decay already in 1981 [30], but it was only one year later that this idea was fully exploited in the paper written by Lusignoli [43]. The EC decayhas the lowest known value, around 2.5 keV, and its half-life of about 4750 years is much shorter than one. In [43], the authors compute the calorimetric spectrum and give also an estimate of the statistical sensitivity to the neutrino mass at the spectrum end-point, including the presence of the pile-up background. Unfortunately, at that time, the experimental measurements of the value were scattered between 2 keV and 3 keV causing large uncertainties on the achievable statistical sensitivity.

A calorimetric EC experiment records all the deexcitation energy and therefore it measures the escaping neutrino energy ; see (17). The deexcitation energy is the energy released by all the atomic radiations emitted in the process of filling the vacancy left by the EC decay, mostly electrons with energies up to about 2 keV (the fluorescence yield is less than ) [32]. The calorimetric spectrum has lines at the ionization energies of the captured electrons. These lines have a natural width of a few eV; therefore, the actual spectrum is a continuum with marked peaks with Breit-Wigner shapes (Figure 25). The spectral end-point is shaped by the same neutrino phase space factor that appears in a beta decay spectrum, with the total deexcitation energy replacing the electron kinetic energy . For a nonzero , the deexcitation (calorimetric) energy distribution is expected to bewhere (with the Fermi constant and the Cabibbo angle ), is the binding energy of the th atomic shell, is the natural width, is the fraction of occupancy, is the nuclear shape factor, is the Coulomb amplitude of the electron radial wave function (essentially, the modulus of the wave function at the origin), and is an atomic correction for electron exchange and overlap. The sum in (51) runs over the Dy shells which are accessible to the EC with the available (M1, M2, N1, N2, O1, O2, and P1). The expression (51) is derived in [43], where numerical checks to test the validity of the approximations made are also presented.

Until about 2010, only three calorimetric absorption measurements were reported in the literature:(1)the ISOLDE collaboration used a Si(Li) detector with an implanted source [40, 133];(2)Hartmann and Naumann used a high temperature proportional counter with organometallic gas [42];(3)Gatti et al. used a cryogenic calorimeter with a sandwiched source [134].However, none of these experiments had the sensitivity required for an end-point measurement; therefore, they all gave results in terms of capture rate ratios. The most evident limitations of these experiments were statistics and energy resolution. One further serious trouble for Si(Li) and cryogenic detectors was the incomplete energy detection caused by implant damage and weak thermal coupling of the source, respectively.

Recently, a new generation of calorimetric holmium experiments has been stimulated by the MARE project. In fact, despite the shortcomings of the previous calorimetric experiments and theoretical and experimental uncertainties, a calorimetric absorption experiment seems the only way to achieve sub-eV sensitivity for the neutrino mass. Moreover, low temperature X-ray microcalorimeters have reached the necessary maturity to be used in a large scale experiment with good energy and time resolution; hence, they are the detectors of choice for a sub-eV holmium experiment.

Thanks to the short lifetime, the limited number of nuclei needed for a neutrino mass experiment— nuclei for 1 decay/s—can be introduced in the energy absorber of a low temperature microcalorimeter. Therefore, holmium experiments can leverage the microcalorimeters development for high energy resolution soft X-ray spectroscopy, whereas rhenium experiments would need a dedicated development of detectors with metallic rhenium absorbers. Small footprint kilo-pixel arrays can be fully fabricated with well established microfabrication techniques.

Indeed, in microcalorimeters with metallic absorbers such as gold, the relatively high concentration of holmium could cause an excess heat capacity due to hyperfine level splitting in the metallic host [67] and thereby degrade the microcalorimeter performance. Low temperature measurements have been already carried out in the framework of the MARE project to assess the gold absorber heat capacity (at temperatures  mK), both with holmium and with erbium implanted ions [135]. Those tests did not show any excess heat capacity, but more sensitive investigations need to be carried out.

The Genova group pioneered the application of LTDs to the measurement of the calorimetric spectrum of [134] and continued this research until it converged in the MARE project [136]. For long, the focus has been on the production of isotope, on the chemistry of metallic holmium, and on the techniques to embed the isotope in the detector absorbers.

The new experiments, now ready to start the production of high resolution detectors for the high statistics calorimetric measurement of EC decay, will be the subject of the next sections.

7.2. The Value of   Decay

Until very recently, the question of the exact value of the EC decay was not settled. Although the results showed a general tendency to accumulate around 2.8 keV, especially restricting to the calorimetric measurements [40, 131, 134], the reliability of the capture ratios as tool for determining remained questionable. Indeed, has never been measured directly from the end-point of EC spectrum, but only from the capture ratios , whose accuracy is limited (Section 4). The currently recommended value of is  keV [137], but it is deduced from a limited set of data. The statistical sensitivity of experiments depends strongly on how close the end-point and the M1 capture peak are. To a good degree of approximation, the Lorentzian tail of the M1 peak centered at dominates the end-point, and for equal to zero one haswhere and . It can be shown that, in these conditions, the neutrino mass sensitivity is . The uncertainty on , therefore, turns into the difficulty to design experiments and to predict their sensitivity reach (Figure 26). Indeed, the shift of attention from to has been eased by the reasonable hope that a very low could greatly enhance the achievable sensitivity of experiments.

Very recently, value was determined from a measurement of the - mass difference using the Penning trap mass spectrometer SHIPTRAP [140]. The measured value confirms the most recurrent measured in recent calorimetric experiments, although chemical shifts may still be expected for embedded in the LTD absorbers. The knowledge of the value is indeed a crucial ingredient for the optimal design of an experiment, while its limited precision and accuracy prevent from using it as fixed parameter when the experimental data are interpolated to assess the neutrino mass [141]. Nevertheless, a comparison of from the interpolation with a value obtained with an independent measurement—such as the - mass difference—is a powerful tool to pinpoint systematic effects.

In any case, the direct assessment of from the end-point of the calorimetric spectrum remains the first important goal of the upcoming high statistics measurements.

7.3. Statistical Sensitivity

While the complexity of both the EC and the pile-up spectra makes an analytical estimate of the statistical sensitivity an impossible task, a Monte Carlo approach analogous to the one described in Section 6.3 can give useful results [142]. Most of the considerations made for are also valid in the case of . The general conclusion about the importance of the total statistics is well exemplified by Figure 27 for the now established value of 2800 eV: indeed, the high value raises the stakes of the experimental challenge and the prospects for a sub-eV sensitivity are scaled down. Table 3 shows the exposures required for two possible experiments aiming at a sensitivity of 0.2 and 0.1 eV, respectively (see also Figure 28). Although it may be possible to design microcalorimeters with a high activity, sub-eV sensitivity will likely require arrays with total number of channels of the order of . Indeed, there are several limitations to the possible activity , such as the effect of nuclei on the detector performance or detector crosstalk and dead time considerations. As shown in Section 6.3, a high activity causes an increase of and thereby a reduced sensitivity to radioactive background. This, along with the relative thinner aspect ratio of microcalorimeters with , makes it likely that it is not strictly necessary to operate arrays in an underground laboratory [142].

No high statistics measurement faced so far the task of a careful estimation of systematic uncertainties. Nevertheless, it is fair to say that there are some substantial differences between the systematic uncertainties expected for and experiments, which are worth mentioning. To avoid spectral distortions due to the escape of radiation, the absorbers must provide encapsulation with a minimum thickness of few microns. For gold absorbers, Monte Carlo simulations indicate a thickness of 2 m for a 99.99998% (99.927%) absorption of 2 keV electrons (photons).

Furthermore, M1 and M2 peaks in the calorimetric spectrum provide a useful tool to evaluate the detector response function overcoming the problems related to the use of an external X-ray source (Section 6.3). The same peaks can also be exploited for energy calibration, for tracking and correcting gain drifts, and for easing the summation of the spectra measured with the many pixels of the arrays.

The following section addresses the accuracy of the description of the calorimetric spectrum of which is likely to be a relevant source of systematic uncertainties.

7.4. A Better Description of the EC Spectrum

While the question of the actual value of the EC transition is now settled, many authors are still debating about the precise shape of the calorimetric spectrum. Indeed, (51) is only an approximation. Already in the original work [43], the applicability of two approximations was demonstrated: the neglect of possible interference between the capture from different levels and the inclusion of transitions with off-shell intermediate states such as K and L.

Riisager in 1988 [143] discussed the distortions of the Lorentzian peak shape expected when considering that, in the atomic radiation cascade, the atomic phase space available at each step is altered by the natural width of previous transitions.

More recently, beginning with Robertson papers [144, 145], some authors started to recognize that the sum in (51) must be extended to more transitions which initially were deemed as negligible [146]. This is caused by the incomplete overlap between Ho and Dy atomic wave functions. Recalling that calorimeters measure the neutrino energy , while writing (51), it was assumed thatwhere is the binding energy of shell H in a Dy atom, that is, the energy to fill the hole H in atom. But this is not correct. The hole H is in a neutral Dy atom with an extra (the eleventh) 4f electron, because the parent Ho atom has an electronic configuration which differs from the one of Dy in the number of electrons (11 versus 10) (see also [147]). Following [31], this can be expressed aswhere is a correction which accounts for the imperfect atomic wave functions overlap. So, the capture peaks in the calorimetric spectrum are expected to be shifted by a small amount which Robertson [145] calculated as , where is the binding energy of the electron in the Ho atom. The atomic wave function mismatch goes along with the possibility of shake-up and shake-off processes, adding more final states to the EC transition and, therefore, more terms in the sum in (51). These processes are the ones responsible for the presence in the final states of two (or more) vacancies created in the Dy atom, along with the extra electron. The second vacancy is left by an atomic electron which has been shaken by the sudden change in the wave functions to a higher bound unoccupied state (shake-up) or to the continuum (shake-off). In the case of shake-up processes, the neutrino energy is given by [145]and the contribution to (51) is just another Lorentzian peak term with . The case of the shake-off process is more complex because it is a three-body processThe corresponding contribution to (51) is not a narrow line since adds up to the observable atomic deexcitation . The actual shape of the energy spectrum of the shaken-off electrons can be calculated as shown in [148].

In general, the probability for the multihole processes is small and it can be estimated to be of the order [146]. The precise calculation of the 2- or 3-hole processes probability is treated in many recent papers [145, 149, 150], with the purpose to improve past results from [151], although, so far, all calculations apparently consider only shake-up processes. In Figure 29, the dashed line is the EC calorimetric spectrum calculated including 2-hole excitations and using the parameters calculated in [149].

The very recent paper [152] extends the work presented in [148] and attempts also to assess the effect of the so far neglected shake-off processes on the end-point region of the spectrum. Although the authors state that their preliminary theoretical estimates should not be fully trusted, the intriguing result of their analysis is that the end-point count rate might be largely dominated by the shake-off processes, with a predicted enhancement of about a factor 40.

The awareness of all the above corrections to (51) triggered some skepticism about the actual feasibility of a neutrino mass measurement from the end-point of the calorimetric EC spectrum. The main argument is that, since the neutrino mass is searched as the difference between the observed experimental spectrum and the theoretical one for , the a priori knowledge of the latter one is an absolute condition. Indeed an inaccurate and unreliable theoretical description of all these additional spectral components in the end-point region may induce systematic uncertainties in the neutrino mass determination. However, it can be argued that in the very last portion of the spectrum—where the phase space factor would be affected by —the various suggested spectral components are smooth and featureless and could be safely obtained by extrapolating the high statistics data collected on the right shoulder of M1 peak—where all the 2-hole processes leave their footprints.

The multihole processes bring also a more subtle threat to the neutrino mass measurements with the underlying pile-up spectrum: as it can be seen in Figure 29, these higher order transitions cause additional peaks to appear in the end-point vicinity. Although the shape of the pile-up spectrum with its disturbing peaks can be accurately deduced from the high statistics measurement of the spectrum of itself, it is worth noting that in [152] it is shown that the predicted count rate enhancement in the end-point region of the spectrum is accompanied by an increase of the signal-to-background ratio. Moreover, when considering the shake-off transition, most of the additional peak features in the pile-up spectrum are smoothed away.

In the near future, the experiments getting prepared for the neutrino mass measurement will explore the spectrum above the M peaks with increasing statistical precision thereby shedding a light on its actual shape. This will allow a more accurate prediction of the statistical sensitivity and a meaningful attempt to quantify the possible systematic uncertainties.

So far, the attention focused on the high order corrections to the spectrum due to the atomic processes following the EC, while the solid state effects such as the BEFS have been neglected. Although the relatively lower importance of these effects with respect to the multihole transitions presently justifies postponing their analysis, in order to properly analyze high statistics neutrino mass measurements, it will be mandatory to fully assess how the BEFS manifests itself. case is more complex than one (Section 6.3). One marked difference is that the calorimetric spectrum of is a sum of few different spectra, each made up of few different sequences of atomic transitions filling one or more holes, while the spectrum of is just one beta transition. The energy spectrum of every atomic electron ejected to the continuum is modulated by the BEFS, but the largest contributions for are expected from the interference patterns in the spectra of the electrons shaken-off in the 2-hole processes with one M1 hole or M2 hole. However, it is likely that in the calorimetric spectrum of the pattern resulting from the summation of the many independent contributions is going to be smoothed and diluted. Moreover, the BEFS depends on the matrix hosting nuclei—which is gold for most of the planned experiments—and on the exact position occupied host lattice by the nucleus. In particular, the BEFS amplitude depends on the lattice structure of the environment closely surrounding the nuclei which is determined by the type of detector absorber and can be affected by the isotope embedding technique. For example, the local damage associated with ion implantation may suppress the BEFS.

7.5. Production

is not a naturally occurring isotope: it was discovered at Princeton in a sample of that was neutron irradiated in a nuclear reactor [153]. To carry out neutrino mass experiments with , the isotope must be produced in fairly large amounts. Upcoming medium size experiments will have to contain about nuclei of —that is, about 3 μg—for a total activity of the order of  Bq. The isotope production and separation are critical steps in every plan for an ambitious holmium neutrino mass experiment. There are many nuclear reactions which can be exploited to produce . A comprehensive critical evaluation of all possible production routes is presented in [154] although, presently, not all the cross sections of the considered processes are experimentally known.

In general, the production process starts with a nuclear reaction, which can be either direct—such as —or indirect—such as . These reactions unavoidably coproduce other long living radioactive species—also owing to the presence of unnecessary isotopes in the target material—which need to be removed to prevent interferences to the neutrino mass measurement. Chemical separation of holmium can remove most of them, with the notable exception of the beta decaying isomer ( years,  keV). Geant4 Monte Carlo simulations performed for gold absorbers show that each Bq of can contribute, by about 1 count/eV/day, to the background level in the end-point region of spectrum [155]. Therefore, must be removed by means of a further isotope mass separation step. The key parameters of the entire process are the isotope production rate, the / ratio, and the efficiencies of chemical and mass separations. They determine the amount of starting material that is required to have the target number of nuclei to be embedded in the detector absorbers. Of course, also the final embedding process causes further isotope losses which must be considered, although in some approaches the embedding is part of the production process—for example, when the embedding is achieved by means of the same accelerator used for mass separation. When all efficiencies entering in the process are considered, the needed for the next high statistic measurements is likely to increment to tens or hundreds of MBq.

Early experiments used the same process with which the isotope was discovered, that is, neutron irradiation of . Another route used for past experiments is based on proton spallation with Ta targets. The experiments presented in the following use either neutron irradiation of enriched targets or proton irradiation of natural Dy targets.

Neutron irradiation of an enriched sample is a very efficient route. The starting material is usually enriched which is available as by-product of the production of isotopes for medical applications. The large thermal neutron cross section  barns together with the availability of high thermal neutron flux nuclear reactors—as the one of the Institut Laue-Langevin (ILL, Grenoble, France) with a thermal neutron flux of about  n/s/cm2 [156]—gives an estimated production rate of about 50 kBq()/week/mg, for enriched at 30% in . This rate may be reduced by the yet unknown cross section of the burn-up process . Neutron irradiation causes also the production of owing to the presence of impurities such as and in the enriched target. If the route prevails, for a 10% isotopic abundance in the target, a coproduction of about can be expected. One drawback of this route is the cost for the enriched procurement.

production via proton irradiation of natural Dy target depends on the proton energy and has a production rate which is not competitive with high-flux reactors, especially for large amounts. In [154], the production rate as a function of the total cumulative charge is estimated to be about few Bq()/Ah/g for 24 MeV protons. is produced by the neutrons from the reaction in captures on or on contaminations. Monte Carlo simulations give a coproduction lower than . In spite of its low efficiency, the use of a natural target and the limited coproduction make this route appealing for small scale experiments.

7.6. ECHo

ECHo is a project carried out by the Heidelberg group in collaboration with many other European and Indian groups [157]. The midterm goal of this project—ECHo-1k—is a medium scale experiment with an array of 1000 MMCs, each implanted with 1 Bq of [158]. With energy and time resolutions of at least 5 eV and 1 μs, respectively, a statistical sensitivity of about 20 eV at 90% CL is expected after one year of measurement (Table 4). The microcalorimeters are derived from the gold detectors with Au:Er sensors designed and fabricated by the Heidelberg group for soft X-rays spectroscopy.

So far, the results of two prototypes with in the absorbers have been presented. For the first prototype, isotope was implanted at the isotope separation online facility ISOLDE (CERN). Here, produced by spallation with protons on Ta was accelerated, mass separated, and implanted in the absorbers of four detectors. A total activity of  Bq was enclosed between two gold films with dimensions . The results of the characterization of these detectors are reported in [131, 159] and include an energy resolution of about 8 eV and a remarkable rise time of about 130 ns. In the high statistics spectrum, the peaks due to a contamination of coproduced are visible, although decaying with time. In addition, there are structures on the high energy side of the N1 peak which are tentatively interpreted to be due to higher order EC transitions. From this measurement, the intensities of N1 and M1 lines give keV [131].

For the second prototype, isotope is produced at the ILL high-flux nuclear reactor by neutron irradiating an enriched target. The sample is purified at Mainz both before and after irradiation. The in the target is then mass separated and implanted offline at ISOLDE in the absorbers of two maXs-20 chips. The maXs-20 chips are arrays of 16 MMCs designed and optimized for soft X-ray spectroscopy [160]. About 0.2 Bq is encapsulated between two gold layers with dimensions . Preliminary measurements (see Figure 30) show an energy resolution of about 12 eV and a strong reduction of the background and confirm the structures on the right side of N1 [161]. The persistence of these structures, in spite of the improvements in the background and in the instrumental line shape, supports their interpretation as being due to processes related to the EC decay. Another preliminary analysis discussed in [148] interprets these as the broad structures expected for shake-off transitions.

Present ECHo activities are aimed at running ECHo-1k in the next years (2016–2018) and include the development of the microwave multiplexed read-out of the MMCs [94], the optimization of MMCs design, and the production of 10 MBq of high purity .

7.7. HOLMES

HOLMES is an experiment carried out by the Genoa and Milano groups in collaboration with NIST, ILL, PSI, and Lisbon [155, 162]. The baseline program is to deploy an array of about 1000 TES based microcalorimeters each with about 300 Bq of fully embedded in the absorber, with the goal of energy and time resolutions as close as possible to 1 eV and 1 μs, respectively (Table 4). In this configuration, HOLMES can collect about decays in 3 years of measuring time and the expected statistical sensitivity is about 1.5 eV at 90% CL. The choice of this configuration is driven by the aim of collecting the highest possible statistics with a reasonable exposure. Despite the high pile-up level and the technical challenge that derives from it, this provides a net improvement on the achievable sensitivity and a lower impact of the radioactive background.

The amount of needed for the experiment is estimated to be about 100 MBq and it is being produced at ILL by neutron irradiation of an enriched target, subjected to chemical prepurification and postseparation at PSI (Villigen, Switzerland). A custom ion implanter is being set up in Genova to embed the isotope in the detector absorbers. It consists of a Penning sputter ion source, a magnetic/electrostatic mass analyzer, an acceleration section, and an electrostatic scanning stage. The full system is being designed to achieve an optimal mass separation of versus . The implanter will be integrated with a vacuum chamber for the simultaneous evaporation of gold, first to control the concentration and then to deposit a final Au layer to prevent from oxidizing. The cathode of the ion source will be made of high purity metallic holmium to avoid end-point deformations due to the different shifts in diverse chemical species. The metallic holmium will be obtained by thermal reduction at about 2000 K, using the reaction [132].

HOLMES uses TES microcalorimeter arrays with MUX read-out, both fabricated at NIST (Boulder, USA). The DAQ exploits the Reconfigurable Open Architecture Computing Hardware (ROACH2) board equipped with a Xilinx Virtex 6 field programmable gate array (FPGA) [92], which has been developed in the framework of CASPER (Collaboration for Astronomy Signal Processing and Electronic Research).

Presently, the collaboration is working on the optimization of the isotope production processes. Two samples have been irradiated at ILL and processed at PSI. ICP-MS is used to assess the amount of produced and the efficiency of the chemical separation. From preliminary assessments, the total available activity is about 50–100 MBq.

The optimization of the pixel design is also in progress [163] and Figure 31 shows the design that best matches HOLMES specifications. The absorber is made of gold and to avoid interference to the superconducting transition, it is placed side-by-side with the Mo/Cu sensor on a silicon nitride membrane. The design also includes features to control the microcalorimeter speed. Energy and time resolutions are within factors 2-3 of the target ones, owing also to new algorithms for pile-identification [164, 165].

HOLMES is expected to start data taking in 2018, but a smaller scale experiment with a limited number of pixels will run in 2017, with the aim of collecting a statistics of about decays in a few months.

7.8. NuMECS

NuMECS is a collaboration of several US institutions (LANL, NIST, NSCL, and CMU) with the goal to critically assess the potential of holmium calorimetric neutrino mass measurements [166]. The NuMECS program includes the validation of the isotope production, purification, and sensor incorporation techniques, the scalability to high resolution LTD arrays, and the understanding of underlying nuclear and atomic physics.

Recent work has successfully tested production via proton irradiation of a natural dysprosium target. About 3 MBq of has been produced by irradiating about 13 g of high purity natural dysprosium with μAh of 25 MeV protons at the Los Alamos National Laboratory Isotope Production Facility (IPF). At the same time, a cation-exchange high performance liquid chromatography (HPLC) procedure for the chemical separation of holmium has been developed and a separation efficiency of about 70% has been measured.

For the present testing phase, NuMECS uses TES microcalorimeters fabricated by NIST and specially designed to be mechanically robust. The TES sensor is at the end of a silicon beam, close to a pad used for testing the attachment of a wide range of absorbers (Figure 32).

To incorporate the isotope in the microcalorimeter absorber, NuMECS exploits the drying of solutions containing isotope onto thin gold foils. After testing many procedures, the best results were recently obtained by deposition of an aqueous solution on nanoporous gold on a regular gold foil, followed by annealing in dilute atmosphere at C. The microcalorimeter absorber is made by folding and pressing a small piece of the gold foil.

Figure 32 shows a spectrum collected in 40 hours [167]. activity is about 0.1 Bq. Peaks have a low energy tail and show an excess broadening, explained as caused by thermalization noise. A fit of the M1 peak gives of about 43 eV, inclusive of the peak natural width. All peaks are found within 1% of the tabulated positions. Remarkably, the spectrum does not show any of the satellite peaks predicted in [168], although the statistics is still limited. There is instead an unexplained shoulder on the high energy side of the N1 peak, which resembles the structure observed by ECHo and interpreted as shake-off transition in [148].

NuMECS future plans include the deployment of four 1024 pixel arrays, aiming at a statistical sensitivity of about 1 eV.

8. Summary and Outlook

The use of and as an alternative to tritium for the direct measurement of the neutrino mass was proposed in the same years when the low temperature detector technology was moving the first steps. The idea of making low temperature detectors with rhenium absorbers immediately caught on, both because it appeared to be of almost immediate realization and because it could have an appealing impact on X-ray spectroscopy. Unfortunately, in the long run, the technological difficulties inherent to the use of superconducting rhenium caused the interest of the low temperature detector community to fade away and the neutrino mass projects to have the same fate as the X-ray applications of rhenium detectors.

measurements took more time to take off, as if they were awaiting the readiness of the technology of microcalorimeter arrays applied to high resolution spectroscopy of soft X-rays. Now, neutrino mass experiments are ready to leverage this mature technology, and the interest of the low temperature detector community is high, as demonstrated by the number of parallel efforts. Despite the unluckily high value, the good prospects to perform high statistics neutrino mass measurements in the next couple of years are also attracting the attention of the neutrino physics community as a valid complementary alternative to KATRIN.

Competing Interests

The author declares that there are no competing interests.

Acknowledgments

The author would like to thank Andrea Giachero, Marco Faverzani, Elena Ferri, Andrei Puiu, and Monica Sisti, who in various ways supported him with the writing of this paper; Maurizio Lusignoli and Alvaro De Rújula, for the many useful discussions; Adriano Filipponi, for the unvaluable discussions on the BEFS; and Loredana Gastaldo, Michael Rabin, and Mark Philip Croce, for providing him with updated information on their experiments.