Abstract

Lepton Flavour Violation in the charged lepton sector (CLFV) is forbidden in the minimal Standard Model and strongly suppressed in extensions of the model that include finite neutrino masses and mixing. On the other hand, a wide class of supersymmetric theories, in particular when coupled with Grand Unification models (SUSY-GUT models), predicts CLFV processes at rates within the reach of new experimental searches performed with high resolution detectors at high intensity accelerators. Since the Standard Model background is negligible, the observation of one or more CLFV events would provide incontrovertible evidence for physics beyond the Standard Model, while a null result would severely constrain the parameter space of these theories. Therefore, a large experimental effort is currently under way (and will continue in the coming years) to achieve unprecedented sensitivity on several CLFV processes. In this paper we review past and recent results in this research field, with a focus on CLFV channels involving muons and taus. We present currently operating experiments as well as future projects, with emphasis on how sensitivity enhancements are accompanied by improvements in detection techniques. Limitations due to systematic effects are also discussed in detail, together with the solutions being adopted to overcome them.

1. Introduction

In the minimal Standard Model (from now on SM) processes with Lepton Flavour Violation involving charged particles (from now on CLFV) are not allowed at all, since the fermion generations are put in by hand in separate doublets and the neutrinos are assumed to be massless. Different lepton generations (electron, muon and tau and their neutrinos) are completely decoupled and in all processes allowed in the model the number of members of each generation is separately conserved (Lepton Flavour Conservation). However, it has been experimentally established by reactor [1–6], accelerator [7–11], solar [12–27] and atmospheric [28–34] neutrino experiments that Lepton Flavour Violation does take place in the neutral sector: neutrinos are definitely massive and oscillate between different flavours, while their total number is conserved. The natural expectation is then that CLFV reactions should also be observed in the charged sector but, despite a long-term experimental effort, no positive result has been obtained. This indicates that CLFV effects are tiny and very difficult to measure; nevertheless, the interest in this search is enormous since, when one introduces new particles beyond the SM (for instance, the supersymmetric partners of the ordinary particles), CLFV processes emerge as one of the distinctive features of Beyond Standard Model (from now on BSM) theories. The experimental searches for CLFV reactions and their impact on BSM models are the subject of this paper.

2. Theoretical Issues about CLFV

2.1. CLFV in the Standard Model

Although forbidden in the SM, room for CLFV processes is easily created if one includes neutrino masses and mixing, which are known to be nonvanishing. The experimentally measured values of the mixing angles are large (for a review see [35] and the references therein), with the exception of θ13, which was constrained for several years by the CHOOZ result [36, 37] and was only recently measured [4–6, 11]. While the absolute values of the neutrino masses are still unknown, their mass differences were experimentally measured to lie in the sub-eV range. In the frame of this extended SM, loop diagrams appear which give rise to CLFV reactions. For instance, in Figure 1 we show how μ → eγ can take place: in the first Feynman vertex a muon converts into a muon neutrino by radiating a virtual W boson, which emits a photon by inner bremsstrahlung; then, the muon neutrino converts into an electron neutrino via neutrino mixing and the electron neutrino reabsorbs the virtual W boson at the end of the loop in the second Feynman vertex, forming an outgoing electron. The branching ratio (BR) of this process can be simply estimated by noting that it is essentially given by the product of three factors: (1) the usual muon decay, (2) an electromagnetic vertex for photon emission and (3) the neutrino mixing. The last factor contains the neutrino squared mass difference Δm², the energy scale where the mixing takes place (i.e., the W boson mass m_W) and the time scale of the mixing process, which, by the uncertainty principle, is proportional to 1/m_W; then, this factor is essentially given by (Δm²/m_W²)² [38]. A more accurate calculation gives the following result (see [39] and references therein):

\mathrm{BR}(\mu \to e\gamma) = \frac{3\alpha}{32\pi} \left| \sum_{i} U^{*}_{\mu i} U_{e i} \frac{\Delta m^{2}_{i1}}{m_{W}^{2}} \right|^{2}, (1)

where α is the fine structure constant, Δm²_i1 are the neutrino mass-squared differences, U_ℓi are elements of the neutrino mixing matrix and m_W is the W boson mass. By substituting the numerical values one obtains

BR(μ → eγ) ∼ 10⁻⁵⁴, (2)

which is experimentally inaccessible. The reason for this is clearly the very small value of Δm², compared with the electroweak mass scale. Just to give a simple idea of what the result (2) means, one can note that the presently available highest intensity muon beams are at the level of 10⁸ muons per second, so even assuming that this number could be increased by some orders of magnitude, the observation of a single decay would require ~10³⁵ years. We can then conclude that CLFV processes in the SM, even if possible in principle, are forbidden in practice, so that if such effects are experimentally observed, they must originate outside the SM.
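
The order of magnitude of (2) and of the ~10³⁵ years estimate can be checked with a few lines of arithmetic; the mixing parameters and the assumed beam intensity below are representative values chosen for illustration only.

    # Order-of-magnitude check of BR(mu -> e gamma) in the nu-extended SM.
    # The mixing parameters and beam intensity are assumed, illustrative values.
    import math

    alpha  = 1.0 / 137.036   # fine structure constant
    m_W    = 80.4e9          # W boson mass [eV]
    dm2_31 = 2.5e-3          # atmospheric mass-squared difference [eV^2]
    Ue3    = 0.15            # ~ sin(theta_13)
    Umu3   = 0.70            # ~ sin(theta_23) cos(theta_13)

    # Dominant term of the sum in (1)
    amp = Umu3 * Ue3 * dm2_31 / m_W**2
    br  = 3.0 * alpha / (32.0 * math.pi) * amp**2
    print(f"BR(mu -> e gamma) ~ {br:.1e}")         # ~ few x 10^-55

    # Years needed for one expected event at an (optimistic) 1e12 muons/s
    rate_mu = 1.0e12
    years = 1.0 / (br * rate_mu) / 3.15e7
    print(f"~ {years:.0e} years for a single expected decay")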

2.2. CLFV in Supersymmetric Theories

The SM is a long-standing theory which has been experimentally verified with high accuracy in several experiments. The last (but clearly not least) experimental confirmation came from the observation of the Higgs boson at the LHC [40, 41], the unique, but fundamental, ingredient of the model which had not yet been discovered. Nevertheless, the SM is generally regarded as a low-energy approximation of a more fundamental unified theory of all forces in nature. The reason for this is that the SM does not provide answers to several fundamental questions, like the origin and the number of generations, the particle mass spectrum, the quantisation of the electric charge, the hierarchy problem, the amount of CP violation, and so forth. In the unified schemes the distinction between leptons and quarks is partially eliminated and transitions which imply the violation of the Lepton or Baryon Number symmetries (or both) appear. For instance, one of the most famous predictions of such models is the proton decay, even if with a huge lifetime. The key point of Grand Unified Theories (from now on GUT) is that the coupling constants of the electromagnetic, weak and strong interactions evolve with energy until they reach a common value at some unification scale of order 10¹⁵–10¹⁶ GeV, while the unification of electromagnetic and weak interactions occurs at the electroweak scale of order 10² GeV. The existence of two widely separated mass scales, coupled with the effects of radiative corrections, leads to the “hierarchy” and “fine tuning” problems, which are solved if GUT models are embedded in Supersymmetric (from now on SUSY) frames. Theories where the GUT principle is inserted in the SUSY scheme are called SUSY-GUT and, if gravity is also included in the symmetry group, Supergravity (from now on SUGRA) theories. In SUGRA models the natural mass scale for unification is even larger, because of the weaker coupling of gravity: the Planck scale, ~10¹⁹ GeV.

Supersymmetry is the preferred environment for SM extensions. In this frame each ordinary particle has a SUSY partner, with a completely different mass and opposite spin-statistics: SUSY fermions are the counterparts of ordinary bosons and SUSY bosons are the counterparts of ordinary fermions. This introduces a symmetry between bosons and fermions, which has the fundamental property of producing cancellations, at each order, of divergent diagrams, solving the “hierarchy” and “fine tuning” problems; the renormalisability of a theory based on the SUSY frame is then guaranteed.

However, the symmetry between fermions and bosons is manifestly broken in nature, so that SUSY-breaking terms must be included in the theory. In the Minimal Supersymmetric model (from now on MSSM) the scale of SUSY breaking is around the TeV scale, but in other schemes the symmetry breaking occurs at much higher energies (≫ TeV). SUSY particles of masses ~1 TeV could be produced in high energy collisions and observed at accelerators such as the LHC (until now, no positive effect has been observed [42–66]), but for higher mass scales the direct production is not possible and such energy regions can be explored only indirectly by looking at lower energy phenomena, such as CLFV. The interplay between the high-energy, the high-intensity and the high-precision frontiers is one of the main elements of the future roadmap of particle physics.

In SUSY (namely, SUSY-GUT) theories the slepton mass matrix is diagonal in flavour space at the Planck (GUT) scale, but radiative corrections generate relevant off-diagonal terms in the evolution from the GUT scale to the electroweak scale. Such terms cause a strong enhancement of the expected BRs of CLFV processes with respect to the SM. (On the other hand, diagonal terms induce nonzero electric dipole moments as well as sizeable deviations of the muon magnetic moment with respect to SM predictions [67].) CLFV processes are generated by slepton mixing and radiative corrections in loop diagrams, like those shown in Figures 2 and 3 for two different models of the μ → eγ decay.

After a pioneering work by Lee [68], several authors calculated the expected BRs for CLFV processes, using various symmetry groups. In general, different SUSY models predict different BRs, since the mixing mechanisms involve different SUSY particles and different members of the slepton doublets (for a recent review see [69]). For instance, Figure 2 shows that in the SUSY-GUT SU(5) model only the right-handed components of the sleptons are subject to mixing, while in SO(10) the mixing is effective also for the left-handed components, as shown in Figure 3. The presence of heavier particles in the loop enhances the expected BR, usually proportionally to the square of the particle mass. In SUSY-GUT models the predicted μ → eγ branching ratio spans several orders of magnitude, with SO(10) models [70, 71] generally predicting larger values than SU(5) models [70–72].

Nevertheless, general requirements of SUSY models, like the request of a stable theory without the need for parameter fine tuning, introduce severe constraints, thus narrowing the allowed range of CLFV process BRs. For example, we show in Figure 4 [73] the correlation between the expected BRs for μ → eγ, μ → eee and μ-e conversion in Ti for the same range of SUSY parameters and in Figure 5 [73] the expected BR(μ → eγ) as a function of the lightest slepton mass in an SO(10)-based SUSY-GUT model.

2.3. Connection with Neutrino Oscillations

The simple inclusion of neutrino mixing in the SM has no relevant effect on the prediction of CLFV branching fractions. However, the situation changes when neutrino oscillations are embedded in SUSY frames. The most widely accepted explanations of the neutrino mass pattern are the see-saw mechanisms which, with the addition of heavy right-handed neutrinos, give rise to off-diagonal mass terms; these terms provide a further source of CLFV processes.

For instance, the μ → eγ branching ratio is enhanced by U_e2, the matrix element responsible for the mixing needed to explain the solar neutrino deficit [74]. Figure 6 shows the predicted branching ratio as a function of the mass of the right-handed gauge singlet for the three solutions of the solar neutrino problem, LOW, LMA and Vacuum. After the SNO and KamLAND results, only a fraction of the LMA solution survived, which corresponds to higher branching ratios, while the other solutions are completely ruled out. Moreover, the predicted bands are associated with values of tan β increasing along the diagonal from right to left; since the lowest values are excluded by LEP data [75], the predictions close to the lower bounds of the uncertainty bands are highly disfavoured.

Figure 7 [76] shows that in SUSY see-saw schemes the expected BRs for the μ → eγ and τ → μγ processes are well correlated and that such predictions tend to form separate clusters, corresponding to different values of θ13. Higher θ13 values favour higher BRs for both the μ → eγ and the τ → μγ decays. For comparison, the same correlation is shown in Figure 8 for a different class of SUSY see-saw models, where the absence of positive signals of SUSY particles at the LHC [42–66] is taken into account by allowing only one of the three squark generations to have a mass in the few TeV range [77]. In non-GUT SUSY models the predicted BRs for CLFV processes are generally more dependent on the choice of parameters, but in any case the recent measurements of θ13 [4–6, 11] in the range (7–10)° favour the more optimistic predictions [76].

2.4. Effective Lagrangian for CLFV

Since several BSM scenarios have been proposed, each one producing its own predictions for CLFV reaction rates, it is useful to discuss the sensitivity of CLFV searches in an (almost) model-independent way. This also allows one to compare different CLFV channels with each other and to determine the amount of information about BSM theory parameters which can be extracted from any individual search and from appropriate combinations of multiple searches. This comparison is usually done by means of an “effective Lagrangian,” which explicitly contains a dimensional parameter related to the scale of new physics (Λ) and a dimensionless parameter (κ) which gives the relative weight of the possible CLFV-inducing mechanisms [39, 78]. Such a Lagrangian contains several terms [79], but two subsets can be extracted to illustrate some general aspects of the search for CLFV. In the subsets shown here the leptonic operators mediate the transitions between electrons and muons, but the extension to transitions between taus and lighter leptons is straightforward.

The first subset is

\mathcal{L}_{1} = \frac{m_{\mu}}{(\kappa+1)\Lambda^{2}}\,\bar{\mu}_{R}\sigma_{\alpha\beta}e_{L}F^{\alpha\beta} + \frac{\kappa}{(\kappa+1)\Lambda^{2}}\,\bar{\mu}_{L}\gamma_{\alpha}e_{L}\left(\bar{u}_{L}\gamma^{\alpha}u_{L} + \bar{d}_{L}\gamma^{\alpha}d_{L}\right) + \mathrm{h.c.} (3)

The first operator in (3) has a magnetic dipole structure and directly mediates μ → eγ, while it mediates μ → eee decays and μ-e conversions in nuclei at order α. The second term involves a four-fermion current and mediates μ-e conversion at leading order and μ → eγ and μ → eee at the one-loop level. It is clear that the first operator dominates if κ ≪ 1, while the second dominates if κ ≫ 1; for κ ≈ 1, both terms are important. Figure 9(a) shows the sensitivity of μ → eγ and μ-e conversion experiments in the (Λ, κ) plane. The region which can be explored by an experiment of a given sensitivity lies below the line corresponding to that sensitivity. For instance, using the expected sensitivity of the upgraded MEG experiment (a few × 10⁻¹⁴, see later), one can conclude that for not-too-large κ's this project should probe Λ values up to (2–4) × 10³ TeV. A μ-e conversion experiment can be competitive if its sensitivity is higher by at least a couple of orders of magnitude. On the other hand, for κ ≫ 1 only conversion experiments are sensitive to the thousands-of-TeV mass scale.

The second subset is

\mathcal{L}_{2} = \frac{m_{\mu}}{(\kappa+1)\Lambda^{2}}\,\bar{\mu}_{R}\sigma_{\alpha\beta}e_{L}F^{\alpha\beta} + \frac{\kappa}{(\kappa+1)\Lambda^{2}}\,\bar{\mu}_{L}\gamma_{\alpha}e_{L}\left(\bar{e}_{L}\gamma^{\alpha}e_{L}\right) + \mathrm{h.c.} (4)

and is particularly useful to discuss the sensitivity of μ → eee experiments. The first operator is the same as that in (3), but the second one is based on a four-left-handed-lepton current, without quarks. Since this operator contains only leptons, it mediates μ → eee at the tree level and μ → eγ at the one-loop level. Figure 9(b) shows the sensitivity of μ → eγ and μ → eee experiments in the (Λ, κ) plane, as predicted by (4). A μ → eee experiment can explore mass scales in the thousands-of-TeV region for all κ values if its sensitivity is as good as ~10⁻¹⁶.
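
Since the rates induced by (3) and (4) scale as 1/Λ⁴ at fixed κ, the probed mass scale grows only with the fourth root of the sensitivity improvement; the following two-line sketch makes this explicit.

    # Rate ~ 1/Lambda^4 at fixed kappa, so the probed scale grows as (BR limit)^(-1/4).
    for gain in (10, 100, 1000):      # improvement factor on the BR sensitivity
        print(f"x{gain} better BR limit -> x{gain ** 0.25:.1f} larger Lambda reach")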

The message which can be extracted from both subsets (3) and (4) is that, despite the enormous importance of a positive observation of a CLFV reaction, the amount of information about new physics which can be extracted from a single measurement is rather limited. While a negative result would exclude some regions in the (Λ, κ) plane, a single positive signal would not allow Λ and κ to be measured separately. Then, to learn more about BSM physics, one needs to combine results coming from experiments which explore different CLFV channels, such as μ → eγ, μ → eee, μ-e conversion and tau lepton flavour violating decays. Searches for new physics not directly related to CLFV, like the search for SUSY particles at the LHC, measurements of the muon anomalous magnetic moment, of electric dipole moments and so forth, can also contribute to form as comprehensive a picture as possible of the BSM particle world. As an example of this interplay, we show in Figure 10 the predicted μ → eγ decay rate obtained by scanning the mSUGRA parameter space [80]. Red points correspond to PMNS-like mixing and blue points to CKM-like mixing. On the right side the same colours are used to show the distribution of the models in the SUSY mass parameter plane, taking into account the bound imposed by the MEG result (now superseded). The region below the red line is excluded by direct SUSY searches performed at the LHC [42–66]. Reference [81] is an example of a combined analysis which takes into account recent results on neutrino oscillations, CLFV, cosmological bounds, the measurement of the Higgs mass and direct searches for supersymmetric particles.

3. Experimental Searches: Generalities

The search for CLFV processes dates back to the late 1940s [82] and had a fundamental role in the development of particle physics. The absence of positive observations of μ → eγ was one of the main arguments in favour of the emission of two neutrinos in the muon decay (for spin conservation); this led to the conclusion that the muon is not an excited state of the electron and that two different types of neutrinos exist. Moreover, the formulation of the Standard Model, where lepton flavour conservation is set in by hand from the beginning, was clearly driven by the experimental evidence of the absence of CLFV reactions.

The CLFV channels which have been studied experimentally include rare muon and tau decays (μ → eγ, μ → eee, τ → ℓγ, τ → ℓℓℓ and others), rare kaon decays (e.g., K_L → μe, K → πμe), direct conversions between leptons of different flavours in nuclear fields (μ⁻ → e⁻ and μ⁻ → e⁺ conversions) and more exotic processes involving hadronic resonances or heavy quarks. The possible production of SUSY particles at high energy colliders also opens the possibility of searching for their CLFV decays. Figure 11 shows the experimental limits, as a function of time, for the branching fractions of CLFV processes involving muons and taus.

4. The Muonic Channel

Muons are very sensitive probes to study CLFV processes, since intense muon beams can be obtained at meson factories (PSI, TRIUMF, LANL, etc.) by hitting light targets with low energy protons (<1 GeV) or at proton accelerators (J-PARC, Fermilab, etc.) as by-products of high energy collisions. Moreover, the relatively long muon lifetime (~2.2 μs [83]) makes the detection of muon induced events easier than that of reactions induced by more unstable particles. Because of energy-momentum conservation, only a few channels are allowed for CLFV reactions involving muons, the most important ones being the μ → eγ and μ → eee CLFV decays and the μ-e conversion in a nuclear field. The present experimental limits are reported in Table 1. Note that the BRs of the μ → eγ and μ → eee decays are normalised to the SM muon decay, while the μ-e conversion rate is normalised to the rate of muon capture in the material where the process is searched for.

4.1. μ → eγ

The μ → eγ decay is the historical channel for studying CLFV: the first attempt was made in 1948 by Hincks and Pontecorvo [82] using cosmic rays and since then this search has been repeated several times [84–92]. (Negative muons are not used in this search since they are efficiently captured in nuclear matter.)

Positive muons coming from the decay of positive pions produced in proton interactions on a fixed target are brought to stop and decay at rest, emitting simultaneously a photon and a positron in back-to-back directions. Since the positron mass is negligible, both particles carry away the same energy: E_e ≈ E_γ ≈ m_μ/2 ≈ 52.8 MeV. The signature is very clear but, because of the finite experimental resolution, it can be mimicked by two types of background: (a) the correlated background, due to the radiative muon decay (from now on RMD), μ⁺ → e⁺ν_eν̄_μγ, whose BR is ≈1.4% of that of the usual muon Michel decay for photon energies above 10 MeV [83]; (b) the accidental or uncorrelated background, due to the coincidence, within the analysis window, of a positron coming from the usual muon decay and a photon coming from RMD, positron-electron annihilation in flight, positron bremsstrahlung in a nuclear field and so on. While the signal and RMD rates are proportional to the muon stopping rate R_μ, the accidental background rate is proportional to R_μ², since both particles come from the beam; the accidental background is therefore dominant and sets the limiting sensitivity of an experiment searching for the μ → eγ decay. Then, a continuous muon beam is better suited than a pulsed beam, to avoid concentrating the particles in short bunches, and R_μ must be carefully chosen to optimise the signal-to-noise ratio. (Presently, the most intense continuous muon beam is the PSI πE5 line, used by the MEG experiment (see later), which can provide >10⁸ stopped positive muons per second. However, we note that most of the meson factory machines are usually coupled with other facilities, like spallation neutron sources, which put severe constraints on the fraction of the original proton beam which can be lost on the pion/muon production target. Dedicated muon production systems, like that of the MuSIC project [93], would improve the pion/muon production efficiency by (2-3) orders of magnitude with respect to present machines, reaching a similar muon intensity without the need for very powerful proton accelerators. A project for a beam line delivering up to ~10¹⁰ positive muons per second, to be extracted from the spallation neutron source, is also under investigation at PSI (see Section 4.2).)

The number of background events depends on the size of the signal region, which is determined (at fixed signal detection efficiency) by the experimental resolutions: better resolutions allow smaller signal windows, reducing the number of background events. Physical effects in the target which degrade the resolution, such as multiple scattering and energy loss, are mitigated by using “surface” muons, that is, muons produced by pions stopped very close to the surface of the pion production target. Such muons are fully polarised along the momentum axis and have an almost monochromatic momentum of 29.8 MeV/c (the corresponding kinetic energy is ≈4.1 MeV), even if, in order to maximise the muon intensity, a slightly reduced value (28 MeV/c) is used. Their range in ordinary matter is of the order of 1 mm, so that they can be stopped in relatively thin muon targets, coupled with appropriate degraders. Moreover, with appositely suited beam lines, the rate of muon production and the ratio between muons and contaminating positrons can be made to increase with a power of the momentum, reach a maximum at the surface-muon momentum, and then drop. Another possibility, especially in the presence of very intense muon beams, is to use “sub-surface” muons, produced below the pion target surface, which have a slightly smaller momentum but a reduced range straggling in the stopping target. Direct measurements [94] show that the range straggling decreases rapidly with the beam momentum.
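
To make the rate-scaling argument concrete, the following sketch compares how the expected signal and accidental-background counts grow with R_μ; all numbers (live time, efficiency, accidental coincidence probability) are assumed, purely illustrative values, not those of any real experiment.

    # Illustrative scaling of mu -> e gamma signal vs accidental background
    # with the muon stopping rate R_mu. All numbers are assumed.
    import math

    T     = 2.0e7     # live time [s]
    br    = 1.0e-13   # hypothetical signal branching ratio
    eff   = 0.1       # overall signal efficiency
    k_acc = 1.0e-22   # accidental probability per muon per unit stopping rate [s];
                      # it shrinks as the energy, angle and timing windows shrink

    for R in (1e7, 1e8, 1e9):
        n_sig = R * T * br * eff           # grows linearly with R
        n_acc = k_acc * R**2 * T           # grows quadratically with R
        fom = n_sig / math.sqrt(n_acc)     # background-limited figure of merit
        print(f"R = {R:.0e}/s  N_sig = {n_sig:6.1f}  N_acc = {n_acc:8.1f}  S/sqrt(B) = {fom:.2f}")

Once the accidental background dominates, the figure of merit no longer improves with the stopping rate alone; only tighter windows (i.e., better resolutions, a smaller k_acc) help.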

Table 2 shows the figures of merit and the corresponding 90% C.L. upper limits on BR(μ → eγ) obtained by recent experiments. We also include the final goal and the improvement expected at the end of the upgrade phase [94] of the MEG experiment.

4.1.1. The MEGA Experiment

The MEGA experiment [88, 89], located at the Los Alamos Meson Physics Facility (LAMPF), used polarised surface muons, stopped in a 76 micron target, inclined with respect to the muon beam direction so as to have enough mass in the crossing direction to stop the muons and, at the same time, reduce the amount of matter along the positron path. The muon decay products were detected by a high precision magnetic spectrometer formed by two separate parts. (a) A low mass system of Multiple Wire Proportional Chambers (MWPC) to track the positron orbits, coupled with 87 plastic scintillators for the timing measurement; the amount of material in the MWPCs corresponded to a small fraction of a radiation length, in order to minimise energy loss, multiple scattering and positron annihilation in flight and thus to improve the positron resolution and reduce the photon background. The azimuthal angle and the radial information were extracted from the anodic wire address and from the signal induced on stereo strips in the cathodic foils, respectively. (b) A gamma-ray detector to measure the photon energy, direction, conversion time and location, formed by three coaxial, cylindrical pair spectrometers, each one composed of a scintillation barrel and 250 micron Pb-conversion foils, sandwiching a MWPC, and three layers of drift chambers. The signals of the three photon spectrometers were fed into a hardware trigger, designed to identify positron-electron pairs coming from a photon above a minimum energy threshold. The photon vector was reconstructed by assuming a vertex coincident with the positron one. The photon detector almost surrounded the positron detection system to maximise the solid angle acceptance.

The system was embedded in a 1.5 T solenoidal magnetic field; the detection system was cylindrical in shape, with the cylinder axis parallel to the solenoidal field. A schematic view of the MEGA experiment is shown in Figure 12. The muon stopping rate was very high, but the duty cycle was only (6-7)%, so that the instantaneous rates in the detector were correspondingly larger. The small duty cycle was caused by the pulsed structure of the LAMPF beam and by a large crowding of the spectrometer due to the solenoidal magnetic field, which allowed low longitudinal momentum positrons to spiral in the chamber system several times. The final data storage rate was kept manageable thanks to an online filter.

Two auxiliary measurements were performed: a photon calibration based on the Charge Exchange (CEX) reaction

π⁻ + p → π⁰ + n (5)

and a dedicated run to detect the RMD signal. The CEX reaction, usually based on a liquid hydrogen target, is a widely used technique to calibrate detectors for photons of tens-of-MeV energies: the π⁰ decay produces two photons, with a flat energy spectrum within the kinematical limits imposed by energy-momentum conservation. Photons at the lower bound of the spectrum (~(50–60) MeV) can be easily obtained and singled out by using in coincidence an independent electromagnetic calorimeter in a back-to-back configuration. The CEX run was used to extract the energy and timing resolutions for 52.8 MeV photons. Special runs at a much lower muon stopping rate (60 times lower), with reduced magnetic field (a fraction of the nominal value) and without the online filter, were needed to identify the signal due to RMD events above the huge uncorrelated background. This signal appears in the relative timing plot as a Gaussian peak above a flat background.
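
The quoted photon energy range follows from simple two-body kinematics; as a back-of-the-envelope estimate (with approximate PDG masses), in the CEX reaction at rest the π⁰ is emitted with kinetic energy T_π⁰ ≈ 2.9 MeV, that is, with momentum p_π⁰ ≈ 28 MeV/c and total energy E_π⁰ ≈ 137.9 MeV, so that the photons from the subsequent π⁰ → γγ decay span

E_γ = (E_π⁰ ± p_π⁰)/2 ≈ 55–83 MeV.

Tagging the higher-energy photon (~83 MeV) in the back-to-back calorimeter therefore selects a nearly monochromatic partner photon at ~55 MeV, close to the 52.8 MeV signal energy.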

The analysis used five kinematical variables: the photon energy E_γ, the positron energy E_e, the relative timing t_eγ, the relative angle θ_eγ, and the photon trace-back angle, defined as the difference between the photon direction reconstructed from the line-of-flight and from the tracing of the electron-positron pairs. The resolution on the trace-back angle was dominated by multiple scattering in the Pb converters.

The positron momentum resolution was obtained by fitting the edge of the spectrum of Michel decay positrons. The line shape expected for the signal was determined by folding a Gaussian + polynomial curve with the theoretical spectrum and the detector acceptance, as shown in Figure 13. The Gaussian sigma was in the range (0.21–0.36) MeV, corresponding to (0.9–1.6)% FWHM, depending on the number of positron turns in the field and on the number of crossed chambers. The resolution on the relative angle measurement was extracted from Monte Carlo (from now on MC) simulation, for relative angles close to 180 degrees.

The recorded event sample, written on magnetic tapes, was reduced to 3971 events by a preprocessing with loose cuts on energies, relative timing and angle. The remaining dataset was large enough to study the background. The integrated acceptance was evaluated by MC and corrected by visual scanning of MC and data events, obtaining the global efficiency of the experiment.

The 3971 surviving events were analysed by a maximum likelihood procedure. The PDFs were extracted from experimental data for the uncorrelated background, calculated for RMD taking into account the detector response, and extracted from MC simulations and calibration data for the signal. The best fit for the number of signal events was consistent with zero; a simultaneous fit to RMD events gave a result in agreement with MC expectations. A 90% C.L. upper limit on the number of signal events was then extracted, which, taking into account the normalisation factor, converted into an upper limit on the BR of the μ → eγ process: BR < 1.2 × 10⁻¹¹ [88, 89].

Compared with previous experiments (see Table 2), MEGA significantly improved the energy resolutions for the positron and the photon and, to a lesser extent, the relative angle resolution, while the relative timing resolution was similar to that of the previous projects, since it was limited by the pair spectrometer technique. The experiment operated at a much more intense muon stopping rate (two or three orders of magnitude higher than that of previous experiments), which, in principle, would have allowed the upper limit on BR(μ → eγ) to be improved by a corresponding amount. However, this was not the case, since the experiment could not efficiently handle the huge number of positron tracks in the spectrometer. The small duty cycle and global efficiency worsened the result by more than one order of magnitude with respect to the project proposal.

4.1.2. The MEG Experiment

The MEG experiment [95] has been searching for the μ → eγ decay for several years, with an expected final sensitivity of a few × 10⁻¹³ (at 90% C.L.) with respect to the usual muon decay.

The experiment, schematically shown in Figure 14, uses the secondary muon beam line extracted from the PSI (Paul Scherrer Institute [96]) proton cyclotron, the most powerful continuous hadronic machine in the world (the maximum proton current is 2.2 mA for a proton energy of 590 MeV; the corresponding power is ~1.3 MW). A positive muon beam is stopped in a 205 micron polyethylene target, slanted with respect to the beam axis, in a spot of about 1 cm size. The positron momentum is measured by a magnetic spectrometer, composed of an almost solenoidal magnet (COBRA) with an axial gradient field and of a system of sixteen ultrathin drift chambers (from now on DC). The axial gradient was chosen to obtain a rough measurement of the positron momentum, almost independent of the zenith emission angle, and to remove low longitudinal momentum positrons, one of the main sources of the small duty cycle problem suffered by MEGA. The longitudinal magnetic field varies from 1.27 T at the detector centre to 0.49 T at both ends; conventional Helmholtz coils compensate the stray field in the region of the photon detector photomultipliers. The chamber wires provide the measurement of the positron azimuthal and radial coordinates, while vernier cathode pads on the chamber walls allow the measurement of the coordinate along the wire direction. The positron timing is measured by two arrays of plastic scintillators (Timing Counters, from now on TC), each one formed by 15 scintillating bars. The photon energy, interaction point and timing are measured in a ~800 ℓ volume liquid xenon (from now on LXe) scintillation detector, equipped with a thin window in the photon entrance face. LXe was chosen as scintillating medium because of its large light yield (comparable with that of NaI) in the VUV region (λ ≈ 178 nm), its homogeneity and the fast decay time of its scintillation light (45 ns for photons and 22 ns for α particles [97, 98]). The LXe volume is viewed by 846 Hamamatsu PMTs, specially produced to be sensitive to UV light and to operate at cryogenic temperatures. Possible water or oxygen impurities in the LXe are removed by circulating the liquid through a purification system. An FPGA-FADC based digital trigger system was specifically developed to perform a fast estimate of the photon energy, timing and direction and of the positron timing and direction; the whole information is then combined to select events which exhibit some similarity with the μ → eγ decay [99]. The signals coming from all detectors are digitally processed by a custom made waveform digitiser system (Domino Ring Sampler, DRS) [100] to identify and separate pile-up hits.

Several calibration tools (LEDs, point-like α sources deposited on tungsten wires [101], Am-Be sources, Michel decays, through-going cosmic ray muons, a neutron generator, π⁰'s from the CEX reaction (5) [97], monochromatic γ-lines from nuclear reactions induced by a Cockcroft-Walton accelerator (from now on CW) [102], monochromatic positron beams which undergo Mott scattering…) are frequently used to measure and optimise the detector performance and to monitor its stability in time. The drift chamber alignment is obtained by using cosmic rays and Michel positron tracks, combined with optical surveys. The resolutions on the photon energy, vertex and intrinsic timing are extracted from CEX measurements, those on the positron energy and direction by looking at tracks which cross the spectrometer twice (double turn method) and that on the positron timing by looking at tracks traversing at least two TC bars [91, 92]. The LXe versus TC relative time stability is continuously checked by means of the pairs of γ-lines produced by the interactions of CW protons on a boron target.

The measured experimental resolutions on the positron energy, the photon energy, the positron-photon relative angles and the relative timing are summarised in Table 3 (see Section 4.1.3). The relative timing resolution is measured by looking at RMD events, which emerge in the normal data stream as a clear Gaussian peak above the uncorrelated background.

The data are analysed with a combination of blind and likelihood strategies. The kinematical variables are the positron (E_e) and photon (E_γ) energies, the relative timing (t_eγ) and the relative polar angles (θ_eγ and φ_eγ). Events are preselected on the basis of loose cuts, requiring the presence of a positron track and a loose time coincidence between the positron and the photon. Preselected data are processed several times with improving calibrations and algorithms, and events falling within a tight window (“blinding box”, from now on BB) in the (E_γ, t_eγ) plane are hidden. The remaining preselected events fall in “sideband” regions and are used to optimise the analysis parameters, study the background and evaluate the experimental sensitivity under the zero-signal hypothesis. When the optimisation procedure is completed, the BB is opened and a maximum likelihood fit is performed, in order to extract the number of Signal (S), RMD (R) and Accidental Background (A) events. The Probability Distribution Functions (PDFs) are determined by using calibration measurements and MC simulations for S, theoretical formulae folded with the experimental resolution for R and sideband events for A. (In RMD events the kinematical boundaries introduce correlations between the energies and the relative angles, which must be taken into account in the PDFs.) Correlations between variables induced by the reconstruction procedures (i.e., the Kalman filter used for tracking) are included in the PDF definition. The normalisation factor needed to convert an upper limit on the number of signal events into an upper limit on the BR is obtained by two different methods, one based on Michel positrons, collected by a dedicated prescaled trigger, and one based on the identification, in the t_eγ distribution, of RMD events above the flat background. Note that, differently from what happened for MEGA, the RMD signal in MEG is easily visible in the relative timing distribution because of the much better timing resolution (5-6 times better) and of the smaller crowding of the positron spectrometer; then, no dedicated RMD runs are needed. Different groups perform independent analyses, which differ in the PDFs used, in the statistical approach (frequentist or Bayesian) and in the handling of the sideband information. The statistical consistency between the numbers extracted by these analyses is a condition established by the collaboration for publishing the results.
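
As an illustration of the statistical machinery (not the actual MEG likelihood, which uses five observables and detector-specific PDFs), the toy below performs an extended unbinned maximum likelihood fit in a single observable, the relative timing, with a Gaussian peak for correlated (RMD-like) events and a flat component for accidentals; all numbers are assumed.

    # Toy extended unbinned maximum-likelihood fit in one observable (t_e_gamma).
    # Deliberately simplified: one Gaussian "correlated" component and one flat
    # "accidental" component; resolutions and yields are assumed values.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    T_WIN, SIGMA_T = 2.0, 0.13          # +/-2 ns analysis window, 130 ps resolution

    # Pseudo-dataset: 50 RMD-like events peaked at t = 0 plus 500 accidentals
    data = np.concatenate([rng.normal(0.0, SIGMA_T, 50),
                           rng.uniform(-T_WIN, T_WIN, 500)])

    def pdf_peak(t):                    # normalised Gaussian, centred at zero
        return norm.pdf(t, 0.0, SIGMA_T)

    def pdf_flat(t):                    # normalised flat PDF over the window
        return np.full_like(t, 1.0 / (2.0 * T_WIN))

    def nll(params):                    # extended negative log-likelihood
        n_peak, n_flat = params
        if n_peak < 0 or n_flat < 0:
            return 1e12
        dens = n_peak * pdf_peak(data) + n_flat * pdf_flat(data)
        return (n_peak + n_flat) - np.sum(np.log(dens))

    fit = minimize(nll, x0=[10.0, 400.0], method="Nelder-Mead")
    print("fitted correlated and accidental yields:", fit.x)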

The analysis procedure was applied for the first time to the data collected in 2008, with reduced statistics and not yet optimal apparatus performance, and a first result was published [90]: BR(μ → eγ) < 2.8 × 10⁻¹¹ at 90% C.L. A much more significant result was published in [91], based on the data collected in 2009 and 2010 and corresponding to a total of about 1.8 × 10¹⁴ muons stopped on target, most of them collected in 2010. In the 2009 sample alone a possible excess of events was observed, which disappeared in the higher statistics dataset; the combined result, BR(μ → eγ) < 2.4 × 10⁻¹² at 90% C.L., established an upper bound several times better than the MEGA limit [88, 89].

Subsequently, the analysis was improved by the introduction of better algorithms for photon pile-up rejection, DC noise rejection and positron tracking, which increased the efficiencies and the global resolution of the experiment. The data collected in 2009 and 2010 were then reanalysed with these new algorithms and, later on, the full blind and likelihood procedure was applied to the data collected in 2011, roughly doubling the total statistics. The sensitivity of the experiment, evaluated by using a large ensemble of simulated experiments under the zero-signal hypothesis, was 1.6 × 10⁻¹² for the 2009-2010 dataset in the old analysis. With the new analysis algorithms, this sensitivity improved to 1.3 × 10⁻¹² and reached 7.7 × 10⁻¹³ for the whole 2009–2011 dataset, in agreement with what was expected from the increased statistics. Figure 15 shows the results of the maximum likelihood fit to the five kinematical variables for the 2009–2011 data. The best fit for the number of signal events is slightly negative, that is, zero within the physical domain. The distributions of the BB events for the combined 2009–2011 dataset in the (E_e, E_γ) plane (left) and in the (cos Θ_eγ, t_eγ) plane (right), where Θ_eγ is the positron-photon stereo angle, are shown in Figure 16 [92]. Since no excess of events was found, a new upper limit was set, BR(μ → eγ) < 5.7 × 10⁻¹³ at 90% C.L., an improvement of a factor ~20 with respect to the pre-MEG era. This upper limit is 25% lower than the sensitivity, while the previous limit (2.4 × 10⁻¹²) was 50% higher than the sensitivity obtained with the old analysis. Both of these results are due to statistically reasonable event fluctuations, a negative one in the former case and a positive one in the latter case. The experiment ended its data taking in the summer of 2013; the final data sample is expected to be about two times larger than the 2009–2011 dataset and the projected final sensitivity is ~5 × 10⁻¹³.

4.1.3. The MEG Upgrade

A major improvement of the MEG sensitivity, to be achieved in a reasonably short running time (~3-4 years), requires a higher muon stopping rate and improved detector efficiencies, in order to enhance the signal while keeping the accidental background at a sufficiently low level. Hence an increase in the muon beam intensity must be accompanied by a corresponding improvement of the experimental resolutions. The MEG measured resolutions and efficiencies are compared in Table 3 with the values initially foreseen in the MEG proposal.

The resolutions of the positron spectrometer are significantly worse than the design values. This is true also for the photon energy and for the relative timing; in the latter case, however, the resolution is again substantially affected by the drift chamber tracking performance, since t_eγ contains the length of the positron track from the target to the timing counter, which is measured by the positron tracker.

Concerning efficiencies, there is substantial room for improvement on the tracker side. The low tracking efficiency is mainly due to the position of the chamber front-end electronics and mechanical supports, which intercept a large fraction of the positrons on their path to the timing counter. Another critical point of these chambers is the use of cathodes in the form of thin conductive foils. The foils are segmented, so that the charge induced on the several segments (Vernier pads) is used to precisely reconstruct the Z-coordinate.

The coordinate perpendicular to the wire is instead precisely reconstructed by using the drift time information. The drawback of using cathode foils is twofold: (i) the amplitude of the signals induced on the cathodes is of only a few mV; therefore even a small amount of noise can easily spoil the Z-coordinate reconstruction; (ii) the operation of the chambers presents some instabilities: their use in a high radiation environment leads to the formation of deposits on the cathode surfaces, which in turn give rise to discharges preventing the use of the chamber. This implies, a fortiori, the impossibility of operating these chambers at higher muon stopping rates.

Based on these arguments, a new tracking chamber was conceived to overcome all the listed problems, namely, with improved efficiency, momentum and angular resolutions, and capable of steady operation at high rates. The planned resolutions of the proposed tracker, together with a thinner stopping target, will yield a substantial improvement in the determination of the positron kinematical variables. The proposed combination is a surface muon beam (in the present MEG beam configuration) with a target of 140 μm thickness, slanted at a small angle with respect to the muon beam, for a total running time of 3-4 years.

Other major upgrades of the current detector are (i) upgrading the liquid xenon detector, in order to improve the photon energy and position resolutions, by using a larger number of photosensors of smaller dimensions; (ii) building a new pixelated timing counter, to improve the resolution of the positron time measurement and to eliminate the present cumbersome helium protection of the PMTs; (iii) building a new mixed trigger/digitiser DAQ board, in order to fulfil the needs of a much larger number of channels to be read out and of a higher bandwidth of the DRS analog front-end. The high resolutions on the kinematical variables needed for reaching the goals of the improved MEG have to be maintained during the experiment; this is obtained with the already discussed calibration methods, which were fully developed and used during the experiment.

The upgrade proposal [94], with an estimated sensitivity of ~6 × 10⁻¹⁴ in three years of data taking, was approved by PSI in 2013.

4.1.4. Long Term Future for μ → eγ

Supersymmetry is a wide class of theories, depending on a large number of parameters; by changing some of them one can vary the expectations for BSM reaction rates by orders of magnitude. For instance, values of BR(μ → eγ) below the reach of the MEG upgrade are obtained in some SUSY-GUT SU(5) models with various assumptions about the Bino mass and/or the universal trilinear scalar coupling [72]. Future accelerators, like NuFact at CERN [103, 104] and Project-X/Proton Improvement Plan (PIP)-II at Fermilab [105, 106], are expected to deliver very intense (~10¹⁵ p/s) proton beams, with energies of tens of GeV or higher; secondary muon beams of up to ~10¹⁴ μ/s could then become available. One can then ask whether CLFV searches can benefit from these future high intensity machines and, if so, at which level.

Unfortunately, for experiments searching for the μ → eγ decay this is not an easy task since, as observed before, these experiments inevitably face the bottleneck of the accidental background. Since such background scales with R_μ², a simple increase of the muon stopping rate does not improve the sensitivity. The MEG upgrade project, which was designed to gain an order of magnitude in sensitivity with respect to what was expected for the original experiment, requires significant changes of the detector, even if at limited cost and on a rather short time scale. More ambitious goals demand substantial progress in experimental techniques, since the performances to be reached by the MEG upgrade subdetectors are at the limit of present technologies. Some possibilities are under study, such as the use of high-resolution spectrometers and of finely segmented and/or active targets. Note that active targets are typically made of scintillating fibres, whose size cannot be reduced too much (otherwise, the signal would be too small to be detected); this appears to be in contrast with the request of thinner targets. Finely segmented targets also require a very high tracking resolution in order to unambiguously identify the target element from which the positron originates. Therefore, pushing the sensitivity of this search below the 10⁻¹⁴ level seems at the moment rather unlikely.

4.2. μ → eee

In the μ → eee decay the final state is composed of charged particles only. (As for μ → eγ, this process is searched for by using positive muons; we will nevertheless refer to it as μ → eee or μ → 3e.) In many models, for instance SUSY-GUT, its BR is related to that of μ → eγ by a factor of order α, since the positron-electron pair is thought to originate from a virtual photon; then, a sensitivity of ~10⁻¹⁵ is needed to be competitive with a ~10⁻¹³ sensitivity for μ → eγ. However, in other models the process receives contributions also at the tree level from diagrams which include new couplings and new intermediate particles, like doubly charged Higgs bosons, scalar neutrinos and so on (see [107] and references therein). Such diagrams can enhance the μ → eee rate, for some particular choices of the SUSY parameters, up to the point of exceeding the μ → eγ one. We refer to the discussion in Section 2.4 for the amount of information which can be extracted by combining data of experiments searching for μ → eγ and μ → eee. Figure 17 shows the Feynman diagram for the μ → eee decay in the case of photon domination (top) and at the tree level (bottom). The search for the process is based on kinematical criteria: all possible triplets of electron tracks are formed and candidate events are selected by requiring a zero total momentum, an invariant mass equal (within the resolution) to the muon mass, and three simultaneous tracks originating from a common vertex; the energy of each track must be below m_μ/2 ≈ 52.8 MeV because of phase space constraints (a minimal sketch of such a selection is given below). As for μ → eγ, the background for the μ → eee decay has a correlated component, coming from the internal conversion of the radiative muon decay, and an uncorrelated component, given by the accidental coincidence of a Michel positron with a positron-electron Bhabha pair produced by the scattering of another Michel positron in the target or in other detector materials. As for the μ → eγ process, the accidental background scales quadratically with the muon rate and is the dominant one. The possibility of improving the experimental limit is then, also in this case, related to improvements in detector technologies. Since no photons need to be detected, one does not need an electromagnetic calorimeter with its limited resolution and the experimental detection relies on spectrometric techniques only; however, the spectrometer must have a wide acceptance, a large solid angle (not far from 4π) and a relatively low momentum threshold. Therefore, for an intense muon beam a very high rate is expected in the tracking system, which can cause relevant problems of dead time, trigger and pattern recognition.
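
The sketch below implements the purely kinematical part of such a selection on generator-level four-vectors; the mass window, momentum-balance and energy cuts are illustrative values, not those of any specific experiment.

    # Kinematic selection sketch for mu+ -> e+ e+ e- candidates.
    # Tracks are (E, px, py, pz) four-vectors in MeV; all cuts are illustrative.
    import math

    M_MU = 105.658   # muon mass [MeV]

    def total_four_momentum(tracks):
        E  = sum(t[0] for t in tracks)
        px = sum(t[1] for t in tracks)
        py = sum(t[2] for t in tracks)
        pz = sum(t[3] for t in tracks)
        m  = math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))
        return m, math.sqrt(px**2 + py**2 + pz**2)

    def is_candidate(tracks, m_window=2.0, p_max=5.0, e_max=53.0):
        """m(eee) close to m_mu, vanishing total momentum, each E below ~m_mu/2."""
        m, p_tot = total_four_momentum(tracks)
        return (abs(m - M_MU) < m_window
                and p_tot < p_max
                and all(t[0] < e_max for t in tracks))

    # Symmetric decay at rest (electron masses neglected): should pass the cuts
    triplet = [(35.22, 35.22, 0.0, 0.0),
               (35.22, -17.61, 30.50, 0.0),
               (35.22, -17.61, -30.50, 0.0)]
    print(is_candidate(triplet))   # True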

The present experimental limit [108] dates back to 1988 and only recently a new experiment was approved at PSI to significantly improve this upper bound.

4.2.1. Past Experiments: SINDRUM

The SINDRUM experiment [108] searched for the μ → eee decay by using a ~5 × 10⁶ μ/s subsurface positive muon beam, stopped in a hollow double-cone shaped target. A sketch of the SINDRUM experiment is shown in Figure 18. The muon decay products were detected by a magnetic spectrometer, made of five Multi Wire Proportional Chambers and a cylindrical array of scintillators, arranged in a solenoidal magnetic field. The spectrometer covered a large fraction of the solid angle. The experiment was equipped with a trigger hodoscope, which selected events with at least two positively and at least one negatively charged track within a narrow time window. The momentum, timing and vertex resolutions were measured at energies close to the kinematical endpoint. The events surviving the online selections were processed by a 3D reconstruction procedure to select tracks with the correct time and vertex topology and which satisfied the kinematical constraints. Triplets formed by two positively charged (positron) and one negatively charged (electron) tracks were classified as “uncorrelated” or “correlated,” depending on the relative timing and on the matching at the event vertex of the three tracks. As already observed, correlated events are thought to come from an RMD process, with the inner bremsstrahlung photon converting into a positron-electron pair. The events were examined in the (total energy, total momentum) plane and compared with a large sample of simulated μ → eee decays; the results are shown in Figure 19 (taken from [69]). No experimental event fell in the signal region populated by the simulated events. Then, taking into account the experimental acceptances and efficiencies and the total number of stopped muons, an upper bound BR(μ → eee) < 1.0 × 10⁻¹² was set at 90% C.L. [108].

4.2.2. The Future: Mu3e

A new experiment searching for the μ → eee process, called Mu3e, was approved in January 2013 at PSI, aiming at a sensitivity of ~10⁻¹⁶ [107, 109]. The experiment is planned in two phases: the first one will be based on the present muon beam and is expected to reach a sensitivity of ~10⁻¹⁵; the second one will use a higher intensity muon beam (still under design) with upgraded detector performance to arrive at the project sensitivity. This new beam line (High intensity Muon Beam-line, HiMB) would use surface muons produced in the target of the Swiss Spallation Neutron Source (SINQ, [110]) and would deliver a muon rate roughly an order of magnitude higher than the present line.

The main challenges this experiment will face are a high rate capability (to sustain muon decay rates at the 10⁹ per second level), a timing resolution of ~100 ps and a vertex resolution of ~200 μm (to suppress the accidental background), and a momentum resolution of ~0.5 MeV/c (to reject the RMD induced background). Both the momentum and the vertex resolution demand an extremely low material budget to minimise the multiple scattering. To satisfy these requests, the detector will take advantage of recent tracking technologies and high resolution timing detectors. A schematic view of the Mu3e experiment is shown in Figure 20. The target, made of aluminum foils, will have a double hollow cone shape, with different thicknesses in the front and in the rear cone, to obtain a more homogeneous distribution of the decays within the two cones and to reduce the multiple scattering effects for the decay particles traversing the target. The large size will allow an efficient separation of the decay vertices by track extrapolation. The stopping efficiency will be high.

The target will be surrounded by a cylindrical multilayer tracking and timing system, formed by two inner and two outer pixel layers, interleaved with scintillating fibres. In the second phase of the experiment this system will be complemented by a system of scintillator tiles and further pixel detectors to measure the momentum of recurling particles. By combining the recurl pixel layer information with that of the inner and outer pixel layers, a multiple measurement of the particle momentum will be available, which will allow the multiple scattering effects to be cancelled at first order, thus improving the momentum resolution.

The two inner layers will be short and at small radii, close to the target, while the two outer layers will be longer and at larger radii. The pixel size will be 80 × 80 μm² with a 50 μm thickness, arranged in high-voltage monolithic active pixel sensors (HV-MAPS) of different chip sizes for the inner and outer layers. HV-MAPS have a high electric field and a high charge collection efficiency and combine the advantages of hybrid pixel sensors with integrated analog and digital electronics. Note that the pixel size is smaller than the expected uncertainty in the vertex reconstruction due to multiple scattering; then, the pixel size will not be a limiting factor for the position resolution. The pixel sensors should also provide a coarse measurement of the positron timing. The expected total number of pixels is of the order of a few hundred million. Because of the high power consumption, the pixel detectors will be equipped with a helium-based cooling system. The geometrical acceptance of the tracker will be ~50% and the material budget will amount to about 10⁻³ radiation lengths per layer. The mechanical frames, made of Kapton foils, are light and rigid and have been optimised for a small radiation length. The mechanical prototype of the inner pixel detector is shown in Figure 21 [107]. The recurl pixel layers will have a structure identical to that of the outer layers. The expected momentum resolution of the tracker is 0.7 MeV/c in the first phase (without the recurl pixel layers) and better than 0.5 MeV/c in the second phase.
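
To illustrate why such a small material budget is crucial at these momenta, the Highland formula gives the multiple-scattering angle per layer; the 10⁻³ X₀ thickness and the momentum values below are used purely as illustrative inputs.

    # Multiple-scattering angle (Highland / PDG parameterisation) for a
    # low-momentum positron crossing a thin tracker layer. Illustrative inputs.
    import math

    def highland_theta0(p_mev, beta, x_over_x0):
        """RMS projected scattering angle in radians for a unit-charge particle."""
        return (13.6 / (beta * p_mev)) * math.sqrt(x_over_x0) * \
               (1.0 + 0.038 * math.log(x_over_x0))

    for p in (15.0, 30.0, 50.0):                       # positron momentum [MeV/c]
        theta0 = highland_theta0(p, beta=1.0, x_over_x0=1.0e-3)
        print(f"p = {p:4.0f} MeV/c -> theta0 ~ {1e3 * theta0:4.1f} mrad per layer")

At these momenta the scattering angle per layer is of the order of 10 mrad, which motivates the recurl measurement described above, where multiple scattering cancels at first order.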

The timing detectors are needed to suppress the combinatorial background at high muon rates. The scintillating fibre hodoscope will be located between the inner and outer pixel layers, around the target; it should provide a timing resolution of a few hundred ps even for low momentum particles. Thin scintillating fibre layers will be used to minimise the momentum degradation induced by the traversed material; the exact number of fibre layers is under study. The scintillating tiles will be located within the pixel recurl stations. Since a small amount of material is no longer mandatory there, the tiles can be much thicker, with individual tile sizes at the level of several millimetres. The larger thickness will result in a much higher number of scintillation photons and hence in a better timing resolution, <100 ps. All timing detectors will be read out by SiPMs, with good photon detection efficiency and timing resolution. The tracking and timing system will be immersed in a solenoidal magnetic field, known with high accuracy.

The experiment will have a triggerless continuous readout and a track-based online event filter. Timestamps, generated by a system clock, and pixel addresses provided by the HV-MAPS will be collected with a new version (DRS5) of the custom sampling chip DRS [100] developed at PSI and processed by a system of FPGAs and Graphical Processing Units (GPUs), which will perform track reconstruction and momentum determination in real time. The rate of data storage is expected to be ~10 MBytes/s.

The detector commissioning is scheduled together with a first data acquisition at a reduced rate; the physics run of the first phase, at a ~10⁸ stopped muons/s rate, will follow. The construction of the recurl pixel stations and of the tile detectors will be conducted in parallel. The second phase of data taking will start afterwards, depending on the availability of HiMB.

4.3. μ-e Conversion

The μ-e conversion is a CLFV process which could take place when negative muons are stopped in nuclear matter. Stopped negative muons form muonic atoms in the ground state (A = mass number, Z = proton number of the nucleus) according to the reaction

μ⁻ + (A, Z) → [μ⁻ (A, Z)]₁ₛ. (9)

Then the bound muons either get captured by the nuclei (first reaction in (10)) or decay in orbit into an electron and two neutrinos (second reaction in (10)):

μ⁻ + (A, Z) → ν_μ + (A, Z − 1),
μ⁻ + (A, Z) → e⁻ + ν̄_e + ν_μ + (A, Z). (10)

The relative weight of the first reaction increases with the nuclear charge Z: for instance, the capture probability in muonic titanium (Z = 22) is ≈85%, which corresponds to a muon lifetime of ≈330 ns, and that in muonic gold (Z = 79) is 97%. Assuming that CLFV can occur at some level, the muons may also convert into single electrons:

μ⁻ + (A, Z) → e⁻ + (A, Z). (11)

This process is known as μ-e conversion. The final nuclear state can be the ground state or an excited state. The first case, which is called “coherent” conversion, is usually dominant, with an enhancement factor given by the number of nucleons in the nucleus. The coherent conversion is advantageous from the experimental point of view, since the outgoing electron is monochromatic. Its energy is given by

E_μe = m_μ − B_μ − E_rec, (12)

where B_μ and E_rec are the muon binding energy and the recoil energy of the nucleus. Since B_μ and E_rec depend on the capturing nucleus (as a first approximation, B_μ ≈ Z²α²m_μ/2 and E_rec ≈ (m_μ − B_μ)²/2M_A, with M_A the nuclear mass), the E_μe value is ≈104.97 MeV for Al, ≈104.3 MeV for Ti, ≈95.6 MeV for Au and ≈94.9 MeV for Pb.
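
As a consistency check of the aluminium value, a back-of-the-envelope evaluation of (12), with an approximate muonic 1s binding energy B_μ ≈ 0.47 MeV for Z = 13 and a nuclear mass M_Al ≈ 25.1 GeV, gives

E_μe(Al) ≈ 105.66 − 0.47 − (105.2)²/(2 × 25.1 × 10³) MeV ≈ 104.97 MeV,

in agreement with the value quoted above.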

The theoretical predictions for the μ-e conversion rate span some orders of magnitude, depending on the mechanisms which mediate the process. In SUSY frames this transition is dominated by the exchange of a virtual photon (dipole transition) and the conversion rate is then tightly correlated with BR(μ → eγ), but in other models more exotic schemes, like Leptoquarks, Heavy Neutrinos, a second Higgs doublet and so forth, are invoked [111]; moreover, the conversion rate is a function of the nuclear charge Z. We stress again that the search for the μ → eγ decay and the search for μ-e conversion provide complementary information. Figure 22 shows an example of predictions for the process on Ti in SUSY models [112]. In μ-e conversion experiments, a pulsed negative muon beam is formed from the decay of pions produced in proton collisions on a fixed target and is brought to stop in a layer of thin targets, where the muon captures take place. The signal is given by a single monochromatic electron, with energy as expressed in (12).

Note that the μ-e conversion experimental sensitivity is not limited by the accidental background, because there is only one particle in the final state. Electrons in the signal energy window can originate from a few beam-related background sources: the muon decay in orbit (MDIO, the second process in (10)) and the radiative muon (RMC, μ⁻ + (A, Z) → ν_μ + γ + (A, Z − 1)) and pion (RPC, π⁻ + (A, Z) → γ + (A, Z − 1)) captures. Sporadic high energy electrons can also come from muons decaying in flight and from cosmic rays.

The RPC background can be kept under control by reducing the pion contamination in the beam (“beam purity”) by means of moderators inserted in the beam line, and the background due to muons decaying in flight by selecting a muon beam with momentum below ~77 MeV/c, in order to limit the Lorentz boost of the decay electrons. The energy spectrum of the electrons from MDIO can reach E_μe if the neutrinos carry away very little energy and the energy-momentum conservation is ensured by the recoiling nucleus. Close to the end point, the energy spectrum of the MDIO electrons behaves as (E_μe − E_e)⁵.
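
This steeply falling spectrum is what makes the electron energy resolution so critical: in a simple estimate which neglects resolution tails, the fraction of MDIO electrons falling within an energy window of width ΔE below the endpoint scales as

∫ (E_μe − E_e)⁵ dE_e ∝ (ΔE)⁶ (integral taken over E_μe − ΔE < E_e < E_μe),

so that halving ΔE suppresses this background by a factor 2⁶ = 64.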

Another technique for reducing the beam-related background is based on the observation that muonic atoms have lifetimes of some hundreds of ns (for instance,  in Ti and  in Al); one can then use a pulsed beam with very short buckets (~100 ns), let the pions decay, and search for conversion events in a delayed time window. This requires, however, that the fraction of protons arriving on the pion production target between two successive bunches ("out-of-time" protons) be as small as possible (~10⁻⁹): this "extinction factor" is one of the key parameters determining the final sensitivity of μ-e conversion experiments.
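The interplay between the delayed window and the extinction factor can be illustrated with a few lines of Python. The lifetime, window boundaries and extinction value below are assumed, order-of-magnitude numbers chosen for illustration only, not the parameters of Mu2E or COMET.

```python
import math

# Toy illustration of the delayed-window technique: muonic atoms decay or are
# captured with lifetime tau, so a measurement window opened a time t_start
# after the beam bucket keeps a sizeable fraction of the signal, while prompt
# (pion- and beam-related) background is suppressed by the extinction factor.
# All numbers below are assumed, illustrative values.

TAU_MU = 0.88e-6      # assumed muonic-atom lifetime (s), order of magnitude for Al
T_START = 0.7e-6      # assumed start of the measurement window (s)
T_STOP = 1.1e-6       # assumed end of the window (s)
EXTINCTION = 1e-9     # assumed fraction of out-of-time protons

signal_fraction = math.exp(-T_START / TAU_MU) - math.exp(-T_STOP / TAU_MU)
print(f"fraction of mu- captures observed in the delayed window: {signal_fraction:.2f}")
print(f"prompt background further suppressed by the extinction factor: {EXTINCTION:.0e}")
```

A sizeable fraction of the signal survives the delay, while any prompt background leaking into the window scales directly with the extinction factor, which is why the latter is such a critical parameter.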

Finally, a high resolution tracking detector is needed to reduce the spill-in of MDIO background electrons into the signal window, while cosmic-ray-induced events are rejected by veto counters and external shielding and by the identification of their characteristic signals in the tracking devices and calorimeters.

4.3.1. Past Experiments: SINDRUM II

The SINDRUM II [115, 116] experiment at PSI searched for μ-e conversion in Ti, Pb and Au, exploring also the possibility of μ⁻ → e⁺ conversion. In this particular process the change of the lepton electric charge is compensated by a corresponding change of the nuclear electric charge but, differently from most other CLFV processes, not only the muonic and electronic lepton flavours, L_μ and L_e, are separately violated, but also their sum, with a variation |ΔL| = 2.

In the SINDRUM II experiment a high intensity muon beam was stopped in a target and the energy of the emitted electrons was measured with a cylindrical magnetic spectrometer inside a superconducting solenoid. Figure 23 shows a sketch of the SINDRUM II experiment. The spectrometer consisted of various cylindrical detectors surrounding the target on the beam axis. Two drift chambers provided the tracking, while scintillation and Čerenkov hodoscopes were used for the timing of the track elements and for electron identification.

A scintillation beam counter in front of the target helped to recognise prompt background electrons produced by radiative capture of beam pions or by beam electrons scattering off the target. The RPC background was largely reduced and made negligible by using a thick degrader. The further background induced by cosmic rays was identified through the additional signals it produced in the spectrometer. The electron momentum was obtained by reconstructing the helicoidal path within the spectrometer.

The spectrometer momentum calibration and resolution were checked by stopping a beam of positive pions in a low mass foam target and measuring the monoenergetic decay positrons, after reversing the magnetic field and scaling it to the lower momentum of the positrons (). The measured energy resolution was not in perfect agreement with the simulation for the light target, but in much better agreement for more massive targets, like titanium, where it was completely dominated by the energy loss inside the target itself. The simulation of conversion electrons yielded an overall energy resolution of  (FWHM), which was the key factor in removing MDIO electrons, the dominant background source for this experiment.

Figure 24 (top) shows the electron (and positron) energy spectrum measured after removing the prompt forward events, which were attributed to electrons from  decay and identified by using the correlation with the accelerator radiofrequency signal [118]. The prompt forward events are shown in the bottom plot. The event distribution was consistent with the simulation of pure background, mainly coming from MDIO. Isolated high energy events were identified as cosmic ray muons.

With a total number of stopped muons of ~10¹⁴ and a typical efficiency of ~(10–20)%, SINDRUM II reached sensitivities at the level of a few  on μ-e conversion reactions. The SINDRUM II results on the various targets are reported in Table 4.

4.3.2. New Projects

Presently there are two ambitious μ-e conversion projects, Mu2E at Fermilab (Illinois) and COMET/PRISM at J-PARC (Japan), the latter scheduled in two distinct stages. Both projects require a proton beam extinction factor of at least ; this is not a trivial task, and both projects are considering the possibility of equipping their accelerators and proton beam lines with a group of kicker magnets in addition to those already present.

4.3.3. The Mu2E Experiment

Mu2E [124] (Figure 25) derives from the original MECO project [125], which was cancelled for budget reasons in . The experiment will use a  proton beam, with bunches separated by , to produce a pion-muon beam. Secondary particles will be captured by a large acceptance capture solenoid surrounding the proton target and will be driven through a curved transport solenoid, arranged to single out negative muons and to reject antiprotons and positive and neutral particles. The required extinction factor will be obtained by using a system of resonant AC dipoles, which sweeps out-of-time protons into collimators. The expected extinction factor is at the level of . The charge and momentum selection will exploit the shift of the centre of the helicoidal trajectory of a charged particle in the direction perpendicular to the bending plane of the curved solenoid. Since the shift is a function of the particle charge and momentum, the interesting particles can be singled out by placing appropriate collimators.
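A minimal sketch of this selection principle is given below, using the commonly quoted curvature-drift approximation for the out-of-plane displacement of a helix in a curved solenoid, D ≈ (1/(0.3 B)) (s/R) (p∥ + p⊥²/(2 p∥)) with momenta in GeV/c, B in tesla and lengths in metres. Both the formula (as applied here) and all parameter values are assumptions for illustration, not the Mu2E design numbers.

```python
# Toy sketch of charge/momentum selection in a curved transport solenoid.
# The helix centre of a charged particle drifts out of the bend plane by roughly
#   D ~ (1 / (0.3 * B[T])) * (s / R) * (p_par + p_perp**2 / (2 * p_par)),
# with momenta in GeV/c and lengths in metres (standard curvature-drift
# approximation, used here as an assumption, not the Mu2E design formula).
# The drift changes sign with the particle charge, so off-axis collimators
# select one charge and a momentum band.

def vertical_drift(p_par, p_perp, charge=-1, B=2.0, s=6.0, R=3.0):
    """Approximate out-of-plane drift (m) after a bend of arc length s and radius R."""
    return charge * (1.0 / (0.3 * B)) * (s / R) * (p_par + p_perp**2 / (2.0 * p_par))

# Assumed, illustrative momentum components (GeV/c) and charges:
for label, (p_par, p_perp, q) in {
    "low-momentum mu-":    (0.030, 0.026, -1),
    "higher-momentum mu-": (0.075, 0.066, -1),
    "low-momentum mu+":    (0.030, 0.026, +1),
}.items():
    print(f"{label}: drift ~ {100 * vertical_drift(p_par, p_perp, q):+.1f} cm")
```

Because the drift grows with momentum and flips sign with charge, a collimator displaced from the solenoid axis passes the wanted low-momentum negative muons while stopping positives and fast particles.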

The selected negative muons will be brought to stop in thin aluminum foils, and the electrons emitted after muon decay or capture will be analysed by using a high resolution (900 keV FWHM at the signal energy) spectrometer with a graded magnetic field, complemented by an electromagnetic calorimeter. The magnetic field configuration allows selecting high energy electrons () and recovering backward-going ones.

A total of  stopped muons is foreseen in two years of data taking; assuming a conversion rate of  and an extinction factor of , a signal of  events is expected, with an estimated background of  events. In case of no signal, Mu2E would set an upper limit of  at  C.L.

4.3.4. The COMET/PRISM Experiment

At the J-PARC proton accelerator facility a two-stage search for μ-e conversion is planned: the goal of the first stage, the COMET experiment, is to reach a sensitivity of  on the conversion rate, while the second stage, the PRISM/PRIME experiment, aims to improve this sensitivity by two orders of magnitude.

COMET (COherent Muon to Electron Transition, Figure 26, [126]) will use a  pulsed proton beam with 1 μs bunch separation (comparable with the lifetime of muons in muonic aluminum) and short buckets (~100 ns). In a two-year running time, the expected number of collected stopped muons is . The experiment is conceptually similar to Mu2E, with a pion capture system, a pion decay and muon transport section and the detector. The main differences with respect to Mu2E are:
(a) a C-shaped bending transport solenoid instead of the Mu2E S-shaped transport solenoid;
(b) a curved solenoidal spectrometer instead of the Mu2E straight solenoidal spectrometer.
The C-shaped solenoid was chosen to optimise the muon momentum selection by coupling it with a suitable vertical magnetic field, provided by tilted solenoidal coils, which improves the transport of high energy electrons through the collimator system. This enhances the rejection of muons with momentum higher than , which could produce dangerous high energy electrons by decaying in flight. The curved spectrometer was chosen to reject low energy electrons from MDIO, thus reducing the single counting rates in the detector. A bunch-kicking injection method will be used to reach the needed extinction factor; recent tests indicate that this technique can reduce the fraction of out-of-time protons to the level of .

A preliminary data acquisition is scheduled for , without the muon transport system and with a simplified, lower resolution detector. A sensitivity of  is expected in this phase, which will also be devoted to the R&D of the final project elements. With the completed detector in operation (estimated for ) and an estimated background of  events, COMET would be sensitive to .

In the second stage, the pion decay and muon transport sections of the COMET experiment will be modified and coupled with a very intense muon beam source, PRISM (Phase Rotation Intense Slow Muon source, Figure 27) [117]. A beam intensity of  is aimed at, with a central momentum of . The beam will be injected into a large aperture muon storage ring, based on an FFAG (Fixed Field Alternating Gradient) synchrotron, where the surviving pions decay and the momentum spread is reduced from the original  to  by using the phase rotation technique. Such a small energy spread would allow enough muons to be stopped in very thin foils, minimising the resolution degradation due to electron interactions in the target. A final momentum resolution of  FWHM at  is envisaged. The combined effect of the improved resolution and of the intense muon beam would allow the experiment to be sensitive to . The experimental demonstration of the phase rotation technique in the PRISM-FFAG ring is underway.
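The idea behind phase rotation can be illustrated with a toy Monte Carlo: in a short bunch, early particles tend to be the faster ones and late particles the slower ones, so an RF field that decelerates early arrivals and accelerates late ones trades the time spread for a reduced momentum spread. All numbers, distributions and the linear correlation assumed below are illustrative choices of ours, not PRISM parameters.

```python
import random, statistics

# Toy illustration of "phase rotation": the arrival time of each particle is
# correlated with its momentum offset, and a linear RF kick proportional to
# the time offset removes most of the momentum spread.
# All numbers are assumed and purely illustrative.

random.seed(1)
P0 = 68.0                    # assumed central momentum (MeV/c)
particles = []
for _ in range(10000):
    dp = random.gauss(0.0, 0.20) * P0        # assumed +-20% momentum spread
    dt = -0.5 * dp + random.gauss(0.0, 1.0)  # arrival time correlated with dp (a.u.)
    particles.append((dp, dt))

def rel_spread(sample):
    return statistics.pstdev([dp for dp, _ in sample]) / P0

# Linear RF kick proportional to the arrival-time offset
# (gain chosen to undo the assumed correlation):
rotated = [(dp + 2.0 * dt, dt) for dp, dt in particles]

print(f"momentum spread before phase rotation: {100 * rel_spread(particles):.1f}%")
print(f"momentum spread after  phase rotation: {100 * rel_spread(rotated):.1f}%")
```

In this toy model the spread shrinks from ~20% to a few percent, which is the qualitative effect the PRISM-FFAG ring is designed to achieve on the real muon beam.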

4.3.5. The DeeMe Experiment

The DeeMe experiment [127] is a less ambitious but shorter time scale experiment for μ-e conversion, which aims at a sensitivity of , about 20 times better than SINDRUM II. The idea is to obtain the electrons from μ-e conversion directly from the production target (a situation analogous to that of surface muons), without the need of complex muon and pion transport sections. The experiment is expected to operate at the J-PARC Material and Life Science facility (MLF/MUSE), an intense muon beam source extracted from a , 1 MW Rapid Cycling Synchrotron (RCS). Muons will be produced and stopped in a SiC target and the emerging electrons will be transported by a beam line composed of focusing solenoids, prompt kickers and bending magnets to an electron spectrometer. The spectrometer will be equipped with MWPCs and is expected to reach a resolution of  at . This resolution is needed to reject the MDIO background, which is the dominant source of high energy electrons for this experiment. Particular care will be devoted to suppressing, to a level of , the out-of-time proton background due to secondary turns in the RCS accelerator. The data taking is planned to start in .

4.4. Long Term Future for μ-e Conversion

Unlike μ → eγ and μ → eee, μ-e conversion experiments are not rate-limited, since their signal is an isolated high-energy electron. Therefore they can, at least in principle, benefit from the very intense muon beams expected from future high-intensity accelerators. However, this kind of experiment also has some limitations, mainly related to beam purity (the extinction factor) and to background control. For instance, Project-X/PIP-II at Fermilab is expected to provide at least ten times more muons to the Mu2E experiment, and the major challenge for the collaboration will be to keep the background at a level of <1 event. Other main concerns are the radiation heating of the target (with risks of melting!) and the beam spread, which could benefit from a PRISM-like ring technology.

4.5. μ-τ Conversion: A Brief Mention

A few years ago some interest was devoted to the possibility of studying the μ-τ conversion in nuclei as a promising CLFV channel [128–130]. In many supersymmetric models this process can take place through two different reactions, the elastic (μ + N → τ + N) and the deep inelastic (μ + N → τ + X) scattering; various calculations (e.g., [130]) indicate that the cross section for the latter reaction is a steep function of the muon energy and is significantly enhanced for muon energies >50 GeV, thanks to the contribution induced by the sea quarks.

Note that the experimental approach in the search for μ-τ conversion is completely different from that of the muon decay or μ-e conversion searches discussed above, since a muon energy of several tens of GeV is required. The expected signal ranges from some hundreds to several tens of thousands of taus for a muon beam intensity of ~10²⁰ μ/year, within the reach of a muon or neutrino factory. The signal would be selected by looking at tau decays into hard hadrons, which should be emitted at a relatively large angle with respect to the beam direction and with some missing momentum. However, at such a high muon rate the background could be substantial: misidentified hard muons from elastic scattering or hadrons from the target could mimic the tau decay signal. Realistic MC simulations and detector designs are mandatory for evaluating the real possibility of observing the CLFV μ-τ conversion.

5. The Tauonic Channel

The tau lepton is in principle a very promising source of CLFV decays. Thanks to the large tau mass (), many CLFV channels are open: τ → ℓγ, τ → ℓℓℓ, τ → ℓh and so on (ℓ indicates a light charged lepton, muon or electron, and h a hadronic state), and in several SUSY and SUSY-GUT schemes the branching ratios of these decays are enhanced with respect to the muon CLFV decays by a factor . Therefore one expects [131–133] that experiments searching for τ → μγ must reach a sensitivity of  to be competitive with dedicated muon CLFV decay experiments. Examples of SUSY predictions for  for two different values of  are shown in Figure 28 [112]. The branching ratios of the other CLFV processes are generally expected to be smaller than that of τ → μγ: for instance, τ → eγ is usually disfavoured because of the small coupling between the first and third generations, and τ → ℓℓℓ is suppressed in the amplitude by the additional electromagnetic coupling of the intermediate virtual photon. However, in models with heavy Dirac neutrinos or an inverted slepton hierarchy, values which exceed  are predicted. Therefore, as in the case of the muonic channels, the complementarity between the various CLFV channels is essential in the search for new physics and for a deeper understanding of the flavour structure.

From the experimental point of view, however, a difficulty immediately arises: the tau is an unstable particle with a very short lifetime ( [83]). Tau beams cannot therefore be realised, and large tau samples must be obtained at intense electron or proton accelerators, operating in an energy range where the tau production cross section is large, coupled with refined detectors with good particle identification, tracking and calorimetric capabilities to select very rare events. Until the end of the nineties, the best experimental limit had been set by the CLEO experiment at the CESR collider (Cornell):  [134]; however, the situation improved significantly at the beginning of our millennium, when two experiments operating at B factory machines, BaBar [135] (Figure 29) at the PEP-II collider at SLAC (USA) and Belle [136] (Figure 30) at KEKB (Japan), went online. Both experiments operated at a total center of mass (CM) energy at the peak of the ϒ(4S) resonance; at this energy the tau pair production cross section is comparable to that for B pairs, so that the B factories are τ factories too. Moreover, in electron-positron colliders the initial state is very well known and high resolution detector technologies are employed. Belle and BaBar are large central detectors, equipped with a combination of tracking devices, particle identification (PID) systems, vertex detectors and calorimeters. The main difference lies in the PID technique, based on a threshold Čerenkov counter, the time-of-flight and the tracker for Belle, and on a RICH and the trackers for BaBar. Several CLFV decays were searched for: the gamma-leptonic (τ → ℓγ), the purely leptonic (τ → ℓℓℓ) and the leptonic-hadronic (τ → ℓh) ones.

In all these searches each event is divided into two hemispheres, defined by the thrust axis: the "tag side" and the "signal side." Events with τ⁺τ⁻ pairs are selected by identifying a SM tau decay on the tag side, while possible CLFV decays are searched for on the signal side. Note that there is no limitation on the sign of the tau charge: both positive and negative taus can appear on the tag and on the signal side. The tagging is based on the purely leptonic decay or on decays involving a tau neutrino and at least one prong, while on the signal side CLFV candidates are selected on the basis of the appropriate topology of each individual channel. The preferred tag channel is the tau decay into one prong + neutrino, because this channel has a branching ratio of  and a reduced missing momentum, since only one neutrino is present. Preliminary topological cuts are applied to the CLFV candidates and then a blind strategy is used: the signal region is hidden, and sideband data and MC simulations are used to estimate the background and optimise the selection criteria. The selection efficiency of the CLFV searches is usually in the range ~(3–10)%.
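A toy sketch of the thrust-axis hemisphere separation follows. The brute-force axis scan, the event content and all momenta are ours, introduced only to illustrate the definition of the thrust axis (the direction maximising Σ|p·n|/Σ|p|); real analyses use dedicated, efficient thrust-finding algorithms.

```python
import math

# Toy sketch of the tag/signal hemisphere separation: the thrust axis n
# maximises sum_i |p_i . n| / sum_i |p_i|, and each reconstructed particle is
# assigned to a hemisphere according to the sign of p_i . n.  Which hemisphere
# is "tag" and which is "signal" depends on what is reconstructed in each.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def thrust_axis(momenta, n_theta=60, n_phi=120):
    """Return (unit axis, thrust) maximising the thrust, by a coarse grid scan."""
    best_axis, best_thrust = None, -1.0
    ptot = sum(norm(p) for p in momenta)
    for i in range(n_theta):
        theta = math.pi * (i + 0.5) / n_theta
        for j in range(n_phi):
            phi = 2.0 * math.pi * j / n_phi
            n = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            t = sum(abs(dot(p, n)) for p in momenta) / ptot
            if t > best_thrust:
                best_axis, best_thrust = n, t
    return best_axis, best_thrust

def split_hemispheres(momenta, axis):
    """Assign each momentum to one of the two hemispheres defined by the axis."""
    side_a = [p for p in momenta if dot(p, axis) >= 0.0]
    side_b = [p for p in momenta if dot(p, axis) < 0.0]
    return side_a, side_b

# Assumed, illustrative CM momenta (GeV/c) of a few reconstructed particles:
event = [(1.2, 0.1, 3.9), (0.3, -0.2, 1.1), (-0.9, 0.2, -4.1), (-0.1, 0.0, -0.8)]
axis, thrust = thrust_axis(event)
side_a, side_b = split_hemispheres(event, axis)
print(f"thrust = {thrust:.3f}; hemisphere A: {len(side_a)} particles, hemisphere B: {len(side_b)} particles")
```

The jet-like topology of τ⁺τ⁻ events at the B factories makes this separation very clean, which is what allows the tag-and-probe strategy described above.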

The Belle and BaBar upper limits on tau CLFV decays have continuously improved since , following the increase of the collected and analysed data samples.

5.1. Searches for τ → ℓγ Decays

The τ → ℓγ decay (τ → μγ and τ → eγ) is the most studied CLFV tau decay. The search strategy is based on the identification of the tau decay products, on their invariant mass and on their total energy. On the signal side, candidates are preselected by requiring a single muon or electron track and at least one photon. The main background comes from the coincidence of a photon from initial (ISR) or final state radiation with an isolated lepton from an ordinary tau decay. Radiative dimuon and Bhabha events are also relevant background sources. It is important to note that the ISR represents an irreducible and unavoidable noise, which limits the sensitivity of these experiments to τ → ℓγ decays; we will come back to this point later.

The BaBar data sample analysed so far in the τ → ℓγ search corresponds to a total integrated luminosity of ; the calculated number of produced tau decays is . Cuts are imposed on the tag side on the missing momentum (due to neutrinos), on the detector acceptance and on kinematical variables, in order to reduce the backgrounds from radiative Bhabha scattering and dimuon events. After applying all preliminary selections, the τ → μγ and τ → eγ candidate events are studied in a two-dimensional plane whose coordinates are the difference between the total energy of the (ℓγ) pair and the beam energy in the CM frame, and the (ℓγ) pair invariant mass. For a CLFV tau decay one expects the energy difference to be close to zero and the invariant mass close to the tau mass, but because of the finite resolution one must consider a two-dimensional signal region. All possible (ℓγ) pairs are formed, to take into account the presence of spurious photons from the ISR background. Events in a window around the expected signal position are blinded and the expected background is evaluated; then, the blinded region is opened and one looks at the events observed in the energy-difference and invariant-mass windows around the nominal values. Figure 31 shows the distribution of the selected events in this plane for τ → μγ (left) and τ → eγ (right): the red dots are experimental points, the black ellipses are the  contours and the yellow and green regions contain  and  of the MC signal events. The number of measured events inside the ellipses was  for the τ → μγ and  for the τ → eγ decay, to be compared with expected backgrounds of () and (), respectively. Since no excess was observed in either case, the following limits were set:  and  [121] at  C.L.
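For concreteness, the following toy sketch computes the two observables of such an analysis for a lepton-photon pair and tests an elliptical signal window. The resolutions, the window size and the example four-momenta are assumed, illustrative numbers, not the BaBar or Belle values.

```python
import math

# Toy sketch of the (energy difference, invariant mass) selection used in the
# tau -> l gamma searches: for each lepton+photon pair in the CM frame one
# computes the pair invariant mass and the difference between the pair energy
# and the beam energy; a CLFV signal clusters around (m_tau, 0).

M_TAU = 1.77686            # tau mass (GeV/c^2)
E_BEAM_CM = 10.58 / 2.0    # half of an assumed 10.58 GeV CM energy

def pair_observables(p_lep, p_gam):
    """Return (invariant mass, Delta E) of a lepton+photon pair.
    Each argument is (E, px, py, pz) in the CM frame, in GeV."""
    E = p_lep[0] + p_gam[0]
    px, py, pz = (p_lep[i] + p_gam[i] for i in (1, 2, 3))
    m_inv = math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))
    return m_inv, E - E_BEAM_CM

def in_signal_ellipse(m_inv, dE, sigma_m=0.010, sigma_E=0.045, n_sigma=2.0):
    """True if the pair lies inside an elliptical n_sigma window around (m_tau, 0).
    The resolutions are assumed, illustrative values."""
    return ((m_inv - M_TAU) / sigma_m) ** 2 + (dE / sigma_E) ** 2 <= n_sigma ** 2

# Assumed example pair (this one is background-like and fails the window):
lepton = (0.95, 0.30, -0.10, 0.89)
photon = (4.30, 1.20, -0.40, 4.11)
m_inv, dE = pair_observables(lepton, photon)
print(f"m_inv = {m_inv:.3f} GeV, Delta E = {dE:+.3f} GeV, in window: {in_signal_ellipse(m_inv, dE)}")
```

The blind-analysis strategy then consists of freezing such a window, estimating the background from the sidebands and MC, and only afterwards counting the events that fall inside it.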

The Belle data sample for this search is almost equivalent ( tau decays) and the strategy is quite similar. Kinematical selections on the missing momentum and on the opening angle between particles are used to clean the sample. Since in this detector the radiative pair annihilation processes (with μ⁺μ⁻γ and e⁺e⁻γ final states) constitute an important background source, the requirement of no muon (electron) tracks on the tag side is added for the τ → μγ (τ → eγ) search. For the τ → μγ search the most important residual background comes from tau pairs in which a tau decays into a muon and neutrinos or into a pion and a neutrino (with the pion misidentified as a muon), in coincidence with an ISR photon. For the τ → eγ search, on the other hand, the surviving background is dominated by radiative decays of  pairs. Figure 32 shows the distributions of the Belle events in the same plane for τ → μγ (a) and τ → eγ (b). The dashed and dotted-dashed ellipses represent the  and  contours, the diagonal dashed lines define the band along the shorter ellipse axis and the shaded boxes indicate the signal MC events. The number of signal events was extracted by a maximum likelihood fit, obtaining  for τ → μγ and  for τ → eγ; the corresponding limits are  and  [122] at  C.L.

5.2. Searches for τ → ℓℓℓ

The search for the τ → ℓℓℓ decay is potentially more interesting from the experimental point of view, since with only charged particles in the final state the mass resolution is excellent and there are no irreducible sources of noise. The search strategy of both experiments consists in forming all possible triplets of charged leptons with the required total charge and in looking at the distribution of the events in a plane analogous to the previous one, where the invariant mass is now that of the three-lepton combination. The main background comes from dimuon and Bhabha pairs, which can be efficiently rejected by appropriate topological cuts based on the missing momentum, the missing mass squared, the opening angle between the tracks and the thrust magnitude, thanks to the high tracking reconstruction capabilities of the two detectors; the residual noise in the signal region is therefore very low. Table 5 shows the results of the BaBar ( data sample) [119] and Belle ( data sample) [120] searches for CLFV τ → ℓℓℓ decays. (Results are shown for one charge state but, as observed before, charge conjugation is implied.) The  C.L. upper limits on the CLFV τ → ℓℓℓ decays range from  to  for BaBar and from  to  for Belle, depending on the individual channel.

We must note that the very low background of the τ → ℓℓℓ searches is an important point in view of the possible improvements obtainable with the highest intensity machines: in fact, the sensitivity scales as the inverse of the total integrated luminosity if (and only if!) the experiment is background free; otherwise, on the basis of Poisson statistics, one expects a sensitivity improvement proportional only to the square root of the luminosity. Therefore, with today's detector technologies the τ → ℓℓℓ channel looks like one of the most promising for future searches of CLFV at higher intensity electron-positron colliders.
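The difference between the two scaling regimes can be made explicit with a few lines of Python; the luminosity gains below are arbitrary, illustrative values.

```python
import math

# Toy illustration of how the expected upper limit scales with the integrated
# luminosity L: for a background-free search the limit improves as 1/L, while
# for a background-limited search (background proportional to L) it improves
# only as 1/sqrt(L).  Numbers are assumed and purely illustrative.

def ul_background_free(gain):
    """Upper limit relative to today's limit, no background."""
    return 1.0 / gain

def ul_background_limited(gain):
    """Upper limit relative to today's limit when the background grows with L."""
    return 1.0 / math.sqrt(gain)

for gain in (10, 50, 100):
    print(f"L x{gain:>3}: background-free limit x{ul_background_free(gain):.3f}, "
          f"background-limited limit x{ul_background_limited(gain):.3f}")
```

A fifty-fold luminosity increase thus buys a factor ~50 for a background-free channel such as τ → ℓℓℓ, but only a factor ~7 for a background-limited one, which is the key argument developed in Section 5.4.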

5.3. Other Searches Involving Tau Lepton

Finally, both Belle and BaBar searched for CLFV τ → ℓh decays, where a charged lepton with the same sign as the tau is emitted together with a combination of pseudoscalar or vector hadrons. Many of these channels are very clean, without irreducible backgrounds. On the signal side, events are preselected by searching for an isolated lepton plus the combination of hadronic tracks expected for the individual channel (for instance, a lepton and a pair of hadronic tracks). No evidence for CLFV decays was found in any channel; the corresponding  C.L. upper limits on the processes lie between  and . A summary of the experimental results of the searches for CLFV tau decays is shown in Figure 33 (adapted from [123]). Combined analyses were also performed to obtain global Belle+BaBar  C.L. upper limits on CLFV decays, both in Bayesian and in frequentist frameworks. For a review see [114].

5.4. Future Perspectives
5.4.1. Super B Factories and Tau-Charm Factory

The Super B machines [137, 138] are projects of very intense accelerators, operating at the ϒ(4S) peak, which would reach integrated luminosities ~50 times larger than the combined Belle+BaBar data sample. With such a big increase in luminosity one could expect the sensitivity of the tau CLFV searches to improve by two orders of magnitude. However, it is necessary to remember that the sensitivity scales as the inverse of the luminosity only for a background-free experiment; otherwise, it scales only as the square root of the luminosity, and the expected improvement is much less significant. In the BaBar and Belle searches the "golden" τ → μγ channel is affected by a small but non-negligible background, and the ISR represents an irreducible noise. Extrapolating the present limits on the basis of the increased luminosity only, one obtains a predicted sensitivity on τ → μγ decays of , which is not completely satisfactory. A further factor of two improvement is expected from the use of polarised beams and of appropriate analysis selections; refinements of the detector technologies are also foreseen [139]. On the other hand, the purely leptonic τ → ℓℓℓ channel is potentially more promising, since background-free searches seem feasible: no background events were observed by Belle and BaBar, and the expected number of noise events was  (Table 5). The Super B projects aim to reach a sensitivity of ~2 × 10⁻¹⁰ on the CLFV τ → ℓℓℓ decays. Finally, the τ → ℓh decays represent an intermediate situation, since they are almost background free but the efficiencies are largely different from channel to channel. The predicted sensitivities are in the range .

Two Super B projects, one in Italy [137] and one in Japan [138], were under study until , when the Italian project was cancelled for budget reasons. The Japanese project, SuperBelle, is under development and is expected to be online in a few years; a luminosity of  is aimed at.

A possible alternative to the Super B factory project is under study, from the technical and financial point of view, in Italy: the Tau-Charm factory [140], a lower energy machine operating at a CM energy between  (including the  peak) and . Such a project is expected to reach an integrated luminosity of ~10 ab⁻¹ in three years of running; the corresponding number of tau pairs is ~3 × 10¹⁰. With respect to the Super B factories, the lower luminosity is compensated by the higher cross section for tau pair production:  for energies between  and , to be compared with  at the ϒ(4S) resonance. The sensitivity of a Tau-Charm project to the τ → ℓγ processes is potentially higher than that of the Super B factories, since the ISR, which is the dominant photon background source at the ϒ(4S), does not give a significant contribution at the lower energies. Sensitivities at the level of  on the τ → ℓγ process are expected, which could be improved by a factor of ~3–5 by an efficient separation. The purely leptonic τ → ℓℓℓ channel would be background free even with this type of accelerator, and the aimed sensitivity is .

5.4.2. Searches for CLFV at Large Hadron Collider: ATLAS and CMS

Tau leptons are copiously produced at the LHC, mainly via B and D decays and, to a much lesser extent, via W and Z decays. Detailed studies of the possible detection of the τ → μγ decay in CMS and ATLAS were performed [141, 142]: because of the unavoidable background, the sensitivity to this channel is not competitive with that of the B factory experiments. The τ → μμμ channel looks more promising [69], even if only taus from W or Z decays can produce CLFV candidates acceptable by the trigger schemes of these experiments. Dedicated trigger algorithms with improved efficiency for muons from decays of taus originating from B or D mesons are under study. Assuming that the backgrounds can be effectively suppressed by appropriate selection criteria, one obtains  C.L. upper limits in the range  for an integrated luminosity of , comparable with the sensitivity levels reached by the B factory experiments.

CLFV signatures might also be observed at the LHC if supersymmetric particles are discovered, since these particles naturally generate CLFV couplings in the slepton mixing matrix. For instance, excited states of sleptons could give rise to CLFV decays to their bound state. If new particles are discovered at the TeV scale, it is very likely that precision CLFV experiments will discover CLFV through radiative loops, measuring the CLFV coupling at that scale. On the other hand, the absence of such particles would leave open the possibility of a higher SUSY scale, at the level of thousands of TeV, which can be explored only by the indirect searches performed by dedicated CLFV projects.

5.5. Results of CLFV Searches at the LHCb Experiment

The LHCb experiment [143], schematically shown in Figure 34, published in  the first bound on the τ → μμμ decay obtained at a hadron collider. This result was based on an integrated luminosity of  ( at  and  at ), corresponding to the data collected in  and . Tau leptons were produced mainly in leptonic D_s decays. Differently from what happens at the B factories, taus at the LHC are not produced in pairs, so there is no tag side and the background is more severe; however, thanks to the huge production cross section, the tau sample is larger:  taus at the LHC, to be compared with  tau pairs at the B factories. Events with three muon tracks of the correct signs are selected by looking at their invariant mass and by using two multivariate classifiers, related respectively to the kinematical and geometrical properties of the decay and to the particle identification. The normalisation is provided by the measured number of D_s → φ(μμ)π decays, used as a control sample since the relevant branching fractions are known, taking into account the appropriate trigger and analysis efficiencies of the two processes. The combinatorial background (accidental coincidences of three muon tracks) dominates; a smaller contribution comes from a decay chain in which an intermediate particle decays into two muons. The experiment established a limit on the branching ratio of  at  C.L., about a factor of  worse than the BaBar/Belle results [144]. However, the expected integrated luminosity over the whole LHCb lifetime is , times higher than that of the analysed sample. Then, assuming that refinements in the background control can be achieved, a significant improvement of this result seems likely.
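The normalisation-channel technique used here is generic and worth spelling out: the signal branching fraction is extracted relative to a well-measured control channel recorded in the same data, so that the luminosity and production cross section largely cancel. The formula below is the generic relation, and all numerical inputs are assumed, illustrative values, not the actual LHCb measurement.

```python
# Generic sketch of a "normalisation channel" extraction:
#   B_sig ~ B_norm * (eff_norm / eff_sig) * (N_sig / N_norm) * f_prod,
# where f_prod accounts for any (assumed known) difference in the production
# rates of the parent particles of the signal and normalisation channels.
# All numbers below are assumed and purely illustrative.

def signal_branching_fraction(n_sig, n_norm, br_norm, eff_sig, eff_norm, f_prod=1.0):
    """Return the signal branching fraction implied by the observed yields."""
    return br_norm * (eff_norm / eff_sig) * (n_sig / n_norm) * f_prod

# Assumed, illustrative inputs:
upper_limit_n_sig = 5.0         # e.g. an upper limit on the signal yield
n_norm = 2.0e4                  # observed normalisation-channel yield
br_norm = 1.3e-5                # assumed effective normalisation branching fraction
eff_sig, eff_norm = 0.05, 0.02  # assumed total selection efficiencies

print(f"BR upper limit ~ "
      f"{signal_branching_fraction(upper_limit_n_sig, n_norm, br_norm, eff_sig, eff_norm):.2e}")
```

Because the yields are measured in the same dataset with the same trigger, most systematic uncertainties on luminosity and production cancel in the ratio, which is the main reason this method is preferred at a hadron collider.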

6. Conclusions

In the context of searches for BSM physics, CLFV processes represent one of the most promising tools; thus, an extensive program of searches for CLFV reactions in several channels is underway or planned. While the MEG experiment is continuously improving its upper bound on the benchmark μ → eγ decay and the B factory experiments have analysed most of their tau samples in search of possible CLFV tau decays, various new projects are in their R&D phase, including the MEG upgrade. In the meantime, the first significant results on CLFV at a hadron collider were obtained by the LHCb experiment. We expect that in the next ten years sensitivities at the level of a few  for μ → eγ, ~10⁻¹⁶ for μ → eee and ~10⁻¹⁷ for μ-e conversion will be reached in the muonic sector, while in the tauonic sector projects like SuperBelle or (maybe) the Tau-Charm factory could explore the branching ratios of CLFV tau decays down to  or less. These sensitivities cover a large fraction of the parameter space of many BSM theories, including several SUSY and SUSY-GUT schemes. It is therefore reasonable to expect that in the first quarter of this century CLFV experiments, together with the experiments at high-energy colliders looking at the direct production of SUSY particles, will provide important insights into the particle physics world beyond the SM.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to all MEG colleagues and especially to the Pisa colleagues A. Baldini, C. Bemporad, L. Galli, M. Grassi, and G. Signorelli for useful discussions and for the long time spent together on all the activities of the experiment. Information on μ-e conversion experiments, and particularly on the Mu2E project, was provided by F. Cervelli, and on the Mu3e experiment by A. Papa; the members of the BaBar group in Pisa (G. Batignani, G. Casarosa, A. Cervelli, M. Giorgi, A. Lusiani, and N. Neri) helped them with information and discussions about tau CLFV searches at B factory, Super B factory and Tau-Charm factory experiments. Finally, they are grateful to the editorial board of the journal and especially to V. Flaminio for the invitation to write and submit the present paper. This work was supported by the Italian National Institute for Nuclear Physics (INFN).