Abstract

I give an overview of the features the Mathematica package SARAH provides to study new models. In general, SARAH can handle a wide range of models beyond the MSSM that come with additional chiral superfields, extra gauge groups, or distinctive features like Dirac gaugino masses. All of these models can be implemented in a compact form in SARAH and are easy to use: SARAH extracts all analytical properties of a given model, such as two-loop renormalization group equations, tadpole equations, mass matrices, and vertices. One- and two-loop corrections to tadpoles and self-energies can also be obtained. For numerical calculations SARAH can be interfaced with other tools to get the mass spectrum, to check flavour or dark matter constraints, to test vacuum stability, or to perform collider studies. In particular, the interface to SPheno allows a prediction of the Higgs mass in a given model with a precision comparable to the MSSM case by incorporating the important two-loop corrections. I show in great detail with the example of the B-L-SSM how SARAH together with SPheno, HiggsBounds/HiggsSignals, FlavorKit, Vevacious, CalcHep, MicrOmegas, WHIZARD, and MadGraph can be used to study all phenomenological aspects of a model.

1. Introduction

Supersymmetry (SUSY) has been the top candidate for beyond standard model (BSM) physics for many years [1–3]. There are many reasons for this: SUSY solves the hierarchy problem of the standard model (SM) [4, 5], provides a dark matter candidate [6–8], leads to gauge coupling unification [9–15], and gives an explanation for electroweak symmetry breaking (EWSB) [16, 17]. One might even consider the measured Higgs mass as a first hint for SUSY since it falls in the correct ballpark, while it could be much higher in the SM and other BSM scenarios. Before the LHC was turned on, the main focus had been on the minimal supersymmetric extension of the SM, the MSSM. The 105 additional parameters of this model, mainly located in the SUSY breaking sector, can be constrained by assuming a fundamental, grand unified theory (GUT) and a specific mechanism for SUSY breaking [18–27]. In these cases often only four or five free parameters are left and the model becomes very predictive. However, the negative results from SUSY searches at the LHC as well as the measured Higgs mass of about 125 GeV [28, 29] put large pressure on the simplest scenarios. Wide regions of the parameter space which had been considered natural before the LHC started have been ruled out. This has caused more interest in nonminimal SUSY models. Beyond-MSSM models can provide many advantages compared to the MSSM, and they address not only the two issues mentioned so far. A more complete list of good reasons to take a look at extensions of the MSSM is as follows.

(i) Naturalness. The Higgs mass in SUSY is not a free parameter like in the SM. In the MSSM the tree-level mass is bounded from above by the Z-boson mass m_Z. Thus, about one-third of the Higgs mass has to be generated radiatively to explain the observation. Heavy SUSY masses are needed to gain sufficiently large loop corrections: that is, a soft version of the hierarchy problem appears again. The need for large loop corrections gets significantly softened if additional F- or D-term contributions are present which already give a push to the tree-level mass [30–34].

(ii) SUSY Searches. The negative results from all SUSY searches at the LHC have put impressive limits on the sparticle masses. However, the different searches are based on certain assumptions like a stable, neutral, and colourless lightest SUSY particle (LSP), a sufficient mass splitting between the SUSY states, and so on. As soon as these conditions no longer hold, as in models with broken R-parity, the limits become much weaker [35–37]. Also in scenarios with compressed spectra, where SUSY states are nearly degenerate, the strong limits often do not apply [38–49].

(iii) Neutrino Data. There is overwhelming experimental evidence that neutrinos have masses and mix among each other; see [50] and references therein. However, neutrino masses are not incorporated in the MSSM. To change that, either one of the different seesaw mechanisms can be utilised or R-parity must be broken to allow a neutrino-neutralino mixing [51–62].

(iv) Strong CP-Problem. The strong CP-problem remains an open question not only in the SM but also in the MSSM. In principle, for both models the same solution exists to explain the smallness of the θ-term in QCD: the presence of a broken Peccei-Quinn (PQ) symmetry [63]. In their supersymmetric version, PQ models predict not only an axion but also an axino which could be another DM candidate [64–70]. In general, the phenomenological aspects of axion-axino models are often even richer, in particular if the DFSZ version is considered [71, 72]: the minimal, self-consistent supersymmetric DFSZ-axion model needs in total three additional superfields compared to the MSSM [73].

(v) μ-Problem. The superpotential of the MSSM involves one parameter with dimension of mass: the μ-term. This term is not protected by any symmetry: that is, the natural values would be either exactly 0 or of the order of the Planck scale. However, both extreme values are ruled out by phenomenological considerations. The optimal size of this parameter would be comparable to the electroweak scale. This could be obtained if the μ-term is actually not a fundamental parameter but is generated dynamically. For instance, in singlet extensions an effective μ-term appears as a consequence of SUSY breaking and is therefore naturally of the order of the SUSY scale [30, 74].

(vi) Top-Down Approach. Starting with a GUT or string theory, it is not necessarily clear that only the gauge sector and particle content of the MSSM are present at the low scale. Realistic UV completions often come with many additional matter fields close to the SUSY scale. In many cases also additional neutral and even charged gauge bosons are predicted [75–78].

(vii) R-Symmetry. If one considers R-symmetric models, Majorana gaugino masses are forbidden. To give masses to the gauginos in these models, a coupling to a chiral superfield in the adjoint representation is needed. This gives rise to Dirac masses for the gauginos which are in agreement with R-symmetry [79–119]. Dirac gauginos are also attractive because they can weaken LHC search bounds [120–122] and flavour constraints [123–125].

Despite the large variety and flexibility of SUSY, many dedicated public computer tools like Isajet [126–132], Suspect [133], SoftSUSY [134–136], SPheno [137, 138], or FeynHiggs [139, 140] are restricted to the simplest realization of SUSY, the MSSM, or small extensions of it.
Therefore, more generic tools are needed to allow the study of nonminimal SUSY models with the same precision as the MSSM. This precision is needed to confront also these models with the strong limits from SUSY searches, flavour observables, dark matter observations, and Higgs measurements. The most powerful tool in this direction is the Mathematica package SARAH [141–145]. SARAH has been optimized for an easy, fast, and exhaustive study of nonminimal SUSY models. While the first version of SARAH focused on the derivation of tree-level properties of a SUSY model, that is, mass matrices and vertices, and on interfacing this information with Monte-Carlo (MC) tools, with the second version of SARAH the calculation of one-loop self-energies as well as two-loop renormalization group equations (RGEs) was automatized. With version 3, SARAH became the first “spectrum-generator-generator”: all analytical information derived by SARAH can be exported to Fortran code which provides a fully-fledged spectrum generator based on SPheno. This functionality has later been extended by the FlavorKit [146] interface which allows a modular implementation of new flavour observables based on the toolchain FeynArts/FormCalc–SARAH–SPheno. Also, different methods to calculate the two-loop corrections to the Higgs states in a nonminimal model are available with SPheno modules generated by SARAH today: the radiative contributions to CP-even scalar masses at the two-loop level can be obtained by using either the effective potential approach [147] based on generic results given in [16], or a fully diagrammatic calculation [148]. Both calculations provide Higgs masses with a precision which is otherwise available only for the MSSM. Beginning with SARAH 4, the package is no longer restricted to SUSY models but can also handle a general, renormalizable quantum field theory and provides nearly the same features as for SUSY models.
Today, SARAH can be used for SUSY and non-SUSY models to write model files for CalcHep/CompHep [149, 150], FeynArts/FormCalc [151, 152], and WHIZARD/OMega [153, 154] as well as in the UFO format [155] which can be handled, for instance, by MadGraph 5 [156], GoSam [157], Herwig++ [158–160], and Sherpa [161–163]. The modules created by SARAH for SPheno calculate the full one-loop and partially two-loop-corrected mass spectrum, branching ratios and decay widths of all states, and many flavour and precision observables. Also an easy link to HiggsBounds [164, 165] and HiggsSignals [166] exists. Another possibility to get a tailor-made spectrum generator for a nonminimal SUSY model based on SARAH is the tool FlexibleSUSY [167]. Finally, SARAH can also produce model files for Vevacious [168]. The combination SARAH–SPheno–Vevacious provides the possibility to find the global minimum of the one-loop effective potential of a given model and parameter point.

The range of models covered by SARAH is very broad. SARAH and its different interfaces have been successfully used to study many different SUSY scenarios: singlet extensions with and without CP violation [169–179], triplet extensions [180, 181], models with R-parity violation [182–188], different kinds of seesaw mechanisms [58, 60–62, 189–194], models with extended gauge sectors at intermediate scales [195–198] or the SUSY scale [34, 199–203], models with Dirac gauginos [109, 111, 204–206] or vector-like states [207], and even more exotic extensions [208–211]. In addition, SARAH can also be very useful to perform studies in the context of the MSSM which cannot be done with any other public tool out of the box. That is the case, for instance, if new SUSY breaking mechanisms should be considered [212–219] or if the presence of charge and colour breaking minima should be checked [220, 221]. For the NMSSM, despite the presence of specialized tools like NMSSMTools [222], SoftSUSY [223], or NMSSMCalc [224], the SPheno version created by SARAH is the only code providing two-loop corrections beyond O(α_S(α_t + α_b)) not relying on MSSM approximations [225]. Also the full one-loop corrections to all SUSY states in the NMSSM have first been derived with SARAH [226].

This paper is organized as follows. In the next section an overview of the models supported by SARAH is given. In Section 3, I discuss the possible analytical calculations which can be done with SARAH and list the possible output of the derived information for further evaluation. The main part of this paper is a detailed example of how SARAH can be used to study all phenomenological aspects of a model. That is done in Sections 4–8: in Section 4 the implementation of the B-L-SSM in SARAH is described; in Section 5 it is discussed how the model can be understood at the analytical level in Mathematica; the SPheno output with all its features is presented in Section 6; in Section 7 I show how other tools can be used together with SARAH and SPheno to study, for instance, the dark matter and collider phenomenology; and in Section 8 different possibilities to perform parameter scans are presented. I summarize in Section 9. Throughout the paper and in the given examples I focus mainly on SUSY models, but many statements apply one-to-one also to non-SUSY models.

2. Models

2.1. Input Needed by SARAH to Define a Model

SARAH is optimized for the handling of a wide range of SUSY models. The basic idea of SARAH was to give the user the possibility to implement models in an easy, compact, and straightforward way. Most tasks to get the Lagrangian are fully automatized; it is sufficient to define just the fundamental properties of the model. The necessary inputs to completely define the gauge eigenstates with all their interactions are

(1) global symmetries,
(2) gauge symmetries,
(3) chiral superfields,
(4) superpotential.

From this input, SARAH automatizes many steps to derive the Lagrangian:

(1) All interactions of matter fermions and the F-terms are derived from the superpotential.
(2) All vector boson and gaugino interactions as well as D-terms are derived from gauge invariance.
(3) All gauge fixing terms are derived by demanding that scalar-vector mixing vanishes in the kinetic terms.
(4) All ghost interactions are derived from the gauge fixing terms.
(5) All soft-breaking masses for scalars and gauginos as well as the soft-breaking counterparts to the superpotential couplings are added automatically.

Of course, the Lagrangian of the gauge eigenstates is not the final aim. Usually one is interested in the mass eigenstates after gauge symmetry breaking. To perform the necessary rotations to the new eigenstates, the user has to give some more information:

(1) definition of the fields which get a vacuum expectation value (VEV) to break gauge symmetries,
(2) definition of vector bosons, scalars, and fermions which mix among each other.

Using this information, all necessary redefinitions and field rotations are done by SARAH. Also the gauge fixing terms are derived for the new eigenstates and the ghost interactions are added. For all eigenstates plenty of information can be derived by SARAH, as explained in Section 3. Before coming to that, I give more details about what kinds of models and features are supported by SARAH.
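As an illustration of these ingredients, the following abridged sketch follows the conventions of the MSSM model file delivered with SARAH (the exact entry order and names like RpM/RpP are taken from the SARAH manual; comments and many details are omitted):

```mathematica
(* Global symmetries: a Z2 (R-parity) with charge assignments RpM/RpP *)
Global[[1]] = {Z[2], RParity};
RpM = {-1, -1, 1};  RpP = {1, 1, -1};

(* Gauge symmetries: name, group, label, coupling, expand indices?, global charge *)
Gauge[[1]] = {B,   U[1], hypercharge, g1, False, RpM};
Gauge[[2]] = {WB, SU[2], left,        g2, True,  RpM};
Gauge[[3]] = {G,  SU[3], color,       g3, False, RpM};

(* Chiral superfields: name, generations, components, then the charges
   under U(1)_Y, SU(2)_L, SU(3)_c and the global symmetry *)
SuperFields[[1]] = {q,  3, {uL, dL},    1/6, 2,  3, RpM};
SuperFields[[2]] = {l,  3, {vL, eL},   -1/2, 2,  1, RpM};
SuperFields[[3]] = {Hd, 1, {Hd0, Hdm}, -1/2, 2,  1, RpP};
SuperFields[[4]] = {Hu, 1, {Hup, Hu0},  1/2, 2,  1, RpP};
SuperFields[[5]] = {d,  3, conj[dR],    1/3, 1, -3, RpM};
SuperFields[[6]] = {u,  3, conj[uR],   -2/3, 1, -3, RpM};
SuperFields[[7]] = {e,  3, conj[eR],      1, 1,  1, RpM};

(* Superpotential *)
SuperPotential = Yu u.q.Hu - Yd d.q.Hd - Ye e.l.Hd + \[Mu] Hu.Hd;
```

Everything else, such as soft-breaking terms, D- and F-terms, gauge fixing, and ghosts, is then generated automatically as described in the text.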

2.2. Supported Models and Features

As we have seen in the introduction, there are many possibilities to go beyond the widely studied MSSM. Each approach modifies one or the other sector of the model. In general, possible changes compared to the MSSM are (i) using other global symmetries to extend the set of allowed couplings, (ii) adding chiral superfields, (iii) extending the gauge sector, (iv) giving VEVs to other particles than only the Higgs doublets, (v) adding Dirac masses for gauginos, and (vi) considering noncanonical terms like nonholomorphic soft-SUSY breaking interactions or Fayet-Iliopoulos D-terms. All of these directions can in principle be followed with SARAH, and I briefly discuss what is possible in the different sectors and which steps are done by SARAH to get the Lagrangian. Of course, extending the gauge sector or adding Dirac masses for gauginos inevitably comes with an extended matter sector as well. Thus, often several new effects appear together and can be covered by SARAH.

2.2.1. Global Symmetries

SARAH can handle an arbitrary number of global symmetries which are either Z_N or U(1) symmetries. Also a continuous R-symmetry is possible. Global symmetries are used in SARAH mainly for three different purposes. First, they help to constrain the allowed couplings in the superpotential. However, SARAH does not strictly forbid terms in the superpotential which violate a global symmetry; it only prints a warning to point out the potential contradiction. The reason is that such a term might be included on purpose to explain its tininess. Global symmetries can also affect the soft-breaking terms written down by SARAH. SARAH always tries to generate the most general Lagrangian and also includes soft masses of the form m² φ_i φ_j* for two scalars φ_i, φ_j with identical charges. However, these terms are dropped if they are forbidden by a global symmetry. By the same consideration, Dirac gaugino mass terms are written down or not. Finally, global symmetries are crucial for the output of model files for MicrOmegas to calculate the relic density. For this output at least one unbroken discrete global symmetry must be present.

By modifying the global symmetries one can already go beyond the MSSM without changing the particle content: choosing a Z_3 symmetry (baryon triality) instead of R-parity [227–231], lepton number violating terms would be allowed while the proton is still stable. SARAH comes not only with R-parity violating models based on baryon triality; a variant with baryon number violation but conserved lepton number is included as well.

2.2.2. Gauge Sector

Gauge Groups. The gauge sector of a SUSY model in SARAH is fixed by defining a set of vector superfields. SARAH is not restricted to three vector superfields like in the MSSM; many more gauge groups can be defined. To improve the power in dealing with gauge groups, SARAH has linked routines from the Mathematica package Susyno [232]. SARAH together with Susyno takes care of all group-theoretical calculations: the Dynkin and Casimir invariants are calculated, and the needed representation matrices as well as Clebsch-Gordan coefficients are derived. This is possible not only for SU(N) and U(1) gauge groups; SO(N), Sp(2N), and exceptional groups can be used as well. For all Abelian groups also a GUT normalization can be given. This factor usually comes from considerations about the embedding of a model in a greater symmetry group like SU(5) or SO(10). If a GUT normalization is defined for a group, it will be used in the calculation of the RGEs. The soft-breaking terms for a gaugino λ of a gauge group are usually included as
L_SB ⊃ -(1/2) M_λ λλ + h.c.
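Within these conventions, extending the gauge sector is compact. A hypothetical fourth Abelian factor, written in the style of the model files shipped with SARAH (the actual B-L-SSM implementation distributed with the package differs in details), could look like:

```mathematica
(* extra U(1)_B-L vector superfield with coupling gBL; with two U(1)s present,
   SARAH automatically takes care of gauge-kinetic mixing *)
Gauge[[4]] = {Bp, U[1], BminusL, gBL, False};

(* every superfield now carries one more charge entry; instead of a number,
   a symbolic charge like nBLq may be used and is kept as a free parameter *)
SuperFields[[1]] = {q, 3, {uL, dL}, 1/6, 2, 3, 1/6};
```

A GUT normalization for each U(1) can be supplied separately and is then used in the RGE evaluation.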

Gauge Interactions. With the definition of the vector superfields, the self-interactions of vector bosons as well as the interactions between vector bosons and gauginos are already fixed. Those are taken to be
L ⊃ -(1/4) F^a_{μν} F^{a,μν} + i λ†_a σ̄^μ D_μ λ^a.
I am using here and in the following capital letters A and B to label the gauge groups and small letters a, b, and c to label the generators, vector bosons, and gauginos of a particular gauge group. The field strength tensor is defined as
F^a_{μν} = ∂_μ V^a_ν - ∂_ν V^a_μ + g f^{abc} V^b_μ V^c_ν,
and the covariant derivative is
D_μ λ^a = ∂_μ λ^a + g f^{abc} V^b_μ λ^c.
Here, f^{abc} is the structure constant of the gauge group. Inserting the field strength tensor into the kinetic term leads to self-interactions of three and four gauge bosons. In general, the procedure to obtain the Lagrangian from the vector and chiral superfields is very similar to [233]. Interested readers might check this reference for more details.

Gauge Interactions of Matter Fields. Vector superfields usually do not come alone; matter fields are present as well. I am going to discuss the possibilities to define chiral superfields in Section 2.2.4. Here, I assume that a number of chiral superfields are present, and I want to discuss the gauge interactions which are taken into account for those. First, the D-terms stemming from the auxiliary component D of the vector superfield are calculated. These terms cause four-scalar interactions and read
L_D = -(1/2) Σ_a ( Σ_i g φ_i† T^a(r_i) φ_i )².
Here, the sum is over all scalars φ_i in the model, and the T^a(r_i) are the generators of the gauge group for an irreducible representation r_i. For Abelian groups the generators simplify to the charges of the different fields. In addition, Abelian gauge groups can come with another feature: a Fayet-Iliopoulos D-term [234],
L_FI = ξ D.
This term can optionally be included in SARAH for any U(1).

The other gauge-matter interactions are those stemming from the kinetic terms,
L_kin = (D_μ φ_i)† (D^μ φ_i) + i ψ†_i σ̄^μ D_μ ψ_i,
with covariant derivatives D_μ = ∂_μ - i g T^a V^a_μ. The SUSY counterparts of these interactions are those between gauginos and matter fermions and scalars,
L ⊃ -√2 g ( φ_i† T^a ψ_i ) λ^a + h.c.

Gauge-Kinetic Mixing. The terms mentioned so far cover all gauge interactions which are possible in the MSSM. These are derived for any other SUSY model in exactly the same way. However, there is another subtlety which arises if more than one Abelian gauge group is present. In that case terms of the form
-(κ_AB/4) F^A_{μν} F^{B,μν}
are allowed for the field strength tensors of two different Abelian groups A and B [235]. κ is in general an n×n matrix if n Abelian groups are present. SARAH fully includes the effect of kinetic mixing independent of the number of Abelian groups. For this purpose SARAH does not work with such off-diagonal field strength interactions but performs a rotation to bring the field strengths into diagonal form. That is done by a redefinition of the vector V carrying all Abelian gauge fields:
V → R V.
This rotation has an impact on the interactions of the gauge bosons with matter fields. In general, the interaction of a particle with all Abelian gauge fields can be expressed by
Q^T G V,
where Q is a vector containing the charges of the particle under all U(1) groups and G is a diagonal matrix carrying the gauge couplings of the different groups. After the rotation the interaction part can be expressed by
Q^T G̃ V,
with a general matrix G̃ which is no longer diagonal. In that way, the effect of gauge-kinetic mixing has been absorbed in “off-diagonal” gauge couplings. This means that the covariant derivative in SARAH reads
D_μ = ∂_μ - i Q_a g_{ab} V^b_μ,
where a and b run over all U(1) groups and the g_{ab} are the entries of the matrix G̃. Gauge-kinetic mixing is included not only in the interactions with vector bosons, but also in the derivation of the D-terms. Therefore, the D-terms for the Abelian sector in SARAH read
D_a = Σ_i φ_i† ( Q_i^T G̃ )_a φ_i,
while the non-Abelian D-terms keep their standard form given above. Finally, also “off-diagonal” gaugino masses are introduced. The soft-breaking part of the Lagrangian then reads
L_SB ⊃ -(1/2) M_{ab} λ_a λ_b + h.c.
SARAH takes the off-diagonal gaugino masses to be symmetric: M_{ab} = M_{ba}.
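For two Abelian groups the absorption of the kinetic mixing into the couplings can be made explicit; the following is a worked sketch consistent with the rotation described above (χ and the g_{ij} are illustrative symbols, not SARAH-internal names):

```latex
\mathcal{L} \supset -\frac{1}{4} F^{T}_{\mu\nu}\,\kappa\,F^{\mu\nu},
\qquad
\kappa = \begin{pmatrix} 1 & \chi \\ \chi & 1 \end{pmatrix}.
% A (non-unitary) redefinition V \to R\,V with R^{T}\kappa R = \mathbb{1}
% renders the kinetic terms canonical. The matter interactions become
Q^{T} G\,V \;\longrightarrow\; Q^{T} G R\,V \;\equiv\; Q^{T}\tilde{G}\,V,
% so the coupling matrix
\tilde{G} = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix}
% is a general 2x2 matrix: each particle couples to each Abelian gauge
% boson with an effective coupling built from its charges under both groups.
```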

2.2.3. Gauge Fixing Sector

All terms written down so far lead to a Lagrangian which is invariant under a general gauge transformation. To break this invariance one can add “gauge fixing” terms to the Lagrangian. The general form of these terms is
L_GF = -(1/(2ξ)) |f(x)|².
Here, f(x) is usually a function involving partial derivatives of gauge bosons. SARAH uses R_ξ gauge. This means that, for an unbroken gauge symmetry, the gauge fixing terms are
L_GF = -(1/(2ξ)) (∂^μ V_μ)².
For broken symmetries, the gauge fixing terms are chosen in such a way that the mixing terms between vector bosons and scalars disappear from the Lagrangian. This usually generates terms of the form
L_GF = -(1/(2ξ)) (∂^μ V_μ - ξ m_V G)².
Here, G is the Goldstone boson of the vector boson V with mass m_V. From the gauge fixing part, the interactions of the ghost fields η are derived by
L_ghost = -η̄ (δ_BRST f(x)).
Here, δ_BRST assigns the operator for a BRST transformation. All steps to get the gauge fixing parts and the ghost interactions are done completely automatically by SARAH and adjusted to the gauge groups in the model.

2.2.4. Matter Sector

There can be up to 99 chiral superfields in a single SUSY model in SARAH. All superfields can come with an arbitrary number of generations and can transform as any irreducible representation with respect to the defined gauge groups. In the handling of nonfundamental fields under a symmetry, SARAH distinguishes whether the corresponding symmetry gets broken or not: for unbroken symmetries it is convenient to work with fields which transform as vectors under the symmetry with the appropriate length. For instance, a 6 under SU(3) is taken to be
φ_α, α = 1, …, 6.
That is, it carries one charge index. In contrast, nonfundamental fields under a broken gauge symmetry are represented by tensor products of the fundamental representation. For instance, a 3 under SU(2) is taken to be
φ_{ab}, a, b = 1, 2.
Thus, the triplet can be given as usual as a 2×2 matrix.

For Abelian gauge groups, one can not only define charges for superfields which are real numbers; variables can also be used for that. All interactions are then expressed keeping these charges as free parameters.

For all chiral superfields SARAH adds the soft-breaking masses. For fields appearing in several generations, these are treated as Hermitian matrices. As written above, also soft-terms mixing two scalars are included if allowed by all symmetries. Hence, the soft-breaking mass terms read, in general,
L_SB ⊃ -φ_i† m²_{ij} δ̃_{ij} φ_j.
Note that i, j label different scalar fields; generation indices are not shown. δ̃_{ij} is 1 if the fields φ_i and φ_j have exactly the same transformation properties under all local and global symmetries, and 0 otherwise.

2.2.5. Models with Dirac Gauginos

Another feature which became popular in the last years is models with Dirac gauginos. In these models mass terms between the gauginos λ_a and the fermionic components ψ_Σ of a chiral superfield Σ in the adjoint representation of the gauge group are present. In addition, also new D-terms are introduced in these models [98]. Thus, the new terms in the Lagrangian are
L ⊃ -m_D λ_a ψ_Σ + √2 m_D Σ D_a + h.c.
D_a is the auxiliary component of the vector superfield of the group. To allow for Dirac mass terms, these models always come with an extended matter sector: to generate Dirac mass terms for all MSSM gauginos, at least one singlet, one triplet under SU(2)_L, and one octet under SU(3)_C must be added. Furthermore, models with Dirac gauginos also generate new structures in the RGEs [236]. All of this is fully supported in SARAH.

If Dirac masses for gauginos are explicitly turned on in SARAH, it will check for all allowed combinations of vector and chiral superfields which can generate Dirac masses and which are consistent with all symmetries. For instance, in models with several gauge singlets, the bino might even get several Dirac mass terms.
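As a concrete illustration of the last point, assume two gauge singlets S_1 and S_2 whose fermionic components both pair with the bino; a sketch of the resulting mass terms (m_{D1}, m_{D2} are illustrative parameters):

```latex
\mathcal{L} \supset -\,m_{D1}\,\lambda_{\tilde B}\,\psi_{S_1}
                    -\,m_{D2}\,\lambda_{\tilde B}\,\psi_{S_2} + \text{h.c.}
% In the basis (\lambda_{\tilde B}, \psi_{S_1}, \psi_{S_2}) this corresponds
% to the symmetric mass matrix
M = \begin{pmatrix} 0 & m_{D1} & m_{D2} \\ m_{D1} & 0 & 0 \\ m_{D2} & 0 & 0 \end{pmatrix},
% which SARAH treats like any other Majorana-type matrix. One eigenstate
% stays massless at tree level, since only one singlet combination pairs
% with the bino.
```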

2.2.6. Superpotential, Soft-Terms, and Noncanonical Interactions

The matter interactions in SUSY models are usually fixed by the superpotential and the soft-SUSY breaking terms. SARAH fully supports all renormalizable terms in the superpotential,
W = c_L L_i Φ̂_i + c_M μ_{ij} Φ̂_i Φ̂_j + c_Y Y_{ijk} Φ̂_i Φ̂_j Φ̂_k,
and generates the corresponding soft-breaking terms,
L_soft ⊃ c_L t_i φ_i + c_M B_{ij} φ_i φ_j + c_Y T_{ijk} φ_i φ_j φ_k + h.c.
c_L, c_M, and c_Y are real coefficients. All parameters are treated by default in the most general way by taking them as complex tensors of appropriate order and dimension. If identical fields are involved in the same coupling, SARAH also derives the symmetry properties of the parameter.

As discussed below, SARAH can also handle to some extent nonrenormalizable terms with four superfields in the superpotential,
W_NR = c_W W_{ijkl} Φ̂_i Φ̂_j Φ̂_k Φ̂_l.
From the superpotential, all the F-terms and interactions of matter fermions are derived,
L_F = -Σ_i |∂W̃/∂φ_i|²,   L_Y = -(1/2) Σ_{ij} (∂²W̃/∂φ_i ∂φ_j) ψ_i ψ_j + h.c.
Here, W̃ is the superpotential with all superfields replaced by their scalar components φ_i; ψ_i is the fermionic component of that superfield.
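A toy example makes this prescription explicit. Take three superfields with the illustrative superpotential W = μ Φ̂_1 Φ̂_2 + Y Φ̂_1 Φ̂_2 Φ̂_3; then

```latex
\tilde W = \mu\,\phi_1\phi_2 + Y\,\phi_1\phi_2\phi_3,\\[2pt]
\mathcal{L}_F = -\big|\mu\,\phi_2 + Y\,\phi_2\phi_3\big|^2
                -\big|\mu\,\phi_1 + Y\,\phi_1\phi_3\big|^2
                -\big|Y\,\phi_1\phi_2\big|^2,\\[2pt]
\mathcal{L}_Y = -\big[(\mu + Y\phi_3)\,\psi_1\psi_2
                + Y\phi_2\,\psi_1\psi_3 + Y\phi_1\,\psi_2\psi_3\big] + \text{h.c.}
```

The bilinear superpotential term thus produces both a fermion mass term and, via the F-terms, scalar mass terms |μ|²(|φ_1|² + |φ_2|²), while the trilinear term yields Yukawa-like and quartic scalar interactions.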

Usually, the F- and D-terms and the soft-breaking terms for chiral and vector superfields fix the full scalar potential of the model. However, in some cases also noncanonical terms should be studied. These are, for instance, nonholomorphic soft-terms like trilinear couplings involving a conjugated scalar,
L ⊃ T′_{ijk} φ_i φ_j φ_k* + h.c.
Those can be added as well, and they are taken into account in the calculation of the vertices and masses and, as a consequence, also in all loop calculations. However, they are not included in the calculation of the RGEs because of the lack of generic results in the literature.

2.2.7. Symmetry Breaking and VEVs

All gauge symmetries can also be broken. This is in general done by decomposing a complex scalar into its real components and a VEV,
S = (1/√2)(v + φ + i σ).
Assigning a VEV to a scalar is not restricted to colourless and neutral particles. Also models with spontaneous colour or charge breaking (CCB) can be studied with SARAH. Also explicit CP violation in the Higgs sector is possible. There are two possibilities to define that: either a complex phase η is added,
S = (1/√2) e^{iη} (v + φ + i σ),
or a VEV for the CP-odd component σ is defined. Both options are possible in SARAH, even if the first one might often be preferred.
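In a model file, this decomposition is specified field by field. The following sketch follows the conventions of the MSSM model file delivered with SARAH, where the listed factors multiply the VEV, the CP-odd, and the CP-even component, respectively:

```mathematica
DEFINITION[EWSB][VEVs] = {
  {SHd0, {vd, 1/Sqrt[2]}, {sigmad, I/Sqrt[2]}, {phid, 1/Sqrt[2]}},
  {SHu0, {vu, 1/Sqrt[2]}, {sigmau, I/Sqrt[2]}, {phiu, 1/Sqrt[2]}}
};
```

For CP violation in the Higgs sector one would either attach a phase to such a decomposition or assign a VEV to the CP-odd component as well.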

In the case of an extended gauge sector also additional gauge bosons are present. Depending on the quantum numbers of the states which get a VEV, these gauge bosons might mix with the SM ones. Also this mixing is fully supported by SARAH. There is no restriction on whether the additional gauge bosons are ultralight (dark photons) or much heavier (W′-, Z′-bosons).

2.2.8. Mixing in Matter Sector

Mixing of gauge eigenstates into new mass eigenstates appears not only in the gauge sector but also in the matter sector. In general the mixing is induced via bilinear terms in the Lagrangian between gauge eigenstates. These bilinear terms either can be a consequence of gauge symmetry breaking or can correspond to bilinear superpotential or soft-terms. In general, four kinds of bilinear terms can show up in the matter part of the Lagrangian,
L ⊃ -Φ† m²_Φ Φ - (1/2) φ^T m²_φ φ - (1/2) Ψ^T m_Ψ Ψ - Ψ1^T m_D Ψ2 + h.c.
Here, Φ, φ, Ψ, Ψ1, and Ψ2 are vectors whose components are gauge eigenstates: the components of Φ are complex and those of φ are real scalars; the components of Ψ, Ψ1, and Ψ2 are Weyl spinors. The rotation of complex scalars to mass eigenstates happens via a unitary matrix U which diagonalizes the matrix m²_Φ. For real scalars the rotation is done via a real matrix Z which diagonalizes m²_φ:
m²_diag = U m²_Φ U†,   m²_diag = Z m²_φ Z^T.
For fermions we have to distinguish whether the bilinear terms are symmetric or not. In the symmetric case the gauge eigenstates are rotated to Majorana fermions; the mass matrix is then diagonalized by one unitary matrix N,
m_diag = N* m_Ψ N†.
In the second case, two unitary matrices U and V are needed to transform Ψ1 and Ψ2 differently. This results in Dirac fermions. Both matrices together diagonalize the mass matrix m_D:
m_diag = U* m_D V†.
There is no restriction in SARAH on how many states mix. The most extreme case is the one with spontaneous charge, colour, and CP violation where all fermions, all scalars, and all vector bosons mix among each other. This results in a huge mass matrix which would be derived by SARAH. Phenomenologically more relevant models can still have a neutralino sector mixing seven to ten states. That is done without any problem with SARAH. Information about the calculation of the mass matrices in SARAH is given in Section 3.3.
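The corresponding definitions in a model file list which gauge eigenstates rotate into which mass eigenstates together with the names of the mixing matrices; a sketch in the conventions of the MSSM model file delivered with SARAH:

```mathematica
DEFINITION[EWSB][MatterSector] = {
  (* complex scalars: 6x6 down-squark mass matrix, diagonalized by ZD *)
  {{SdL, SdR}, {Sd, ZD}},
  (* Majorana fermions: neutralinos, one unitary matrix ZN *)
  {{fB, fW0, FHd0, FHu0}, {L0, ZN}},
  (* Dirac fermions: charginos, two unitary matrices UM, UP *)
  {{{fWm, FHdm}, {fWp, FHup}}, {{Lm, UM}, {Lp, UP}}}
};
```

Extending one of these lists, for instance by singlinos or extra Higgsinos in the neutralino entry, is all that is needed to enlarge the corresponding mixing.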

2.2.9. Superheavy Particles

Extensions of the MSSM cannot only be present at the SUSY scale but can also appear at much higher scales. These superheavy states then have only indirect effects on the SUSY phenomenology compared to the MSSM: they alter the RGE evolution and give a different prediction for the SUSY parameters. In addition, they can also induce higher-dimensional operators which are important. SARAH provides features to explore models with superheavy states: it is possible to change stepwise the set of RGEs which is used to run the parameters numerically with SPheno. In addition, the most important thresholds are included at the scale M_T at which the fields of mass M_T are integrated out. These are the corrections to the gauge couplings and gaugino masses, proportional to the Dynkin index S(r) of a superfield transforming as representation r with respect to the gauge group [237]. When evaluating the RGEs from the low scale to the high scale the contribution is positive; when running down, it is negative. These corrections assume that the mass splitting between the components of the chiral superfield which is integrated out is negligible. That is often a good approximation for very heavy states. Nevertheless, SARAH can also take into account the mass splitting among the components if necessary.
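The direction of the effect can be made explicit with the standard one-loop matching of a gauge coupling across a threshold (a sketch; the precise scheme-dependent expressions are given in [237]). Let b be the one-loop β coefficient below the threshold and b + S(r) above it; one-loop running then gives at a scale Q < M_T

```latex
\frac{1}{g^2_{\text{low}}(Q)} = \frac{1}{g^2(M_T)} + \frac{b}{8\pi^2}\ln\frac{M_T}{Q},
\qquad
\frac{1}{g^2_{\text{high}}(Q)} = \frac{1}{g^2(M_T)} + \frac{b+S(r)}{8\pi^2}\ln\frac{M_T}{Q},
% so the two descriptions are related by
\frac{1}{g^2_{\text{low}}(Q)} = \frac{1}{g^2_{\text{high}}(Q)} - \frac{S(r)}{8\pi^2}\ln\frac{M_T}{Q}.
```

Running upwards, the heavy superfield starts contributing with +S(r) to the β coefficient at M_T; running downwards, its contribution is removed, which matches the sign convention stated above.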

Also higher-dimensional operators can be initialized which give rise to terms like (26). However, those are only partially supported in SARAH. This means that only the RGEs are calculated for these terms and the resulting interactions between two fermions and two scalars are included in the Lagrangian. The six scalar interactions are not taken into account. This approach is, for instance, sufficient to work with the Weinberg operator necessary for neutrino masses [238, 239].

2.3. Checks of Implemented Models

After the initialization of a model, SARAH provides functions to check the (self-)consistency of this model. The function CheckModel performs the following checks.

Does the Particle Content Cause Gauge Anomalies? Gauge anomalies are caused by triangle diagrams with three external gauge bosons and internal fermions [240]. The corresponding condition for all non-Abelian groups to be anomaly free is
Tr[ T^a {T^b, T^c} ] = 0.
Again, the T^a are the generators for a fermion transforming as an irreducible representation under the gauge group, and the sum is taken over all chiral superfields. In the Abelian sector several conditions have to be fulfilled depending on the number of U(1) gauge groups, such as
Σ Q_A³ = 0,   Σ Q_A² Q_B = 0, ….
The mixed condition involving Abelian and non-Abelian groups is
Σ Q_A Tr[ T^a T^b ] = 0.
Finally, the condition involving gravity is
Σ Q_A = 0.
If one of these conditions is not fulfilled, a warning is printed by SARAH. If some charges were defined as variables, the conditions on these variables for anomaly cancellation are printed.
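As a concrete check, the cubic hypercharge condition for one generation of matter superfields, with the charge conventions used above:

```latex
\sum_i Y_i^3 =
\underbrace{6\left(\tfrac{1}{6}\right)^3}_{q}
+ \underbrace{3\left(-\tfrac{2}{3}\right)^3}_{u}
+ \underbrace{3\left(\tfrac{1}{3}\right)^3}_{d}
+ \underbrace{2\left(-\tfrac{1}{2}\right)^3}_{l}
+ \underbrace{1^3}_{e}
= \tfrac{1}{36} - \tfrac{8}{9} + \tfrac{1}{9} - \tfrac{1}{4} + 1 = 0.
% The two Higgs doublets contribute 2(1/2)^3 + 2(-1/2)^3 = 0 and cancel
% among themselves; the analogous sum with Y_i^1 checks the gravitational
% condition.
```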

Does the Particle Content Lead to the Witten Anomaly? SARAH checks that there is an even number of SU(2) doublets. This is necessary for a model to be free of the Witten anomaly [241].

Are All Terms in the (Super)Potential in Agreement with Global and Local Symmetries? As mentioned above, SARAH does not forbid the inclusion of terms in the superpotential which violate global or gauge symmetries. However, it prints a warning if this happens.

Are There Other Terms Allowed in the (Super)Potential by Global and Local Symmetries? SARAH  will print a list of renormalizable terms which are allowed by all symmetries but which have not been included in the model file.

Are All Unbroken Gauge Groups Respected? SARAH  checks what gauge symmetries remain unbroken and if the definitions of all rotations in the matter sector and of the Dirac spinors are consistent with that.

Are There Terms in the Lagrangian of the Mass Eigenstates Which Can Cause Additional Mixing between Fields? If bilinear terms between different mass eigenstates are present in the final Lagrangian, the mixing of states has not been taken into account entirely. SARAH checks if such terms are present and returns a warning showing the involved fields and the nonvanishing coefficients.

Are All Mass Matrices Irreducible? If mass matrices are block diagonal, a mixing has been assumed which is actually not there. In that case SARAH  will point this out.

Are the Properties of All Particles and Parameters Defined Correctly? These are formal checks of the implementation of a model. It is checked, for instance, whether the number of PDG codes fits the number of generations for each particle class, whether LaTeX names are defined for all particles and parameters, whether the positions in a Les Houches spectrum file are defined for all parameters, and so forth. Not all of these warnings have to be addressed by the user, especially when he/she is not interested in an output which would fail because of the missing definitions.

3. Calculations and Output

SARAH can perform many calculations for a model on the analytical level in its natural Mathematica environment. For an exhaustive numerical analysis, one of the dedicated interfaces to other tools is usually the best approach. In this section I give an overview of what SARAH calculates itself and how that information is linked to other codes.

3.1. Renormalization Group Equations

SARAH calculates the SUSY RGEs at the one- and two-loop level. In general, the $\beta$-function of a parameter $x$ is parametrized by
$$\beta_x = \frac{1}{16\pi^2}\,\beta_x^{(1)} + \frac{1}{(16\pi^2)^2}\,\beta_x^{(2)},$$
where $\beta_x^{(1)}$ and $\beta_x^{(2)}$ are the coefficients at one- and two-loop level. For the gauge couplings the generic one-loop expression is rather simple and reads
$$\beta_g^{(1)} = g^3\,\bigl(S(R) - 3\,C(A)\bigr),$$
where $S(R)$ is the Dynkin index of the gauge group summed over all chiral superfields charged under that group and $C(A)$ is the Casimir of the adjoint representation of the group. The two-loop expressions are more complicated and are skipped here. They are, for instance, given in [242].
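As a small numerical illustration of this one-loop running (not SARAH output; the coefficients $b_i = S(R) - 3C(A) = (33/5,\,1,\,-3)$ and the couplings at $m_Z$ are standard MSSM textbook values), the RGE $dg/d\ln\mu = b\,g^3/16\pi^2$ can be solved analytically:

```python
import math

# One-loop SUSY RGE dg/dln(mu) = b g^3/(16 pi^2) has the analytic solution
#   1/g^2(mu) = 1/g^2(mu0) - b/(8 pi^2) ln(mu/mu0).
def run_g(g0: float, b: float, mu0: float, mu: float) -> float:
    inv = 1.0 / g0**2 - b / (8.0 * math.pi**2) * math.log(mu / mu0)
    return inv**-0.5

# MSSM one-loop coefficients b = S(R) - 3 C(A) for U(1)_Y (GUT-normalized),
# SU(2)_L, and SU(3)_c, and rough DR-bar couplings at m_Z (illustrative inputs)
B = (33.0 / 5.0, 1.0, -3.0)
G_MZ = (0.462, 0.652, 1.218)
MZ, MGUT = 91.19, 2.0e16

g_high = [run_g(g, b, MZ, MGUT) for g, b in zip(G_MZ, B)]
# Approximate one-loop unification of the three couplings near 2*10^16 GeV.
```

With these inputs, all three couplings come out close to 0.72 at the high scale, reproducing the familiar one-loop unification picture.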

The starting points for the calculation of the RGEs of the superpotential terms in SARAH are the anomalous dimensions of all superfields. These can also be parametrized by
$$\gamma_i^j = \frac{1}{16\pi^2}\,\gamma_i^{(1)j} + \frac{1}{(16\pi^2)^2}\,\gamma_i^{(2)j}.$$
I want to stress again that $i$ and $j$ are not generation indices but label the different fields. Generic formulas for the one- and two-loop coefficients $\gamma^{(1)}$ and $\gamma^{(2)}$ are given in [242] as well. SARAH includes the case of an anomalous dimension matrix with off-diagonal entries, that is, $\gamma_i^j \neq 0$ for $i \neq j$. That is, for instance, necessary in models with vector-like quarks, where the superpotential contains Yukawa couplings for both the MSSM-like and the vector-like quark states: the off-diagonal anomalous dimension mixing the two states is not vanishing but receives contributions already at one loop.

From the anomalous dimensions it is straightforward to get the $\beta$-functions of the superpotential terms: for a generic superpotential of the form of (24) and (26),
$$W = L^i \phi_i + \frac{1}{2}\,\mu^{ij} \phi_i \phi_j + \frac{1}{6}\,Y^{ijk} \phi_i \phi_j \phi_k,$$
the coefficients of the $\beta$-functions are given by contractions of the couplings with the anomalous dimensions, up to constant coefficients. In the soft-breaking sector SARAH also includes all standard terms of the form
$$-\mathcal{L}_{soft} \supset \Bigl( t^i \phi_i + \frac{1}{2}\,b^{ij} \phi_i \phi_j + \frac{1}{6}\,T^{ijk} \phi_i \phi_j \phi_k + \frac{1}{2}\,M\,\lambda\lambda + \text{h.c.} \Bigr) + \phi^{*\,j} (m^2)_j^i \phi_i .$$
The generic expressions for the $\beta$-functions of these soft terms up to two loops are again given in [242] and are used by SARAH. The $\beta$-function of the linear soft term is calculated using [243]. For the quartic soft term the approach of [244] is adopted: the quartic term is related to the corresponding superpotential couplings, and its $\beta$-function can then be expressed by their $\beta$-functions and the anomalous dimensions. In principle, the same approach can also be used for the bilinear and linear soft terms as long as no gauge singlet exists in the model. Because of this restriction, SARAH uses the more general expressions of [242].

The running of the Fayet-Iliopoulos $D$-term $\xi$ receives two contributions. The first part is already fixed by the running of the gauge coupling of the Abelian group; the second part is known even to three loops [245, 246]. SARAH has implemented the one- and two-loop results, which are rather simple: they are given by traces over soft scalar masses and charges which are also used to express the $\beta$-functions of the soft scalar masses at one and two loops; see, for instance, [242].

Finally, the $\beta$-functions for the gaugino mass parameters are given in [242, 243, 247] as well. The one-loop coefficient actually has a rather simple form similar to the one of the gauge couplings. One finds
$$\beta_M^{(1)} = 2\,g^2\,\bigl(S(R) - 3\,C(A)\bigr)\,M .$$
Therefore, the running of the gaugino masses is strongly correlated with the one of the gauge couplings. Thus, for a GUT model the hierarchy of the running gaugino masses is the same as the one of the gauge couplings.
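This correlation can be made explicit: at one loop $M_a(\mu)/g_a^2(\mu)$ is RGE-invariant, so a universal gaugino mass $m_{1/2}$ at the GUT scale maps onto low-scale masses in the ratio of the squared gauge couplings. A small sketch with illustrative MSSM values (assumed inputs, not SARAH output):

```python
# At one loop d/dt (M_a / g_a^2) = 0, so with a universal m_half at M_GUT:
#   M_a(m_Z) = m_half * g_a^2(m_Z) / g_GUT^2.
m_half = 500.0                                       # GeV, assumed universal mass
g_sq_mz = {"M1": 0.213, "M2": 0.425, "M3": 1.483}    # rough g_a^2 at m_Z
g_sq_gut = 0.52                                      # rough unified g^2

M = {name: m_half * g2 / g_sq_gut for name, g2 in g_sq_mz.items()}
# Familiar MSSM pattern M1 : M2 : M3 of roughly 1 : 2 : 7 at the low scale.
```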

The expressions presented in the early works [242, 243, 247] did actually not cover all possibilities and are not sufficient for all possible SUSY models which can be implemented in SARAH. Therefore, SARAH has also implemented some more results from the literature which became available in the last few years. In the case of several $U(1)$ gauge groups, gauge-kinetic mixing can arise if the groups are not orthogonal. Substitution rules to translate the results of [242] to those including gauge-kinetic mixing were presented in [248] and have been implemented in SARAH2. For instance, to include gauge-kinetic mixing in the running of the gauge couplings and gaugino masses, (42) and (52) can be used together with substitutions in which the single gauge coupling and gaugino mass are promoted to matrices carrying the gauge couplings and gaugino masses of all $U(1)$ groups; see also Section 2.2. The sums run over all chiral superfields. Also for all other terms involving gauge couplings and gaugino masses appearing in the $\beta$-functions, similar rules are presented in [248] and used by SARAH.

Furthermore, also the changes in the RGEs in the presence of Dirac gaugino mass terms are known today at the two-loop level; see [236]. SARAH makes use of [236] to obtain the $\beta$-functions for the new mass parameters as well as to include the new contributions to the RGEs of tadpole terms in the presence of Dirac gauginos. The $\beta$-function of a Dirac mass term is related to the anomalous dimension of the involved chiral superfield, whose fermionic component pairs with the gaugino, and to the running of the corresponding gauge coupling. The tadpole term receives two new contributions, from the Fayet-Iliopoulos terms discussed above and from terms mimicking Dirac mass insertions. Thus, the only missing piece is the corresponding coefficient, which is also calculated by SARAH up to two loops based on [236].

Finally, the set of SUSY RGEs is completed by using the results of [249, 250] to get the gauge dependence in the running of the VEVs. As a consequence, the $\beta$-function of a VEV consists of two parts which are calculated independently by SARAH: the anomalous dimension $\gamma$ of the scalar which receives the VEV, and a second part, $\hat\gamma$, in which the gauge-dependent contributions, vanishing in Landau gauge, are absorbed. All details about this calculation and the generic results for $\hat\gamma$ are given in [249, 250].

I want to mention that SARAH provides the same accuracy also for the RGEs of a non-SUSY model by making use of the generic results of [251–254]. These results are completed by [255] to cover gauge-kinetic mixing and again by [249, 250] to include the gauge dependence of the running of VEVs also in the non-SUSY case.

Output. The RGEs calculated by SARAH are outputted in different formats: (i) they are written in the internal SARAH format in the output directory, (ii) they are included in the LaTeX output in a much more readable format, (iii) they are exported into a format which can be used together with NDSolve of Mathematica to solve the RGEs numerically within Mathematica, and (iv) they are exported into Fortran code which is used by  SPheno.

3.2. Tadpole Equations

During the evaluation of a model, SARAH calculates "on the fly" all minimum conditions of the tree-level potential, the so-called tadpole equations. In the case of no CP violation, a complex scalar $S_i$ which receives a VEV $v_i$ is decomposed as
$$S_i = \frac{1}{\sqrt{2}}\bigl(v_i + \phi_i + i\,\sigma_i\bigr),$$
and the expressions $T_i \equiv \partial V/\partial \phi_i = 0$ are calculated. These are equivalent to $\partial V/\partial v_i = 0$. For models with CP violation in the Higgs sector, that is, either where complex phases appear between the real scalars or where the VEVs have an imaginary part, SARAH calculates the minimum conditions with respect to the CP-even and the CP-odd components separately. The set of all tadpole equations in this case consists of the derivatives with respect to both $\phi_i$ and $\sigma_i$.
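To illustrate what "solving the tadpole equations" means in practice, here is a toy MSSM-like tree-level potential in two VEVs $v_d, v_u$, where the two soft mass parameters are eliminated by requiring $\partial V/\partial v_i = 0$. This is a hypothetical sketch with invented numbers; SPheno performs the analogous step with the full model:

```python
# Toy tree-level potential V(vd, vu) with an MSSM-like D-term,
#   V = m1sq vd^2/2 + m2sq vu^2/2 - b vd vu + gbar^2/32 (vd^2 - vu^2)^2.
gbar2, b = 0.53, 1.0e4          # assumed couplings (illustrative numbers)
vd, vu = 24.5, 245.0            # target VEVs, tan(beta) = 10

# Tadpole conditions dV/dvd = dV/dvu = 0, solved linearly for m1sq and m2sq:
m1sq = b * vu / vd - gbar2 / 8.0 * (vd**2 - vu**2)
m2sq = b * vd / vu + gbar2 / 8.0 * (vd**2 - vu**2)

def V(x, y):
    return (m1sq * x**2 / 2 + m2sq * y**2 / 2 - b * x * y
            + gbar2 / 32.0 * (x**2 - y**2) ** 2)

def dV(f, x, y, h=1e-4):
    """Numerical derivative of f with respect to its first argument."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# Cross-check: the chosen (vd, vu) is indeed a stationary point of V.
t1 = dV(V, vd, vu)
t2 = dV(lambda y, x: V(x, y), vu, vd)
```

As in the full calculation, the parameters solved for (here the soft masses) are those kept free by the input, and one of them is driven negative, as expected for electroweak symmetry breaking.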

Output. The tadpole equations are exported into LaTeX format as well as in Fortran code used by  SPheno. This ensures that all parameter points evaluated by  SPheno are at least sitting at a local minimum of the scalar potential. Moreover, the tadpole equations are included in the model files for  Vevacious which is used to find all possible solutions of them with respect to the different VEVs.

3.3. Masses and Mass Matrices

SARAH uses the rotations defined in the model file to calculate the mass matrices for particles which mix. The mass matrices for scalars are calculated as the second derivative of the potential,
$$(M^2)_{ij} = \frac{\partial^2 V}{\partial \phi_i\, \partial \phi_j^{*}},$$
where $\phi$ can be either real or complex: that is, the resulting $M^2$ corresponds to the mass matrix of real or complex scalars of (33). In the mass matrices of states which include Goldstone bosons, also the terms depending on the gauge-fixing parameter $\xi$ are included.

The mass matrices for fermions are calculated as the second derivative of the Lagrangian with respect to the fermion fields, with identical Weyl spinors for Majorana fermions and with two distinct Weyl spinors for Dirac fermions.

During the evaluation of a model, SARAH calculates the mass matrices for all states which are rotated to mass eigenstates. In addition, it checks if there are particles for which gauge and mass eigenstates are identical; in that case, it also calculates the expressions for the masses of these states.

Output. The tree-level masses and mass matrices are also exported to LaTeX files as well as to Fortran code for  SPheno. In addition, they are used in the  Vevacious output to enable the calculation of the one-loop effective potential. The mass matrices can also be exported to the  CalcHep model files if the user wants to calculate the masses internally with CalcHep instead of using them as input.

3.4. Vertices

Vertices are not automatically calculated during the initialization of a model, as is done for mass matrices and tadpole equations. However, the calculation can be started very easily. In general, SARAH is optimized for the extraction of three- and four-point interactions with renormalizable operators. This means that usually only the following generic interactions are taken into account in the calculations: interactions of two fermions or two ghosts with one scalar or vector boson (FFS, FFV, GGS, and GGV), interactions of three or four scalars or vector bosons (SSS, SSSS, VVV, and VVVV), and interactions of two scalars with one or two vector bosons (SSV and SSVV) or two vector bosons with one scalar (SVV).

In this context, vertices not involving fermions are calculated as functional derivatives of the Lagrangian with respect to the external fields, which are either scalars, vector bosons, or ghosts. The results are expressed by a coefficient $c$, which is a Lorentz invariant, and a Lorentz factor $\Gamma$ which involves $\gamma^\mu$, $p^\mu$, or $g^{\mu\nu}$. Vertices for Dirac fermions are first expressed in terms of Weyl fermions, and the two vertices are then calculated separately. Taking two Dirac fermions and distinguishing the two cases of fermion-vector and fermion-scalar couplings, the vertices are calculated and decomposed into their left- and right-handed parts. Here, the polarization projectors $P_{L,R}$ are used.

The user can either calculate specific vertices for a particular set of external states or call functions where SARAH derives all existing interactions from the Lagrangian. The first option might be useful to check the exact structure of single vertices, while the second one is needed to get all vertices to write model files for other tools.

Output. The vertices are exported into many different formats. They are saved in the SARAH internal format and they can be written to LaTeX files. The main purpose, however, is the export into formats which can be used with other tools. SARAH writes model files for  FeynArts,  WHIZARD/OMEGA, and  CalcHep/CompHep as well as in the  UFO  format. The  UFO  format is supported by  MadGraph,  Herwig++, and  Sherpa. Thus, by the output of the vertices into these different formats, SARAH provides an implementation of a given model in a wide range of HEP tools. In addition, SARAH also generates Fortran code to implement all vertices in  SPheno.

3.5. One- and Two-Loop Corrections to Tadpoles and Self-Energies
3.5.1. One-Loop Corrections

SARAH calculates the analytical expressions for the one-loop corrections to the tadpoles and the one-loop self-energies for all particles. For states which are a mixture of several gauge eigenstates, the self-energy matrices are calculated. For doing that, SARAH works with gauge eigenstates as external particles but uses mass eigenstates in the loops. The calculations are performed in the $\overline{\text{DR}}$-scheme using 't Hooft gauge; in the case of non-SUSY models SARAH switches to the $\overline{\text{MS}}$-scheme. This approach is a generalization of the procedure applied in [256] to the MSSM. In this context, the following results are obtained: (i) the self-energies $\Pi$ of scalars and scalar mass matrices, (ii) the self-energies $\Sigma^S$, $\Sigma^L$, and $\Sigma^R$ for fermions and fermion mass matrices, and (iii) the transversal self-energy $\Pi^T$ of massive vector bosons. The approach to calculate the loop corrections is as follows: all possible generic diagrams at the one-loop level shown in Figure 1 are included in SARAH. Each generic amplitude is parametrized by the involved couplings and masses together with real symmetry and colour factors. The loop functions are expressed by the standard Passarino-Veltman integrals $A_0$ and $B_0$ and some related functions, $B_1$, $B_{22}$, $F$, $G$, $H$, and $\tilde B_{22}$, as defined in [256].
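The zero-momentum limits of the two basic integrals are simple enough to sketch. The conventions below are the ones I believe [256] uses ($A_0(m^2) = m^2(1 - \ln m^2/Q^2)$), and the $p^2 = 0$ value of $B_0$ follows from its Feynman-parameter representation; a hedged illustration, not SARAH's implementation:

```python
import math

Q2 = 1.0e4  # renormalization scale squared (assumed value)

def A0(m2: float) -> float:
    """One-point function, A0(m^2) = m^2 (1 - ln(m^2/Q^2))."""
    return m2 * (1.0 - math.log(m2 / Q2))

def B0(m12: float, m22: float) -> float:
    """Two-point function at vanishing external momentum, p^2 = 0."""
    if abs(m12 - m22) < 1e-12 * max(m12, m22):
        return -math.log(m12 / Q2)            # degenerate-mass limit
    return (A0(m12) - A0(m22)) / (m12 - m22)  # finite-difference form at p^2 = 0
```

A quick consistency check is that the split-mass expression smoothly approaches the degenerate limit $B_0(0, m^2, m^2) = -\ln(m^2/Q^2)$.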

As a first step to get the loop corrections, SARAH generates all possible Feynman diagrams with all field combinations possible in the considered model. The second step is to match these diagrams to the generic expressions. All calculations are done without any assumption, and always the most general case is taken. For instance, the generic expression for a purely scalar contribution to the scalar self-energy is a product of the two vertices and the loop function $B_0$, summed over all internal states. In the case of an external charged Higgs together with down- and up-squarks in the loop, the correction to the charged Higgs mass matrix involves the charged Higgs-sdown-sup vertex, in which the rotation matrix of the charged Higgs is replaced by the identity matrix to get the projection on the gauge eigenstates. One can see that all possible combinations of internal generations are included: that is, also effects like flavour mixing are completely covered. Also the entire momentum dependence is kept.

Output. The one-loop expressions are saved in the SARAH internal Mathematica format and can be included in the LaTeX output. In addition, all self-energies and one-loop tadpoles are exported into Fortran code for  SPheno. This enables SPheno to calculate the loop-corrected masses for all particles as discussed below.

3.5.2. Two-Loop Corrections

It is even possible to go beyond one loop with SARAH and to calculate two-loop contributions to the self-energies of real scalars. There are two equivalent approaches implemented in the SPheno interface of SARAH to perform these calculations: an effective potential approach and a diagrammatic approach with vanishing external momenta. Because of the very complicated form of the results, the corresponding expressions are not output in the Mathematica or LaTeX format but are just included in the Fortran code for numerical evaluation. I will discuss both calculations in a bit more detail.

Effective Potential Calculation. The first calculation of the two-loop self-energies is based on the effective potential approach. The starting points of the calculation are the generic results for the two-loop effective potential given in [16], which have been translated to four-component notation and implemented in SARAH. When SARAH creates the SPheno output, it writes down the amplitudes for all two-loop diagrams which do not vanish in the gaugeless limit. This limit means that contributions from broken gauge groups are ignored. The remaining generic diagrams which are included are shown in Figure 2. Using these diagrams covers, for instance, all two-loop contributions which are also taken into account in the MSSM. To get the values of the two-loop self-energies and two-loop tadpoles, the derivatives of the potential with respect to the VEVs are taken numerically, as proposed in [257]. There are two possibilities for this derivative implemented in SARAH/SPheno: (i) a fully numerical procedure which takes the derivative of the full effective potential with respect to the VEVs and (ii) a semianalytical derivation which takes the derivatives of the loop functions with respect to the involved masses analytically but derives the masses and couplings with respect to the VEVs numerically. More details about both methods and the numerical differences are given in [147].
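The difference between the two methods can be sketched with a toy one-loop potential for a single field, $V_1 = \frac{1}{64\pi^2} M^4(v)\,(\ln M^2/Q^2 - \tfrac{3}{2})$ with $M^2(v) = m^2 + 3\lambda v^2$ (all numbers invented for illustration): the second derivative is computed (i) fully numerically and (ii) with the loop function differentiated analytically and only $M^2(v)$ differentiated numerically.

```python
import math

m2, lam, Q2 = 1.0, 0.1, 1.0
M2 = lambda v: m2 + 3.0 * lam * v**2                       # field-dependent mass
f = lambda x: x**2 * (math.log(x / Q2) - 1.5) / (64 * math.pi**2)
V1 = lambda v: f(M2(v))                                    # toy one-loop potential

h = 1e-3
# (i) fully numerical second derivative of V1 with respect to the VEV at v = 1
d2V_num = (V1(1 + h) - 2 * V1(1) + V1(1 - h)) / h**2

# (ii) semianalytical: f'(x) and f''(x) analytic, M2(v) derivatives numeric
fp = lambda x: x * (2 * math.log(x / Q2) - 2) / (64 * math.pi**2)
fpp = lambda x: math.log(x / Q2) / (32 * math.pi**2)
dM2 = (M2(1 + h) - M2(1 - h)) / (2 * h)
d2M2 = (M2(1 + h) - 2 * M2(1) + M2(1 - h)) / h**2
d2V_semi = fpp(M2(1)) * dM2**2 + fp(M2(1)) * d2M2
```

Both numbers agree to high accuracy here; in realistic potentials the semianalytical variant is typically more stable because the delicate cancellations sit in the analytically known loop functions.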

Diagrammatic Calculation. A fully diagrammatic calculation of the two-loop contributions to scalar self-energies with SARAH/SPheno became available with [148]. In this setup a set of generic expressions first derived in [148] is used. All two-loop diagrams shown in Figure 3 are included in the limit of vanishing external momenta. These are again the diagrams which do not vanish in general in the gaugeless limit. The results of [148] have the advantage that the expressions derived from the effective potential are much simpler than the zero-momentum limit of other two-loop functions available in the literature [258]. The diagrammatic method gives completely equivalent results to the effective potential calculation but is usually numerically more robust.

The Need for Both Calculations. Since both calculations are based on completely independent implementations and use different approaches, they are very useful to perform cross-checks. For the MSSM and NMSSM both calculations reproduce exactly the results obtained by the widely used routines based on [259–264]. However, for nonminimal SUSY models there are no references available to compare with. Thus, the only possibility to cross-check the results is within SPheno by comparing the two different methods.

Output. The two-loop expressions for the effective potential, the tadpoles, and the self-energies are just exported to  SPheno  at the moment to calculate the loop-corrected mass spectrum.

3.6. Loop-Corrected Mass Spectrum

The information about the one- and two-loop corrections to the one- and two-point functions introduced in Section 3.5 can be used to calculate the loop-corrected mass spectrum. Sticking to the approach of [256], the renormalized mass matrices (or masses) are related to the tree-level mass matrices (or masses) and the self-energies as follows.

3.6.1. Loop-Corrected Masses

Real Scalars. For a real scalar, the one-loop, and in some cases also two-loop, self-energies are calculated by SPheno. The loop-corrected mass matrix squared is related to the tree-level mass matrix squared $M^{2,(T)}$ and the self-energies $\Pi(p^2)$ via
$$M^{2,(1L)}(p^2) = M^{2,(T)} - \Pi(p^2).$$
The on-shell condition for the eigenvalue $m_i^2$ of the loop-corrected mass matrix reads
$$\det\Bigl[p^2\,\mathbf{1} - M^{2,(1L)}(p^2)\Bigr]\Big|_{p^2 = m_i^2} = 0. \tag{68}$$
A stable solution of (68) for each eigenvalue is usually just found via an iterative procedure. In this approach one has to be careful how $M^{2,(T)}$ is defined: this is the tree-level mass matrix where the parameters are taken at the minimum of the effective potential evaluated at the same loop level at which the self-energies are known. The physical masses are associated with the eigenvalues $m_i^2$. In general, for each eigenvalue the rotation matrix is slightly different because of the $p^2$-dependence of the self-energies. The convention of SARAH and SPheno is that the rotation matrix of the lightest eigenvalue is used in all further calculations and in the output.
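The iterative solution of the on-shell condition can be sketched for a single toy eigenvalue with an assumed momentum-dependent self-energy $\Pi(p^2)$ (purely illustrative numbers, not a SARAH result):

```python
import math

m2_tree = 1.0e4                                       # tree-level mass squared
Pi = lambda p2: 200.0 + 50.0 * math.log(p2 / 1.0e4)   # toy self-energy

# Fixed-point iteration of the on-shell condition p^2 = m2_tree - Pi(p^2),
# starting from the tree-level value.
p2 = m2_tree
for _ in range(100):
    p2_new = m2_tree - Pi(p2)
    if abs(p2_new - p2) < 1e-10:
        break
    p2 = p2_new
m_pole = math.sqrt(p2_new)
```

Because the momentum dependence of the self-energy is weak, the iteration contracts quickly; the same logic is applied eigenvalue by eigenvalue in the full matrix case.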

Complex Scalars. For a complex scalar the one-loop-corrected mass matrix squared is related to the tree-level mass matrix squared and the one-loop self-energy in an analogous way. The same on-shell condition (68) as for real scalars is used.

Vector Bosons. For vector bosons similarly simple expressions hold as for scalars: the one-loop masses of real or complex vector bosons are given by the tree-level masses shifted by the transversal self-energy $\Pi^T$ evaluated at the pole.

Majorana Fermions. The one-loop mass matrix of a Majorana fermion is related to the tree-level mass matrix and the different parts of the self-energies $\Sigma^S$, $\Sigma^L$, and $\Sigma^R$. Note that a tilde is used to denote tree-level values, while the superscript $T$ denotes a transposition. Equation (68) can also be used for fermions by taking the eigenvalues of the product of the one-loop mass matrix with its Hermitian conjugate.

Dirac Fermions. For a Dirac fermion the one-loop-corrected mass matrix is obtained in the same way from the tree-level mass matrix and $\Sigma^S$, $\Sigma^L$, and $\Sigma^R$. Here, the eigenvalues of the product of the one-loop mass matrix with its Hermitian conjugate are used in (68) to get the pole masses.

3.6.2. Renormalization Procedure

I have explained so far how SPheno calculates the one- and two-loop self-energies and how these are related to the loop-corrected masses. Now it is time to put this into a more global picture by describing step by step the entire renormalization procedure that SPheno uses.
(1) Everything starts with calculating the running parameters at the renormalization scale $Q$ from the given input parameters. Either the parameters can be given directly at $Q$ as input or they are fixed by some GUT conditions and an RGE running is performed. $Q$ itself can either be a fixed value or be chosen dynamically. It is common to choose the geometric mean of the stop masses because this usually minimizes the scale dependence of the Higgs mass prediction.
(2) Not all parameters are fixed by the input; some parameters are kept free. These parameters are arranged in a way that all further calculations are done at the minimum of the potential. For this purpose the tadpole equations are solved at tree level with respect to these free parameters.
(3) As soon as all running parameters are known at the SUSY scale, they are used to calculate the tree-level mass spectrum.
(4) The tree-level masses are used to calculate the self-energy of the $Z$-boson at the pole mass.
(5) This self-energy is used to get the tree-level value of the electroweak VEV $v$, and together with the running value of $\tan\beta$ the tree-level VEVs $v_d$ and $v_u$ are obtained. Note that in this step it is assumed that two Higgs doublets are always present in SUSY models and give mass to up- and down-type quarks as well as to leptons and gauge bosons.
(6) Now all tree-level parameters are known, and the tree-level masses and rotation matrices are recalculated using the obtained values.
(7) Tree-level masses, rotation matrices, and parameters are used to get all vertices at tree level. The vertices and masses are then plugged into the expressions for the one- and two-loop corrections to the tadpoles. The conditions to work at the minimum of the effective potential are the loop-corrected tadpole equations (73); these are again solved for the same set of parameters as at tree level.
(8) The self-energies of all particles are calculated at the highest available loop level as explained above. Note that these calculations involve purely tree-level parameters, not the ones obtained from (73).
(9) Equations (67)–(72) are used to get the loop-corrected mass matrices for all particles. Now the parameters coming from the loop-corrected tadpoles are used to express the tree-level mass matrices. All calculations are iterated until the on-shell condition is satisfied for all masses.

3.6.3. Thresholds

So far, I have not mentioned another subtlety: in general, the running SM parameters depend on the SUSY masses. The reason is the thresholds used to match the running parameters to the measured ones. These thresholds change when the mass spectrum changes. Therefore, the above procedure is iterated until the entire loop-corrected mass spectrum has converged. The calculation of the thresholds is also dynamically adjusted by SARAH to include all new-physics contributions. The general procedure to obtain the running gauge and Yukawa couplings at $m_Z$ is as follows.

(1) The first step is the calculation of the running electromagnetic and strong couplings $\alpha_{em}(m_Z)$ and $\alpha_s(m_Z)$. Here, $\alpha_{em}$ and $\alpha_s$ are taken as input and receive corrections from top loops as well as from new physics (NP). The NP sum runs over all particles which are not present in the SM and which are either charged or coloured. The coefficient of each contribution depends on the electric charge or colour representation, the generic type of the particle (scalar, fermion, or vector), and its degrees of freedom (real or complex boson, Majorana or Dirac fermion).

(2) The next step is the calculation of the running Weinberg angle $\Theta_W$ and the electroweak VEV $v$. For that, the one-loop corrections $\Pi_{WW}$ and $\Pi_{ZZ}$ to the $W$- and $Z$-boson masses are needed, and an iterative procedure is applied, starting in the first iteration from the relation
$$\sin^2\Theta_W \cos^2\Theta_W = \frac{\pi\,\alpha_{em}(m_Z)}{\sqrt{2}\,G_F\, m_Z^2\,(1 - \Delta r)},$$
with $\Delta r = 0$ in the first step. Here, $G_F$ is the Fermi constant and $\Delta r$ contains the corrections to the muon decay, which are calculated at one loop as well. The $\rho$ parameter is also calculated at full one loop, and the known two-loop SM corrections are added.
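The first iteration of this procedure (setting $\Delta r = 0$ and using illustrative input values) reduces to solving a quadratic equation for $\sin^2\Theta_W$:

```python
import math

alpha_mz = 1.0 / 127.9        # running alpha_em(m_Z), illustrative input
G_F = 1.1663787e-5            # Fermi constant in GeV^-2
m_Z = 91.1876                 # Z pole mass in GeV

# First iteration (Delta r = 0): sin^2 cos^2 = pi alpha / (sqrt(2) G_F m_Z^2)
A = math.pi * alpha_mz / (math.sqrt(2.0) * G_F * m_Z**2)
sin2 = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * A))   # branch with sin^2 < 1/2

# Tree-level electroweak VEV from the Fermi constant
v = (math.sqrt(2.0) * G_F) ** -0.5
```

Already this crude first step lands close to the familiar values $\sin^2\Theta_W \approx 0.23$ and $v \approx 246$ GeV; the full procedure then iterates with the loop-corrected $\Delta r$.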

(3) With the obtained information, the running gauge couplings at $m_Z$ are given by the electromagnetic coupling $e = \sqrt{4\pi\,\alpha_{em}}$ divided by $\cos\Theta_W$ and $\sin\Theta_W$ for the Abelian and the $SU(2)_L$ couplings, respectively, and by $g_3 = \sqrt{4\pi\,\alpha_s}$ for the strong coupling.

(4) The running Yukawa couplings are also calculated in an iterative way. The starting points are the running fermion masses at $m_Z$ obtained from the pole masses given as input, where the two-loop parts of the conversion are taken from [265, 266]. These masses are matched to the eigenvalues of the loop-corrected fermion mass matrices, in which the pure QCD and QED corrections are dropped from the self-energies. Inverting this relation to get the running tree-level mass matrices leads to the Yukawa matrices $Y_d$, $Y_e$, and $Y_u$. Since the self-energies also depend on the Yukawa matrices, this calculation has to be iterated until a stable point is reached. Optionally, the constraint that the CKM matrix is reproduced can also be included in the matching.

Output. The calculation of the loop-corrected mass spectrum and the thresholds is included in the  SPheno output.

3.7. Decays and Branching Ratios

The calculation of decay widths and branching ratios can be done by using the interface between SARAH and SPheno. SPheno modules created by SARAH calculate all two-body decays of SUSY and Higgs states as well as of additional gauge bosons. In addition, the three-body decays of a fermion into three other fermions and of a scalar into another scalar and two fermions are included.

In the Higgs sector, possible decays into two SUSY particles, leptons, and massive gauge bosons are calculated at tree level. For two quarks in the final state, the dominant QCD corrections due to gluons are included [267]. The loop-induced decays into two photons and two gluons are fully calculated at leading order (LO), with the dominant next-to-leading-order corrections known from the MSSM added. For the LO contributions, all charged and coloured states in the given model are included in the loop: that is, new contributions arising in a model beyond the MSSM are fully covered at one loop. In addition, final states with off-shell gauge bosons ($WW^*$, $ZZ^*$) are included in the Higgs decays. The only missing piece is the loop-induced $Z\gamma$ channel: the corresponding loops are not yet derived by SARAH, and this partial width is set to zero.

In contrast to other spectrum generators, the SPheno modules generated by SARAH perform an RGE running to the mass scale of the decaying particle. This should give a more accurate prediction for the decay widths and branching ratios. However, the user can also turn off this running and always use the parameters as calculated at $Q$ in all decays, as is done by other codes.

Output. All necessary routines to calculate the two- and three-body decays are included by default in the Fortran output for  SPheno.

3.8. Higgs Coupling Ratios

With the discovery of the Higgs boson at the LHC and the precise measurements of its mass and couplings to other particles, a new era of high energy physics has started. Today, many SUSY models not only have to be confronted with the exclusion limits from direct searches but also have to reproduce the Higgs properties correctly. The agreement with respect to the mass can easily be read off a spectrum file; for the rates this is usually not so easy. One can parametrize how "SM-like" the couplings of a particular scalar $h$ are by considering the ratios
$$r_{hXX} = \frac{c_{hXX}}{c^{SM}_{hXX}},$$
where $c_{hXX}$ is the calculated coupling between the scalar $h$ and two SM particles $X$ for a particular parameter point in a particular model, normalized to the SM expectation for the same interaction. Nowadays, all $r_{hXX}$ are constrained to be rather close to 1 if $h$ is to be associated with the SM Higgs. SARAH uses the information which is already available from the calculation of the decays to obtain also the values of $r_{hXX}$. Of course, also here the $Z\gamma$ channel is missing, and the corresponding ratio is therefore always put to 0.

Output. All necessary routines to calculate the Higgs coupling ratios are included by default in the Fortran output for  SPheno.

3.9. Flavour and Precision Observables

Constraints on new physics scenarios come not only from direct searches and the Higgs mass observation but also from the measurement of processes which happen only very rarely in the SM and/or which are measured with very high accuracy. These are in particular flavour-violating observables. When using the SPheno output of SARAH, routines for the calculation of many quark and lepton flavour violation (QFV and LFV) observables are generated.

Lepton Flavour Violation. The radiative decays of a lepton into another lepton and a photon and the purely leptonic three-body decays of leptons are included. Also flavour-violating decays of the $Z$-boson are tested by SARAH/SPheno. Moreover, there are also semihadronic observables in the output: $\mu$-$e$ conversion in nuclei, where the considered nuclei are Al, Ti, Sr, Sb, Au, and Pb, as well as decays of $\tau$'s into a lepton and a pseudoscalar meson.

Quark Flavour Violation. The radiative $b$-decay and a set of $B$-decays stemming from four-fermion operators are calculated. Also kaon decays are considered, as well as CP observables. Finally, some decays which take place already at tree level are included.

The approach in SARAH to generate the routines which calculate all these observables is similar to the approach used for the loop calculations needed for radiative corrections to the masses: generic formulas for all possible Feynman diagrams which contribute to the Wilson coefficients3 of widely used dimension-5 and dimension-6 operators are implemented in SARAH. I show in Figures 4 and 5 only the topologies which are considered, because these are already many. For each topology the amplitudes with all possible generic insertions are included. Today, these implementations are mainly based on the FlavorKit functionality discussed below. SARAH generates all possible one-loop diagrams and uses the generic expressions to get their amplitudes. In this context, not only all possible particles in the loop are included, but also all different propagators for penguin diagrams are considered. Thus, not only the photonic penguins, which are often considered to be dominant in many processes, are taken into account, but also all Higgs penguins and, if existing, penguins of additional neutral gauge bosons are generated. After the calculation of the Wilson coefficients, these are combined to calculate the observables. This can easily be done by using expressions from the literature which are usually model independent.

With the development of the FlavorKit [146] interface, all information to calculate flavour observables is no longer hard-coded in SARAH but is provided by external files. This makes it possible for the user to extend the list of flavour observables when necessary. The FlavorKit is an automatization of the procedure presented in [268] for implementing new observables in SARAH and SPheno. Users interested in the internal calculation might take a look at these two references.

Also some other observables which are measured with high precision are calculated by the combination SARAH/SPheno: electromagnetic dipole moments and anomalous magnetic moments of leptons.

Output. When generating  SPheno code with SARAH, the above-listed flavour and precision observables are included in the Fortran code. In addition, SARAH also writes LaTeX files with all contributions to the Wilson coefficients from any possible diagram.

3.10. Fine-Tuning

A measure for the electroweak fine-tuning was introduced in [269, 270]:
$$\Delta \equiv \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|,$$
where $\{p_i\}$ is a set of independent parameters. $\Delta^{-1}$ gives an estimate of the accuracy to which the parameter $p_i$ must be tuned to get the correct electroweak breaking scale [271]. Using this definition, the fine-tuning of a given model depends on the choice of which parameters are considered as fundamental and at which scale they are defined. By default, SARAH takes the scale at which the SUSY-breaking parameters are set. For models where SUSY is broken by gravity this corresponds to the scale of grand unification (GUT scale), while for models with gauge-mediated SUSY breaking (GMSB) the messenger scale would be used; for simplicity, I refer to both as the high scale. The choice of the set of parameters is made by the user. In scenarios motivated by supergravity, one usually uses the universal scalar and gaugino masses as well as the parameters of the superpotential terms and the corresponding soft-breaking terms to calculate the fine-tuning. However, since also these parameters are related in specific models of SUSY breaking, it might be necessary to consider even more fundamental parameters like the gravitino mass. In addition, also the fine-tuning with respect to the superpotential parameters themselves as well as to the strong coupling might be included, because they can even supersede the fine-tuning in the soft-SUSY-breaking sector.
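Numerically, the measure is just a logarithmic derivative, which can be sketched with a toy dependence of $m_Z^2$ on the high-scale parameters (the functional form and coefficients below are invented for illustration and are not a SARAH result):

```python
# Toy high-scale relation mZ2(m0, mhalf, mu) with invented coefficients,
# mimicking an RGE-driven EWSB condition.
def mZ2(p):
    return 0.2 * p["m0"]**2 + 2.8 * p["mhalf"]**2 - 2.0 * p["mu"]**2

params = {"m0": 1000.0, "mhalf": 300.0, "mu": 470.0}

def delta(name, eps=1e-6):
    """|d ln mZ2 / d ln p| via a symmetric finite difference."""
    up, dn = dict(params), dict(params)
    up[name] *= 1.0 + eps
    dn[name] *= 1.0 - eps
    return abs((mZ2(up) - mZ2(dn)) / (2.0 * eps * mZ2(params)))

fine_tuning = max(delta(n) for n in params)   # Barbieri-Giudice measure
```

In this toy setup the $\mu$-like parameter dominates the tuning, which mirrors the typical situation in constrained SUSY models where large cancellations in the EWSB condition drive $\Delta$.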

To calculate the fine-tuning in practice, the RGEs are run iteratively between M_GUT and the SUSY scale using the full two-loop RGEs. In each iteration one of the fundamental parameters is slightly varied and the running parameters at the low scale are calculated. These parameters are used to solve the tadpole equations numerically with respect to all VEVs and to recalculate the Z-boson mass. To give an even more accurate estimate, one-loop corrections to the Z mass can also be included.

Output. A fine-tuning calculation is optionally included in the Fortran output for  SPheno.

3.11. Summary

SARAH derives a lot of information about a given model. This information can be used in different interfaces to study a model in all detail. In general, one can get (i) LaTeX files, (ii) a spectrum generator based on SPheno, and (iii) model files for different HEP tools.

LaTeX. All analytical information derived about a model can be exported to LaTeX files. These files provide in a human readable format the following information: (i) list of all superfields as well as component fields for all eigenstates; (ii) the superpotential and important parts of the Lagrangian like soft-breaking and gauge fixing terms added by SARAH; (iii) all mass matrices and tadpole equations; (iv) the full two-loop RGEs; (v) analytical expressions for the one-loop self-energies and tadpoles; (vi) all interactions and the corresponding Feynman diagrams; and (vii) details about the implementation in SARAH. Separate files are also generated for the flavour observables showing all contributing diagrams with their amplitudes.

Spectrum Generator. SARAH 3 has been the first "spectrum-generator-generator": using the derived information about the mass matrices, tadpole equations, vertices, loop corrections, and RGEs of the given model, SARAH writes Fortran source code for SPheno. Based on this code the user gets a fully functional spectrum generator for a new model. The features of a spectrum generator created in this way are as follows.
(1) RGE Running. The full two-loop RGEs are included.
(2) Precise Mass Spectrum Calculation. SPheno modules created by SARAH include the one-loop corrections to all SUSY particles. For Higgs states the full one-loop and in addition the dominant two-loop corrections are included.
(3) Calculation of Decays. SPheno calculates all two-body decays for SUSY and Higgs states. In addition, the three-body decays of a fermion into three other fermions and the three-body decays of a scalar into another scalar and a pair of fermions are included.
(4) FlavorKit Interface. SPheno modules calculate out of the box many flavour observables for a given model.
(5) Calculation of Precision Observables. SPheno also calculates electric dipole moments and anomalous magnetic moments of leptons.
(6) Output for HiggsBounds and HiggsSignals. SPheno generates all necessary files with the Higgs properties (masses, widths, and couplings to SM states) which are needed to run HiggsBounds and HiggsSignals.
(7) Estimate of the Fine-Tuning. SPheno modules can calculate the electroweak fine-tuning with respect to a set of defined parameters.
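In practice, generating the SPheno code takes only a few commands inside Mathematica; a minimal sketch (assuming the model has been implemented under the name B-L-SSM, as discussed in Section 4.4) is as follows.

```mathematica
(* load SARAH; Start[] reads the three model files of the given model *)
<< SARAH`;
Start["B-L-SSM"];
(* derive all analytical results and write the Fortran code for SPheno *)
MakeSPheno[];
```

The generated Fortran files then have to be compiled together with an existing SPheno installation to obtain the spectrum generator.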

Model Files. In particular the vertex lists can be exported into several formats to create model files for FeynArts/FormCalc, CalcHep/CompHep, and WHIZARD/OMega as well as for MadGraph, Herwig++, or Sherpa based on the UFO format. Also model files for Vevacious can be generated which include the tree-level potential as well as the mass matrices to generate the one-loop effective potential.
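The corresponding SARAH commands are short; a sketch of the relevant calls (to be run after Start["B-L-SSM"]) is as follows.

```mathematica
MakeCHep[];       (* model files for CalcHep/CompHep, usable also by MicrOmegas *)
MakeUFO[];        (* UFO format for MadGraph, Herwig++, or Sherpa *)
MakeWHIZARD[];    (* model files for WHIZARD/OMega *)
MakeVevacious[];  (* input for Vevacious *)
```

Each command writes its files into the corresponding subdirectory of the SARAH output directory for the model.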

4. Example—Part I: The B-L-SSM and Its Implementation in SARAH

I will discuss in this section and the subsequent ones the implementation of the B-L-SSM in SARAH and how all phenomenological aspects of this model can be studied. The B-L-SSM is certainly not the simplest extension of the MSSM, but it provides many interesting features. There are some subtleties in the implementation which will not show up in singlet extensions, for instance. I hope that the examples presented in the following sections show how even such a complicated model can be studied with a very high precision without too much effort. Applying the same methodology to other SUSY or non-SUSY models should be straightforward. However, I cannot discuss here all topics which might be interesting and useful for some models. In particular, I will not show how models with threshold scales are implemented. SARAH supports thresholds where heavy particles get integrated out and where the gauge symmetries do either change or not. Users interested in that topic might take a look at the manual as well as the implementations of seesaw models or left-right symmetric models. A brief summary of the general approach to include thresholds is also given in Appendix A.5.

In the first part of this section I will give a short introduction to the B-L-SSM, before I come to the tools which are going to be used. The implementation of the B-L-SSM in SARAH is discussed in Section 4.4 and how it is evaluated is shown in Section 4.5. The next section explains what can be done with the model using just Mathematica. It is shown how the LaTeX output is generated, how mass matrices and tadpole equations can be handled with Mathematica, and how RGEs are calculated and solved. Section 6 is about the interface between SARAH and SPheno. The mass spectrum calculation is explained, and what else can be obtained with SPheno is discussed: decays and branching ratios, flavour and precision observables, and the fine-tuning. Afterwards, we include more tools in our study in Section 7: HiggsBounds/HiggsSignals to test Higgs constraints, Vevacious to check the vacuum stability, MicrOmegas to calculate the dark matter relic density and direct detection rates, WHIZARD/OMega to produce monojet events, and MadGraph to make a simple dilepton analysis. Section 8 is about parameter scans and how the different tools can be interfaced.

4.1. The Model

Supersymmetric models with an additional U(1)_B-L gauge symmetry at the TeV scale have recently received considerable attention: they can explain neutrino data, they might help to understand the origin of R-parity and its possible spontaneous violation [272-276], and they have a rich phenomenology [277]. In the R-parity conserving case, these models come with a new Higgs state which mixes with the MSSM ones [199], they provide new dark matter candidates [278], and they can have an impact on the Higgs decays [279]. In both cases of broken and unbroken R-parity these models have interesting consequences for LHC searches [280-283].

4.1.1. Particle Content and Superpotential

We study the minimal supersymmetric model where the SM gauge sector is extended by a U(1)_B-L and where R-parity is not broken by sneutrino VEVs. This model is called the B-L-SSM. In this model the matter sector of the MSSM is extended by three generations of right-handed neutrino superfields nu and two fields responsible for U(1)_B-L breaking, eta and eta-bar. These fields carry lepton number 2 and are called "bileptons." The chiral superfields and their quantum numbers are summarized in Table 1. The superpotential of the B-L-SSM is given by

W = Y_u^{ij} u_i q_j H_u - Y_d^{ij} d_i q_j H_d - Y_e^{ij} e_i l_j H_d + mu H_u H_d
    + Y_nu^{ij} nu_i l_j H_u - mu' eta eta-bar + Y_x^{ij} nu_i eta nu_j.  (86)

Here, i and j are generation indices and all colour and isospin indices are suppressed. The first line is identical to the MSSM, and all new terms are collected in the second line of (86). After U(1)_B-L breaking a Majorana mass term for the right-handed neutrinos is generated. This term causes a mass splitting between the CP even and odd parts of the complex sneutrinos.

The additional soft-breaking terms compared to the MSSM are the soft scalar masses m_eta^2, m_eta-bar^2, and m_nu^2 for the new fields, the gaugino mass M_B' together with the mixed term M_{B B'}, and the terms T_nu, T_x, and B_mu' corresponding to the new superpotential couplings. The electroweak and U(1)_B-L symmetries are broken to U(1)_em by the following VEVs of the Higgs states and bileptons: <H_d^0> = v_d/Sqrt[2], <H_u^0> = v_u/Sqrt[2], <eta> = v_eta/Sqrt[2], <eta-bar> = v_eta-bar/Sqrt[2]. In analogy to the MSSM definition tan(beta) = v_u/v_d, we call the ratio of the two bilepton VEVs tan(beta') = v_eta/v_eta-bar. As mentioned above, the Majorana mass term causes a mass splitting between the CP even and odd parts of the right sneutrinos. This makes it necessary to split the sneutrinos into their CP eigenstates, nu~_i = (sigma_i + I phi_i)/Sqrt[2]. Here, i is the generation index of the sneutrinos.

4.1.2. Gauge-Kinetic Mixing

The particle content of the B-L-SSM gives rise to gauge-kinetic mixing because the matrix gamma_{ab} ~ Tr(Q_a Q_b) is not diagonal. The indices a and b run over all U(1) groups and the trace runs over all superfields. For the particle content of Table 1 one finds a nonvanishing off-diagonal entry in (91); N contains the GUT normalization of the two Abelian gauge groups. We will take Sqrt[3/5] for U(1)_Y and Sqrt[3/2] for U(1)_B-L. From (91) we see that even if gauge-kinetic mixing vanishes at one scale, it is induced again by RGE running and must therefore be included.

However, as long as the two Abelian gauge groups are unbroken, we can make a change of basis. This freedom is used to go to a basis in which electroweak precision data is respected in a simple way: by choosing a triangular form of the gauge coupling matrix, the bilepton contributions to the Z mass vanish. The gauge couplings in the two bases are related as given in [284].

4.1.3. Mass Eigenstates

After electroweak and U(1)_B-L symmetry breaking, and also because of gauge-kinetic mixing, the neutral SUSY particles of the MSSM mix with those of the new B-L sector.

Neutral Gauge Bosons. In the gauge sector, the neutral gauge bosons B, W^3, and B' mix. This gives three mass eigenstates, the photon, the Z, and a Z', which are related to the gauge eigenstates by a rotation matrix that can be expressed by two angles, Theta_W and Theta'_W. The entire mixing between the Z' and the SM gauge bosons comes from Theta'_W, for which an approximate expression in terms of the gauge couplings and VEVs is given in [285].

Neutral Higgs Bosons. In the Higgs sector, the CP even and odd parts of the doublets mix with the corresponding CP eigenstates of the bileptons. This leads to four physical scalar Higgs particles, h_1,...,h_4, and four pseudoscalars, of which two are physical while the other two are the Goldstone bosons of the Z and Z'.

Neutralinos. There are seven neutralinos in the model, which are admixtures of the three neutral gauginos, the two Higgsinos, and the two bileptinos.

Neutrinos and Sneutrinos. There are six Majorana neutrinos which are superpositions of the left and right neutrino gauge eigenstates. Similarly, the sneutrinos are admixtures of the left and right sneutrino gauge eigenstates. Because of the mass splitting due to the Majorana mass term, the CP even and odd parts mix separately, and there are 12 real mass eigenstates.

4.2. Constrained Model

The B-L-SSM has 55 additional parameters, including all phases, compared to the general MSSM. Therefore, an organizing principle relating these parameters in order to reduce the number of free ones is often helpful. We choose a CMSSM-like setup inspired by minimal supergravity. We set the boundary conditions at the GUT scale, which is taken to be the scale at which g1 = g2 holds. We assume that gauge-kinetic mixing is absent at the GUT scale but gets induced below it. This gives as additional condition that the off-diagonal gauge couplings vanish there. Thus, at the GUT scale the relations g1 = g2 and g1BL = gBL1 = 0 hold.

In minimal supergravity all scalar and gaugino soft-masses unify at the GUT scale: that is, there are only two free parameters to fix all soft-masses, m_0 and m_1/2. The boundary conditions at the GUT scale set all scalar soft-masses to m_0^2 1, with the identity matrix 1, and all gaugino masses to m_1/2. We assume also that no gauge-kinetic mixing is present in the gaugino sector at this scale: that is, the mixed gaugino mass term vanishes. For the trilinear soft-breaking terms we use, in analogy to the CMSSM, the relation T_i = A_0 Y_i with a free parameter A_0. The remaining soft-terms B_mu and B_mu' are not taken to be input; instead the tadpole equations are, if not stated otherwise, solved with respect to mu, B_mu, mu', and B_mu'. The advantage of this choice is that these parameters do not enter the RGE evaluation of all other terms, so they can be considered independently.

To fix the gauge part of the B-L sector, we need two more input parameters. These are the mass of the Z' (M_Z') and the ratio of the bilepton VEVs (tan(beta')). Finally, there are two more Yukawa-like matrices, Y_nu and Y_x. Thus, the full set of input parameters is m_0, m_1/2, A_0, tan(beta), tan(beta'), sign(mu), sign(mu'), M_Z', Y_x, and Y_nu. The masses of the light neutrinos in this model are proportional to Y_nu. This gives strong constraints, and we can often neglect Y_nu because its entries must be tiny.

4.3. Setup

We want to study all aspects of the B-L-SSM: the mass spectrum, decays, flavour observables, Higgs constraints, dark matter, vacuum stability, and collider physics. For this purpose, we not only make use of SARAH but also interface it with several other public tools. To simplify the following presentation, I am going to assume that all the following tools are installed in the same directory $PATH:
(1) SARAH 4.5.0 or newer in $PATH/SARAH (http://www.hepforge.org/downloads/sarah),
(2) SPheno 3.3.6 or newer in $PATH/SPHENO (http://www.hepforge.org/downloads/spheno),
(3) SSP 1.2.0 or newer in $PATH/SSP (http://www.hepforge.org/downloads/sarah),
(4) PreSARAH 1.0.3 or newer in $PATH/PRESARAH (http://www.hepforge.org/downloads/sarah),
(5) HiggsBounds 4.1.0 or newer in $PATH/HIGGSBOUNDS (http://higgsbounds.hepforge.org/downloads.html),
(6) HiggsSignals 1.2.0 or newer in $PATH/HIGGSSIGNALS (http://higgsbounds.hepforge.org/downloads.html),
(7) CalcHep 3.4.6 or newer in $PATH/CALCHEP (http://theory.sinp.msu.ru/pukhov/calchep.html),
(8) MicrOmegas 4.1 or newer in $PATH/MICROMEGAS (http://lapth.cnrs.fr/micromegas/),
(9) WHIZARD 2.2.2 or newer in $PATH/WHIZARD (http://www.hepforge.org/downloads/whizard),
(10) MadGraph 5.2.2.2 or newer in $PATH/MADGRAPH (https://launchpad.net/mg5amcnlo),
(11) Vevacious 1.1.0 or newer in $PATH/VEVACIOUS (http://www.hepforge.org/downloads/vevacious),
(12) MadAnalysis 1.1.11 or newer in $PATH/MADANALYSIS (https://launchpad.net/madanalysis5).
It is possible to use the BSM Toolbox scripts [286] from http://sarah.hepforge.org/Toolbox.html to install most tools at once via the included configure script. The Toolbox also contains a script called butler to implement a model in the different tools automatically based on the SARAH implementation. This might be the convenient choice for the user. However, I am going to use the explicit route and discuss the implementation in each tool in some detail.

Installation of SARAH. The installation of SARAH is very simple: the package can be downloaded from http://sarah.hepforge.org. After copying the tar file to the directory $PATH, it can be extracted and a symbolic link can be created:

  cd $PATH
  tar -xf SARAH-X.Y.Z.tar.gz
  ln -s SARAH-X.Y.Z SARAH

Here X.Y.Z has to be replaced by the version which was downloaded. In the last line a symbolic link named SARAH pointing to the directory SARAH-X.Y.Z is created. No compilation is necessary; SARAH can directly be used with any Mathematica version between 7 and 10.

Installation of All Other Tools. For the installation of all other tools, please check the manual or readme files of these tools.

4.4. Implementation of the B-L-SSM

I assume in the following that the B-L-SSM was not delivered with SARAH and we have to implement it ourselves. All information about the model is saved in three different files: B-L-SSM.m, parameters.m, and particles.m, which have to be located in the subdirectory B-L-SSM of the Model directory of SARAH. Only the first file, B-L-SSM.m, is absolutely necessary and contains all physical information about the model: the symmetries, particle content, superpotential, and mixing. In parameters.m properties of all parameters can be defined, for example, LaTeX name, Les Houches block and number, relations among parameters, and real/complex. In particles.m additional information about the particles is set: mass, width, electric charge, PDG, LaTeX name, output name, and so on. The optional information in parameters.m and particles.m might be needed for the different outputs of SARAH as discussed later.

4.4.1. Main File: B-L-SSM.m

Usually the easiest way to implement a new model in SARAH  is to start with an existing model file. For non-SUSY models the convenient choice is the model file for the SM and for SUSY models the one for the MSSM. Therefore, I am going to discuss the differences in the model file of the B-L-SSM compared to the MSSM one. As reference, the corresponding parts in the MSSM model file are shown as well. In addition, there are some specific flags listed in Appendix A.1 which can be used to turn on/off particular structures in the Lagrangian. Those are not needed for our purpose here but might be useful for other models.

At the very beginning of a model file, some additional information about the model implementation can be given: what is the name of the model (in LaTeX syntax as well as string without any special character), who is the author of the model file, and when was the last change?

Global Symmetries. SARAH supports Z_N as well as U(1) global symmetries, which are defined in the array Global. Usually, one considers the MSSM with conserved R-parity. This discrete symmetry is defined via the following.

First, the kind of the symmetry is defined (Z[N] with integer N, or U[1]). Note that Z_N symmetries are always understood as multiplicative symmetries. The second entry gives a name to the symmetry. For a U(1) there is one specific name which can be used to define R-symmetries: RSymmetry. In that case, the charges of the SUSY coordinates are considered as well. There are two possibilities to define the charges of SUSY fields with respect to a global symmetry:
(1) If in the definition of the vector or chiral superfields, which will be explained below, only one quantum number is given per superfield per global symmetry, this number is used for the superfield itself but also for its component fields.
(2) If a list with three entries is given as charge for a vector or chiral superfield, the following convention applies: for chiral superfields, the first entry is the charge of the superfield, the second one of the scalar component, and the third one of the fermionic component. For vector superfields, the second entry refers to the gaugino and the third one to the gauge boson.
With these conventions a suitable definition of the global symmetries for states with R-parity would be as follows.
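For illustration, a sketch of the R-parity definition following the SARAH MSSM model file, with the usual shorthand lists RpM and RpP for the two charge assignments, is:

```mathematica
Global[[1]] = {Z[2], RParity};
(* charge lists {superfield, scalar component, fermionic component}: *)
RpM = {-1, -1, 1};   (* matter superfields: odd sfermions, even fermions *)
RpP = {1, 1, -1};    (* Higgs superfields: even scalars, odd higgsinos *)
```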

In principle, the B-L-SSM has no global symmetry: R-parity is just a remnant of the broken U(1)_B-L. However, it turns out to be helpful to keep the standard definition of R-parity: we can use this to get the relic density with MicrOmegas. Sometimes, one also uses matter parities to have an additional discrete symmetry in models with gauged B-L [287, 288].

Gauge Symmetries. The next step to define a SUSY model is to fix the gauge sector. That is done by adding one entry to the array Gauge for each gauge group. For SUSY models the corresponding vector superfield is set automatically for each entry in Gauge. For instance, the SM gauge group is set via the following.
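A sketch of the SM gauge sector as it appears in the SARAH MSSM model file (RpM being the R-parity shorthand introduced above) reads:

```mathematica
Gauge[[1]] = {B,  U[1],  hypercharge, g1, False, RpM};
Gauge[[2]] = {WB, SU[2], left,        g2, True,  RpM};
Gauge[[3]] = {G,  SU[3], color,       g3, False, RpM};
```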

First, the names of the gauge fields are given, and the second entry defines the kind of the group. The third entry gives a name to the gauge group and the fourth entry fixes the name of the corresponding gauge coupling. In the fifth entry it is defined if the group will be broken later. The last entry sets the global charges of the gauginos and vector bosons of the corresponding vector superfield using the conventions explained above: here, the vector superfield as well as its fermionic component gets R-parity -1, while the spin-1 states have R-parity +1.

For the B-L-SSM only a fourth entry in Gauge is needed. We call the U(1)_B-L group Bp: that is, the vector boson will be named VBp by SARAH and the gaugino fBp. For the gauge coupling we chose the name gBL. Thus, the definition of the B-L-SSM gauge sector is as follows.
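A sketch of this additional entry, appended to the three MSSM entries shown above (the group name BmL is illustrative), is:

```mathematica
Gauge[[4]] = {Bp, U[1], BmL, gBL, True, RpM};
```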

Since this is the second U(1) besides hypercharge, SARAH will generate two off-diagonal gauge couplings g1BL and gBL1 stemming from kinetic mixing; see also Section 2. The soft-masses for the gauginos which are added by SARAH are MassB, MassWB, MassG, and MassBp. As a consequence of gauge-kinetic mixing, MassBBp appears as well.

Chiral Superfields. Chiral superfields in SARAH are defined via the array SuperFields. The conventions are as follows. First, a name for the superfield is given, in the second entry the number of generations is set, and in the third entry the names for the isospin components are defined. Afterwards, the transformation under the different gauge groups is given, and the last entries set the charges under the global symmetries. To define the transformation with respect to Abelian groups, the charge is given. For non-Abelian groups the dimension is given as an integer, where conjugated representations are negative. If the dimension is not unique, the Dynkin labels can also be defined. The treatment of higher-dimensional representations is different for groups which get broken and those which stay unbroken:
(1) Representations with respect to groups which get broken: in that case it is convenient to define higher-dimensional representations as tensor products of the fundamental representation.
(2) Representations with respect to groups which do not get broken: in that case the representation is treated as a vector of the appropriate dimension. To do this, SARAH makes use of the generators as well as the Clebsch-Gordan coefficients provided by Susyno.
We have to deal here only with doublets under a broken SU(2): that is, we need a vector of length two for the isospin components.

The particle content of the MSSM comprises the squark superfields q, d, and u, the lepton superfields l and e, and the two Higgs doublet superfields Hd and Hu. We have 3 generations for all matter fields and the usual charges with respect to the SM gauge groups. These are defined by the following.
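A sketch of these definitions, following the SARAH MSSM model file (the columns are: name, generations, components, U(1)_Y charge, SU(2) and SU(3) representations, R-parity), is:

```mathematica
SuperFields[[1]] = {q,  3, {uL, dL},    1/6, 2,  3, RpM};
SuperFields[[2]] = {l,  3, {vL, eL},   -1/2, 2,  1, RpM};
SuperFields[[3]] = {Hd, 1, {Hd0, Hdm}, -1/2, 2,  1, RpP};
SuperFields[[4]] = {Hu, 1, {Hup, Hu0},  1/2, 2,  1, RpP};
SuperFields[[5]] = {d,  3, conj[dR],    1/3, 1, -3, RpM};
SuperFields[[6]] = {u,  3, conj[uR],   -2/3, 1, -3, RpM};
SuperFields[[7]] = {e,  3, conj[eR],      1, 1,  1, RpM};
```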

The SARAH conventions for the component fields of a superfield are as follows: scalars start with S and fermions with F: for example, the left down-squark is called SdL and the right lepton FeR. The soft-masses for the scalars are mq2, ml2, mHd2, and so forth.

In the B-L-SSM, we first have to define the B-L charge of all MSSM fields. SARAH includes a factor 1/2 in all B-L charges, as is usually done by convention for the hypercharge. Therefore, quark superfields have charge 1/6 and lepton superfields carry -1/2. The other changes are the definition of the right sneutrino superfield vR, which comes in three generations and which is a gauge singlet under the SM gauge groups, and the bileptons C1 and C2, which are also SM singlets and carry charges -1 and +1 according to the just-mentioned conventions.
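A sketch of the new superfield entries (the extra column before the R-parity entry is the halved B-L charge; the component names C10 and C20 are illustrative):

```mathematica
SuperFields[[8]]  = {vR, 3, conj[vR], 0, 1, 1, -1/2, RpM};  (* right neutrinos *)
SuperFields[[9]]  = {C1, 1, C10,      0, 1, 1,   -1, RpP};  (* bilepton eta *)
SuperFields[[10]] = {C2, 1, C20,      0, 1, 1,    1, RpP};  (* bilepton eta-bar *)
```

The MSSM entries get the same extra column, for example, 1/6 for q and -1/2 for l.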

The soft-terms for the new superfields are mvR2, mC12, and mC22.

Superpotential. The superpotential of the MSSM consists of the Yukawa interactions Yu, Yd, and Ye as well as the mu-term called \[Mu]. SARAH will add the corresponding soft-terms called T[Yu], T[Yd], T[Ye], and B[\[Mu]]. The Yukawas and the trilinear soft-terms are treated as complex 3 x 3 matrices, and \[Mu] and B[\[Mu]] are complex by default. The MSSM superpotential is set in SARAH by the following.
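A sketch of this definition as it appears in the SARAH MSSM model file:

```mathematica
SuperPotential = Yu u.q.Hu - Yd d.q.Hd - Ye e.l.Hd + \[Mu] Hu.Hd;
```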

We see here that we do not have to define the charge indices and their contraction. All of this is done automatically by SARAH, and the term Yu u.q.Hu gets interpreted internally as Y_u^{ij} eps_{ab} u_{i,alpha} q_j^{a,alpha} H_u^b, with generation indices i and j, SU(2) indices a and b, and the colour index alpha. In the B-L-SSM three more terms are present in the superpotential: the neutrino Yukawa coupling Yv, the coupling Yn between the bileptons and the right neutrinos, and the mu'-term called MuP. The corresponding soft-terms added by SARAH are T[Yv], T[Yn], and B[MuP]. Those are also treated by default in the most general way. The full superpotential of the B-L-SSM reads as follows.
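A sketch of the full superpotential definition, extending the MSSM one by the three new terms:

```mathematica
SuperPotential = Yu u.q.Hu - Yd d.q.Hd - Ye e.l.Hd + \[Mu] Hu.Hd +
                 Yv vR.l.Hu + Yn vR.C1.vR - MuP C1.C2;
```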

Eigenstates. In the MSSM and the B-L-SSM we have to deal with two sets of eigenstates. First, we have the gauge eigenstates which will always be present. These states are called by default GaugeES. Second, we have the mass eigenstates after symmetry breaking which we call EWSB. Both sets are defined in the array NameOfStates. Sometimes, it might be useful to use also intermediate states to make stepwise rotations from the gauge to the final matter eigenstates. Therefore, the length of NameOfStates is not restricted.

Electroweak and B-L Symmetry Breaking. In the MSSM, EWSB happens when the neutral components of the Higgs doublets receive VEVs. In addition, these states are split into their CP even and odd parts. These definitions are done in the array DEFINITION[EWSB][VEVs]. In the MSSM implementation in SARAH the factors of 1/Sqrt[2] are chosen in a way that the electroweak (ew) VEV is about 246 GeV.
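A sketch of the MSSM definition, decomposing each neutral Higgs as (v + phi + I sigma)/Sqrt[2]:

```mathematica
DEFINITION[EWSB][VEVs] =
 {{SHd0, {vd, 1/Sqrt[2]}, {sigmad, I/Sqrt[2]}, {phid, 1/Sqrt[2]}},
  {SHu0, {vu, 1/Sqrt[2]}, {sigmau, I/Sqrt[2]}, {phiu, 1/Sqrt[2]}}};
```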

In the B-L-SSM we have to extend DEFINITION[EWSB][VEVs] to give also VEVs to the bileptons. Here, we use the same normalization as for the ew VEVs. As mentioned in Section 4.1.1, the sneutrinos will also split into real and imaginary parts because of the Majorana mass term. We can define this splitting in DEFINITION[EWSB][VEVs] as well and keep 0 for the VEVs. We could study spontaneous R-parity violation by adding a nonzero VEV for these particles.
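A sketch of the extended array (the bilepton VEV names x1 and x2 and the CP-eigenstate names are illustrative):

```mathematica
DEFINITION[EWSB][VEVs] =
 {{SHd0, {vd, 1/Sqrt[2]}, {sigmad, I/Sqrt[2]}, {phid, 1/Sqrt[2]}},
  {SHu0, {vu, 1/Sqrt[2]}, {sigmau, I/Sqrt[2]}, {phiu, 1/Sqrt[2]}},
  {SC10, {x1, 1/Sqrt[2]}, {sigma1, I/Sqrt[2]}, {phi1, 1/Sqrt[2]}},
  {SC20, {x2, 1/Sqrt[2]}, {sigma2, I/Sqrt[2]}, {phi2, 1/Sqrt[2]}},
  (* sneutrinos: split into CP even/odd parts but keep vanishing VEVs *)
  {SvL,  {0, 0},          {sigmaL, I/Sqrt[2]}, {phiL, 1/Sqrt[2]}},
  {SvR,  {0, 0},          {sigmaR, I/Sqrt[2]}, {phiR, 1/Sqrt[2]}}};
```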

Rotations in Gauge Sector. The electroweak gauge bosons mix in the MSSM as they do in the SM after EWSB. This mixing is defined in the array DEFINITION[EWSB][GaugeSector] and is encoded by three matrices ZZ, ZW, and ZfW. These matrices have a simple form: ZZ is usually parametrized by the Weinberg angle Theta_W, and ZW and ZfW just involve constant factors. This parametrization can be defined later in parameters.m; see Section 4.4.2. If no parametrization is given, SARAH will treat the matrices as general rotation matrices with the appropriate dimensions, as follows.

In the B-L-SSM the vector boson of the U(1)_B-L mixes with the B-boson and the third component of the W-boson into three neutral states. We call the rotation matrix again ZZ but use another parametrization, based on (94), as shown in Section 4.4.2. The mixing in the charged gauge boson and gaugino sector does not change compared to the MSSM.

Rotations in Matter Sector. All rotations in the matter sector of the MSSM are defined via the array DEFINITION[EWSB][MatterSector]. First the involved gauge eigenstates are given and then the names for the mass eigenstates and the rotation matrices are given.
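A sketch of this array, following the SARAH MSSM model file (the ordering matches the description below):

```mathematica
DEFINITION[EWSB][MatterSector] =
 {{{SdL, SdR}, {Sd, ZD}},
  {{SuL, SuR}, {Su, ZU}},
  {{SeL, SeR}, {Se, ZE}},
  {{SvL},      {Sv, ZV}},
  {{phid, phiu},           {hh, ZH}},
  {{sigmad, sigmau},       {Ah, ZA}},
  {{SHdm, conj[SHup]},     {Hpm, ZP}},
  {{fB, fW0, FHd0, FHu0},  {L0, ZN}},
  {{{fWm, FHdm}, {fWp, FHup}}, {{Lm, Um}, {Lp, Up}}},
  {{{FeL}, {conj[FeR]}}, {{FEL, ZEL}, {FER, ZER}}},
  {{{FdL}, {conj[FdR]}}, {{FDL, ZDL}, {FDR, ZDR}}},
  {{{FuL}, {conj[FuR]}}, {{FUL, ZUL}, {FUR, ZUR}}}};
```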

Lines 97 and 98 set the mixing of the down- and up-squarks (Sd, Su), and the next two lines set the mixing for charged and neutral sleptons (Se, Sv). Afterwards, the rotations of the CP even, CP odd, and charged Higgs bosons follow (hh, Ah, and Hpm). Lines 104-108 are the mixing for the fermions: neutralinos (L0), charginos (Lm/Lp), charged leptons (FEL/FER), down-quarks (FDL/FDR), and up-quarks (FUL/FUR). One sees the different conventions used for the mixing of Dirac fermions compared to Majorana fermions and scalars. For instance, the first line is interpreted as Sd_{i,alpha} = ZD_{ij} (SdL, SdR)_{j,alpha}. Here, j is the generation index for the gauge eigenstates SdL and SdR, i is the generation index for the mass eigenstates Sd, ZD is the rotation matrix, and alpha is a colour index. In the case of charginos, first the two basis vectors are defined, before the names for the mass eigenstates and the rotation matrices follow. Thus, line 105 relates the mass eigenstates Lm and Lp to the bases (fWm, FHdm) and (fWp, FHup), and the two rotation matrices Um and Up diagonalize the chargino mass matrix.

A few modifications are necessary in the B-L-SSM to include the additional mixing discussed in Section 4.1.3: the mixing of the sneutrinos has to be defined for CP even and odd states separately (SvRe, SvIm), and the bases for the CP even and CP odd Higgs scalars and the neutralinos have been extended by the new B-L fields. Finally, a mixing of left and right neutrinos to Majorana states is added. The corresponding definitions in the model file of the B-L-SSM read as follows.
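A sketch of the modified entries (the names for the new CP eigenstates and the neutrino rotation matrix UV are illustrative):

```mathematica
{{phid, phiu, phi1, phi2},         {hh, ZH}},
{{sigmad, sigmau, sigma1, sigma2}, {Ah, ZA}},
{{phiL, phiR},                     {SvRe, ZVR}},  (* CP even sneutrinos *)
{{sigmaL, sigmaR},                 {SvIm, ZVI}},  (* CP odd sneutrinos *)
{{fB, fW0, FHd0, FHu0, fBp, FC10, FC20}, {L0, ZN}},
{{FvL, conj[FvR]},                 {FV, UV}}      (* Majorana neutrinos *)
```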

Phases for Unrotated Fields. For states which do not mix there is still a phase which is not fixed. This is just the case for the gluino in both models.

With this definition, the physical gluino mass is related to the soft-breaking gaugino mass parameter M_3 by a phase rotation which renders it real and positive.

Dirac Spinors. While SARAH works internally often with Weyl spinors, Dirac spinors are commonly used by Monte-Carlo tools and also by SPheno. Therefore, we have to define the relation between the two- and four-component fermions. Since there are no additional mass eigenstates in the B-L-SSM compared to the MSSM but just some states come with more generations, the definition of Dirac spinors is the same in both models.
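A sketch of this definition (Weyl components combined into Dirac or Majorana spinors):

```mathematica
DEFINITION[EWSB][DiracSpinors] = {
  Fd  -> {FDL, conj[FDR]},   (* down-quarks *)
  Fe  -> {FEL, conj[FER]},   (* charged leptons *)
  Fu  -> {FUL, conj[FUR]},   (* up-quarks *)
  Fv  -> {FV, conj[FV]},     (* Majorana neutrinos *)
  Chi -> {L0, conj[L0]},     (* neutralinos *)
  Cha -> {Lm, conj[Lp]},     (* charginos *)
  Glu -> {fG, conj[fG]}};    (* Majorana gluino *)
```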

The first line defines a Dirac spinor for the down-quarks built from FDL and FDR, while the last line sets the Majorana gluino. In principle, one can also add the definition of Dirac spinors for the gauge eigenstates in exactly the same way. However, we are not interested in performing calculations for gauge eigenstates, and I skip this part here. Interested readers can take a look at Appendix C.1 to see the entire model file of the implementation of the B-L-SSM in SARAH.

4.4.2. Parameter Definitions

When the changes in B-L-SSM.m are done, the model is already ready to be used with SARAH. While in principle all calculations in Mathematica can be performed, the different outputs need some more information. These are mostly formal points. All possible options which can be used to define the properties of a parameter in the file parameters.m are the following.
(i) Description: a string describing the parameter.
(ii) OutputName: a string which is used for the parameter in the different outputs. No special characters should be used, to be compatible with C++ and Fortran.
(iii) Real: defines if a parameter should be considered as real (True) or complex (False). Default is False.
(iv) Form: can be used for matrices to define if a matrix is diagonal (Diagonal) or a scalar (Scalar). By default no assumption is made.
(v) LaTeX: the name of the parameter used in the LaTeX output. Standard LaTeX language should be used.
(vi) GUTnormalization: defines a GUT normalization for an Abelian gauge coupling.
(vii) Dependence: defines a dependence on other parameters which should always be used.
(viii) DependenceOptional: defines a dependence which is optionally used in analytical calculations.
(ix) DependenceNum: defines a dependence which is used in numerical calculations.
(x) DependenceSPheno: defines a dependence which is just used by SPheno.
(xi) MatrixProduct: can be used to express a matrix as the product of two other matrices. This can be used, for instance, to relate the CKM matrix to the quark rotation matrices.
(xii) LesHouches: the position of the parameter in a Les Houches spectrum file. For matrices just the name of the block is defined, while for scalars the block and an entry have to be given: block, number.
(xiii) Value: assigns a numerical value to the parameter.
Many of the above definitions are just optional and are often not needed.
I will show some parts of parameters.m to define the properties of new parameters in the B-L-SSM or to change properties of MSSM parameters. However, I will not show here all changes compared to the MSSM but will pick just some important and interesting cases. The full list of changes is given in Appendix C.2.

Gauge Sector. In the gauge sector we have three new gauge couplings: gBL corresponding to the new U(1)_B-L gauge group and the two gauge couplings induced by gauge-kinetic mixing. We define for all three couplings an output name, a LaTeX name, and the position in the Les Houches file, which will become important later.
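A sketch of such an entry for gBL (the Description string and the Les Houches position are illustrative):

```mathematica
{gBL, {Description      -> "B-L gauge coupling",
       LaTeX            -> "g_{BL}",
       OutputName       -> gBL,
       GUTnormalization -> Sqrt[3/2],
       LesHouches       -> {gauge, 4}}}
```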

Gauge couplings are by default taken to be real: that is, it is not necessary to define this explicitly. We also used here Sqrt[3/2] for the GUT normalization of the B-L charge. The GUT normalization of the two off-diagonal couplings is automatically set by SARAH. Similarly, the properties of the two new gaugino mass parameters are set; see Appendix C.2.

We have seen in (94) how the rotation matrix in the neutral gauge sector can be parametrized by two angles. To do this, we use the Dependence statement for the parameter ZZ which labels this rotation, as follows.
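A sketch of this entry, using the standard parametrization of the photon/Z/Z' rotation by Theta_W and Theta'_W:

```mathematica
{ZZ, {Real -> True, OutputName -> ZZ, LesHouches -> None,
      Dependence ->
       {{Cos[ThetaW], -Sin[ThetaW] Cos[ThetaWp],  Sin[ThetaW] Sin[ThetaWp]},
        {Sin[ThetaW],  Cos[ThetaW] Cos[ThetaWp], -Cos[ThetaW] Sin[ThetaWp]},
        {0,            Sin[ThetaWp],              Cos[ThetaWp]}}}}
```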

Here, we explicitly set this rotation matrix to be real. The reason why we also add an output name is that this matrix shows up internally in SPheno when it diagonalizes the gauge boson mass matrix.

Since the rotation matrix is completely defined by the two rotation angles, it is not necessary to include it in a spectrum file. Therefore, LesHouches -> None is used. In (95) an approximation for the new angle Theta'_W was given. It might sometimes be useful to use this approximation in numerical calculations in Mathematica; therefore, it is included. However, the better method for high precision calculations with SPheno is to use the numerical result for the rotation matrix calculated by SPheno when diagonalizing the vector boson mass matrix. To translate this matrix into the rotation angle, we extract Theta'_W from the corresponding entry of ZZ. SPheno calculates this value, uses it internally to get the vertices, and writes it into the block ANGLES as entry 10 of the spectrum file. The lines needed to define all that are as follows.

Rotations in Matter Sector. In the matter sector we have to define new rotation matrices for the CP even and odd sneutrinos and for the neutrinos. The definitions are very short and just include the descriptions, the LaTeX commands, the Les Houches, and output names. For the neutrino rotation matrix the entries read as follows.

Similarly, ZVR and ZVI are set. In addition, some rotation matrices change compared to the MSSM. There is no need to modify the definition of the neutralino rotation matrix because no assumptions are made for it in the MSSM. However, the rotation matrices for the scalar and pseudoscalar Higgs are parametrized in the MSSM by two angles; this parametrization is used to express the dependence optionally used in the MSSM. Since these rotation matrices have grown to 4 × 4 matrices in the B-L-SSM, we can no longer make use of that. Therefore, we can take the definition of the MSSM and overwrite the dependence with None.

We see here a feature which often makes life much simpler. It is actually not necessary to give the full definitions for particles and parameters in the model files for each model. One can make use of global definitions which are given in the files $PATH/Models/parameters.m and $PATH/Models/particles.m. The entries for the two Higgs rotation matrices in these files read as follows.

We referred to these in parameters.m of the B-L-SSM by using the same string as Description, but locally overwrote the dependence which has changed.

4.4.3. Particles Definitions

We turn now to the particles. To define the properties of any particle present in the model, several options are available which can be put in particles.m.
(i) Description: it defines a string identifying the particle.
(ii) PDG: it defines the PDG numbers of all generations.
(iii) PDG.IX: it defines a nine-digit number for the particle supporting the proposal in [289, 290]. By default, the entries of PDG are used in the output.
(iv) ElectricCharge: it defines the electric charge of a particle in units of e. This information is exported to the CalcHep/CompHep and UFO model files.
(v) Width: it can be used to define a fixed numerical value for the width.
(vi) Mass: it defines how MC tools obtain a numerical value for the mass of the particle: (a) a numerical value can be given; (b) the keyword Automatic means that SARAH derives the tree-level expression for the mass from the Lagrangian; the mass is then calculated by using the values of the other parameters; (c) the keyword LesHouches means that this mass is calculated numerically by a spectrum generator like SPheno and provided via a Les Houches spectrum file. This is usually the preferred method because loop corrections are also included.
(vii) OutputName: it defines the name used in the different outputs for other codes.
(viii) LaTeX: it defines the name of the particle used in the LaTeX output.
(ix) FeynArtsNr: it is the number assigned to the particle in the FeynArts output.
(x) LHPC: it defines the column and colour used for the particle in the steering file for the LHPC Spectrum Plotter (http://lhpc.hepforge.org/). All colours available in gnuplot can be used.
(xi) Goldstone: for each massive vector boson the name of the corresponding Goldstone boson is given.
The properties of all particles appearing as either gauge or mass eigenstates, or just at intermediate steps, can be defined in particles.m. Usually, the user is only interested in the output for the mass eigenstates.
Therefore, it is not really necessary to define all properties of the intermediate states and the gauge eigenstates. The only definition which is usually helpful for a nice looking LaTeX output is the LaTeX syntax for all particles which appear at any stage of the model. Again, I pick just some entries to demonstrate the procedure. The full changes compared to the MSSM are given in Appendix C.3.

Gauge Eigenstates and Intermediate States. Intermediate states are those which do not show up in any Lagrangian. These are, for instance, the superfields which get decomposed in component fields or the real parts of complex scalars after symmetry breaking which directly mix to new mass eigenstates. Also Weyl spinors are considered as intermediate because the output is always in terms of Dirac spinors. For all of these particles it is usually sufficient to define a LaTeX name to have a nice looking pdf file at the end.

The properties of gauge eigenstates can be defined in the array ParticleDefinitions[GaugeES]. However, since calculations are rarely done for these states, it is often sufficient to give just the LaTeX names. Of course, all other information can also be set as for the mass eigenstates if desired.

Mass Eigenstates. More interesting are the mass eigenstates. The additional information given for those is used in the different outputs for SPheno and the MC-tools. We begin with the new states which are not present in the MSSM: the Z′ and the corresponding ghost as well as the real sneutrinos.

Here, we defined that the second pseudoscalar (Ah[2]) is the Goldstone of the Z′. For the ghost we used the PDG 0 to make clear that this is not a physical state.

In addition, there are also particles where the number of generations has increased with respect to the MSSM. Since all MSSM particles are defined globally in $PATH/SARAH/Models/particles.m by the Description statement, we can make use of that and just overwrite the lists of PDGs which become longer. The new CP even and odd Higgs states now carry four PDGs.

Here, we used 0 for the first two entries of the pseudoscalars. This means that these two states are not physical but are the Goldstones of the massive, neutral gauge bosons. In a similar way, the additional PDGs for neutralinos Chi and neutrinos Fv are defined; see Appendix C.3.

4.5. Running the Model

When we are done with the model files, the model is initialized in Mathematica via the following.

After about 1 minute the following message

should appear, and no error messages or warnings should show up during the evaluation. More detailed checks of whether the model implementation is self-consistent and whether the model is working fine can be carried out with the following command.

This function performs all checks listed in Section 2.3. With our implementation above there will be some messages like the following.

The last message appears because it is not obvious from the Lagrangian that there is no mixing: the relations between the rotation angles and all five involved gauge couplings are not known at this stage. Hence, it is not obvious from the Lagrangian that these terms cancel.

The reason for the first two messages is that SARAH found terms in the Lagrangian of the mass eigenstates which seem to cause a mixing between CP even and odd states. The origin of this is that we did not define some of the superpotential and soft-breaking parameters explicitly as real. Therefore, SARAH considers them to be complex. In the complex case a mixing between CP even and odd scalars would occur unless specific relations among the phases are satisfied. This mixing would be missed in our implementation so far. We just have to keep in mind that the model is supposed to be used for the CP conserving case or just for parameters which satisfy conditions where the mixing between CP even and odd states vanishes. Of course, one can also use the entries in parameters.m to define the parameters explicitly as real.

Even if a study of CP violation is clearly beyond the scope of this example, I want to show briefly what has to be changed to incorporate it. First, the definition of the VEVs has to be changed by introducing relative phases between the scalars as follows.

One could also work with complex VEVs as follows.

However, this is less common and mainly supposed to be used for generating Vevacious  model files. After introducing the phases also the definition of the rotations must be changed to respect the mixing between CP even and odd states as follows.

The rotation matrices and mass eigenstates in particles.m and parameters.m have to be adjusted accordingly: the matrices ZVR and ZH would no longer be real, and the states Sv need twelve PDGs, while hh includes two Goldstones and six physical scalars. Also assignments of the Goldstones have to be adjusted in particles.m as follows.

However, as I said, this is beyond the scope of the discussion here. We now start to study the CP conserving version of this model in detail.

5. Example—Part II: Masses, Vertices, Tadpoles, and RGEs with Mathematica

5.1. LaTeX Output
5.1.1. General Information

After we are done with the implementation of the model and all consistency checks are passed, we can generate a pdf file to get a first overview of the B-L-SSM. The .tex files generated by SARAH include all information about the model, for example, the particle content and the superpotential, as well as all information which SARAH derives, like RGEs, masses, and vertices for specific eigenstates. To get this output one first has to run the command ModelOutput which tells SARAH what it has to calculate and for which eigenstates. We are just interested in the eigenstates after EWSB as follows.

This command always calculates all vertices for the considered eigenstates. Loop corrections and RGEs are not included in the calculation by default. To include them, the user can set IncludeRGEs -> True and IncludeLoopCorrections -> True. This information will then also be included in the LaTeX files. In principle, one can also use options for ModelOutput to directly generate the LaTeX output and all other outputs for the different codes by setting WriteTeX -> True, WriteFeynArts -> True, WriteCHep -> True, WriteWHIZARD -> True, and WriteUFO -> True. However, we will not make use of this here but discuss each output separately.

When SARAH  is done with all calculations, one can run the following.

We used here the option that not only the information about the model and its physics is included, but also details about the implementation in SARAH are attached to the pdf. Additional options which are available are as follows.
(i) FeynmanDiagrams: it defines whether the Feynman diagrams should be included in the output. By default they are included. To draw the Feynman diagrams, SARAH makes use of the LaTeX package feynmf [291].
(ii) ShortForm: it defines whether a shorter notation for the vertices should be used by not using a separate equation environment for each vertex and skipping the Feynman diagrams.
When SARAH is done with the output, the .tex files are stored in the following.

The main file which can be compiled with pdflatex is B-L-SSM_EWSB.tex. In the case where the Feynman diagrams are included, the compilation is a bit more complicated because mpost has to be used for each diagram after the first run of pdflatex. Afterwards, a second run of pdflatex is needed. SARAH provides a shell script MakePDF.sh which takes care of that. Thus, the easiest way to get the pdf is as follows.

The second step is just necessary to make the script executable if it is not.

5.1.2. Particles and Parameters of the B-L-SSM in SARAH

The reason why we have chosen the option WriteSARAH -> True is that this includes helpful information about the implementation: the names for all particles and parameters in SARAH are given, and it is shown how the pieces are called in LaTeX and in the output for other codes. This output is given for all eigenstates. However, to follow the subsequent examples only the mass eigenstates after EWSB are necessary. So, I skip the output for the other eigenstates and show here only the corresponding tables for the mass eigenstates.

Particles. The whole lists of fermions, scalars, vector bosons, and ghosts are given in Tables 2–5. One sees that not only the names used for each particle at the different stages are given, but also the indices which the particles carry are defined. In the case of fermions, the Dirac spinors are listed together with their Weyl components. An alternative way to get an overview of all particles during the Mathematica session is to use the following.

Parameters. All parameters which are present at some stage in the B-L-SSM are listed in Table 6. This includes not only the fundamental parameters like gauge couplings, superpotential couplings, and soft-breaking terms, but also rotation matrices and angles, VEVs, and auxiliary parameters which just show up via dependences defined in parameters.m. One can also get the entire list of parameters during the Mathematica session by using the following.

5.2. Extracting Mass Matrices, Tadpole Equations, and Vertices

We can finally do some physics. At first, we want to study the analytical properties of the B-L-SSM within Mathematica. For this purpose I will give some examples of how to extract mass matrices, vertices, or tadpole equations and how to deal with them. To improve the readability I will give the input in Mathematica format, but the output of Mathematica will be translated into LaTeX. If the user just wants to see the expressions without modifying them, it is also possible to generate the entire LaTeX output for the model and read the pdf as shown in the previous section.

5.2.1. Tadpole Equations

We start with the tadpole equations. All four minimum conditions are returned via

and read as follows. I saved them in a new list TadEquations which we will use in the following. Alternatively, one can also use the content of TadpoleEquations[EWSB] which contains all tadpole equations after EWSB separated by commas. To get the same TadEquations as above, we make an equation out of every entry in TadpoleEquations[EWSB] by the following.

Let us gain some understanding of these equations. A convenient choice is to solve them for the two μ-type parameters and the corresponding bilinear soft-terms. For simplicity we do this by restricting ourselves to the real case (conj[x_]->x) and working in the triangle basis where one off-diagonal gauge coupling disappears as follows.

In addition, we introduced new parameters for the two bilinear soft-terms. There are mainly two reasons for this: (i) in that way we can solve the equations for the μ-type parameters; (ii) if X and B[X] appear in the equations, Mathematica interprets B[X] as a function of X instead of as an independent parameter. The consequence is that it cannot solve the equations. To circumvent this, a replacement like B[X]->BX is necessary.

For the moment we are just interested in the solutions for the two μ-type parameters. They read as follows. For a better understanding, we can make the approximation of vanishing kinetic mixing: in this limit, as we will see below, the vector boson masses take their standard form. In addition, we express the ratios of the VEVs by tan β (TB) and tan β′ (TBp) as follows.

We find quite simple expressions. The first one is just that of the MSSM; thus, any new contribution to the μ-term comes only from gauge-kinetic mixing. The equation for the new sector looks very similar. However, the large ratio between the Z′ and Z masses gives a much stronger constraint: for radiative symmetry breaking, tan β′ is usually restricted to be close to 1 to minimize the negative contributions. We can check this by fixing the remaining parameters to sample values of the TeV scale; the solution is then just a function of the two remaining parameters. Using the ContourPlot function of Mathematica

we get the plot shown in Figure 6. We see that the values quickly drop, and the entire region where the solution turns negative is ruled out.

5.2.2. Mass Matrices

We turn now to the mass matrices. First, I should verify that our approximation for the vector boson masses in the limit of vanishing gauge-kinetic mixing, used to obtain (119)-(120), is correct. To do that we check the vector boson mass matrix. That is done via the following

where all three possibilities (VectorBoson = VP, VectorBoson = VZ, and VectorBoson = VZP) return the same mass matrix. This matrix is block diagonal in the limit of vanishing kinetic mixing, with an upper 2 × 2 block which is identical to that of the SM. Thus, we can see that gauge-kinetic mixing is an important effect in this model because it leads to mixing already at tree-level.
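The size of the induced Z–Z′ mixing can be illustrated with a toy 2 × 2 block of such a mass-squared matrix. The sketch below is in Python rather than Mathematica, and the numbers (MZ ≈ 91 GeV, MZ′ ≈ 2.5 TeV, off-diagonal entry of (50 GeV)²) are purely illustrative, not B-L-SSM output:

```python
import math

def zzp_mixing_angle(m2_zz, m2_zzp, m2_zpzp):
    """Mixing angle of the symmetric 2x2 mass-squared matrix
    [[m2_zz, m2_zzp], [m2_zzp, m2_zpzp]]."""
    return 0.5 * math.atan2(2.0 * m2_zzp, m2_zpzp - m2_zz)

# Illustrative numbers: MZ ~ 91 GeV, MZ' ~ 2.5 TeV
m2_z, m2_zp = 91.2**2, 2500.0**2

# Without kinetic mixing the off-diagonal entry vanishes -> no Z-Z' mixing
no_mixing = zzp_mixing_angle(m2_z, 0.0, m2_zp)

# A small off-diagonal entry, as induced by the kinetic mixing,
# gives only a tiny angle because of the large mass splitting
theta = zzp_mixing_angle(m2_z, 50.0**2, m2_zp)  # roughly 4e-4
```

This makes the qualitative point of the text explicit: the mixing angle is suppressed by the ratio of the off-diagonal entry to the Z′ mass squared.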

In the scalar sector the mass matrix for the CP odd scalars is printed via

and reads as follows, including the gauge fixing contributions. We see that the matrix is block diagonal. That means that there is no mixing between the CP odd components of the Higgs doublets and bileptons. However, this statement is only strictly true at tree-level. Therefore, we kept the mixing of both states in the model definition. One self-consistency check that the Goldstone degrees of freedom appear correctly is quickly done: this matrix should have two zero eigenvalues in Landau gauge. For this purpose, and for simplicity, we solve the tadpoles with respect to the soft-breaking scalar masses and plug the solution into the pseudoscalar mass matrix. In addition, we go to Landau gauge (RXi[_] -> 0).

In the last line, we performed one more replacement. The outcome is a handy list of eigenvalues. So, we find the two expected massless modes. At tree-level the physical pseudoscalars have the same expressions as in the MSSM after replacing the corresponding VEVs and B-terms for the new sector.
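The same consistency check can be mimicked numerically for the MSSM-like doublet block: in Landau gauge it is proportional to Bμ and has one exact zero eigenvalue, with the massive mode at Bμ(tan β + cot β). A small Python sketch with toy numbers (not the full B-L-SSM matrix):

```python
import math

def pseudoscalar_eigenvalues(Bmu, tb):
    """Eigenvalues of the MSSM-like CP-odd doublet block in Landau gauge,
    Bmu * [[tb, 1], [1, 1/tb]], computed from trace and determinant."""
    a, b, d = Bmu * tb, Bmu, Bmu / tb
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0

m_goldstone, m_A2 = pseudoscalar_eigenvalues(1.0e5, 10.0)
# det = Bmu^2 - Bmu^2 = 0 -> one exact Goldstone mode
assert abs(m_goldstone) < 1e-6
# massive mode: m_A^2 = Bmu * (tb + 1/tb)
assert abs(m_A2 - 1.0e5 * (10.0 + 0.1)) < 1e-5
```

The vanishing determinant is what guarantees the massless Goldstone mode, exactly as the symbolic check in the SARAH session does for the full matrix.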

The scalar mass matrix is given by

and is a bit more complicated. We parametrize it as follows. One can see that this matrix is in general not block diagonal: that is, there is already a mixing between the MSSM doublets and the bileptons at tree-level. However, all terms coupling the two sectors are proportional to the kinetic mixing couplings; that is, this mixing is only visible if gauge-kinetic mixing is taken into account. That is another reason why gauge-kinetic mixing is in general a very important effect in this model. We can also try to get an estimate of the size of this mixing. For this purpose, we plug the solution of the tadpole equations into the Higgs mass matrix and fix numerical values of the TeV scale for all but two parameters.

numMhh is now the mass matrix which just depends on the bilepton VEV and the off-diagonal gauge coupling. We can write a simple function which diagonalizes this mass matrix for given values of these two parameters. Furthermore, this function extracts the two lightest masses as well as the bilepton admixture of the lightest eigenstate and returns these three values as follows.

We can use this new function with the ContourPlot command of Mathematica as follows.

where NUMBER = 1, 2, 3 should be used to get the three plots shown in Figure 7. We see from these plots that the mixing is especially large when both states are close in mass, where it can easily become sizable.
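The qualitative behaviour seen in these plots can be reproduced with a toy two-state system. The function below (illustrative Python, with made-up mass values) diagonalizes a generic symmetric 2 × 2 mass-squared matrix and returns the admixture of the second state in the lightest eigenstate:

```python
import math

def two_lightest_and_admixture(m11, m22, m12):
    """Diagonalize [[m11, m12], [m12, m22]] (mass-squared entries);
    return the two masses and the '2'-admixture of the lightest state."""
    tr, det = m11 + m22, m11 * m22 - m12 * m12
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam1, lam2 = (tr - disc) / 2.0, (tr + disc) / 2.0
    theta = 0.5 * math.atan2(2.0 * m12, m22 - m11)
    return math.sqrt(lam1), math.sqrt(lam2), math.sin(theta) ** 2

# Far apart in mass: tiny admixture
_, _, far = two_lightest_and_admixture(125.0**2, 800.0**2, 50.0**2)
# Nearly degenerate: admixture approaches the maximal value 0.5
_, _, near = two_lightest_and_admixture(125.0**2, 126.0**2, 50.0**2)
assert far < 0.01 and near > 0.3
```

For a fixed off-diagonal entry, the mixing grows as the diagonal entries approach each other, which is exactly the pattern visible in Figure 7.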

We can now turn to the sfermion sector. The matrices there are usually quite lengthy. I just want to pick out one important effect which can be seen, for instance, in the diagonal entries of the charged sleptons. The corresponding entry in the mass matrix of the sfermions (Se) is shown via the following.

We can rewrite the terms under the assumption that only third generation Yukawa couplings are nonzero. For this purpose, we expand the sum and set all entries of the Yukawa matrix to zero except the (3,3) one. In addition, we assume for simplicity that gauge-kinetic mixing vanishes, and we can use the usual replacements for the VEVs as follows.

The entries then read as follows. The important point is the appearance of the Z′ mass, which gives large negative contributions because the lower limit on this mass is about 2.5 TeV. Thus, tan β′ must be close to 1 to minimize this term and to prevent tachyons.

The last mass matrix we want to check is the one of the neutralinos. This mass matrix is returned by

and reads as follows. The upper block is the one known from the MSSM. The lower block is its counterpart in the new sector of this model. Here, we make a similar observation as for the scalar Higgs mass matrix: both blocks are only coupled if gauge-kinetic mixing is taken into account.

In the same way all other mass matrices of the model can be checked with SARAH and interesting observations can be made like the CP even and odd mass splitting for the sneutrinos.

5.2.3. Vertices

Single Vertices. We continue with vertices and show how they can be handled in SARAH. Let us assume one is interested in the coupling between two up-quarks and the neutral CP even scalars:

The result involves the rotation matrices shown in Table 6, the generation indices of the external states, and the colour indices of the quarks. One sees that only the projection on the second gauge eigenstate contributes, which corresponds to the up-type Higgs. Thus, this is the same vertex as in the MSSM and the new sector does not contribute here. That is different if one considers, for instance, the neutrino-scalar vertex:

The returned expression also contains projections on the third gauge eigenstate, which come from the additional neutrino Yukawa term in the superpotential. In general, many vertices get modified with respect to the MSSM and a discussion of all effects is far beyond the scope of this paper. I just want to pick out one more vertex: the electron-Z′ interaction, as follows.

We find that this vertex receives important modifications due to the gauge boson mixing. Working in the triangle basis, the contributions from the off-diagonal gauge coupling vanish. However, compared to the SM expectation the coupling still gets modified by the presence of the Z–Z′ mixing angle. This gives strong constraints on that angle. That is of course the case for any Z′-interaction in this model.

All Vertices. It has been already mentioned that it is also possible to calculate all vertices at once. The command to do this is as follows.

This creates lists

for the different generic types of vertices ($TYPE = SSS, SSSS, SSV, SSVV, SVV, FFS, FFV, VVV, VVVV, GGS, and GGV). One can also play a bit with these lists. For instance, to get all fermion interactions with the Z′ one can use the Select command of Mathematica as follows.

Similarly, all scalar interactions can be extracted where the off-diagonal gauge couplings show up.

If one compares the length of this list with the length of all four-scalar interactions in general

it turns out that this is the case for essentially any vertex. That is of course not surprising because of the D-term contributions, but it underlines the importance of this effect again.

5.3. Understanding the RGEs
5.3.1. Analytical Results

The full two-loop RGEs of the B-L-SSM are calculated just via

The options for CalcRGEs are as follows.
(i) TwoLoop: it defines whether two-loop RGEs should be calculated. This is done by default.
(ii) ReadLists: it defines whether the results from previous calculations should be read instead of calculating the RGEs again.
(iii) VariableGenerations: it defines whether the generations of some particles should be treated as free parameters. The RGEs are then expressed in terms of NumberGenerations[X], where X is the name of the superfield.
(iv) NoMatrixMultiplication: it can be set if the RGEs should not be expressed in terms of matrix multiplication but by using sums over indices.
(v) IgnoreAt2Loop: it can be used to define parameters which should be put to zero in the two-loop calculation.
(vi) WriteFunctionsToRun: it defines whether a file should be written to evaluate the RGEs numerically in Mathematica. This is done by default and we are going to make use of it.
When the calculation is finished, the full two-loop RGEs are saved in different arrays:
(i) Gij: anomalous dimensions of all chiral superfields;
(ii) BetaYijk: trilinear superpotential parameters;
(iii) BetaMuij: bilinear superpotential parameters;
(iv) BetaTijk: trilinear soft-breaking parameters;
(v) BetaBij: bilinear soft-breaking parameters;
(vi) Betam2ij: scalar squared masses;
(vii) BetaMi: gaugino masses;
(viii) BetaGauge: gauge couplings;
(ix) BetaVEVs: VEVs.
BetaQijkl, BetaWijkl, BetaLi, BetaSLi, BetaDGi, and BetaFIi are empty in this model. All lists are three-dimensional arrays where each entry gives (i) the parameter, (ii) the one-loop β-function (up to a loop factor), and (iii) the two-loop β-function (up to a loop factor). To check the order in which the RGEs for the trilinear superpotential parameters are given, one can use

which returns

Thus, if we want to see the one-loop RGE for the electron Yukawa coupling we have to check

One sees that gauge-kinetic mixing is also taken into account here. In the limit of vanishing kinetic mixing this reproduces the well-known MSSM result. Maybe more interesting are the new features in the gauge sector. To get all one-loop RGEs at once, we can execute

which returns the corresponding β-functions. The expressions for the non-abelian couplings are just the MSSM results, but the abelian sector gets modified by gauge-kinetic mixing. Solving these equations analytically is no longer possible, but we are going to study them numerically.

Finally, we can also check the analytical expressions for the soft-breaking terms. We will do this using the example of the bilepton masses which are given in the last two entries of Betam2ij. Even more interesting is the difference between both β-functions: we see from (120) that a large mass splitting between both masses is needed to get radiative symmetry breaking. Thus, starting with the same values at the GUT scale, the differences in the β-functions are crucial for whether the new symmetry is broken or not. To see the difference, we can use the following:

The result contains abbreviations for frequently appearing traces over scalar masses; these can be found in TraceAbbr. If one starts with a model in which all scalar masses unify, as we have in mind according to Section 4.2, the corresponding trace vanishes. This is an RGE invariant and always holds. Thus, the only possibility to get a large mass splitting is large values for the new Yukawa couplings and the associated soft terms. We will come back to this when we study the model with SPheno.

5.3.2. Numerical Results

The RGEs for the gauge couplings demand a closer look. However, the analytical expressions are already a bit complicated at one-loop and it is hard to learn something from them. Before turning to the full numerical analysis of the entire set of RGEs with SPheno, there is the possibility to study the RGEs in Mathematica first: CalcRGEs creates a file which contains all β-functions in a format which can be used with NDSolve in Mathematica to solve the RGEs. This file is loaded via

In addition, a function is provided to perform the RGE running.

The input is as follows:
(i) values: all nonzero values for parameters at the scale where the running starts;
(ii) start: logarithm of the scale where the running starts;
(iii) finish: logarithm of the scale where the running should end;
(iv) Options: optionally the two-loop contributions can be turned off (TwoLoop->False).
Since the one-loop running of the SM-like gauge couplings is the same as in the MSSM in the limit of vanishing gauge-kinetic mixing, we should find the same unification at about 10^16 GeV. This can be tested via

Here, I used as input for the SM gauge couplings the values found when including the thresholds discussed in Section 3.6.3 with SUSY states of about 1 TeV. The plot created via these two commands is shown in Figure 8 on the left. We find a value of about 0.7 for the gauge couplings at the unification scale.
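This behaviour can be cross-checked outside of SARAH with the well-known analytic solution of the one-loop gauge RGEs, using the MSSM coefficients b = (33/5, 1, −3) for the GUT-normalized hypercharge, weak, and strong couplings. The sketch below is in Python, and the 1 TeV inputs are rough illustrative numbers, not the precise threshold-corrected values:

```python
import math

# One-loop MSSM beta coefficients (GUT-normalized hypercharge)
B_COEFF = (33.0 / 5.0, 1.0, -3.0)

def run_gauge(g0, b, q0, q):
    """Analytic one-loop running: 1/g^2(Q) = 1/g0^2 - b/(8 pi^2) ln(Q/Q0)."""
    inv = 1.0 / g0**2 - b / (8.0 * math.pi**2) * math.log(q / q0)
    return 1.0 / math.sqrt(inv)

# Rough illustrative inputs at 1 TeV
g_susy = (0.46, 0.63, 1.06)
g_gut = [run_gauge(g, b, 1.0e3, 2.0e16) for g, b in zip(g_susy, B_COEFF)]

# All three couplings meet near 0.7, as in Figure 8 (left)
assert max(g_gut) - min(g_gut) < 0.03
assert all(0.6 < g < 0.75 for g in g_gut)
```

Even this crude one-loop estimate reproduces the unification value of about 0.7 found in the full numerical running.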

We can now use this value, demand strict unification of the gauge couplings, and run the RGEs back down.

We can check what we get for the hypercharge coupling at 1 TeV. Actually, the naive try

returns 0.476. That is quite a bit away from the input value of 0.46 we started with. The reason is that we missed performing the rotations to go to the correct basis; see (93). What we have to associate with the physical hypercharge coupling is as follows.

This indeed returns exactly the input value of 0.46. We can now plot the proper couplings via

The plot is shown on the right in Figure 8. One sees here that the off-diagonal coupling gets negative. The values of the five physical running couplings at 3 TeV are as follows.

Here, the GUT normalization is still included for the couplings; the non-normalized values are given as well. With the same procedure we could now also analyse the running of the superpotential terms and the soft-masses. An estimate of the running gaugino masses based on universal GUT values is obtained by

and we find

The hierarchy is similar to the CMSSM, but the soft mass of the new gaugino is smaller than all other gaugino masses. So, it might be that this particle is a new dark matter candidate. I will come back to that in the next section. We also see that the off-diagonal terms run negative, similar to the off-diagonal coupling. In the limit of vanishing kinetic mixing it is easy to explain the hierarchy of the gaugino masses: combining (42) and (52) we see that the ratio of each gaugino mass to the square of the corresponding gauge coupling is constant at one-loop. Plugging in the numbers from (136), we get the gaugino mass terms. The values for the abelian sector are a bit different because (137) does not include the effect of gauge-kinetic mixing. Nevertheless, there is a nice agreement with the numerical results.
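Since M_i/g_i² is constant at one-loop, the low-scale gaugino masses follow directly from the running couplings. A small Python sketch with illustrative numbers (universal m_1/2 = 1 TeV at the GUT scale, g_GUT ≈ 0.7, low-scale couplings roughly as in the running above):

```python
def gaugino_mass(m12, g_low, g_gut):
    """M_i/g_i^2 is one-loop RGE invariant -> M_i(Q) = m12 * (g_i(Q)/g_GUT)^2."""
    return m12 * (g_low / g_gut) ** 2

# Illustrative values, not SPheno output
m1 = gaugino_mass(1000.0, 0.46, 0.7)  # bino-like, ~430 GeV
m2 = gaugino_mass(1000.0, 0.63, 0.7)  # wino-like, ~810 GeV
m3 = gaugino_mass(1000.0, 1.06, 0.7)  # gluino, ~2.3 TeV

# CMSSM-like hierarchy M1 < M2 < M3
assert m1 < m2 < m3
```

The abelian masses obtained this way are only approximate because the invariant relation is modified once gauge-kinetic mixing is switched on.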

6. Example—Part III: Mass Spectrum, Decays, Flavour Observables, and Fine-Tuning with SPheno

6.1. Calculating the Mass Spectrum with SPheno

We now start to make use of the different outputs SARAH provides to use the derived information about a model with other tools. Maybe the most important interface is the one with SPheno, which gives a very flexible, fully functional, and highly precise spectrum generator for the model under consideration. A similar functionality to get a tailor-made spectrum generator based on SARAH became available with FlexibleSUSY.

Before we can use SPheno we have to provide an additional input file for SARAH. I will start with a description of what this file is supposed to do.

6.1.1. Defining the Boundary Conditions

In general, there are two different kinds of SPheno versions the user can create which need a different amount of input.
(i) GUT Version. In a GUT version of SPheno a RGE running between the electroweak, SUSY, and GUT scales is supported. The user can define appropriate boundary conditions at each of these three scales. Furthermore, threshold effects can optionally be included by adding additional scales where heavy particles are integrated out. Finally, the user can define a condition which has to be satisfied to identify the GUT scale. The most common choice is the unification scale of the gauge couplings, but other choices like Yukawa unification are possible. In addition, these versions also include the possibility to define the entire input at the SUSY scale and skip the RGE running to the GUT scale.
(ii) Low Scale Version. In a low scale version no RGE running is included, but the SPheno version expects all free parameters to be given at the SUSY scale.
I concentrate on the first option because we are interested in a GUT model. Actually, the input for the second version is much shorter and can easily be derived from the information given here. The file to define the properties should be called SPheno.m and must be located in

as well. The entire file is shown in Appendix C.4 and I will discuss here the main parts of it.

Input Parameters. We have collected in (108) a list of all input parameters we want to use. These parameters should be given to SPheno in our numerical studies via a Les Houches input file. We make the choice that all input parameters which are not matrices are included in the block MINPAR of the Les Houches input file. This can be done via

The exact meaning of this definition will become clear when we discuss the Les Houches input in Section 6.1.3. In addition, it would also be possible to use the array EXTPAR to define input which will be given via the block EXTPAR in the Les Houches file. This might be useful to remove parameters not present in the MSSM from MINPAR. In that case the corresponding lines in SPheno.m would read as follows.

SARAH is completely agnostic concerning official SLHA conventions for the MINPAR and EXTPAR blocks [292, 293]. It just refers to the definition as given by the user. However, for models which are covered by SLHA it might be helpful to stick to the corresponding conventions.

In any case, three of the input parameters are always real. This information is set via

This definition is especially important for the two angle-type parameters because we will use trigonometric functions with these parameters as arguments. If they were not defined as real, Fortran might return NaNs.

Tadpole Equations. We choose to solve the tadpole equations with respect to , , , and .
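In the B-L-SSM a natural choice is the two μ-like superpotential parameters together with their soft counterparts, so the corresponding entry in SPheno.m could read (a hedged sketch; the symbol MuP for the bilepton μ'-term is an assumption):

```mathematica
(* Sketch: solve the four tadpole equations analytically for the
   mu-terms and their soft SUSY-breaking counterparts *)
ParametersToSolveTadpoles = {\[Mu], B[\[Mu]], MuP, B[MuP]};
```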

SARAH then uses the Solve command of Mathematica to get the analytical solutions and exports them to Fortran code. For other choices of parameters, no analytical solution might exist. In these cases it is possible to solve the equations numerically during the run with SPheno; I give some more information about this in Appendix A.4.1. One could also give solutions based on simplified assumptions, or define assumptions which should be used when solving the equations. This is briefly discussed in Appendix A.4.2.

Renormalization Scale. SPheno can either be used with a fixed renormalization scale, or it calculates the scale dynamically. A convenient choice is often the geometric mean of the two stop masses. In our case the stops are part of the six generations of up-squarks. Since we always have large A-terms, the stops will always correspond to the lightest and the heaviest of the mixed states. So we can build the renormalization scale from the masses of the lightest and heaviest up-squark. If the hierarchy of the up-squark masses is not clear, another possibility would be to give an analytical expression for the determinant of the stop mass matrix as renormalization scale. This is, for instance, done in the MSSM by setting

Here, we skipped the D-terms which are negligible in the MSSM. However, this no longer holds in the B-L-SSM because of the new contributions from the bilepton VEVs. Hence, the definition of the renormalization scale using that approach would be even a bit more lengthy. In this model we can keep the simple expression based on the up-squark masses.

In the very first iteration, when the stop masses are still unknown, SPheno needs a crude first guess for the renormalization scale. We choose a simple combination of the GUT-scale input parameters; however, also a constant like 1 TeV could be used. The two lines to define both scales for the B-L-SSM are
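A sketch of these two definitions (the exact conventions, e.g. whether the scale is defined squared, should be checked against the SARAH manual):

```mathematica
(* Sketch: dynamical scale from the lightest and heaviest up-squark
   (the stop-like states), plus a crude first guess built from the
   GUT-scale inputs *)
RenormalizationScale = MSu[1]*MSu[6];
RenormalizationScaleFirstGuess = m0^2 + 4 m12^2;
```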

GUT Condition. The condition for the GUT scale in our model is the equality of the hypercharge and SU(2) gauge couplings. However, because of gauge-kinetic mixing one has to be careful to choose the correct hypercharge coupling: in the running the general gauge coupling matrix is used, that is, for the check of the GUT condition it is necessary to rotate it to the triangle form.

We demand that gauge-kinetic mixing vanishes at the GUT scale, so this condition then simplifies to g1 == g2. Nevertheless, one should keep the full form to stabilize the numerics in the first few iterations.
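Such a condition could be written as follows; the explicit rotation to the triangle basis shown here is illustrative and should be checked against the model file:

```mathematica
(* Sketch: unification of the hypercharge coupling, rotated to the
   triangle basis, with the SU(2) coupling *)
ConditionGUTscale = (g1*gBL - g1BL*gBL1)/Sqrt[gBL^2 + gBL1^2] == g2;
```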

Boundary Conditions. Now, we define the boundary conditions at the GUT scale. First, we make sure again that the gauge coupling matrix is in the triangle form. In order to further stabilize the numerics in the first iterations, where the unification might not yet be very good, we average the two abelian gauge couplings before we set the couplings in the abelian sector and the off-diagonal ones. All other entries just parametrize in an obvious form the boundary conditions from (103) to (107).
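An abbreviated sketch of such boundary conditions is shown below; only representative entries are given (the averaging prescription and the full list of soft terms must be taken from the actual model file):

```mathematica
(* Abbreviated sketch of GUT-scale boundary conditions; the full
   list sets all soft terms from (103)-(107) *)
BoundaryHighScale = {
   {g1, Sqrt[(g1^2 + gBL^2)/2]}, (* average the abelian couplings *)
   {gBL, g1},
   {g1BL, 0}, {gBL1, 0},         (* no kinetic mixing at the GUT scale *)
   {T[Yu], Azero*Yu},            (* trilinears: A0 times the Yukawas *)
   {mq2, DIAGONAL m0^2},         (* universal soft scalar masses *)
   {mHd2, m0^2}, {mHu2, m0^2},
   {MassB, m12}, {MassWB, m12},  (* universal gaugino masses *)
   {MassG, m12}, {MassBp, m12}
};
```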

Note the keyword DIAGONAL and the usage of the parameters defined via MINPAR.

At the SUSY scale we again rotate the gauge couplings to the triangle basis because the off-diagonal entry will not be zero any more due to RGE effects. In addition, we calculate the bilepton VEVs from the corresponding input values. Finally, the input parameters for Yx and Yv are used here. Since Yx and Yv are matrices, it is not possible to define them via MINPAR or EXTPAR. Therefore, LHInput[x] is used. With this command, SPheno expects the parameters to be given via the blocks YXIN and YNUIN in the Les Houches file.
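A minimal sketch of the corresponding entries (only the matrix-valued inputs are shown; the rotation of the gauge couplings and the calculation of the bilepton VEVs are part of the same list in the full file):

```mathematica
(* Sketch: matrix-valued inputs at the SUSY scale are read from the
   Les Houches blocks YXIN and YNUIN via LHInput *)
BoundarySUSYScale = {
   {Yx, LHInput[Yx]},
   {Yv, LHInput[Yv]}
   (* the triangle rotation of the gauge couplings and the bilepton
      VEVs are set by further entries of this list *)
};
```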

The boundary conditions at the EWSB scale are similar to the ones at the SUSY scale and are skipped here. To further stabilize numerics in the first run up to the GUT scale, it is useful to give approximate values for the initialization of the gauge couplings which are not present in the SM.

Decays. Finally, we tell SARAH that it should make use of the default conventions to write code to calculate two- and three-body decays with SPheno.
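The corresponding lines in SPheno.m are short; a sketch consistent with the default conventions:

```mathematica
(* Sketch: use SARAH's default particle lists for the two- and
   three-body decay routines *)
ListDecayParticles   = Automatic;
ListDecayParticles3B = Automatic;
```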

By convention, the following decays are included that way: (i) all two-body decays of SUSY particles, Higgs states, the top quark, and additional vector bosons and (ii) three-body decays of SUSY fermions into three other fermions and decays of SUSY scalars into another scalar and two fermions.

Precision. We are mostly going to neglect neutrino masses in the following. However, if a study of the neutrino phenomenology should be included, it might be necessary to calculate the masses of the neutrino eigenstates with a higher precision than is usually done in SPheno. The reason is the potentially large hierarchy between left and right neutrinos. Details about this are given in Appendix A.3.

6.1.2. Obtaining and Running the SPheno Code

To obtain the SPheno output, we run the following command in Mathematica after SARAH is loaded and the B-L-SSM is initialized.
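In its simplest form this is just (the options listed below are optional):

```mathematica
(* Generate the Fortran code for SPheno for the loaded model *)
MakeSPheno[];
```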

The different options are as follows. (i) ReadLists->True: it can be used if all vertices and RGEs have already been calculated for the model and the former results should be used. (ii) InputFile: it defines the name of the SPheno input file. By default SPheno.m is used. (iii) StandardCompiler->Compile: it defines the compiler which should be set as standard in the Makefile. Default is gfortran. (iv) IncludeFlavorKit: it can be used to disable the output of flavour observables based on FlavorKit. When executing MakeSPheno, SARAH first calculates all information which it needs: that is, it is not necessary that the user has calculated vertices or RGEs before. When SARAH is done, the source code for SPheno is stored in $SPATH/SARAH/Output/B-L-SSM/EWSB/SPheno/. The compilation of this code is done as follows: one has to enter the directory of the SPheno installation, a new subdirectory has to be created, and the code must be copied into this directory.

Afterwards, the code is compiled via

and a new executable SPhenoBLSSM is created which we will use in the following.

6.1.3. Setting the Input for SPhenoBLSSM

An input file, by default called LesHouches.in.BLSSM, is needed to run SPhenoBLSSM. SARAH writes a template for that file which has been copied to the BLSSM subdirectory of SPheno together with the Fortran code. We move it to the root directory of SPheno as follows.

By doing this we can now work from the SPheno main directory and do not have to give the file as an argument when running SPheno. Thus, SPheno can be called via

Alternatively, one can keep the Les Houches file in the BLSSM subdirectory and work with it via

However, before we can run SPheno we first have to define the input parameters: for that purpose we have to fill the template written by SARAH with numbers. I am going to discuss the different blocks appearing in the Les Houches file briefly. More details especially about the block SMINPUTS can also be found in the official references for SLHA [292, 293].

At the very beginning, the block MODSEL is given.

This block fixes the general setup the user wants to use. Flag 1 defines if a GUT scale input is used (1) or all parameters should be given at the SUSY scale (0). Flag 2: in principle, it is possible to define in SPheno.m different boundary conditions for the GUT input, for instance, if different SUSY breaking mechanisms should be studied; this flag can be used to choose one. Since we have not made use of that, this flag has no effect here. Flag 5: if put to 1, CP violation is allowed and the phase of the CKM matrix is included. Flag 6: if put to 1, flavour violation is allowed and all off-diagonal entries in soft or superpotential parameters can receive nonzero values; if put to 0, the CKM matrix is taken to be the identity matrix. Flag 12 can be given optionally to fix the SUSY scale to a constant value. The block SMINPUTS contains all important values for the SM parameters like the Fermi constant G_F, the strong coupling constant α_s, the pole mass of the Z boson, and the third-generation fermion masses m_t, m_b, and m_τ. Also other parameters can be set as explained in the SLHA write-ups, but this is usually not necessary.
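A MODSEL block implementing the choices made here (GUT input, no CP or flavour violation) could look like this; the values are illustrative:

```
Block MODSEL      #
 1 1              # 1: GUT scale input, 0: SUSY scale input
 2 1              # set of boundary conditions defined in SPheno.m
 5 0              # no CP violation
 6 0              # no flavour violation, CKM = identity
```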

We turn now to the input to fix the parameter point we want to study. I have chosen two points as examples, EP1 and EP2, with slightly different input. All necessary input parameters are given in Table 7. The neutrino Yukawa coupling is highly constrained by neutrino data and we can ignore it here for our purposes; it will become important if the user wants to study lepton flavour violation, for instance. The input is highly motivated by the observations we made at the analytical level: we need an appropriate hierarchy of small and large parameter values to break U(1)_B-L radiatively. One can test that the chosen values in Table 7 are close to the Landau pole; when using even larger values SPheno will stop with an error message. The values for EP1 are set in the Les Houches file via

Finally, we have some more switches in the block SPhenoInput as follows.

These flags can be used to adjust the calculations done by SPheno and the output. All possible flags are listed in Appendix B. I explain here just the most important ones. (i) Loop Level. To turn off all loop corrections to the masses, flag 55 is put to 0. The two-loop corrections in the Higgs sector are turned on/off by flag 7. Flag 8 chooses the method to calculate the two-loop contributions: 1, purely numerical effective potential calculation; 2, semianalytical effective potential calculation; 3, diagrammatic calculation; 8/9, results from the literature (if available). (ii) Decays and Branching Ratios. All decays are turned on/off via flag 11, while three-body decays can be adjusted via flag 13. (iii) Precision Observables. The calculation of precision and flavour observables is turned on and off using flag 57. (iv) Gauge-Kinetic Mixing. SPheno can be forced to ignore gauge-kinetic mixing by setting flag 60 to 0. (v) Output for Other Codes. The output of HiggsBounds and HiggsSignals input files is switched on/off via flag 76, while flag 75 is responsible for the output of the parameter file for WHIZARD. The Vevacious specific blocks in the spectrum file are included or excluded with flag 530. (vi) Fine-Tuning. The calculation of the fine-tuning is skipped when setting flag 550 to 0.
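Collecting the flags just discussed, the relevant part of the input file could look like this; the values shown are one possible, illustrative choice:

```
Block SPhenoInput #
 7 1              # two-loop corrections in the Higgs sector
 8 3              # two-loop method: 3 = diagrammatic calculation
 11 1             # calculate decays and branching ratios
 13 3             # three-body decays (illustrative value)
 55 1             # loop corrections to all masses
 57 1             # precision and flavour observables
 60 1             # include gauge-kinetic mixing
 75 1             # WHIZARD parameter file
 76 1             # HiggsBounds/HiggsSignals input files
 530 1            # Vevacious blocks in the spectrum file
 550 1            # fine-tuning calculation
```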

6.1.4. Calculating the Spectrum and Higgs Couplings with SPheno

When the input file is filled with numbers we can run the point as explained above. The entire output including all parameters, masses, branching ratios, and low-energy observables is saved by default in SPheno.spc.BLSSM. This file is rather lengthy and contains a lot of information. I will pick some parts of it and discuss them.

Sometimes, it is convenient to work with input and output files with other names. In that case, the names can be used as argument for SPhenoBLSSM. For instance, we can make two input files for the points EP1 and EP2 and run them via

The entire output is written to the files given as second argument, that is, SPheno.spc.BLSSM_EP1 and SPheno.spc.BLSSM_EP2.

The Running Parameters and Mass Spectrum for EP1. First, we check if gauge coupling unification at about 10^16 GeV remains. From the block gaugeGUT we see that the unification scale is a bit higher than in the MSSM and that we have no strict unification here because there is a small offset between the couplings. This behaviour is also known from the MSSM, and one assumes that higher-order corrections as well as threshold corrections from superheavy particles restore exact unification once taken into account.

The running gauge couplings at the SUSY scale are given in the block Gauge. The SUSY scale Q shown in the head of the block is about 3 TeV: that is, the stop masses used to fix the scale are quite heavy. That is a consequence of the large parameter values we used to get radiative breaking. Note that the hypercharge coupling in this block is the value without GUT normalization. Thus, it is related to the value used in Section 5.3.2 by the GUT normalization factor √(5/3). We also find an off-diagonal coupling of −0.11, close to what we expected from our estimates with Mathematica; see (136).

Many more blocks appear after the one for the gauge couplings and contain all other running parameters at the SUSY scale. We just want to take a look at two other blocks: BL and MSOFT. The first one contains the VEVs and soft-terms of the B-L sector, and the second one contains the soft-terms of the Higgs and gaugino masses from the MSSM part.

We see here that the soft Higgs mass is negative, as it is expected to break the electroweak symmetry. However, both soft-terms for the bileptons are actually positive and one might wonder if U(1)_B-L is really broken radiatively. It is broken, because the full condition is not that a soft-term has to be negative, but that the mass matrix has a negative eigenvalue in the limit of vanishing VEVs. For the lower block of the scalar mass matrix this condition can be written down explicitly; because of the large value of the off-diagonal entry the condition is fulfilled for this point and U(1)_B-L is broken. Another observation is that the blino soft mass is smaller than the bino one, as we already expected from Section 5.3.2. Actually, it is also smaller than the other gaugino masses and the μ-term not shown here. So, one would expect that the lightest neutralino is a blino. However, we will see below that this is not the case.

After all blocks with the running parameters, all masses are printed in the block MASS.

The squarks (Su and Sd) are very heavy, and also the charged sleptons (Se) have masses well above 1 TeV. However, there are some light CP even sneutrinos (SvRe), while the CP odd ones are much heavier (SvIm). Thus, the mass splitting between the CP eigenstates of the sneutrinos is another interesting and important aspect of the B-L-SSM. Actually, scrolling down, we see that all other SUSY states like neutralinos (Chi) and charginos (Cha) are heavier than the lightest CP even sneutrino. Thus, the LSP, and therefore the DM candidate, is the lightest CP even sneutrino for this point. The mass hierarchy of the heavy neutrinos (Fv) reflects the input values of the corresponding Yukawa couplings. We also see that the Z′ mass is not exactly identical to the input. The reason is that the input is taken to be the tree-level mass in the limit of vanishing gauge-kinetic mixing, while here the full one-loop mass including all mixing effects is shown. However, the differences are very small and one usually does not have to worry about them.

In the Higgs sector, we have the lightest scalar (hh_1) in the mass range preferred by measurements. Also the second scalar is not much heavier. In contrast, the two heavier scalars as well as the two physical pseudoscalars (Ah) and the charged Higgs (Hpm) are all about 3 TeV.

This is maybe a good time to briefly comment on the importance of the loop corrections in this model. We can turn off the two-loop corrections via
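Per the SPhenoInput flags discussed above, this corresponds to a line like:

```
Block SPhenoInput #
 7 0              # turn off two-loop corrections in the Higgs sector
```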

and we get m_h1 = 115.9 GeV and m_h2 = 356.0 GeV. Turning off even the one-loop corrections via

we find m_h1 = 87.2 GeV and m_h2 = 353.9 GeV. The corrections for the lightest state are similar to those in the MSSM, and one can already guess from these numbers that this state is mostly an MSSM-like doublet. To confirm this guess, we can check the mixing matrix in the scalar sector, which is given in the block SCALARMIX as follows.

These are the entries of the rotation matrix and we concentrate here only on the first two eigenstates. We see that the bilepton admixture of the first eigenstate is very small, while the second one is nearly a pure bilepton. So, the mixing between both sectors is moderately small for this point because the masses are not too close.

The mixing matrix for the neutralinos is shown in the block NMIX as follows.

So, we see that the lightest state has a bino fraction of about 99%, only tiny wino and Higgsino fractions (the latter roughly 0.07%), a blino fraction of similar size, and finally a bileptino fraction of a bit less than 1%. This is surprising because we have seen above that the blino soft-term is smaller than the bino one. Why is the LSP then not a blino? To understand this, we can check the mixing of the second lightest neutralino. Since the default convention of SPheno is that all Majorana masses are positive, some entries of the rotation matrix are imaginary. Hence, we find the composition of the second lightest neutralino in IMNMIX as follows.

We see that in the B-L sector there is a large mixing between blino and bileptino. The reason is that the mixing entries are proportional to the large bilepton VEVs and not to the electroweak VEVs as for the MSSM part. Therefore, one does not have a pure blino that would be lighter than the bino. One can check that, in the limit of very heavy bileptinos, they decouple and the blino is indeed the LSP [199]. All other rotation matrices are given as well in the spectrum file but are not discussed further here.

Higgs Couplings. Other important pieces of information about the properties of all scalars and pseudoscalars for the given parameter point are shown in the two blocks HiggsBoundsInputHiggsCouplingsFermions and HiggsBoundsInputHiggsCouplingsBosons. These blocks give the coupling of all Higgs states in the model normalized to the SM expectation as defined in (82).

Thus, the values in these blocks for the lightest Higgs, which we associated with the SM-like state, should be close to one, while those for the second lightest Higgs are expected to be much smaller. This is exactly what we observe as follows.

Since the couplings squared to down-type quarks and leptons for the lightest Higgs are about 20% larger than in the SM, we can expect that this point does not explain the measurements too well. To quantify that we will use HiggsSignals later; see Section 7.1.2. These coupling ratios can also be used to get the production cross sections of the Higgs states at the LHC in the different channels by rescaling the SM results. This is done in the blocks HiggsLHC7 and HiggsLHC8. The SM cross sections for a given Higgs mass have been obtained by fitting the data from https://twiki.cern.ch/twiki/bin/view/LHCPhysics/CERNYellowReportPageAt7TeV and https://twiki.cern.ch/twiki/bin/view/LHCPhysics/CERNYellowReportPageAt8TeV. Since this fit is only valid in a finite range for the Higgs mass, the results are also only given for Higgs masses which lie in the fit range of each channel.

A Point with a Light Scalar. We want to briefly discuss the particularities of the point EP2 given in Table 7. To run this point, we only have to change three lines in the Les Houches input file compared to EP1 as follows.

The interesting aspect of this point is the Higgs sector which now contains a light scalar as follows.

This state is mostly a bilepton, of course. However, it has a doublet fraction of about 1%, as we can see from the scalar mixing matrix as follows.

Thus, it will be interesting to see if it passes all tests from Higgs searches. We are going to check that with HiggsBounds in Section 7.1. Another important aspect of this point is the behaviour of the Higgs masses at the different loop levels: at tree-level the lighter state is the doublet and the heavier one the bilepton. Thus, there is a level crossing at one loop compared to tree-level. The bilepton mass changes by nearly a factor of 3 when including radiative corrections. Therefore, to have some trust in the mass prediction, the two-loop corrections for the bilepton are also crucial for this point. These corrections are even a bit larger than for the MSSM-like particle and give a push of about 10 GeV.

6.2. Decay Widths and Branching Ratios

The SPheno modules generated by SARAH do not only calculate the mass spectrum and effective couplings of the Higgs scalars, but also provide functions for calculating decays. To adjust the output of the decays, the important flags in the Les Houches input are as follows.

With flag 11 the entire calculation of all decays can be turned on and off. Note that the calculation of at least two-body decays is essential for the HiggsBounds and HiggsSignals outputs discussed in Section 7.1. Flag 13 can be used to adjust the three-body decays separately: three-body decays of only fermions, of only scalars, or of both can be calculated. This can be helpful because the three-body decays, in particular those of scalars with their many channels, are quite time consuming and not always needed. With flag 12 a lower limit on the branching ratios to be shown in the SLHA file is given, while flag 15 gives a lower limit on the widths to be listed in the SLHA file.

For our discussion, we pick out some decays of specific states which give some impression of the main differences compared to the MSSM. For instance, we have seen that for EP1 the neutralino is actually not the LSP. So, it should decay. This is exactly what we find.

The width is very small. The reason is that it is doubly suppressed: by the small blino admixture of the neutralino and by the small right-handed fraction of the light neutrino.

The second lightest neutralino consists mostly of the B-L states; that is, its coupling to the second lightest Higgs (the bilepton) is larger than to the lightest one. This explains why it mostly decays into the second lightest scalar and not the lightest one despite the larger phase space.

Also the heavy neutrinos are expected to decay. Since these are mostly right-handed states the coupling to the lightest neutralino and sneutrino is much larger than for the light neutrino. Thus, the width of these states is much larger than the width of the lightest neutralino even if the involved particles seem to be similar.

Another interesting topic in these models is the decays of the Z′: new decay channels can significantly alter the width of the Z′ and have an impact on the mass limits from collider searches [200, 282]. However, for our point EP1 we see that 98% of the final states are SM fermions. The reason is that the important channels into right-handed neutrinos are kinematically forbidden.

This changes for EP2, where the Z′ is much heavier than the right-handed neutrinos. Here, the BR into right-handed neutrinos is about 10% and is much more important than the channels into sleptons, which are also kinematically allowed for a Z′ of this mass.

We go back to EP1 and change topics a bit: we are no longer interested in new effects compared to the MSSM, but we want to know how well the BRs of the light Higgs reproduce the SM expectations. The width and BRs for this point calculated by SPheno are as follows.

These values have to be compared with those of the SM for a Higgs mass of 124.2 GeV. The numbers for the SM can be found online at https://twiki.cern.ch/twiki/bin/view/LHCPhysics/CERNYellowReportPageBR3. The SM BRs are similar to the ones we got with SPheno but they do not agree exactly. Also, the expected width is smaller by about 10% than the one calculated by SPheno. The reason for this is the enhanced coupling of the Higgs to bottom quarks for this point, which was already visible from the HiggsBounds blocks discussed in the last subsection.

6.3. Flavour and Precision Observables

The SPheno modules written by SARAH already contain out of the box the routines to calculate many quark and lepton flavour violating observables. In addition, also other observables like the anomalous magnetic moments and electric dipole moments of the leptons are calculated. We are going to start with a short discussion of the results which can be obtained just by running SARAH and SPheno out of the box. In a second step, I show how the FlavorKit functionality [146] can be used to implement Wilson coefficients for new operators and how to use these coefficients to calculate new observables with SPheno.

6.3.1. Observables Out of the Box

To turn the calculation of low-energy observables on, the Les Houches input file must contain the following.
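Per the SPhenoInput flags discussed in Section 6.1.3, this amounts to a line like:

```
Block SPhenoInput #
 57 1             # calculate low-energy and flavour observables
```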

In that case the QFV and LFV observables are given in the blocks FlavorKitQFV and FlavorKitLFV. Further low-energy observables, like (g − 2) and the EDMs of the leptons, are shown in SPhenoLowEnergy. This block reads for EP1 as follows.

While the EDMs vanish because we did not include CP violation, the other observables do not receive a large contribution in this model because of the heavy SUSY spectrum. So, let us turn to the flavour observables.

All LFV rates are vanishingly small and can be interpreted as numerical zeros. For QFV there are, of course, the nonvanishing SM contributions which have to be taken into account as well. For QFV observables, SPheno does not only give the absolute size of the observable, like the corresponding branching ratio or mass splitting, but also gives the observable normalized to the SM expectation. For this purpose SPheno actually calculates each observable internally twice; in the second calculation all non-SM contributions are dropped. It is more convenient to confront the value normalized to the SM prediction with exclusion limits from experiment: by taking this ratio, uncertainties, for example, in the hadronic parameters, drop out. Also a constant shift just caused by missing higher-order corrections in SPheno does not lead to the false impression of a deviation from the SM as long as the ratio is close to 1. We see for our point that the SUSY and Z′ contributions change the predictions by at most 1.5% compared to the SM expectation. Thus, this point is in total agreement with all limits. The reasons for this are, of course, again the heavy sfermions in general and the weak coupling of the few light states. Even if these results are not very exciting, I show the entire output of SPheno for QFV observables to give an overview of what is calculated.

I have claimed above that we can neglect the neutrino Yukawa couplings for most studies because they are highly constrained and hardly have an impact on the calculation. To show this, we can try what happens if we turn on this coupling by using arbitrary values for some entries of Yv.

We find only a small impact on most masses and the QFV observables. However, the light neutrino masses are much too large.

Thus, it would be necessary to make the right-handed neutrinos much heavier to get a kind of seesaw suppression. However, also some flavour observables, in particular μ-e conversion, are then already in conflict with the experimental values shown in Table 8.

This brings us to the end of our short excursion into the flavour observables which are included in SARAH and SPheno. I discuss now what can be done if your favourite observable is not yet calculated by SPheno.

6.3.2. Adding New Observables

I show now how the FlavorKit functionality can be used to implement new observables in SPheno. To make it even more interesting we choose a process for which SPheno does not even know the Wilson coefficients: namely, we decide to study the flavour violating, radiative decays of the top quark, t → cγ and t → uγ. The process is interesting because it is highly suppressed in the SM by the GIM mechanism but can receive large contributions in SUSY models [300]. The transition amplitude can be parametrized by two form factors [301]. The partial width can be expressed in terms of these form factors, and the BRs we are interested in follow by normalizing to the total width of the top quark.

One can also work in the chiral basis using an effective Lagrangian with left- and right-handed dipole operators. The relations between the two sets of coefficients are simple linear ones. We are going to make use of these relations in the following by first calculating the chiral coefficients and translating them later into the form factors to calculate the partial width according to (144).

Our to-do list is the following: (1) get the generic expressions for the two Wilson coefficients, (2) implement those in SARAH, (3) implement the formula for the BRs in SARAH, and (4) make sure that all information is found by SARAH and included in the SPheno output. Even if this sounds like a lot of work involving several loop calculations and hacking some code, this is not the case at all: each step is fully automatized. The user just has to create three small input files. The first input file is needed for PreSARAH. PreSARAH is a Mathematica package which calculates one-loop amplitudes in a generic way using FeynArts and FormCalc and extracts the Wilson coefficients for the operators the user needs. The results are then translated into input files which can be used by SARAH. So, we create the file TopPhotonQ.m with the following content.
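A sketch of what TopPhotonQ.m can contain is shown below; the operator definitions in FeynArts/FormCalc syntax are schematic and should be checked against the FlavorKit examples and the FormCalc documentation:

```mathematica
(* Sketch of the PreSARAH input file TopPhotonQ.m *)
NameProcess = "TopPhotonQ";
ConsideredProcess = "2Fermion1Vector";
FermionOrderExternal = {2, 1};
NeglectMasses = {3};                           (* keep the photon massless *)
ExternalFields = {TopQuark, TopQuark, Photon}; (* passed on to SARAH *)
CombinationGenerations = {{3, 2}, {3, 1}};     (* t-c and t-u *)
(* Schematic operator definitions: dipole structures with left/right
   projectors and the photon polarization ec[3] *)
AllOperators = {
   {OTgQSL, Op[7, ec[3]]},
   {OTgQSR, Op[6, ec[3]]}
};
OutputFile = "TopPhotonQ.m";
Filters = {};
```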

This defines a new process called TopPhotonQ which involves two fermions and one vector boson (ConsideredProcess). The Fierz ordering of the external states is defined via FermionOrderExternal. PreSARAH is absolutely agnostic concerning particle physics and always tries to calculate the most general case. For us, this would mean that the results are a function of three masses: those of the two fermions and the one of the vector boson. To make sure that the photon mass does not show up in the generic expressions, we put NeglectMasses = 3; the reason is that the photon is the third particle defined in the list of all external states. The information in ExternalFields is not used by PreSARAH at all; PreSARAH just includes it in the output used for SARAH, so that SARAH then knows what a "top quark" and a "photon" are. Also CombinationGenerations is not used by PreSARAH but just passed to SARAH and SPheno. This list contains all combinations of external generation indices for which the coefficients are later calculated by SPheno; we need here the combinations for the top-charm and the top-up operators. The two operators from above are called OTgQSL and OTgQSR, and their expressions are given in FeynArts syntax using ec[3] for the polarization of the third particle and k[1] as momentum of the first particle. The meaning of these symbols, and how to use, for instance, Dirac matrices in the definition of operators, is explained in the FeynArts manual and briefly summarized in the FlavorKit reference as well. Note that, in the case of Wilson coefficients for QFV observables, SARAH will automatically generate for each Wilson coefficient X another coefficient XSM which just includes the SM contributions. Finally, we tell PreSARAH that the output should be written into the file TopPhotonQ.m and that we do not want to filter out any diagrams (Filters = {}).

We now run this file in Mathematica with PreSARAH, similar to how we run models with SARAH.

The output file is located in the PreSARAH output directory and has to be copied to the FlavorKit directory of SARAH. For obvious reasons we choose the QFV/Operators subdirectory.

If we had put the file in LFV/Operators instead, the coefficients would have been calculated at a different scale instead of Q = 160 GeV. We are already done with steps (1) and (2) of the to-do list. Now we have to teach SPheno how to calculate the branching ratios. For this purpose we need two files which we have to put into the following.

The first file is a steering file in Mathematica syntax. It defines the name for the process used by SARAH  internally, which operators are needed to calculate the process, and which observables should show up in the spectrum file later.

NameObservables is an array containing all observables which should show up in the spectrum file. The first part of each entry gives the name of a variable, the second one the number used in the Les Houches block FlavorKitQFV, and the third one the comment which is used in the Les Houches file to make clear to which variable the number shown belongs. All operators which we need to calculate the observables are given in NeededOperators. As mentioned above, SARAH creates not only routines to calculate the Wilson coefficients including all new physics, but also coefficients in the SM limit. For our purpose, we need both sets of coefficients, because we want not only to calculate the BR but also to normalize it to the SM expectation calculated under the same assumptions. The name of another file is given at the end of the steering file. This file contains the "body" of the Fortran routine to calculate the observables, TqGamma.f90. With "body" I mean that the head of the routine is automatically generated by SARAH. The user can start at the stage of initializing the variables needed for the calculation of the observable. The entire file TqGamma.f90 looks as follows.
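Such a steering file could look as follows; the variable names and Les Houches numbers are illustrative:

```mathematica
(* Sketch of the steering file for the new observables *)
NameProcess = "TqGamma";
NameObservables = {
   {BrTtoQGamma,    1200, "BR(t -> q gamma)"},
   {ratioTtoQGamma, 1201, "BR(t -> q gamma)/BR(t -> q gamma)_SM"}
};
NeededOperators = {OTgQSL, OTgQSR, OTgQSLSM, OTgQSRSM};
Body = "TqGamma.f90";
```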

In the first three lines we initialize a few local variables we need. The entire routine loops over two iterations: in the first iteration we pick up the generation indices for the top-up decay and in the second one those for the top-charm decay. These indices are saved in the variables gt1 and gt2, which are then used as arguments for the coefficients. We first express the two form factors by the chiral Wilson coefficients and do the same with the coefficients of the SM part only. In line 20, we calculate the overall normalization factor and plug everything into (144). The variable width in line 22 is the partial width including all contributions, and widthSM in line 23 is the partial width with only SM contributions. Our final observables are then calculated using the total width of the top (gTFu(3)) or taking the ratio of both widths. We are now done with step (3) of the to-do list. Step (4) happens automatically if all files have been put into the correct directories. We can now generate the SPheno code again.

To save time, I used the option ReadLists->True: that is, SARAH reads the lists with all analytical results from the previous run. The output has to be copied again into the BLSSM subdirectory of SPheno and can be compiled as usual. After running SPheno with the input for EP1 we get the new entries in the block FlavorKitQFV.

Obviously, the numbers and comments show up as expected. We see some deviations from the SM prediction. However, this is still far away from the experimental limits, which still allow branching ratios in the percent range [302]. To get effects of that size one might, for instance, search for points with light stops.

One can also compare these numbers with the SM prediction in the literature. The partial width is strongly suppressed by the GIM mechanism and is therefore rather sensitive to the values of the CKM matrix as well as to the running quark masses in the loop. Therefore, the predicted rates in the SM come with a sizable theoretical error. In [301] the SM predictions for these and other flavour violating top decays were given. The number for the radiative decay into a charm quark agrees with our calculation within errors. The decay into an up quark is supposed to be suppressed relative to the charm mode by the squared ratio of the corresponding CKM entries. That is also similar to what we find.
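The expected size of this CKM suppression can be estimated with a one-line computation; the |Vub| and |Vcb| magnitudes below are roughly PDG-like values inserted purely for illustration.

```python
# assumed CKM magnitudes, roughly PDG-like (illustration only)
Vub, Vcb = 3.6e-3, 4.1e-2

# relative suppression of t -> u gamma versus t -> c gamma:
# the dominant loop amplitudes scale with V_ub and V_cb, respectively
suppression = (Vub / Vcb) ** 2
```

This gives a suppression of a bit below one percent, which is the order of magnitude referred to in the text.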

6.4. Getting the Fine-Tuning

One of the main motivations for SUSY was naturalness: it solves the hierarchy problem of the SM by stabilizing the unprotected Higgs mass. However, with the more and more severe limits on the SUSY masses, the question about the fine-tuning arises again. SARAH and SPheno provide functions to calculate the fine-tuning according to (84). The user can choose the list of parameters which should be included in the fine-tuning calculation. For this purpose, SPheno.m of the model has to be extended by the following.

The list FineTuningParameters contains the parameters which are varied at the GUT scale together with a numerical coefficient to “normalize” the fine-tuning; such a factor is needed when the GUT-scale boundary conditions are expressed in terms of the squared parameter. The fine-tuning can not only be calculated with respect to the input parameters defined in MINPAR: also other parameters which, for instance, are fixed by the tadpole equations can be included. In principle, one can also calculate the fine-tuning with respect to SM parameters like the top Yukawa coupling (Yu[3,3]) or the strong coupling (g3) by including those in the list above.
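To see what such a measure does in practice, here is a small numerical sketch of a Barbieri–Giudice-type fine-tuning, Delta_p = |d ln MZ^2 / d ln p|, evaluated by finite differences. The toy tree-level EWSB relation and all input numbers are illustrative; they are not the B-L-SSM expressions SPheno actually uses.

```python
MZ = 91.1876  # GeV

def mz2(mu, mHu2, mHd2, tb):
    """Toy MSSM-like tree-level EWSB relation:
       MZ^2 = 2*(mHd^2 - mHu^2*tb^2)/(tb^2 - 1) - 2*mu^2."""
    return 2.0 * (mHd2 - mHu2 * tb**2) / (tb**2 - 1.0) - 2.0 * mu**2

def fine_tuning(p0, f, eps=1e-4):
    """|d ln f / d ln p| at p0 via a central finite difference."""
    up, dn = f(p0 * (1.0 + eps)), f(p0 * (1.0 - eps))
    return abs(p0 / f(p0) * (up - dn) / (2.0 * eps * p0))

mu, tb = 500.0, 20.0          # illustrative values (GeV, dimensionless)
mHd2 = 1.0e6                  # GeV^2
# fix mHu2 such that the toy relation reproduces the measured MZ
mHu2 = (mHd2 - (MZ**2 + 2.0 * mu**2) * (tb**2 - 1.0) / 2.0) / tb**2

delta_mu = fine_tuning(mu, lambda m: mz2(m, mHu2, mHd2, tb))
# analytically this equals 4*mu^2/MZ^2 for the toy relation
```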

After editing SPheno.m, it is necessary to regenerate the SPheno code with SARAH and to compile the new version. Exactly the same steps as in Section 6.1.2 are used for that. Once SPheno is compiled, the fine-tuning is calculated and included in the spectrum file if the corresponding flag is set in the Les Houches input file.

Running our point EP1 with the new version of SPhenoBLSSM, we find the block FineTuning before the list of decays. This block contains the following entries.

The overall fine-tuning is given in the first entry, which comes with number 0. All other entries list the fine-tuning with respect to the different parameters. This makes it obvious which parameters contribute most to the fine-tuning. In our example the largest contributions come from two parameters with a similar fine-tuning. Moreover, one sees that even the additional parameters from the B-L sector can have quite some impact on the fine-tuning. The reason is that the tadpole equations are coupled because of gauge-kinetic mixing. We can check this assumption by using the flag

to turn off gauge-kinetic mixing. The impact on the overall fine-tuning is moderately small. However, the contributions of the B-L parameters vanish in this limit as expected. Also the fine-tuning with respect to the gaugino masses becomes smaller because the off-diagonal gauge couplings and the off-diagonal gaugino mass no longer contribute to the running of the gauginos.

7. Example—Part IV: Higgs Constraints, Vacuum Stability, Dark Matter, and Collider Studies

7.1. Checking Higgs Constraints with HiggsBounds and HiggsSignals

HiggsBounds and HiggsSignals are dedicated tools to study the Higgs properties of a given parameter point in a particular model. While HiggsBounds checks the parameter point against exclusion limits from Higgs searches, HiggsSignals computes a value expressing how well the point reproduces the Higgs measurements. In general, HiggsSignals and HiggsBounds can handle different inputs: either the cross sections for all necessary processes can be given at the parton or hadron level, or the effective couplings of the Higgs states to all SM particles are taken as input. In addition, the masses and widths of all CP-even, CP-odd, and charged Higgs states are always needed. SPheno provides all input for the effective coupling approach. The information is given in the SLHA spectrum file and, in addition, in separate files (called MH_GammaTot.dat, MHplus_GammaTot.dat, BR_H_NP.dat, BR_Hplus.dat, BR_t.dat, effC.dat). While SLHA files can be used with HiggsBounds for models with up to five neutral scalars, the separate files work with up to nine neutral and nine charged scalars. Since the second input works in more cases I am going to concentrate on it. First, I discuss how exclusion limits are checked with HiggsBounds; afterwards I show the usage of HiggsSignals.

7.1.1. HiggsBounds

SPheno saves all other input files for HiggsBounds and HiggsSignals in the same directory in which the spectrum file is located. The (relative) path to this directory has to be given as the last argument to HiggsBounds when executing it. Thus, working from the directory $PATH, HiggsBounds is started via the following.

From other directories, one can use absolute paths.

The other arguments define the data which should be used. Here, we have chosen LandH which incorporates data from LEP and hadron colliders. Other options would be onlyL for only LEP data, onlyH for only Tevatron and LHC data, or onlyP for only data which has been published. Then, we turn on the effective coupling input via separate files with effC. The other possible option with the data provided by SPheno would be SLHA, which also uses the effective coupling approach. The numbers of neutral22 and charged scalars are given as integers. HiggsBounds checks all files for consistency and, if no problem appears, it writes the results to a file called HiggsBounds_results.txt in the same directory where the input is located. For our standard point EP1 the results look as follows.

All information needed to understand the output is already given in the file: HiggsBounds finds that the strongest constraints come from a CMS search (chan = 682), but the rate normalized to the exclusion limit (obsratio) is smaller than 1: that is, the point is allowed (HBresult = 1). A list with all processes which are implemented and which were checked is written to Key.dat in the same directory.
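When scanning over many points, a tiny parser for such result rows can be handy. The column layout assumed below (the row ending with HBresult, chan, obsratio, ncombined) should be checked against the header written by your HiggsBounds version, and the sample row is made up.

```python
def parse_hb_row(line):
    """Extract the key HiggsBounds results from one data row,
    assuming the row ends with: HBresult chan obsratio ncombined."""
    cols = line.split()
    return {
        "allowed": int(cols[-4]) == 1,   # HBresult
        "chan": int(cols[-3]),           # most sensitive channel
        "obsratio": float(cols[-2]),     # rate / exclusion limit
        "ncombined": int(cols[-1]),
    }

# hypothetical row for a point similar to EP1
res = parse_hb_row("1  125.1  1  682  0.37  1")
```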

We are far away from any exclusion limit: that is, we do not have to worry about small uncertainties in the Higgs masses because they will not change the overall result. For points closer to an exclusion limit, one has to think more about this. In these cases it might be helpful to provide an input file MHall_uncertainties.dat which includes the uncertainties in the Higgs mass calculation. I will give more details about this in the HiggsSignals part. If this file is provided, HiggsBounds runs several times, varying all Higgs masses in the range of their uncertainty, and checks for the strongest constraints.

Checking a Light Singlet. We can also run the point EP2 which has a light singlet of just 79 GeV and find that this point, too, is allowed by all Higgs searches because of the highly reduced coupling of this scalar to SM particles.

The most dominant search channel comes from LEP, but the rate is just about half of the one needed to rule this point out.

7.1.2. HiggsSignals

HiggsSignals is the complement to HiggsBounds and checks how well a point reproduces the Higgs mass and rate measurements. The syntax is very similar to HiggsBounds, and to run it with the data for our standard point we have to call it from the directory $PATH.

It would be possible to use absolute paths here as well. The first three arguments are different compared to HiggsBounds and have the following meaning: (i) which experimental data should be used (refers to the corresponding subdirectory in $PATH/HIGGSSIGNALS/Expt_tables/), (ii) the method (peak for peak centred, mass for mass centred, or both)23, and (iii) the parametrization of the Higgs mass uncertainty (1: box, 2: Gaussian, 3: box and Gaussian). The results are written into the file HiggsSignals_results.dat which reads, for EP1, as follows.

Also the HiggsSignals output is rather self-explanatory. The important numbers are the χ² for the Higgs rates (csq(mu)), the χ² for the Higgs mass (csq(mh)), and the combined χ² (csq(tot)). The combined one is also translated into a p-value (Pvalue)24. However, one warning appears in the terminal when running HiggsSignals in that way.

That means that the file MHall_uncertainties.dat was not found. That is not surprising because it was not created by SPheno. This file contains the theoretical uncertainty of the Higgs mass prediction. Thus, to produce this file, some estimate of the size of missing higher order corrections to the Higgs masses is needed. This is something SPheno cannot do automatically at the moment. However, HiggsSignals assumes no theoretical uncertainty if the file is missing. That is, of course, unrealistic. The theoretical uncertainty for the corrections included in the SARAH-SPheno interface is expected to be similar to the one in the MSSM using standard two-loop corrections. Thus, we put 3.0 GeV for the two light scalars and 1.0 GeV for all others25. We create MHall_uncertainties.dat and put it in the SPheno directory where all other input files for HiggsBounds/HiggsSignals are stored as well. The content of MHall_uncertainties.dat reads as follows.

Now, running HiggsSignals again, we find that the χ² values become slightly smaller.

Light Singlet. We also want to run the second example point which has a light singlet. We keep our estimate of the theoretical uncertainty and find for this point the following.

The couplings of the SM-like Higgs to the SM fermions do not change significantly between EP1 and EP2: that is, the χ² for the rates stays nearly the same. However, the mass of the SM-like state is a bit closer to the best-fit value of the measurements; hence, the mass χ² has slightly decreased.

7.2. Checking the Vacuum Stability with Vevacious

The parameter points EP1 and EP2 have passed the first checks. The mass spectrum looks promising and they are consistent with all bounds from flavour and Higgs physics. As a next step we want to check if the points really have a stable vacuum. Since SPheno found a solution of the tadpole equations, we know that the given parameters correspond at least to a local minimum of the scalar potential where the set of VEVs is nonzero. However, this does not ensure that this is also the global minimum. First, there might be a deeper minimum for other values of the Higgs VEVs. Those minima are in general ruled out because they would predict another mass for the Z boson. Another possibility is that other particles could receive VEVs as well. These can either be points with spontaneous R-parity violation where the sneutrinos get a VEV, points where electric charge is broken by slepton VEVs, or points where charge and colour are broken by squark VEVs. The last two possibilities are completely forbidden and such points would always be ruled out. However, the dangerous regions for charge or colour breaking are those where the trilinear soft-terms are large compared to the soft-masses in the stop or stau sector [22, 303–313]. This is not the case for the points EP1 and EP2 and we do not have to worry about that. Spontaneous R-parity violation is not completely forbidden and could lead to a different phenomenology. However, in our approach it is also very likely that the electroweak VEV changes at the global minimum where the sneutrinos gain nonzero VEVs. Hence, such a scenario is ruled out as well by the Z mass. We are going to check the stability of the vacuum both neglecting and including the possibility of sneutrino VEVs. For this purpose we use the package Vevacious [168].

Vevacious is a tool to check for the global minimum of the one-loop effective potential of a given model, allowing for a particular set of nonzero VEVs. For this purpose, Vevacious first finds all tree-level minima by using HOM4PS2 [314]. Afterwards, it minimizes the one-loop effective potential starting from these minima using Minuit [315]. If the input minimum turns out not to be the global one, the lifetime of the meta-stable vacuum can be calculated using CosmoTransitions [316].

Vevacious takes the tadpole equations, the polynomial part of the scalar potential, and all mass matrices as input. All of this information has to be expressed including all VEVs which should be tested. This means that, to check for charge and colour breaking minima, the stop and stau VEVs have to show up in the potential and mass matrices, and the entire mixing triggered by these VEVs should be included. To take care of all that, the corresponding input files can be generated by SARAH as explained below.

7.2.1. Finding the Global Minimum without Sneutrino VEVs

If one is just interested in the global minimum for the case that no other VEVs are allowed, it is straightforward to get the model files for Vevacious: the standard implementation of the model can be used together with the command MakeVevacious.

MakeVevacious comes with some options which I list for completeness; however, we stick to the default settings. (i)ComplexParameters: it defines if specific parameters should be treated as complex. By default, all parameters are assumed to be real in the Vevacious output. (ii)IgnoreParameters: it defines if a given set of parameters should be set to zero when writing the Vevacious model files. (iii)OutputFile: it defines the name of the model file. By default BLSSM.vin is used. (iv)Scheme: it defines the renormalization scheme. By default SARAH uses DR-bar for SUSY models and MS-bar for non-SUSY models. One sees from the first option that the parameters are handled less generally in the Vevacious output than is usually done by SARAH. The reason is that the evaluation of a parameter point with Vevacious can be very time consuming. Thus, making reasonable approximations might be an option to speed this up.

As soon as the model file is created, it is convenient to copy it to the model directory of the local Vevacious installation. In addition, one can also create a new subdirectory which contains the SPheno spectrum files for the B-L-SSM used as input for Vevacious, as well as the output written by Vevacious.

These steps are just optional: in the initialization file used by Vevacious, which is discussed in a moment, the user can also give paths to other locations of the model and spectrum files. Independently of the location of the files, one has to write this initialization file for a new study. The easiest way is to start with the file included in the Vevacious package in the subdirectory bin and to edit it.

The only change we apply here is to set the paths for HOM4PS2 (http://www.math.nsysu.edu.tw/leetsung/works/HOM4PS_soft_files/) and CosmoTransitions (http://chasm.uchicago.edu/cosmotransition) which are used by Vevacious. I am going to assume here that these are installed in the same directory $PATH as all other tools. The other pieces of information needed are the locations of the model and spectrum files as mentioned above.

For all other settings, like which homotopy method should be used, the tolerance to consider extrema as identical, the survival probability necessary to label a meta-stable point “long-lived,” and how Vevacious should try to get away from saddle points, we keep the default values. The interested reader might take a look at the Vevacious manual for more details about these options.

When all adjustments of the initialization file are done, we can run Vevacious on EP1 by calling from the directory $PATH/Vevacious/BLSSM.

After about 30 seconds, Vevacious is done checking for the global minimum of the one-loop effective potential. Since it has not started CosmoTransitions to calculate the tunnelling time, the point is stable. This can also be seen from the file SPheno.spc.BLSSM, where a new block has been appended.

This block contains a flag to assign the stability ([,] = 1: stable, [,] = 0 long-lived, [,] = -1: short-lived, [,] = -2: thermally excluded), the tunnelling time if calculated (entry [,]), and the temperature at which tunnelling is likely to happen (entry [,]). Afterwards the input VEVs are repeated (entries [,][,]) and the depth of the potential at the input minimum is given (entry [,]). Finally, the depths of the global minimum together with the VEVs at that minimum are shown (entries [,][,]). Obviously, the entries [,X] and [,X] are identical.

If the user is interested in more information about all possible minima found at tree-level and one-loop, he/she can check the file Vevacious_tree-level_extrema.txt. This file contains all VEV combinations which are actually minima of the tree-level potential. The depth of the potential at each minimum is given at tree-level and at the one-loop level.

We can see from that file that there are actually four minima which are not related by a phase transformation of the VEVs. The minimum without symmetry breaking (all VEVs are zero) has a depth of 0 at tree-level as expected but receives large loop corrections. Nevertheless, its depth is still much smaller than that of all other combinations, where at least one symmetry (electroweak or B-L) is broken. There is just one minimum where both symmetries are broken, and this corresponds to our input minimum. The full list of minima found at one-loop is given in the file Vevacious_loop-corrected_minima.txt. All additional minima listed there are small variations of the tree-level ones.

We can do the same check for EP2 and find that this point is stable as well.

7.2.2. Checking for Spontaneous R-Parity Violation

As mentioned above, one cannot be completely sure that a point is stable just because Vevacious does not find a deeper minimum in the first check. There is still the possibility that additional particles might receive VEVs. We check here the possibility of spontaneous R-parity violation. For this purpose, it is necessary to create a new SARAH model file. We call it B-L-SSM_RpV.m. The simplest way is to take our B-L-SSM.m file as basis and apply the following changes.

First, we change the name of the model to make sure that no files of the other implementation are overwritten. The main modification is to give VEVs to the left and right sneutrinos, as done by the changes in DEFINITION[EWSB][VEVs]. However, we do not consider the most general case where all three generations get VEVs but restrict the VEVs to the third generation only. Even in this case we have to deal with a 6-dimensional VEV space; in the general case, we would even have 10 VEVs and running Vevacious would take significantly longer. Therefore, it is always good to check which degrees of freedom can be rotated away. The other lines are a consequence of R-parity violation: the CP-even and CP-odd sneutrinos mix with the Higgs scalars, and the charged Higgs scalars mix with the charged sleptons. In the fermionic sector the charginos mix similarly with the charged leptons and the neutralinos with the neutrinos. We have to include this mixing because Vevacious checks not only the tree-level potential but also the one-loop effective potential, and this mixing gives additional contributions to the one-loop corrections. If we are really just interested in the Vevacious output, we can skip the modifications of particles.m and parameters.m and come back directly to our study with Vevacious. The remaining steps are the same as for the R-parity conserving case: (i) running B-L-SSM_RpV.m with SARAH, (ii) running MakeVevacious[], (iii) copying the file to the Vevacious installation, (iv) creating a new initialization file VevaciousInitialization_BLSSM_RpV.xml with the location of the model file, and (v) running Vevacious.

We find that both parameter points pass this check as well. For example, for EP1 the Vevacious output reads as follows.

I want to show that things are not always so boring and that unexpected things can happen in such complicated potentials. For this purpose, I modify the input parameters of EP1 a bit. Even with these modifications, all right sneutrino soft-terms are still positive.

Nevertheless, we find that at the global minimum R-parity is broken by sneutrino VEVs.

We see that the stability is labelled as “long-lived but thermally excluded” ([,] = -2). This means that the point is long-lived at zero temperature but decays quickly if temperature effects are taken into account. In that case the entry [,] shows at which temperature the tunnelling is likely to happen. Thus, this point is actually ruled out.

One sees from this example that the condition on the sign of the sneutrino soft-terms, sometimes used in the literature for distinguishing R-parity violation and conservation, is not a necessary one. On the other hand, it is also possible to find points where this condition is fulfilled, but R-parity is still unbroken at the global minimum [275]; that is, it is also not sufficient. Therefore, one should not rely on such simple-minded conditions but always perform a numerical check of the vacuum stability. The same statement holds for charge and colour breaking minima: analytical thumb rules like those of [22, 303–306], which are, unfortunately, still widely used in the literature, do not bear up against numerical checks and turn out to be pretty useless [220, 221]. These conditions miss the large majority of points which actually suffer from an unstable electroweak vacuum.
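For concreteness, one traditional thumb rule of this kind, an A-term bound often quoted for the stop sector, can be written down in two lines. The inequality and the test numbers below are only meant to illustrate the kind of condition being criticized, not to endorse it as a reliable check.

```python
def ccb_thumb_rule_ok(At, mQ3sq, mU3sq, mHu2sq, mu):
    """Traditional charge/colour-breaking bound of the form
    A_t^2 <= 3*(mQ3^2 + mU3^2 + mHu^2 + mu^2).
    As argued in the text, such conditions are neither
    necessary nor sufficient for vacuum stability."""
    return At**2 <= 3.0 * (mQ3sq + mU3sq + mHu2sq + mu**2)

# illustrative inputs (At and mu in GeV, soft masses in GeV^2)
safe = ccb_thumb_rule_ok(At=1000.0, mQ3sq=1.0e6, mU3sq=1.0e6,
                         mHu2sq=2.5e5, mu=500.0)
risky = ccb_thumb_rule_ok(At=4000.0, mQ3sq=1.0e6, mU3sq=1.0e6,
                          mHu2sq=2.5e5, mu=500.0)
```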

7.3. Calculating the Dark Matter Properties with MicrOmegas

As a next step we want to study the dark matter (DM) properties of the model by using MicrOmegas. MicrOmegas is a tool which not only calculates the relic density for one or more dark matter candidates, but also gives cross sections for direct and indirect DM searches. To enable these calculations, MicrOmegas needs in general three inputs: (1) the model files to implement a new model, (2) a steering file to coordinate the different calculations, and (3) numerical values for all parameters. I will show step by step how these three points are addressed.

7.3.1. Implementing New Models in MicrOmegas

The calculations of the cross sections and all necessary decay widths are done by CalcHep, which comes together with MicrOmegas. Thus, a new model in MicrOmegas is implemented by providing the corresponding CalcHep model files. This means that one can use the SARAH output for CalcHep to work with MicrOmegas.

By just running MakeCHep[], the default options are used. However, there are several options to adjust the output. (i)FeynmanGauge: it defines if Feynman gauge should be supported besides Landau gauge. This is done by default. (ii)CPViolation: it defines if parameters should be handled as complex. By default, all parameters are treated as real because CalcHep is not really optimized for the usage of complex parameters, and this option should be used carefully. (iii)ModelNr: it numbers the model files. SARAH starts by default with 1. (iv)CompHep: it can be used to write model files in CompHep instead of CalcHep format. (v)NoSplittingWith: SARAH does not decompose four-scalar interactions into pairs of two-scalar interactions with auxiliary fields if particular fields are involved. Such a decomposition is usually done because of the implicit colour structure in CalcHep which does not allow four-point interactions of coloured states. To keep the model files shorter, SARAH makes the same decomposition also for noncoloured states. (vi)NoSplittingOnly: one can define particles for which SARAH does not decompose four-scalar interactions into pairs of two-scalar interactions with auxiliary fields if only the given fields are involved in the interaction. (vii)UseRunningCoupling: it defines if a running coupling should be used in the model files. (viii)SLHAinput: it defines if parameter values should be read from a spectrum file. (ix)CalculateMasses: it defines if tree-level masses should be calculated internally by CalcHep. (x)RunSPhenoViaCalcHep: it writes C code to run SPheno from the graphical interface of CalcHep to calculate the spectrum on the fly. (xi)IncludeEffectiveHiggsVertices: it defines if the effective Higgs vertices to two photons and two gluons should be included. (xii)DMcandidate1: it sets the first DM candidate. (xiii)DMcandidate2: it optionally sets a second DM candidate. For our example we can stick to the default options. I will just comment on two important switches which demand further explanation.

Mass Spectrum. By using SLHAinput -> True, the model files are written in such a way that CalcHep and MicrOmegas expect all input parameters to be provided in a spectrum file called SPheno.spc.BLSSM. CalcHep and MicrOmegas read this file and extract all important information from it using the SLHA+ functionality [317]. With the other options, MicrOmegas/CalcHep expects either all masses and rotation matrices to be given in the file vars.mdl (SLHAinput -> False, CalculateMasses -> False), or all fundamental parameters (soft-terms, couplings, and VEVs) as input, diagonalizing the mass matrices internally (SLHAinput -> False, CalculateMasses -> True).

Dark Matter Candidates. One can work with either one or two dark matter candidates in MicrOmegas. The first DM candidate is the lightest of all states carrying a particular charge under a discrete symmetry. To define the symmetry and the charge, the option DMcandidate1->Value is used. There are two possibilities for Value: (i) when set to Default, the DM candidate is the lightest particle odd under the first defined global symmetry; (ii) for any other choice, one can give first the name of the global symmetry and then the quantum number with respect to that symmetry: GlobalSymmetry == Charge.

When SARAH is finished with MakeCHep, the CalcHep model files are located in the directory $PATH/SARAH/Output/B-L-SSM/EWSB/CHep/. To implement the model in MicrOmegas, a new project has to be created and the files have to be copied into the working directory of this project.

7.3.2. Setting Up the DM Calculations

To use the model with MicrOmegas, a steering or “main” file has to be provided either in Fortran or C and must be compiled. Examples for these files are delivered with MicrOmegas and called main.F and main.c. SARAH also writes two examples which can be used for the following calculations. (i)CalcOmega.cpp: this file calculates only the DM relic density and prints the result on the screen and into a file called omg.out. (ii)CalcOmega_with_DDetection.cpp: this file calculates the DM relic density and in addition some direct detection rates: (i) the spin independent cross sections with proton and neutron in pb, (ii) the spin dependent cross sections with proton and neutron in pb, and (iii) recoil events in the 10–50 keV region for several target nuclei. The output is also written into a file called omg.out. Note that the syntax for the direct detection calculations has changed in recent MicrOmegas versions; SARAH also includes a file CalcOmega_with_DDetection_old.cpp which is compatible with versions 2.X of MicrOmegas. We are going to choose the second file which includes the calculation of direct detection rates. There are even more calculations MicrOmegas can do, like indirect detection rates. Those can be added as well to the main file provided by SARAH, or the user can write their own file. For this purpose, it might be helpful to take a look at main.F or main.c which show the different options to turn on specific calculations and outputs.

The steering files written by SARAH were copied together with all model files into the working directory of the current project. We can move the chosen one to the main project directory and compile it.

A new binary CalcOmega_with_DDetection is now available. The only missing pieces are the input parameters.

7.3.3. Running MicrOmegas with SPheno Spectrum Files

Providing the numerical parameters is pretty easy because MicrOmegas/CalcHep can read the SPheno spectrum file. However, the user must make sure that no complex rotation matrices show up in the spectrum file: in the case of Majorana fermions and no CP violation, there are two equivalent outputs: (i) all Majorana masses are positive, but some entries of the corresponding rotation matrices are complex; (ii) all mixing matrices are real, but some masses are negative. CalcHep can only handle the second case with real matrices. Hence, one has to use the following flag

to get the spectrum according to that convention. Afterwards, the spectrum file just has to be moved to the same directory as CalcOmega_with_DDetection. We copy it there and start the calculation as follows.

The first run can take some time, even up to several hours depending on the computing power: MicrOmegas has to compile all necessary annihilation channels of the DM candidate for that particular parameter point. All further evaluations of similar points are done in a second or less. However, as soon as new channels are needed, MicrOmegas has to compile new amplitudes and the computation slows down considerably again. This can happen, for instance, if the DM candidate changes or if the second lightest state becomes close in mass so that coannihilation has to be included. As soon as the run is done, we see the following on the screen.

In the first line, the freeze-out temperature and the relic density are given. We find that this point falls into the region preferred by the combination of Planck, WMAP polarization, high-resolution CMB, and baryon acoustic oscillation results [318].
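Checking a computed relic density against such a measurement is a one-line comparison. The central value and error used below are Planck-era numbers assumed for illustration; for a real analysis take the actual values from [318].

```python
def relic_density_ok(omega_h2, central=0.1187, sigma=0.0017, n_sigma=3.0):
    """Compare a computed relic density Omega*h^2 with a CMB
    measurement, here an assumed Planck-like band (illustration)."""
    return abs(omega_h2 - central) <= n_sigma * sigma

ok = relic_density_ok(0.118)      # a value inside the band
too_high = relic_density_ok(1.5)  # a value far outside, as for EP2 below
```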

The important channels contributing to the annihilation follow in the next lines. This point is a bit boring because the annihilation into two bileptons makes up 98% of the entire annihilation. All other individual channels are not printed because they are below 1%. This threshold can be changed in CalcOmega_with_DDetection.cpp by lowering the corresponding cut.

The same information is also written in the file omg.out. The style of this file is inspired by the SLHA format.

Because of this format, one can append this file to the spectrum file to save the dark matter results together with the other pieces of information and read them later with a standard SLHA parser.
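Reading such an SLHA-style file back is straightforward. The following minimal reader is a sketch that skips DECAY tables and non-numeric entries; the block name and values in the sample are made up for illustration.

```python
def read_slha_blocks(lines):
    """Minimal SLHA-style reader: {block name: {index tuple: value}}.
    Sketch only: DECAY tables and non-numeric entries are skipped."""
    blocks, current = {}, None
    for raw in lines:
        line = raw.split("#", 1)[0].strip()   # strip comments
        if not line:
            continue
        first = line.split()[0].upper()
        if first == "BLOCK":
            current = line.split()[1].upper()
            blocks[current] = {}
        elif first == "DECAY":
            current = None                     # skip decay tables
        elif current is not None:
            *idx, val = line.split()
            try:
                blocks[current][tuple(int(i) for i in idx)] = float(val)
            except ValueError:
                pass
    return blocks

# hypothetical appended block, in the spirit of omg.out
sample = [
    "Block DARKMATTER # appended MicrOmegas output",
    " 1   1.19E-01  # Omega h^2",
]
dm = read_slha_blocks(sample)
```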

This is, for instance, done automatically when running scans with SSP and including MicrOmegas.

The values shown for the direct detection rates can be compared with limits from experiments. For this purpose it is helpful to multiply the cross sections given in pb by a factor of 10^-36 to convert them to cm^2, the unit usually used to present the direct detection limits in the plane of DM mass versus cross section.
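The conversion itself is a single factor, since 1 pb = 10^-36 cm^2; the cross section value below is hypothetical.

```python
PB_TO_CM2 = 1.0e-36   # 1 pb = 1e-36 cm^2

sigma_si_pb = 2.5e-9                    # hypothetical SI cross section in pb
sigma_si_cm2 = sigma_si_pb * PB_TO_CM2  # value to compare with exclusion plots
```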

We can do the same for EP2. This point has a neutralino LSP; that is, MicrOmegas has to compile many new channels and we again have to wait some time for the results. The output on the screen looks less promising.

Despite the many different channels which contribute to the annihilation, the relic density is much too high. This is not surprising: it is well known that for a neutralino LSP particular conditions are often needed to fulfil the relic density bounds. Either a charged particle close in mass, resonances, or a large Higgsino fraction is needed. This holds not only for a bino LSP in the CMSSM but also for a blino LSP in the constrained B-L-SSM, as we have it here [278].

7.4. Monojet Events with WHIZARD

We change topics again and enter the wide field of collider studies with Monte Carlo (MC) tools. A detailed discussion of this is beyond the scope of this paper. Tools like CalcHep, WHIZARD, MadGraph, Herwig++, or Sherpa are very powerful and offer an almost unlimited number of possibilities. Therefore, I am just going to show with two examples how the output of SARAH can be used together with WHIZARD and MadGraph to perform simple studies. As soon as a model is implemented in these tools and works fine for one study, it can be used in the same way as the models delivered with the different tools. Thus, to become more familiar with these tools, one can check the many examples and tutorials which can be found online.

Actually, there is one big advantage when working with model files produced by SARAH: the chosen MC tool needs not only the model files containing all vertices but also numerical values for all parameters. Providing these can be a delicate task, especially in supersymmetric models which come with many parameters and rotation matrices. When using numerical values obtained with another code, one always has to make sure that the conventions used in the model files and those of the spectrum generator are identical. This problem is absent when working with model files produced by SARAH and spectrum files written by a SARAH-generated SPheno module. In that case, the implementation of the model in the MC tool and in SPheno is based on a single model file in SARAH; thus, the same conventions are guaranteed to be used in both parts.

7.4.1. Introduction

WHIZARD [153] is a fast tree-level MC generator for events at parton level. WHIZARD makes use of OMega [154] to generate the matrix elements; that is, strictly speaking, a model implementation in WHIZARD means that model files for both WHIZARD and OMega have to be generated and included in the two codes. SARAH takes care of both. I will first show how the WHIZARD and OMega model files are generated with SARAH and how they are compiled. In the second step, I will show how the parameters are passed from SPheno to WHIZARD. Finally, events for the process are generated. The jet and rapidity distributions are plotted using intrinsic WHIZARD functions.

7.4.2. Generating the Model Files for WHIZARD/OMega

For the process we are interested in, we just need vertices which involve fermions. Thus, we can neglect all vertices which come only with scalars and vectors. This is sometimes helpful because the compilation of the model files with WHIZARD/OMega can be quite time- and memory-consuming for complicated models. So, we run the following

to include only FFV and FFS vertices. This shortcut is very helpful for our purposes here to get some results quickly. However, it has to be used carefully in order to make sure that no relevant vertices are dropped. In the case that all vertices should be kept, there is another possibility to speed up the compilation a bit: usually, SARAH splits the entire list of vertices into pieces containing 150 vertices and writes a separate file for each part. Especially for SSSS and SSS interactions, even 150 vertices can lead to a large file which needs some time to be compiled. Thus, for complicated models where the expressions for the vertices are lengthy, it might be helpful to put even fewer couplings into each file. That is done by the option MaximalCouplingsPerFile -> X with some integer X. A good choice for the full model files of the B-L-SSM is 50 or less. There are some more flags which can be used to adjust the WHIZARD output. The full list of options is as follows. (i)MaximalCouplingsPerFile: it defines the maximal number of vertices per file. (ii)WriteOmega: it defines if the model files for OMega should be written. (iii)WriteWHIZARD: it defines if the model files for WHIZARD should be written. (iv)WOModelName: it defines the name for the model in the output. (v)Version: it defines for which version of WHIZARD the files are generated. By default 2.2.0 is used. (vi)ReadLists: it defines if the information from a former evaluation should be used.

7.4.3. Compiling the Model Files

After the interface has completed, the generated files are stored in the directory $PATH/SARAH/Output/B-L-SSM/EWSB/WHIZARD_Omega/. In order to use the model with WHIZARD and OMega, the generated code must be compiled and installed. In most cases this is done by the following.

If WHIZARD has not been installed globally but in the home directory of the current user, it will not be able to find the binaries. Thus, the WO_CONFIG environment variable is used to point explicitly to the binaries. By default, the configure script installs the compiled model into .whizard in the home directory of the user. If the user has several WHIZARD installations or installs WHIZARD locally, it might be better to provide a model just for one installation. For these cases the installation path can be defined via the --prefix option of the configure script. More information on the available options is shown with the following command.

The configure script also prints another important piece of information, namely, the name of the model which is used to load it in WHIZARD.

Thus, the model is called blssm_sarah. This name could be changed by using the option WOModelName of MakeWHIZARD.

The model files produced by SARAH are supposed to be used with WHIZARD 2.x. The possibility to patch these files for use with WHIZARD 1.x does exist in principle. However, I will not go into detail here and strongly recommend using version 2.

7.4.4. Parameter Values from SPheno

WHIZARD is able to read Les Houches files for the MSSM generated by public spectrum generators using SLHA conventions. However, WHIZARD does not provide a possibility to read spectrum files which go beyond that. Therefore, to link WHIZARD and SPheno, all SPheno modules created by SARAH write the information about the parameters and masses into an additional file. This file is written in the WHIZARD-specific format and can be read directly by WHIZARD. In our example the file is called WHIZARD.par.BLSSM and it is written to the same directory where SPheno writes the standard spectrum file. One just has to make sure that the corresponding flag is turned on in the Les Houches input for SPheno to get this output.

The parameter file can then be included in the Sindarin input file for WHIZARD via the following.

7.4.5. Sindarin Input and Running WHIZARD

WHIZARD comes with its own steering language called Sindarin. With Sindarin, all settings to define a process in a specific model, to apply cuts, and even to make plots can be put in one single input file. The input file BLSSM_monojet.sin for our example of monojets at the LHC might look as follows.

First, we set the model and tell WHIZARD where to find the spectrum file written by SPheno. In general, the SPheno file contains nonzero and different masses for all SM fermions. However, to group fermions together into one object, they have to have the same masses. Therefore, we explicitly set all first- and second-generation quark masses to zero in lines 5–8. Afterwards, we can combine these quarks, their antiparticles, and the gluon into one object called parton. For the final state we define another object jet which consists of the same particles. When we now define a process involving parton and jet, WHIZARD will generate all nonvanishing subprocesses at parton level. A name for the process (monojet) is given. This name is used in the following to refer to this process. Thus, one can also define several processes in one file, treat them separately, and run one after the other.

The next steps are to compile the process (line 15), set the beam energy (line 17), and define the pdf set which should be used (line 19). We apply a cut on the jet of 50 GeV. The process is now fully set and can be integrated (line 20). To improve the numerics, we use 5 iterations.

Lines 22–32 are used to generate figures directly while running WHIZARD. The figures will show a histogram of the jet from 0 to 1000 GeV in bins of 10 GeV and the rapidity of the jet from −5 to 5 in bins of 0.1. Note that pt_jet and eta_jet are undefined at this stage but are just variables. The analysis command is used to tell WHIZARD what is meant by both pt_jet and eta_jet. In the last two lines, the number of events and the name of the output file are given.

We save this file in the root directory of WHIZARD ($PATH/WHIZARD). However, running it in the same directory would create a mess because WHIZARD produces several output files. Therefore, we create a new subdirectory which in the end will contain the entire WHIZARD output.

The last line runs the executable whizard in the binary directory on our Sindarin input file. Note that we did not move BLSSM_monojet.sin to the subdirectory run_BLSSM_monojet. The reason is that we might want to clean this directory by rm * in order to make a new run with other settings.

After some time, WHIZARD is done and has created a pdf including both plots shown in Figure 9. The output directory also includes all events in WHIZARD's native format called evx. To turn on the output of other formats, one can add the following flags to the Sindarin input file

where <format> can be, for instance, lhef to get files in the Les Houches accord event format. For a complete list of all supported formats, I refer to the WHIZARD manual.

7.5. Dilepton Analysis with MadGraph

As a second example for a collider study with SARAH model files, I will show the usage of UFO model files with MadGraph [155]. The UFO format is also supported by other tools like Herwig++ or Sherpa, and the user can pick his/her favourite MC program. The command to generate the UFO files is as follows.

As an option, one can give a list of generic vertices which should not be included in the output, similar to MakeWHIZARD: Exclude -> $LIST. By default, four-scalar vertices are excluded, and this is sufficient for us to get a speedy output. The UFO model files for the B-L-SSM are written to $PATH/SARAH/Output/B-L-SSM/EWSB/UFO. All files included in this directory have to be copied to a new subdirectory in MadGraph's models directory.

Now, we can import this model in MadGraph and work with it. For this purpose one can either start the interactive mode by running

or one can make a short input file including all necessary commands and pass it as an argument.

Here, I used a file Input_pp-MuMu.txt which contains the following lines.
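A sketch of what Input_pp-MuMu.txt may look like, based on the description below; the model directory name BLSSM and the exact spelling of the modelname option are assumptions:

```shell
# Write a candidate Input_pp-MuMu.txt (content reconstructed from the text;
# BLSSM is an assumed name for the model subdirectory)
cat > Input_pp-MuMu.txt <<'EOF'
import model BLSSM --modelname
define p = u u~ d d~ c c~ s s~
generate p p > e2 e2bar
output ppMuMu
exit
EOF
```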

In the first line we import the model into MadGraph. The option modelname is used to keep the names of the particles as given in the model files. By default, MadGraph would try to use its default naming conventions. However, this would fail for this model because there are more than two CP even scalars, so h3 cannot be used as the name for the CP odd one as MadGraph wants to do. We define a multiparticle called p which consists of all light quarks. We can skip the gluon because it does not contribute to our process. The muon is the second lepton, which is called e2, and the antimuon is accordingly e2bar. Thus, in the third line we generate the process . The output for MadEvent is written to a new subdirectory ppMuMu, and when it is done we close MadGraph via exit.

After MadGraph has created the output for MadEvent and finished, we can enter the new subdirectory ppMuMu. The important settings to generate events are done via the files in the Cards directory: the file param_card.dat is used to give the input for all parameters, and run_card.dat controls the event generation. In the latter file, the user can, for instance, set the beam type and energy, define the renormalization scale, apply cuts, and fix the number of events.

We want to use, of course, the spectrum file as written by SPheno. However, there is one caveat: MadEvent has problems reading the HiggsBounds-specific blocks in the SPheno spectrum file (HiggsBoundsInputHiggsCouplingsFermions and HiggsBoundsInputHiggsCouplingsBosons). If these blocks are included, MadEvent will not accept the file. Therefore, we either modify the output by hand and delete these blocks, or we regenerate the file after changing the options in the Les Houches input file. The HiggsBounds blocks are disabled by the following flag.
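If one chooses instead to delete the blocks by hand, this can also be scripted. A minimal sketch with awk, assuming every SLHA block starts with a line beginning with "Block" (the file content is a stand-in):

```shell
# Stand-in spectrum file with one HiggsBounds-specific block
cat > SPheno.spc.BLSSM <<'EOF'
Block MASS  # mass spectrum
 25  1.25E+02  # hh_1
Block HiggsBoundsInputHiggsCouplingsBosons  # HiggsBounds input
 1.0  3  25  24  24  # hh_1 W W
Block DCINFO  # program information
 1  SPheno  # spectrum calculator
EOF

# Drop every block whose header starts with "Block HiggsBoundsInput"
awk '/^Block HiggsBoundsInput/{skip=1; next} /^Block/{skip=0} !skip' \
    SPheno.spc.BLSSM > SPheno.spc.BLSSM.clean
```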

In addition, we turned on the decays in case this was not done before: MadEvent reads the decay blocks from SPheno to know the widths of all particles. If those widths are not provided via the SLHA file, it is necessary to calculate them first with MadEvent before generating events.

When we have the spectrum file in the correct form, we can copy this file to the Cards directory as param_card.dat.

The other changes we have to make are small modifications to the run card: we want to generate one million events, and we want to apply a cut on the invariant mass of the lepton pair to get rid of the -peak. The number of events is set here.

And the cuts are applied here.
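Both modifications amount to editing two lines of run_card.dat, for instance with sed. The parameter names nevents and mmll follow common MadGraph run-card conventions; the stand-in card below contains only the two relevant lines:

```shell
# Stand-in for the two relevant lines of run_card.dat
cat > run_card.dat <<'EOF'
  10000 = nevents ! Number of unweighted events requested
  0.0   = mmll    ! min invariant mass of l+l- pair
EOF

# one million events and a 500 GeV cut on the dilepton invariant mass
sed -i 's/^ *[0-9]* *= *nevents/ 1000000 = nevents/' run_card.dat
sed -i 's/^ *[0-9.]* *= *mmll/ 500.0 = mmll/' run_card.dat
```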

We are now ready to generate the events. This can either be done again in the interactive mode by starting the following

or we can directly start the event generation with the following.

The two 0s are used as arguments because we do not want to make any further modifications to the param or run card, and we also do not want to run Pythia or any detector simulation. When starting MadEvent in that way, a long list of warnings appears on the screen.

The reason is that the UFO model files generated by SARAH can in general handle complex parameters. However, SPheno only prints the real parts if we do not turn on CP violation. The zeros for all imaginary parts are not given explicitly in the spectrum file. Thus, MadEvent does not find an input for the imaginary parts and takes them to be zero, as it should. In addition, MadEvent prints a warning for each parameter where this happens. Thus, we do not have to worry about these many warnings.

MadEvent will give a status update in a new browser window. When it is done, the events are saved in the Les Houches event format and can be processed further.

We are just going to make a plot to check if the peak shows up. This can, for instance, be done with MadAnalysis [319] which I assume here to be installed as well in $PATH. We make another short input file called plotMuMu.txt and save it in $PATH/MADANALYSIS. The content of the file is the following.
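A sketch of what plotMuMu.txt may look like, based on the description below; the path to the event file is an assumption about where MadGraph stored the run:

```shell
# Write a candidate plotMuMu.txt (MadAnalysis 5 command syntax;
# the event file path is an assumption)
cat > plotMuMu.txt <<'EOF'
import ppMuMu/Events/run_01/unweighted_events.lhe.gz
plot M(mu+ mu-) 100 500 3000 [logX]
submit ppMuMu
EOF
```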

In the first line we import the unweighted events which were generated by MadGraph and which are saved by default in the LHE format. In the second line, we make a histogram of the invariant mass of the muon pair in the mass range of 500 to 3000 GeV using 100 bins. For the x-axis we use a log scale (logX). Finally, everything is submitted to be evaluated by MadAnalysis, and the output directory is called ppMuMu. We run MadAnalysis on that file as follows.

The output of MadAnalysis is stored in $PATH/MADANALYSIS/ppMuMu and also contains the plot shown in Figure 10 with the expected peak at 2.5 TeV.

8. Example—Part V: Making Scans

We have learned in the last sections how SARAH can be used together with other tools to study all aspects of a model. Of course, it is often not sufficient to consider just one single parameter point. SUSY models like the B-L-SSM have, even in their constrained version, a large parameter space which needs to be explored. Thus, at some point one has to start making scans to check many different points. I will discuss two possibilities for performing scans: the first one uses only functions the Linux bash provides together with simple scripts. That might be sufficient to check quickly the dependence of a few observables on a single parameter. Afterwards, I will introduce the Mathematica package SSP which is a dedicated tool for more sophisticated scans.

8.1. Using Shell Scripts

Let us assume that one is just interested in the dependence of the two lightest Higgs masses on in a small range, starting from our parameter point EP1. In principle, one does not need any additional software to do a small scan; Linux provides everything which is needed. For this purpose we create a file called LesHouches.in.BLSSM_Template, which is the input file for EP1 with just one change: we replace the input value for by a unique string as follows.

Now, we can write a short bash script which makes a loop over all numbers from 1.2 to 1.3 in steps of 0.01 using the seq command. For each value we use the sed command to replace the string TBPINPUT in LesHouches.in.BLSSM_Template by the value of the loop variable and to generate a complete input file LesHouches.in.BLSSM in that way. We run SPheno with that file and use grep and sed to extract the masses of the two lightest Higgs states. These numbers are “piped” into two files called results_hX.dat with X = 1, 2 using >>. The full script called RunSPheno.sh reads as follows.
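A sketch of RunSPheno.sh along these lines is shown below. To keep the skeleton self-contained, the actual SPheno call is replaced by a stub which writes a single fake spectrum line, and only the lightest Higgs mass is extracted; the second file results_h2.dat works identically:

```shell
# Template with the placeholder string (in practice this is the full EP1 input)
cat > LesHouches.in.BLSSM_Template <<'EOF'
Block MINPAR  # input parameters
 7  TBPINPUT  # placeholder to be replaced by sed
EOF

rm -f results_h1.dat
for val in $(seq 1.20 0.01 1.30); do
  # replace the placeholder by the current value of the loop variable
  sed "s/TBPINPUT/$val/" LesHouches.in.BLSSM_Template > LesHouches.in.BLSSM
  # stub standing in for the actual SPheno run, e.g.: ./bin/SPhenoBLSSM
  echo " 25  1.25103906E+02  # hh_1" > SPheno.spc.BLSSM
  # grep the line via its comment, cut the PDG code and the trailing comment
  mh1=$(grep "# hh_1" SPheno.spc.BLSSM | sed "s/ *25 *//" | sed "s/#.*//")
  echo "$val $mh1" >> results_h1.dat
done
```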

We see here that a very handy method to extract single lines from the SPheno spectrum file is to use grep with the comments appearing in the spc file (#…). The sed commands after grep are used to cut away the PDG code and the comment appearing in the same line of the spectrum file; that is, the variable mh1 just contains a real number at the end.

The script has to be saved in the SPheno root directory where also LesHouches.in.BLSSM_Template is located. Otherwise, the paths must be adjusted accordingly. We run the script via the following.

When the script is finished, the file results_h1.dat just contains in each line a pair of the value and of the corresponding Higgs mass.

We can plot both masses as function of , for instance, with gnuplot which is also included in many Linux distributions. For this purpose we write a short input file (gnuplot_mh.txt) as follows.
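A sketch of gnuplot_mh.txt whose six lines match the line-by-line description in the text; the axis labels and the terminal settings are placeholders:

```shell
# Write gnuplot_mh.txt; the six lines follow the description in the text
cat > gnuplot_mh.txt <<'EOF'
set terminal postscript eps color
set output "TBpMH.eps"
set xlabel "tan(beta')"
set ylabel "m_h [GeV]"
unset key
plot "results_h1.dat" with lines, "results_h2.dat" with lines
EOF
```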

I used here basic gnuplot commands to adjust the output format (line 1), to set the name of the output file (TBpMH.eps), to label the axes (lines 3 and 4), to disable the legend (line 5), and to plot the content of the two files with our data (line 6). We run gnuplot on that file as follows

and get the plot shown in Figure 11.

There are now many possibilities to improve this ansatz. One can easily extend the script to run other codes besides SPheno; the scans can be varied by playing with seq, and more observables can be stored; the appearance of the plot can be improved by using the full power gnuplot provides to polish the layout; and so on. However, I think there is no need to reinvent the wheel again and again. There are public tools which can be used for scanning and plotting. I will now briefly discuss SSP, which is one of these tools.

8.2. Making Scans with SSP

A tool which is optimized for parameter scans using SPheno and the other tools discussed so far is the Mathematica package SSP (SARAH Scan and Plot). SSP provides functions for simple random or grid scans but can also make use of intrinsic Mathematica functions to sample the parameter space or to include constraints directly during the scan. I want to discuss two simple examples here. First, a linear scan in is performed. Second, I discuss a grid scan in the range . All other parameters are set to the values of EP1. For more complicated scans one can also study the examples which are delivered with SSP.

8.2.1. General Setup

First, we need a file which contains information about the location and usage of all the different tools. For this purpose, we rename the file DefaultSettings.in included in the SSP package to  DefaultSettings.in.BLSSM. The content should look like the following.

Of course, $PATH has to be replaced everywhere by the installation directories of the different tools. For SPheno, the absolute path to the executable has to be defined and the names of the input and output files have to be given. Also the path to the MicrOmegas executable is set. The names of the spectrum file used as input and of the output file written by MicrOmegas are the other pieces of information necessary to include MicrOmegas in the scan. In addition, one can define that MicrOmegas should only calculate the relic density if a specific particle is the LSP. In that case the PDG number has to be given, that is, either 1000022 for a neutralino LSP or 1000012 for a CP even sneutrino LSP. We use here ALL to calculate the relic density for any particle. One could also use the following

to just consider a subset of particles. The lines below give the commands to run HiggsBounds and HiggsSignals as explained in Section 7.1. Finally, to run Vevacious the path to the executable as well as the desired initialization file have to be given as done in the last two lines.

8.2.2. Defining a Scan

A second input file defines the scan we want to make. SARAH also writes templates for this file during the SPheno output which can be used as a starting point. The file names of these templates start with SSP_Template. We call the file for our examples here BLSSM_TBpMZp.m. The different parts are as follows.

At the very beginning, the file which contains the information about the installation of the codes involved in the scans is loaded. This is the file we set up in the first step. Then, identifiers for all scans which we want to make are defined using the list RunScans. As said above, we perform just two scans here, called MZpLinear and MZpTBpGrid, but there is in principle no limit on how many scans can be done within a single file. By default, SSP always runs SPheno. We also want to include HiggsBounds and HiggsSignals here and therefore set their flags to True. To include MicrOmegas as well, it would just be necessary to set that flag to True, too. However, this would slow down the scan significantly because different LSPs show up in the chosen range and MicrOmegas would need a long time to compile all amplitudes. Hence, I skip it for the example here, but for practical applications it can easily be included. For the same reason, I have also not included Vevacious in the scan.

Note that we applied all definitions to any scan defined in this file because we used DEFINITION[a_]. To use different options for the different scans, DEFINITION[$NAMEofSCAN] can be used.

Now, the main part follows, which defines all input parameters and ranges. SSP is very agnostic concerning the underlying model. Therefore, it is first necessary to tell SSP which blocks are actually needed for a scan before the values can be defined (DEFINITION[a_][Blocks] = ). These are the blocks which we discussed in Section 6.1.3. The main part of the input is setting the numbers for these blocks and their different entries. In this context, the blocks are defined as arrays: first the block number appears, and then we can give the numerical value. Fixed values are assigned by the flag Value. Thus, the blocks MODSEL, SMINPUTS, and SPhenoInput, which just come with fixed values, read as follows.

The other blocks showing up in the Les Houches file are those to set the parameters for the scans. Those are defined in a similar way as follows.

We see that we can use exactly the same blocks for both scans except for MINPAR. That means that the majority of blocks had to be defined just once, using again DEFINITION[a_]. For the two versions of MINPAR we gave the names of the scans as arguments. Also the scan ranges are defined easily.

The linear scan is set up by varying (MINPAR[8]) in the range between 2500 and 4000 using 40 steps with a linear distribution. The grid scan is set by varying and (MINPAR[7], MINPAR[8]) within the given limits and assuming linear distributions in both directions. There are also other options possible, for example, a logarithmic distribution (Distribution->LOG) or random distribution (Distribution->RANDOM). Also relations to other parameters can be given. For instance, to scale in a scan the same way as , one could use the following.

For the possibility to perform basic Markov chain Monte Carlo runs, to apply fits during the scan, or to sample the parameter space, I refer to the SSP manual and the examples which come with SSP.

Finally, we want to get some figures automatically when the scan is finished. For the linear scan we want to plot (i) the two lightest CP even sneutrinos, (ii) the lightest CP even and odd sneutrino, (iii) the two lightest neutralino masses, and (iv) the composition of the lightest neutralino. To make the plots a bit more appealing, we first generate a basic style (BasicStyle) which is applied to all figures. Since our figures are compositions of two or four plots, respectively, we also have to define styles with two (StyleDefault2) and four (StyleDefault4) colours. That is done by mapping our colours on the basic style. These definitions are so far purely Mathematica commands. When this is done, the plots themselves are defined quickly: we use the option P2D of SSP for 2-dimensional plots and set for each figure which parameters and observables should be shown, which style should be used, which label the -axis should have, and what the name of the output file should be.

Note that I used here the keyword UseLaTeX together with LaTeX syntax for the labels. By doing this, SSP calls the script fragmaster (http://www.ctan.org/tex-archive/support/fragmaster) which makes use of psfrag to get nice looking labels.

The plots for the grid scan are actually even simpler to define because we can use the same style for each plot. The flag P3D produces 3-dimensional plots (contour plots) based on the Mathematica function ListContourPlot. Thus, one can set the options for these plots by using the SetOptions command of Mathematica. We are again going to make four plots: (i, ii) the two lightest Higgs masses in the plane and the results of (iii) HiggsBounds and (iv) HiggsSignals in the same plane.

When we are done with setting up the entire scan, it is started by running Mathematica with the following.

The output is stored in the subdirectories $PATH/SSP/Output/MZpLinear and $PATH/SSP/Output/MZpTBpGrid. These directories contain not only the scan data saved in the Les Houches format (SpectrumFiles.spc) and in the Mathematica format (Data.m) but also the plots that we wanted to see. I show these in Figure 12 for the linear scan and in Figure 13 for the grid scan.

8.2.3. Using Data of a Scan in Mathematica

Of course, it is also possible to use the results of scans performed by SSP later in Mathematica. For this purpose SSP provides a function MakeSubNum to translate the data saved in the SLHA or Mathematica format into a list of Mathematica substitutions. These substitutions can then be used to apply cuts, to extract points, or to make more plots.

To load and format the data of the grid scan, we can either use

or

With both options we get an array of substitutions called SubData which we can use. However, there is one caveat: the data files for large scans are huge. These files include all information calculated by SPheno and the other tools. Often, not all of this information is really needed, but one is only interested in the behaviour of a subset of parameters or observables. Therefore, it often saves a lot of time and memory to extract the information which is actually needed from the big files and store it in smaller files. This can be done, for instance, under Linux with a small shell script using again grep and the comments appearing in each line of the SPheno output.

This script takes as argument the name of the file containing all spectra, extracts the data, and writes the necessary lines into another file called SmallSpectrum.out. We call this script extract.sh and save it in $PATH. It can then be used via the following.
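A sketch of such an extract.sh together with its usage; the grep patterns and the stand-in scan file are purely illustrative:

```shell
# Write extract.sh: keep only the lines needed later (here the two
# lightest Higgs masses, selected via their comments)
cat > extract.sh <<'EOF'
#!/bin/sh
grep -e "# hh_1" -e "# hh_2" "$1" > SmallSpectrum.out
EOF
chmod +x extract.sh

# stand-in for the big file containing all spectra of a scan
cat > SpectrumFiles.spc <<'EOF'
 25  1.25103906E+02  # hh_1
 35  1.05200000E+03  # hh_2
 36  1.10000000E+03  # Ah_2
EOF

./extract.sh SpectrumFiles.spc
```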

The small spectrum file is now loaded much faster as follows

and we can use it to make some more plots. For instance, to make a contour plot of the doublet fraction of the second lightest Higgs in the plane, we can use the following.

The obtained plot is shown in Figure 14.

We can also apply some cuts and collect points with a neutralino mass below 500 GeV

and check for which values of this occurs.

Thus, this is only the case for heavy masses. Similarly, one can also extract all points with a neutralino LSP by demanding that the lightest neutralino is lighter than the lightest CP even sneutrino

or check the lightest mass appearing for the second scalar.

The relevant information about this point is shown via the following.

We see that there are now infinitely many possibilities to study the data within Mathematica using Select or other Mathematica commands.

9. Summary

In the first part of this paper I have given an overview of what models SARAH can handle and what calculations it can do for these models. In addition, I have discussed to what other HEP tools the information derived by SARAH can be linked. In the second part I have discussed in great detail how all aspects of a SUSY model can be studied with SARAH and the related tools. For this purpose I chose the B-L-SSM as an example. The implementation of the B-L-SSM in SARAH was explained, and it was shown what can be done within Mathematica to gain some understanding of the model. Afterwards, I have explained how SARAH in combination with SPheno is used to calculate the mass spectrum, decays, flavour and precision observables, and the fine-tuning. The next step was to check parameter points with HiggsBounds and HiggsSignals for their Higgs properties, with Vevacious for their vacuum stability, and with MicrOmegas for their dark matter relic density. I have given two short examples for collider studies using SARAH model files. First, monojet events with WHIZARD were generated. Second, a simple dilepton analysis with MadGraph was done. Finally, I discussed how to perform parameter scans using either shell scripts or SSP.

This paper hopefully shows how helpful SARAH can be to study models beyond the SM or MSSM. Because of the very high level of automatization, the user can quickly get results with a precision which is otherwise only available for the MSSM. Also, the possibility of making mistakes is highly reduced compared to a home-brewed calculation. I hope that the detailed explanation of a specific example simplifies the first contact of interested users with the many different tools which are available today.

Appendices

A. Some More Details

I could not address all interesting topics in the main part. Therefore, I give in this appendix a few more details on selected topics.

A.1. Flags in SARAH Model Files

There are different flags to enable or disable distinctive features which might be present in some models: (i)AddDiracGauginos = True/False; default: False, it includes/excludes Dirac Gaugino mass terms, (ii)AddFIU1 = True/False; default: False, it includes/excludes Fayet-Iliopoulos -terms, (iii)NoU1Mixing = True/False; default: False, disables effects from gauge-kinetic mixing, (iv)IgnoreGaugeFixing = True/False; default: False, it disables the calculation of the gauge fixing and ghost terms. Note that this is just possible at tree-level. For loop calculations the ghosts are needed. Specific parts of the Lagrangian are turned off via the following: (i)AddTterms = True/False; it includes/excludes trilinear soft-breaking couplings, (ii)AddBterms = True/False; it includes/excludes bilinear soft-breaking couplings, (iii)AddLterms = True/False; it includes/excludes linear soft-breaking couplings, (iv)AddSoftScalarMasses = True/False; it includes/excludes soft-breaking scalar masses, (v)AddSoftGauginoMasses = True/False; it includes/excludes Majorana masses for gauginos, (vi)AddSoftTerms = True/False; it includes/excludes all soft-breaking terms, (vii)AddDterms = True/False; it includes/excludes all -terms, (viii)AddFterms = True/False; it includes/excludes all -terms. By default all terms are included. In particular the last two flags have to be used very carefully.

A.2. Parts of the Lagrangian in SARAH

SARAH saves the different parts of the Lagrangian which it has derived in different variables. This happens for all eigenstates ($EIGENSTATES) and the user has access to this information: (i)LagSV[$EIGENSTATES]: parts with scalars and vector bosons (i.e., kinetic terms for scalars), (ii)LagFFV[$EIGENSTATES]: parts with fermions and vector bosons (i.e., kinetic terms of fermions), (iii)LagSSSS[$EIGENSTATES]: parts with only scalars (i.e., scalar potential), (iv)LagFFS[$EIGENSTATES]: parts with fermions and scalars, (v)LagVVV[$EIGENSTATES]: parts with three vector bosons, (vi)LagVVVV[$EIGENSTATES]: parts with four vector bosons, (vii)LagGGS[$EIGENSTATES]: parts with ghosts and scalars, (viii)LagGGV[$EIGENSTATES]: parts with ghosts and vector bosons, (ix)LagSSA[$EIGENSTATES]: parts with scalars and auxiliary fields. In addition, the different steps to derive the Lagrangian of the gauge eigenstates are also saved in different variables: (i)superpotential: Superpotential, (ii)fermion-scalar interactions coming from the superpotential: Wij, (iii)F-terms: FTerms, (iv)scalar soft-breaking masses: SoftScalarMass, (v)gaugino masses: SoftGauginoMass, (vi)soft-breaking couplings: SoftW, (vii)kinetic terms for scalars: KinScalar, (viii)kinetic terms for fermions: KinFermion, (ix)D-terms: DTerms, (x)interactions between gauginos and a scalar and a fermion: FSGaugino, (xi)trilinear self-interactions of gauge bosons: GaugeTri, (xii)quartic self-interactions of gauge bosons: GaugeQuad, (xiii)interactions between vector bosons and gauginos: BosonGaugino.

A.3. A More Precise Mass Calculation

In some cases a numerically more precise calculation is needed to diagonalize mass matrices in SPheno. This is the case if the hierarchy in the mass matrix is very large; double precision with about 15 digits might then not be sufficient. The best example is models with R-parity violation where neutrinos and neutralinos mix. Another example is seesaw type-I-like models where TeV-scale right-handed neutrinos mix with the left-handed ones. In these cases one has to go for quadruple precision, which gives a precision of about 32 digits. To enable quadruple precision for specific masses, two small changes are necessary. (1) In SPheno.m, which is used to set up the SPheno output, one has to define for which particles the higher precision is needed. This is done with the variable QuadruplePrecision, which accepts a list of mass eigenstates. If we just want to have the masses of the neutrinos, which are called Fv in the considered model, with higher precision, the corresponding line reads as follows.
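Following this description, the corresponding entry in SPheno.m is simply the variable named above together with the list of states:

```mathematica
(* calculate the neutrino (Fv) masses with quadruple precision *)
QuadruplePrecision = {Fv};
```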

(2) We must change the Makefile of SPheno located in the src directory and remove the compiler flag -DONLYDOUBLE. This flag forces all calculations to be done with double precision only.

After this change, the routines necessary for the higher precision are compiled. To make sure that everything is consistent, it is a good idea to recompile the entire code after changing the Makefile.

A.4. More about Tadpole Equations and SPheno
A.4.1. Numerical Solutions

In the main part of this paper we solved the tadpole equations for the SPheno output with respect to parameters for which an analytical solution exists. This is not always possible. For instance, if one does not want to use such parameters as input but instead obtains the VEVs from the minimum conditions, an analytical solution does in general not exist. To solve the equations numerically and to define the initialization used by the Broyden routines employed for this purpose, SPheno.m has to contain the following lines.

The first line is the same as for the analytical approach and defines the parameters with respect to which the tadpole equations are solved this time. Without the other two lines, Mathematica's function Solve would try to find an analytical solution and fail; SARAH would then stop the output with an error message. Due to the second line, however, the attempts to solve the tadpole equations inside Mathematica are skipped. The third line provides the starting values used by the numerical routines to initialize the calculation. Of course, other reasonable choices would have been possible, for example relating the starting values to the running soft-breaking terms of the Higgs sector or to parameters which are now used as input.

Also constant values can be used as follows.

Usually, the time needed to find the solution changes only slightly with the chosen initialization values as long as they are not completely off. Note that all choices above would only find the solution which is closest to the initialization values. However, the equations are cubic in the VEVs, and in general there will be several solutions. Thus, it would be necessary to check if the vacuum found is the global one or at least long-lived. This could be done, for instance, with Vevacious.
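As a toy illustration in plain Mathematica (this is not SARAH syntax), a cubic tadpole-like equation has several real solutions, and a numerical root finder returns the one closest to the starting value:

```mathematica
(* a cubic "tadpole" equation with three real roots near -1.88, 0.35, and 1.53;
   FindRoot converges to the root closest to the starting value *)
tadpole[v_] := v^3 - 3 v + 1;
FindRoot[tadpole[v] == 0, {v, 2.0}]   (* root near v = 1.53 *)
FindRoot[tadpole[v] == 0, {v, -2.0}]  (* root near v = -1.88 *)
```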

A.4.2. Assumptions and Fixed Solutions

Assumptions. It is possible to define a list with replacements which are done by SARAH when it tries to solve the tadpole equations. For instance, to approximate some matrices as diagonal and to assume that all parameters are real, one could use the following.
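A sketch of such a list is given below; the option name AssumptionsTadpoleEquations follows SARAH's SPheno.m conventions, while Td and md2 are placeholder parameter names:

```mathematica
(* sketch: treat soft-breaking matrices as diagonal and all parameters as real *)
AssumptionsTadpoleEquations = {
   Td[a_, b_] /; a =!= b -> 0,   (* drop off-diagonal trilinear couplings *)
   md2[a_, b_] /; a =!= b -> 0,  (* drop off-diagonal soft masses *)
   conj[x_] -> x                 (* assume all parameters to be real *)
   };
```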

This has, of course, no impact on our example because these matrices do not show up in the tadpole equations. However, in the R-parity violating case with sneutrino VEVs such assumptions might help to find analytical solutions which do not exist in the most general case.

Fixed Solutions. There might be cases in which an analytical solution exists when some approximations are made, but Mathematica does not find this solution. Then, it might be useful to give the solutions as input in SPheno.m.  This can be done via the following.
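A sketch of such an input is shown below; the names UseGivenTadpoleSolution, SubSolutionsTadpolesTree, and SubSolutionsTadpolesLoop follow the conventions of model files shipped with SARAH, and sol1Tree, sol1Loop, and so forth stand for the analytical expressions of the solutions:

```mathematica
(* sketch: hand over the tadpole solutions instead of letting Solve derive them *)
UseGivenTadpoleSolution = True;
SubSolutionsTadpolesTree = {mHd2 -> sol1Tree, mHu2 -> sol2Tree};
(* the loop-corrected solutions contain the one-loop tadpoles Tad1Loop[i] *)
SubSolutionsTadpolesLoop = {mHd2 -> sol1Loop, mHu2 -> sol2Loop};
```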

Note that the solutions have to be given for the tree-level and the loop-corrected tadpole equations. In the loop-corrected tadpole equations the one-loop contributions are parametrized by Tad1Loop[i], where i is an integer counting the equations.

A.5. Thresholds in SUSY Models

Assumptions. It is possible to include threshold effects in the RGE evaluation with SPheno. I concentrate here on the simpler case where the gauge symmetry does not change. In that case SARAH can derive the RGEs for all scales from the RGEs of the highest scale above all thresholds as follows. (i) The numbers of generations of the fields which are supposed to be integrated out during the RGE evaluation are parametrized by new variables. All gauge group constants like the Dynkin index are expressed as functions of these variables, which are dynamically adjusted during the SPheno run when the RGEs cross the different thresholds. (ii) The superpotential and soft-breaking couplings which involve the heavy states are set to zero when a threshold is crossed. For instance, consider a Yukawa-like coupling which involves three generations of a heavy field: at the threshold of each generation, the corresponding row of the coupling matrix is set to zero. In addition, two assumptions have to be satisfied: (i) the difference between the masses of the scalar and fermionic components of the heavy superfield is negligible; that is, the masses coming from superpotential interactions are much larger than the soft-breaking terms; (ii) these masses are a consequence of bilinear terms in the superpotential. Both assumptions are fulfilled, for instance, for very heavy vector-like particles or for singlets which have a large Majorana mass.

Procedure. There are two steps necessary to implement thresholds according to the above assumptions. First, small changes in the model file of the considered model are necessary: the heavy states have to be “deleted” at the SUSY scale. This is done by adding the superfields to the array DeleteFields.
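For instance, assuming the heavy right-handed neutrino superfield of the model at hand is called NuR (a hypothetical name), the corresponding line would read:

```mathematica
(* remove the heavy superfield from all SUSY-scale calculations *)
DeleteFields = {NuR};
```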

This ensures that the heavy particles are not taken into account in the calculation of mass matrices, vertices, or loop corrections at the SUSY scale.

The second step is to add the thresholds to SPheno.m. For this purpose, the threshold scales have to be defined via the array Thresholds. In this array, the numerical value of each threshold scale has to be fixed by some parameter which is known to SPheno. In addition, the fields which should be integrated out at that scale must be stated. Afterwards, the boundary conditions for all threshold scales can be defined via the arrays BoundaryConditionsUp and BoundaryConditionsDown. The conditions in the first array are applied when running up from the SUSY scale to the GUT scale, and the conditions of the second array when running down from the GUT scale.

Seesaw Type-I. To exemplify these steps, I will briefly discuss the simplest model with a threshold scale: seesaw type-I. In this model, the MSSM particle content is extended by three generations of right-handed neutrino superfields. In addition, a neutrino Yukawa coupling Yv between the left and right neutrinos is present. The mass of the right neutrinos is fixed by a Majorana mass term Mv. We assume that this mass is well above the SUSY scale; that is, the right neutrinos should be integrated out and should not play any role in the low-energy spectrum. Integrating them out will generate the Weinberg operator, which in its supersymmetric version couples two left-lepton superfields with two up-Higgs superfields.

It might look a bit odd that the Majorana mass term and the Weinberg operator show up in the same superpotential. However, we will make sure that during the numerical analysis the Weinberg operator only gets initialized once the right neutrinos are integrated out. The following lines in the SPheno.m of the seesaw model take care of this.
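A sketch of the threshold definition is given below; the array name Thresholds and the use of MvIN and Fv follow the text, but the exact form of the entries should be checked against the seesaw model files shipped with SARAH:

```mathematica
(* sketch: one threshold scale per right-handed neutrino generation,
   fixed by the diagonal entries of the input value MvIN *)
Thresholds = {
   {Abs[MvIN[1, 1]], {Fv[1]}},
   {Abs[MvIN[2, 2]], {Fv[2]}},
   {Abs[MvIN[3, 3]], {Fv[3]}}
   };
```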

Here, we defined three threshold scales which are given by the diagonal entries of the input value of the Majorana mass matrix (MvIN[X,X]). At each scale the corresponding generation of the right-handed neutrino superfields is integrated out. Then, we initialize the three boundary conditions for each threshold scale when running up and down. While we do not need to define any boundary condition when running up, we initialize the Weinberg operator when running down: at the kth threshold the coefficients of the Weinberg operator receive a shift proportional to (Yv)ki (Yv)kj / Mk, where Mk is the kth eigenvalue of the running Majorana mass matrix, which in general is not diagonal.

B. Flags in SPheno Input File

There are many options which can be used in the block SPhenoInput in the Les Houches input file to set up the calculations and the output done by SPheno:
1: sets the error level; default is 0.
2: if 1, the SPA conventions are used; default is 0.
7: if 1, the two-loop Higgs masses are skipped; default is 0.
8: method to calculate the two-loop corrections; default is 3. The options are 1: fully numerical method; 2: semianalytical method; 3: diagrammatic calculation; 8/9: using results from the literature if available, where option 8 includes only a subset of the corrections.
9: if 1, the two-loop corrections are calculated in the gauge-less limit; default is 1.
10: if 1, a safe mode is used for the numerical derivative in the two-loop Higgs calculations; default is 0.
11: if 1, the branching ratios of the SUSY and Higgs particles are calculated; default is 1.
12: defines the minimum value for a branching ratio to be included in the output.
13: adjusts the three-body decays; 0: no three-body decays are calculated; 1: only three-body decays of fermions are calculated; 2: only three-body decays of scalars are calculated; 3: three-body decays of fermions and scalars are calculated; default is 1.
14: if 1, the running parameters at the mass scale of the decaying particle are calculated; otherwise, the parameters at the standard renormalization scale are used; default is 1.
15: defines the minimum value for a width to be included in the output.
31: positive values are used as GUT scale; otherwise a dynamical GUT scale fulfilling the given condition is used; default is −1.
32: if 1, strict unification, that is, g1 = g2 = g3, is forced; default is 0.
33: if set, a fixed renormalization scale is used.
34: sets the relative precision of the mass calculation.
35: sets the maximal number of iterations in the calculation of the masses; default is 40.
36: sets the minimal number of iterations before SPheno stops because of a tachyon in the spectrum; default is 5.
37: defines whether the CKM matrix is taken to be in the up- (1) or down- (2) quark sector; default is 1.
38: sets the loop order of the RGEs; 1 or 2 can be used; default is 2.
39: if 1, the output is written in SLHA1 format; default is 0.
41: sets the width of the Z boson; default is 2.49 GeV.
42: sets the width of the W boson; default is 2.06 GeV.
50: if 1, negative fermion masses are rotated to real ones by multiplying the rotation matrix with i; default is 1.
51: if 0, the parameters are not rotated into the SCKM basis in the spectrum file; default is 0.
52: if 1, a negative mass squared is always ignored and set to 0; default is 0.
53: if 1, a negative mass squared at the low scale is always ignored and set to 0; default is 0.
54: if 1, the output is written even if there has been a problem during the run; default is 0.
55: if 0, the loop corrections to all masses are skipped; default is 1.
57: if 0, the calculation of the low-energy observables is skipped; default is 1.
58: if 0, the calculation of the shifts in the boundary conditions at the SUSY scale is skipped; default is 1.
60: if 0, possible effects from kinetic mixing are neglected; default is 1.
61: if 0, the RGE running of the SM parameters is skipped in a low scale input; default is 1.
62: if 0, the RGE running of the SUSY parameters to the low scale is skipped for the calculation of the flavour and precision observables; default is 1.
63: if 0, the RGE running of the SM parameters to the low scale is skipped for the calculation of the flavour and precision observables; default is 1.
64: if 1, the running parameters at the renormalization scale are written in the spectrum file; default is 0.
65: can be used if several independent solutions to the tadpole equations exist; an integer is used to pick one solution; default is 1.
75: if 1, a file containing all parameters in WHIZARD format is created; default is 1.
76: if 1, input files for HiggsBounds and HiggsSignals are written; default is 1.
86: sets the maximal width which is taken as "invisible" in the output for HiggsBounds and HiggsSignals; default is 0.
88: sets a maximal mass of particles which are included in the loop calculations. Note that this option must be turned on in SARAH first.
89: sets the maximal mass for scalars which is treated as numerical zero.
95: if 1, mass matrices at one loop are forced to be real; default is 0.
400: fixes the initial step-size in the numerical derivative for the purely numerical method to calculate the two-loop Higgs masses; default is 0.1.
401: fixes the initial step-size in the numerical derivative for the semianalytical method to calculate the two-loop Higgs masses; default is 0.001.
510: if 1, SPheno writes the solution of the tadpole equations at tree level (this is needed for Vevacious); default is 1.
515: if 1, SPheno writes all running values at the GUT scale; default is 0.
520: if 1, SPheno writes the HiggsBounds blocks (effective coupling ratios of Higgs particles to SM fields); default is 1.
525: if 1, SPheno writes the size of all different contributions to the Higgs diphoton rate; default is 0.
530: if 1, the tree-level values of the tadpole equations appear in the output instead of the loop-corrected ones; default is 0.
550: if 0, the fine-tuning calculation is skipped; default is 1.
551: if 1, one-loop corrections to the Z mass are included in the fine-tuning calculation; default is 0.
999: if 1, debug information is printed on the screen; default is 0.
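For illustration, a fragment of such a block in the Les Houches input file, using some of the flags above with example values, could look like this:

```
Block SPhenoInput    # SPheno options
  1   0      # error level
 11   1      # calculate branching ratios of SUSY and Higgs particles
 13   3      # three-body decays of fermions and scalars
 38   2      # two-loop RGEs
 55   1      # include loop corrections to masses
 76   1      # write HiggsBounds/HiggsSignals input files
```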

C. Model Files for the B-L-SSM

The full model file for the implementation of the B-L-SSM in SARAH is shown in this appendix. In addition, I summarize all changes in particles.m and parameters.m compared to the MSSM. This includes the definition of new parameters and the changed properties of parameters already present in the MSSM. For the particles I restrict myself to the intermediate states and the mass eigenstates after EWSB but skip the gauge eigenstates. Finally, the additional input file to create a SPheno version for the B-L-SSM is given in Appendix C.4.

C.1. Model File

C.2. Parameters Files

(i)New gauge couplings.

(ii)New gauge boson mass.

(iii)New gaugino masses.

(iv)New gauge boson mixing angle.

(v)New angle to give ratio of VEVs.

(vi)New superpotential and soft-breaking parameters.

(vii)New VEVs.

(viii)New rotation matrices in matter sector.

(ix)Modified rotation matrix in gauge sector.

(x)Modified rotation matrix in matter sector.

C.3. Particles Files

(i)Intermediate states.

(ii) New Mass Eigenstates. More interesting are the mass eigenstates. The additional information given for those is used in the different outputs for SPheno and the MC tools. We begin with the new states which are not present in the MSSM.

Some comments are in order: (a) PDG and PDG.IX:  …(b) FeynArtsNr:  …(c) ElectricCharge:  …(d) OutputName:  …(iii) Modified Mass Eigenstates. For other states, only more generations appear compared to the MSSM. Therefore, it is only necessary to extend the lists of the PDGs.

C.4. SPheno File

D. Models Included in SARAH

I show here the list of models which are included in the public version of SARAH. Additional models created and provided by users can be found on the SARAH wiki: https://sarah.hepforge.org/trac/wiki.

D.1. Supersymmetric Models

(i) Minimal supersymmetric standard model:
(a) with general flavour and CP structure (MSSM),
(b) without flavour violation (MSSM/NoFV),
(c) with explicit CP violation in the Higgs sector (MSSM/CPV),
(d) in SCKM basis (MSSM/CKM).
(ii) Singlet extensions:
(a) next-to-minimal supersymmetric standard model (NMSSM, NMSSM/NoFV, NMSSM/CPV, and NMSSM/CKM),
(b) near-to-minimal supersymmetric standard model (near-MSSM),
(c) general singlet extended supersymmetric standard model (SMSSM),
(d) Dirac NMSSM (DiracNMSSM),
(e) next-to-minimal supersymmetric standard model with inverse seesaw (inverse-seesaw-NMSSM).
(iii) Triplet extensions:
(a) triplet extended MSSM (TMSSM),
(b) triplet extended NMSSM (TNMSSM).
(iv) Models with R-parity violation:
(a) bilinear RpV (MSSM-RpV/Bi),
(b) lepton number violation (MSSM-RpV/LnV),
(c) only trilinear lepton number violation (MSSM-RpV/TriLnV),
(d) baryon number violation (MSSM-RpV/BnV),
(e) μνSSM (munuSSM).
(v) Additional U(1)'s:
(a) U(1)-extended MSSM (UMSSM),
(b) secluded MSSM (secluded-MSSM),
(c) minimal B-L model (B-L-SSM),
(d) minimal singlet-extended B-L model (N-B-L-SSM).
(vi) SUSY-scale seesaw extensions:
(a) inverse seesaw (inverse-Seesaw),
(b) linear seesaw (LinSeesaw),
(c) singlet extended inverse seesaw (inverse-Seesaw-NMSSM),
(d) inverse seesaw with B-L gauge group (B-L-SSM-IS),
(e) minimal B-L-R model with inverse seesaw (BLRinvSeesaw).
(vii) Models with Dirac gauginos:
(a) MSSM/NMSSM with Dirac gauginos (DiracGauginos),
(b) minimal R-symmetric SSM (MRSSM),
(c) minimal Dirac gaugino SSM (MDGSSM).
(viii) High-scale extensions:
(a) seesaw types 1–3 (Seesaw1, Seesaw2, Seesaw3),
(b) left/right model (Omega).
(ix) Others:
(a) MSSM with nonholomorphic soft-terms (NHSSM),
(b) MSSM with colour sextets (MSSM6C).

D.2. Nonsupersymmetric Models

(i) Standard model (SM), standard model in CKM basis (SM/CKM).
(ii) Inert Higgs doublet model (Inert).
(iii) B-L extended SM (B-L-SM).
(iv) B-L extended SM with inverse seesaw (B-L-SM-IS).
(v) SM extended by a scalar colour octet (SM-8C).
(vi) Two-Higgs-doublet models (THDM, THDM-II, THDM-III, THDM-LS, THDM-Flipped).
(vii) Singlet extended SM (SSM).
(viii) Triplet extended SM (TSM).

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author is very grateful to Mark D. Goodsell and Kilian Nickel for providing routines for the two-loop calculation via the SARAH-SPheno interface. In particular the author thanks Werner Porod, who sparked his interest in supersymmetry. This was the starting point of the entire development of SARAH. The author is in debt to Martin Hirsch, Avelino Vicente, Daniel Busbridge, James Scoville, Alexander Voigt, Peter Athron, Roberto Ruiz de Austri Bazan, Moritz McGarrie, Lorenzo Basso, and many others for testing of SARAH, helpful suggestions, and also their bug reports. Finally, it has been a pleasure for the author to work with Jose Eliel Camargo-Molina, Ben O'Leary, and again Werner Porod and Avelino Vicente on Vevacious and FlavorKit. The author thanks Manuel Krauss, Lukas Mitzka, Tim Stefaniak, and Avelino Vicente for their remarks on the paper.

Endnotes

  1. See for instance [320] for an overview of SUSY searches.
  2. Another method to deal with gauge-kinetic mixing was proposed in [321].
  3. For simplicity, I’ll use here “Wilson coefficients” also for the coefficients of LFV operators which are more commonly called “form factors”.
  4. Models with a threshold scale where heavy superfields are integrated out are Seesaw1, Seesaw2, Seesaw3.
  5. A model with threshold scales where the gauge group changes is the left-right symmetric model called Omega in SARAH.
  6. Gauge bosons get this name with a prefix V, gauginos with a prefix f, and ghosts with the prefix g. For instance, the gluon and gluino are called VG and fG by this definition, and the ghost gG.
  7. The charge indices of non-Abelian groups start with the first three letters of the gauge group’s name followed by an integer, that is, col1 is used for colour indices.
  8. Note, gBp is not possible because this is already the name of the ghost!
  9. One has to be careful with the rotations of charged vector bosons like the W boson. Here, the mass matrix in the gauge basis is diagonal, and the rotation matrix calculated from diagonalizing this matrix would just be the identity matrix. In that case it is inevitable to give the standard parametrization as input in numerical studies. For neutral vector bosons such problems won't show up.
  10. “\” is interpreted by Mathematica as an escape sequence. Therefore, “\” has to be replaced by “\\” in the LaTeX commands.
  11. To switch to the new scheme, either at the beginning of a SARAH session or in the model files, one has to set UsePDGIX = True;
  12. While there is not much space to play with the gauge couplings, the other parameters could be chosen differently, of course.
  13. I’m going to concentrate here on SPheno. Readers interested in FlexibleSUSY might check the homepage: http://flexiblesusy.hepforge.org/.
  14. The convention is that the block name for the output is used together with the suffix IN.
  15. The exact value is .
  16. Given by .
  17. Given by .
  18. Given by .
  19. Given by .
  20. Given by .
  21. Gluon fusion and vector boson fusion from 50–1000 GeV, all other channels from 50–300 GeV. For 13 and 14 TeV only cross sections are provided for  GeV. That’s the reason why no HiggsLHC13 and HiggsLHC14 blocks are given.
  22. Sum of CP even and physical CP odd scalars, that is, we have 4 + 2 = 6 in the B-L-SSM. In the case of CP violation, the number of physical mass eigenstates has to be used.
  23. It is recommended by the HiggsSignals authors at the moment to use peak because for the mass centred method not much data is provided by ATLAS and CMS.
  24. The p-value in this context is the ratio of the χ² divided by the number of degrees of freedom (ndf). For ndf, HiggsSignals takes the number of observables. To change this and to define the number of free parameters in the model, Nparam has to be set in the file usefulbits_HS.f90 in the HiggsSignals source code.
  25. An estimate for this uncertainty could be obtained by varying the renormalization scale in SPheno and checking the impact on the masses.
  26. For more complicated processes it is often useful to make two runs via integrate (process) iterations = 15 : 20000, 25 : 20000 to further improve numerics.
  27. One also has to be careful because MadGraph uses the PDGs of sfermions according to their flavour. However, in the case of flavour violation this is no longer possible. Moreover, SPheno versions generated by SARAH always sort the sfermions by their mass. Hence, it is always safer to use the option -modelname. One can use the command display particles in the interactive MadGraph session to see the names of all particles present in the loaded model.
  28. Unfortunately, I don’t have experience with Mac. However, I expect that MacOS provides similar functions.