Abstract

In the Next-to-Minimal Supersymmetric Standard Model (NMSSM) the lightest CP-odd Higgs boson (a₁) can be very light. As a consequence, in addition to the standard charged Higgs boson (H±) decays considered in the MSSM for a light charged Higgs (m_H± < m_t), the branching fraction for H± → W± a₁ can be dominant. We investigate how this signal can be searched for in tt̄ production at the Large Hadron Collider (LHC) in the case that a₁ → bb̄, with the bb̄ pair giving rise to a single b-jet, and discuss to what extent the LHC experiments are able to discover such a scenario with an integrated luminosity of ~20 fb−1. We also discuss the implications of the possible Higgs signal observed at the LHC.

1. Introduction

With the successful start-up of the LHC and the intriguing results from the first year of data-taking at the center of mass energy of 7 TeV and an integrated luminosity close to 5 fb−1 for each of the ATLAS and CMS experiments, the ongoing run in 2012 is set to be a milestone in particle physics. The possible signal for a Higgs boson around 125 GeV may or may not be confirmed and the search for physics Beyond the Standard Model (BSM) will continue. In contrast to a neutral Higgs boson, which if discovered may need to be analyzed in detail with regard to its branching fractions into various channels in order to determine whether it is the Standard Model (SM) Higgs boson or not, the discovery of a charged Higgs boson would be an unmistakable sign of BSM physics.

The charged Higgs boson arises in theories with more than one Higgs doublet. The prime example is the MSSM [1, 2], which has two Higgs doublets, leading after electroweak symmetry breaking to two CP-even Higgs bosons (h, H) and one CP-odd (A) in the case of CP conservation, as well as two charged states (H±). In this case the two Higgs doublets are required by supersymmetry, with one of them giving masses to the up-type fermions and one to the down-type ones. For a complete introduction to the Higgs sector in the MSSM we refer to [3].

The main reason for introducing supersymmetry (for a general introduction to supersymmetric theories we refer to [4]) is to solve the so-called hierarchy problem, that is, why the scale of the electroweak interaction is so much smaller than the cut-off of the SM, normally taken to be the Planck mass where gravity becomes the dominant force and the SM breaks down. With the Higgs boson being a scalar particle, the higher-order corrections to its mass are proportional to this cut-off, and with the SM only being an effective theory this cut-off dependence cannot be renormalized away. In a supersymmetric theory this problem is essentially solved by the introduction of the fermionic partners of the Higgs fields, which only get logarithmic corrections to their masses, thereby avoiding fine-tuning. In turn this means that the Higgs boson masses are also protected from receiving quadratic corrections as long as supersymmetry is unbroken or only softly broken.

In addition to solving the hierarchy problem, supersymmetry also offers a candidate for cold dark matter [5, 6] in the case that R-parity is preserved, which in turn is introduced to avoid terms in the Lagrangian that would otherwise mediate proton decay. In this case the lightest supersymmetric particle (LSP) is stable and generically it has the right mass and cross-section to constitute the observed dark matter. Supersymmetry also improves the unification of the gauge forces at a hypothesized grand unification scale, although the unification is not exact and it does depend on the details of the spectrum of SUSY particles.

As is well known, the MSSM by itself is not without problems. Leaving the question of the precise mechanism for supersymmetry breaking aside, the MSSM faces the so-called μ-problem. This relates to the magnitude of the dimensionful μ-parameter, which couples the two Higgs doublets to each other in the superpotential. In order to avoid large cancellations between this contribution to the Higgs masses and the soft supersymmetry breaking terms, as well as to have a phenomenologically viable supersymmetric theory (mainly a large enough chargino mass), the magnitude of μ should be of the order of the electroweak or supersymmetry breaking scales. The problem is then that there is no a priori reason for this parameter to have any particular value; in principle it could be anything up to the Planck scale, so why is it similar to the electroweak or supersymmetry breaking scales?

In the NMSSM the μ-problem is solved by introducing an additional Higgs singlet into the theory. After supersymmetry breaking this field gets a vacuum expectation value (vev) that effectively acts as a μ-term. The original μ-term in the superpotential can then be set to zero without spoiling the viability of the theory. For a more detailed review of the NMSSM we refer to [7, 8].

The additional Higgs singlet has important consequences for the phenomenology of the Higgs sector. In the MSSM, the masses of the heavy Higgs bosons (H, A, H±) are closely related to each other, as they originate from the same (second) Higgs doublet if viewed in the Higgs basis, where only one (the first) of the Higgs doublets has a vev. For example, at tree-level the masses of the CP-odd and charged Higgs bosons are related by m_H±² = m_A² + m_W². As a consequence the decay H± → W± A is typically not open.
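This has a simple kinematic consequence that can be checked numerically: if m_H±² = m_A² + m_W², then the splitting m_H± − m_A never reaches m_W for any value of m_A, so the on-shell two-body decay is closed. A minimal sketch (the W mass value is illustrative):

```python
import math

M_W = 80.4  # GeV, W boson mass (illustrative value)

def m_charged(m_A):
    """Tree-level MSSM relation m_{H+-}^2 = m_A^2 + m_W^2."""
    return math.sqrt(m_A**2 + M_W**2)

# For any CP-odd mass the splitting m_{H+-} - m_A stays below m_W,
# so the on-shell two-body decay H+- -> W+- A is kinematically closed.
for m_A in (100.0, 200.0, 500.0):
    assert m_charged(m_A) < m_A + M_W
```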

In the NMSSM the additional Higgs singlet means that there is one more CP-even and one more CP-odd field, with a separate mass scale, introduced into the Higgs sector. As a consequence, the by now three CP-even and two CP-odd electroweak states will mix into the respective mass eigenstates. Thus the mass relations from the MSSM will be altered. This is particularly evident in the CP-odd sector, where the lightest state a₁ may now be much lighter than the charged Higgs boson (even after taking experimental constraints into account, as discussed below), opening up the possibility for the decay H± → W± a₁ to be dominant. In turn this means that the search for charged Higgs bosons, in t-quark decays for example, has to be widened to also include this decay channel.

The decay H± → W± a₁ has already been considered at different levels of detail [9–11] in the literature, and there are constraints from the DELPHI experiment [12] as well as from the CDF experiment [13]. In this paper we want to focus on the region in parameter space where the a₁ mass is above the bb̄ threshold but still so close to it that the two b-quarks will fragment into a single b-jet. The viability of scenarios with light a₁'s has also been considered in [14–18].

Our paper is organized as follows. In the next section we give some basic properties of the Higgs sector in the NMSSM that are relevant to our discussion. We then discuss the constraints on the parameter space in Section 3, including the latest results from the LHC. In Section 4 we illustrate how the H± → W± a₁ signal can be searched for in tt̄ production, taking into account the appropriate backgrounds. Section 5 contains a discussion of the implications of the possible Higgs signal from the ATLAS and CMS experiments, and in Section 6 we summarize and conclude.

2. Basic Properties of the NMSSM

We consider the Z₃-symmetric version of the NMSSM with the superpotential given by

W = λ S H_u · H_d + (κ/3) S³ + W_MSSM,

where W_MSSM is the superpotential of the MSSM with μ set to zero. The soft supersymmetry breaking potential relative to the MSSM is then given by V_soft = V_Higgs + Ṽ_soft, where the part V_Higgs depending only on the Higgs fields is given by

V_Higgs = m_S² |S|² + (λ A_λ S H_u · H_d + (κ/3) A_κ S³ + h.c.).

In addition Ṽ_soft contains all the dependence on the other soft supersymmetry breaking parameters: the gaugino masses M₁, M₂, and M₃, the trilinear couplings, the squark masses, and finally the slepton masses. In the following we will assume minimal flavour violation, so that the sfermion mass matrices are diagonal and the trilinear couplings are proportional to the corresponding Yukawa coupling matrices, A_u ∝ Y_u and so forth.

After electroweak symmetry breaking, and assuming that CP is conserved, the Higgs sector will contain three CP-even Higgs bosons (h₁, h₂, h₃), two CP-odd (a₁, a₂), and one charged (H±), where the states are ordered in terms of increasing mass. In the same way as in the MSSM, the minimization conditions for the Higgs potential allow one to trade the soft Higgs mass parameters for the doublet vev v = 246 GeV and tan β. Similarly m_S² can be expressed in terms of the singlet vev s, which in turn gives rise to the effective μ-parameter, μ_eff = λs. All in all this leaves us with six unknown parameters describing the Higgs sector of the NMSSM at tree-level: λ, κ, tan β, μ_eff, A_λ, and A_κ. Below we will trade the latter two parameters for the masses m_H± and m_a₁.

As already alluded to, the mass eigenstates are mixtures of the electroweak eigenstates. More specifically, the CP-even mass eigenstates h₁, h₂, h₃ are obtained from the CP-even components of the two doublets and the singlet by a rotation with a mixing matrix S, and similarly the CP-odd mass eigenstates a₁, a₂ are obtained from the physical doublet CP-odd state and the singlet CP-odd state with a mixing matrix P. Together with the ratio of the two doublet vevs, tan β (or equivalently the rotation angle needed to go to the Higgs basis, where only one of the doublets has a vev), the mixing matrices S and P specify the reduced couplings to fermions and gauge bosons as given in Table 1.

3. Experimental Constraints

In this section we will explore to what extent the process we are interested in is constrained by existing experimental data. Since we are interested in a light H± (with m_H± < m_t) and a light a₁, there are constraints both from collider experiments and from low-energy flavour experiments. However, before going into the various constraints we will specify the scenarios that we have considered and then come back to the question of experimental constraints.

3.1. Specification of SUSY Scenario Considered

In the following we will consider a variant of the well motivated mh-max scenario in the MSSM [19], similarly to what was done in [17]. Thus we will consider a universal scale M_SUSY for the sfermion masses at the supersymmetry breaking scale. In other words we assume, as already stated, that the sfermion mass matrices are diagonal, and furthermore that all diagonal entries are equal to M_SUSY, which we keep fixed at 1 TeV. In addition we assume that the gaugino masses M₁, M₂, and M₃ are fixed and related as in the constrained MSSM, where supersymmetry breaking is assumed to be mediated by gravity. Finally, contrary to what was done in [17], we will let the trilinear couplings vary so that the amount of stop mixing is unconstrained.

For the Higgs sector we will let all six parameters vary freely. However, as was done in [17], we will trade the parameter A_λ for m_H± and the parameter A_κ for m_a₁, using an iterative procedure starting from the tree-level relation m_H±² = M_A² + m_W² − λ²v² (with M_A² = 2μ_eff(A_λ + κμ_eff/λ)/sin 2β) and the tree-level CP-odd mass matrix, where the latter gives the masses of the mass eigenstates a₁ and a₂ after diagonalisation. Thus the parameters we consider, with their respective ranges, are:

The limits for the various parameters have been chosen as follows. For λ and κ we impose perturbativity up to the GUT scale, which effectively means that any value outside the above regions is bound to fail. (In addition some points inside these regions also fail because of this requirement.) The lower limits on tan β and m_H± are dictated by experimental constraints. The upper limit on μ_eff is not a hard one but follows from the implicit assumption that it should be of the order of the electroweak scale, whereas the upper limit on m_H± is given by the condition that the decay t → bH± should be open. The reason for letting the remaining parameters vary freely is mainly that this decreases the correlations between the masses of the Higgs bosons, as will be discussed more below. Finally, the lower limit on m_a₁ is chosen in order to have a₁ → bb̄ open, whereas the upper limit is set by the requirement that H± → W± a₁ should be open.

In order to calculate the resulting models from the inputs we use the package NMSSMTools version 3.2.0 [20, 21] with default settings. Among other things this means that we impose perturbativity of the model up to the GUT scale. Finally, in all scans we generate ~1M points (with flat priors in the parameters considered) which fulfill the theoretical constraints implemented in NMSSMTools.
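As an illustration, the flat-prior sampling step of such a scan can be sketched as follows. The ranges below are placeholders, not the ones used in the paper; in the actual scan each drawn point is passed to NMSSMTools for the spectrum calculation and the theoretical checks:

```python
import random

# Illustrative flat-prior ranges (placeholders, NOT the paper's actual ranges);
# the six Higgs-sector parameters are those listed in Section 3.1.
RANGES = {
    "lambda":   (0.1, 0.7),
    "kappa":    (-0.3, 0.3),
    "tan_beta": (2.0, 40.0),
    "mu_eff":   (125.0, 300.0),  # GeV
    "m_Hpm":    (90.0, 160.0),   # GeV
    "m_a1":     (10.0, 80.0),    # GeV
}

def draw_point(rng=random):
    """Draw one parameter point with a flat prior in each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

# Only points fulfilling the theoretical constraints would be kept.
points = [draw_point() for _ in range(1000)]
```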

3.2. Current Experimental Constraints

The most important constraints come from the direct searches for Higgs bosons, for which we use the package HiggsBounds version 3.7.0 [22, 23]. In addition there are also constraints from direct searches for supersymmetric particles, various flavour constraints, and in principle also the anomalous magnetic moment of the muon as well as the relic density of dark matter. We have not applied the latter two constraints for the following reasons. To investigate the amount of dark matter in the various models one would also need to vary the gaugino masses, as was done in [24]. Since we keep these fixed we have not applied the dark matter constraints. In the same vein we have not applied the constraint from the anomalous magnetic moment of the muon, since this depends on the masses of the scalar partners of the muon and the neutrinos, apart from requiring μ_eff to be positive.

When it comes to the flavour constraints the situation is more involved, in that the various constraints have different levels of model dependence. On the one hand there are constraints from tree-level mediated processes such as B → τν, which only depend on the Higgs sector, and on the other hand there are constraints from loop-mediated processes such as b → sγ and B_s → μ⁺μ⁻, which depend on details of the supersymmetric sector of the model. In the following we will limit ourselves to applying the most severe constraints: from B → τν, which limits the available parameter space in m_H± and tan β, and from B_s → μ⁺μ⁻, which puts limits on a light a₁. The latter constraint is especially important since we will consider a light a₁, which is very constrained by the data from LHCb and CMS [25, 26]. Finally we apply the direct constraints from searches for supersymmetric particles. For all constraints except those on the Higgs bosons we use NMSSMTools version 3.2.0.

The results from the scan are displayed in Figure 1, with black points being viable models from a theory point of view but excluded by the direct searches for Higgs bosons, and coloured points being allowed by the same constraint. Of primary interest are the allowed regions in the m_H±–tan β plane. As is clear from the figure, there is a distinct region of points at small tan β allowed by all constraints (indicated by green colour) for essentially any value of m_H±. The same is also true in the constrained scenario where the trilinear couplings are fixed, as was already noted in [17]. However, in contrast to the constrained scenario, there are regions with larger tan β that are allowed as well.

Looking at the m_H±–tan β plane, one clearly sees the constraint from B → τν for intermediate tan β (shown as blue points). For larger tan β there is a cancellation between the SM and charged Higgs contributions, which makes this region allowed by B → τν, but instead the constraints from B_s → μ⁺μ⁻ come into play (red points). It should be noted that the points excluded by B_s → μ⁺μ⁻ are plotted on top of those excluded by B → τν. Similarly, the constraints from searches for supersymmetric particles are plotted (in yellow) on top of the constraints from B-decays, but with the cuts applied there are hardly any points excluded by this constraint. Finally we note that there are allowed points in parameter space for essentially any value of m_H±.

Turning to the m_a₁–tan β plane, we see that the low-m_a₁ region is allowed by all constraints up to intermediate tan β, whereas for larger tan β there are points allowed by direct Higgs boson searches but not allowed by B → τν and B_s → μ⁺μ⁻. Given the uncertainties related to the indirect constraints from B-decays, we conclude that there is a region in parameter space with a light a₁ and m_H± < m_t that should be searched for by the ATLAS and CMS experiments. It should also be noted that values of m_a₁ both above and below the bb̄ threshold are allowed by the constraints.

Before turning to the signal of interest, that is, H± → W± a₁, we also show in Figure 1 the effects of the various constraints when projected onto the λ–κ and μ_eff–tan β planes. From the first of these plots one clearly sees the constraint on λ and κ which arises from requiring perturbativity up to the GUT scale. From the second we see that the constraints imply an upper limit on μ_eff, which essentially follows from the radiative corrections to the lightest CP-even Higgs mass becoming small or even negative for large μ_eff relative to the value M_SUSY = 1 TeV that we are using. We also see that there are hardly any experimental constraints in the μ_eff–tan β region considered. On the other hand, if we would extend μ_eff to values smaller than 125 GeV, then all those points would be excluded by searches for supersymmetric particles.

As promised we turn now to the resulting branching ratios for the decay H± → W± a₁. From Figure 2 we observe the following general feature: the branching ratio can be large as soon as the channel is open (m_H± > m_W + m_a₁), except for large tan β where the decay H± → τν becomes dominant. Concentrating on those points that pass all the constraints considered, we also see from the lower right plot that the branching ratio remains large as long as tan β is not too large and the a₁ is light. Thus we can conclude that not only is this region of parameter space allowed, but the decay H± → W± a₁ is also dominant there. In the next section we will exemplify how this so far unexplored region of parameter space can be probed by searching for H± → W± a₁ with a₁ → bb̄ in tt̄ production at the LHC.

Before ending this section we also show in Figure 3 the branching ratios for the decay chains of interest, that is, t → bH±, H± → W± a₁, and a₁ → bb̄, as a function of m_H± when restricting to parameter space points with a light a₁. As can be seen from the figure, sizeable combined branching ratios are still allowed when the decay H± → W± a₁ is included. This should be compared with the experimental constraints from ATLAS and CMS, which so far have assumed BR(H± → τν) = 1, giving a limit on BR(t → bH±) of a few percent [27, 28].

4. Search Strategy

In the following section we will perform a signal-to-background analysis for three different charged Higgs masses, the highest being 150 GeV. For definiteness we have fixed tan β when simulating the signal, but the end results will not depend on this value. The mass of the a₁ is set to 11 GeV throughout, as an example of a small mass just above the bb̄ threshold. The three charged Higgs masses are chosen to illustrate different kinematic properties: at the lowest mass the b-jet from the a₁ decay will be rather soft whereas the b-jet from the top decay will be harder; at 150 GeV the situation will be the opposite; and in the intermediate case both jets can be relatively hard and in addition the available phase space will be largest.

We aim to reconstruct the signal process where one top quark decays via t → bH±, H± → W± a₁, with the W decaying leptonically (W → ℓν with ℓ = e, μ), and the other top decays hadronically via t → bW, W → qq̄', as illustrated in Figure 4. All cross-sections have been corrected for these enforced W decays as well as with a factor of 2 taking into account the process with interchanged roles between the t and the t̄. Because the a₁ is supposed to decay close to threshold to bb̄, we aim for a reconstruction where the two b's from the a₁ are clustered together to give a single b-jet.

As backgrounds to the process we consider the irreducible tt̄bb̄ production as well as, because of its much larger cross-section, tt̄ production with one jet being accidentally b-tagged (weighted with a mistagging probability, assumed to be 0.01 [29, 30]). In order to also include the single top contributions to the background we have simulated the full processes, but in the following we will denote them as tt̄ and tt̄bb̄ for simplicity. For other reducible backgrounds, such as W + jets, we assume that similar procedures can be applied as in the tt̄ cross-section determination. For example, requiring two b-tagged jets reduces the W + jets background relative to tt̄ production to about 10% [31]. Several cuts are applied to strengthen the signal and to suppress the background, as will be discussed in the following.

A center-of-mass energy of 8 TeV at the Large Hadron Collider is assumed throughout the whole analysis. For the generation of the hard matrix elements we use MadGraph 5 [32] with a fixed renormalization and factorization scale (set to MadGraph's default) and the CTEQ6L1 parton distribution functions. To supply MadGraph with the proper parameters of the signal we have for simplicity used a two Higgs doublet model with the masses given above, as implemented in the Two Higgs Doublet Model Calculator 2HDMC [33]. The only important difference to the NMSSM then arises from the a₁ decay, giving an extra factor (cf. Table 1). All other steps needed to generate complete events, such as radiation, underlying event, and hadronization, are carried out using Pythia 8. We start from bare samples of 100,000 events for the different signals as well as for the tt̄bb̄ background, whereas for the tt̄ background we have 50 times higher statistics to start with. There is no detector simulation included in this exploratory analysis, but we have simulated b-tagging in a simplified way as detailed below.

For all processes we use the leading order (LO) cross-sections obtained from MadGraph. On the one hand this means that there is an overall scale factor which is more or less the same for both signal and background. On the other hand the rates will be lower than what would have resulted if higher order cross-sections had been used. All in all this means that the signal over background rates we find will be underestimated in this respect. For example, we get a LO cross-section for tt̄ production in the SM of 138 pb, to be compared with the NNLL resummed result of 232 pb [34].

4.1. Reconstruction of the Leptonic W

To reconstruct the leptonically decaying W, we first need to identify the charged lepton (e or μ) associated with the hard process. The transverse momentum (p_T) and pseudorapidity (η) distributions are shown in Figure 5. After applying cuts on the lepton kinematics, we require the summed p_T of the surrounding particles in a cone in ΔR around the lepton to be less than 10 GeV in order to call it isolated. (The numerical values for the cut and the cone size have been optimized by observing the changes in efficiency and purity when varying them.) For the whole event we then require precisely one isolated lepton in the final state.
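The isolation criterion can be sketched as follows. The cone size 0.4 is an illustrative choice and the particle dictionaries are our own convention; the paper optimizes both the cone size and the 10 GeV threshold:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation dR = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, particles, cone=0.4, max_sum_pt=10.0):
    """Cone isolation: summed pT of the other particles within dR < cone.

    The cone size is an illustrative value; in the paper both the cone size
    and the 10 GeV threshold are tuned for efficiency and purity.
    """
    sum_pt = sum(
        p["pt"]
        for p in particles
        if p is not lepton
        and delta_r(lepton["eta"], lepton["phi"], p["eta"], p["phi"]) < cone
    )
    return sum_pt < max_sum_pt
```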

The next step in reconstructing the leptonic W is to identify the missing transverse energy (MET) of the event with the transverse momentum of the neutrino. Assuming the W to be on mass-shell and using a massless neutrino then leaves two possible solutions for the longitudinal momentum of the neutrino,

p_z^ν = [μ p_z^ℓ ± sqrt(μ² (p_z^ℓ)² − (p_T^ℓ)² ((E^ℓ)² (p_T^ν)² − μ²))] / (p_T^ℓ)²,

with μ = m_W²/2 + p_T^ℓ · p_T^ν, the dot denoting the scalar product of the transverse momentum vectors.

In order to have a more accurate reconstruction of the neutrino we have applied a cut on the missing transverse energy. In addition, different selection criteria for the choice of the sign have been examined, for example based on the invariant mass of the reconstructed system. Among these, the most viable one turned out to be a simple selection of the solution with the smaller |p_z^ν|, which is correct in roughly three quarters of the signal events.
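A sketch of this reconstruction step, assuming the lepton four-momentum and the MET vector are given (the W mass value and the helper names are our own):

```python
import math

M_W = 80.4  # GeV, on-shell W mass used in the constraint

def neutrino_pz(lep, met_x, met_y):
    """Two solutions for the neutrino longitudinal momentum from the
    on-shell W mass constraint with a massless neutrino.

    `lep` is the charged lepton four-momentum (px, py, pz, E), taken massless;
    the neutrino transverse momentum is identified with the MET vector.
    For a negative discriminant (off-shell W or resolution effects) the
    real part of the complex solutions is returned twice.
    """
    px, py, pz, E = lep
    pt2 = px**2 + py**2
    mu = 0.5 * M_W**2 + px * met_x + py * met_y
    a = mu * pz / pt2
    disc = a**2 - (E**2 * (met_x**2 + met_y**2) - mu**2) / pt2
    b = math.sqrt(disc) if disc > 0.0 else 0.0
    return a - b, a + b

def pick_solution(solutions):
    """Select the solution with the smaller |pz| (correct ~75% of the time)."""
    return min(solutions, key=abs)
```

As a sanity check, for a leptonic W decay generated exactly on-shell, one of the two returned solutions reproduces the true neutrino pz.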

4.2. Jet Reconstruction and b-Tagging

For the reconstruction of the hadronic part of the event, we consider different jet clustering schemes (anti-kT [35], Cambridge/Aachen [36], kT [37, 38]) as well as different cone sizes. All particles, except neutrinos and the isolated lepton, inside a fiducial rapidity range are fed into FastJet [39, 40]. The resulting clustered jets are required to pass a minimum p_T requirement. Afterwards, a simplified b-tagging is simulated for all jets in the central region by matching the jets in ΔR to the b-quarks of the hard process. All jets within the matching distance of a b-quark are then classified as b-jets.

The jet algorithms require one to specify the distance measure used when calculating the jet measure. Since we want to cluster the bb̄ pair from the a₁ into one jet, it is crucial that the ΔR distance between the two would-be subjets is not too large. Figure 6 shows the ΔR between the two b-quarks at parton level for the different charged Higgs masses. As can be seen from the figure, for reasonable clustering cone sizes (around 0.4 to 0.6) and with m_a₁ in the region of interest for this analysis, the two b-quarks will most likely be clustered together as a single jet.
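The scale of this opening angle can be estimated with the usual rule of thumb ΔR ≈ 2m/p_T for a boosted two-body decay, using the a₁ momentum in the H± rest frame. The masses below are the illustrative values used in this section:

```python
import math

def two_body_momentum(M, m1, m2):
    """Momentum of the two decay products in the rest frame of a parent of mass M."""
    k = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(k) / (2.0 * M)

def approx_delta_r(m, pt):
    """Rule-of-thumb opening angle dR ~ 2 m / pT of a boosted two-body decay."""
    return 2.0 * m / pt

m_Hpm, m_W, m_a1 = 150.0, 80.4, 11.0
p_a1 = two_body_momentum(m_Hpm, m_W, m_a1)  # a1 momentum in the H+- rest frame
dr_bb = approx_delta_r(m_a1, p_a1)          # ~0.4: fits inside a typical jet cone
```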

As an ideal reconstruction will now give rise to three b-jets, the correct reconstruction of the H± will be enhanced by the identification of the "wrong" b-jet, which comes from the top quark together with the hadronically decaying W. The strategy to achieve this here is to first find the pair of untagged jets with invariant mass closest to the W mass and then combine this pair with the b-jet that gives a mass closest to the top mass. This b-jet is then excluded in the H± reconstruction, reducing the number of b-jets which have to be considered (in an event with a so far correct clustering in the desired way) to two. In addition to this, we put cuts on the quality of the reconstruction by requiring the reconstructed W and top masses shown in Figure 7 to lie in windows around the nominal masses. We have checked that the reconstruction of the W and top masses is quite independent of the choice of jet clustering scheme and cone size, as is also shown in Figure 7. Only the 0.3 cone size gives a slightly inferior top reconstruction, but then the a₁ will also not give rise to a single b-jet. If not mentioned differently, we thus use the anti-kT algorithm in the following.
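The combinatorial step just described can be sketched as follows, with four-vectors as (E, px, py, pz) tuples and illustrative reference masses:

```python
import math
from itertools import combinations

M_W, M_TOP = 80.4, 173.0  # GeV, reference masses (illustrative values)

def inv_mass(*jets):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    E, px, py, pz = (sum(j[i] for j in jets) for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def hadronic_top(untagged, btagged):
    """Pick the untagged pair closest to m_W, then the b-jet bringing the
    three-jet mass closest to m_top; that b-jet is excluded from the H+-
    reconstruction and the remaining b-jets are returned."""
    w_pair = min(combinations(untagged, 2), key=lambda p: abs(inv_mass(*p) - M_W))
    b_top = min(btagged, key=lambda b: abs(inv_mass(b, *w_pair) - M_TOP))
    remaining = [b for b in btagged if b is not b_top]
    return w_pair, b_top, remaining
```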

Using the jet reconstruction outlined above we obtain the jet and b-jet multiplicities shown in Figure 8. As is clear from the figure, the number of jets peaks around the expected five, and the number of b-jets is typically smaller than ideal, which is due to the limited b-tagging region enforced. Also note the large decrease in b-jet multiplicity for the tt̄ sample from 2 to 3. The 3 b-jet part of the tt̄ sample arises from gluon splitting into bb̄, and we do not take it into account below since that would amount to double counting with the tt̄bb̄ background.

Given the large background from tt̄ in the two b-jet sample, we resort to requiring at least three b-jets. Assuming that one of the b-jets has been identified as coming from the hadronically decaying top, this leaves two b-jets that may originate from the H± decay. Due to the strong dependence of the hardness of the b-jet from the a₁ decay on the H± mass, we in general consider both of these remaining jets as possible candidates for the reconstruction. The resulting distributions when combining with the leptonically decaying W are shown in Figure 9. In the reconstruction we thus require at least three b-jets for the signals and the tt̄bb̄ background, while we require two b-jets in the tt̄ sample. For the latter we then assume that any of the non-b-jets inside the b-tagger acceptance can be mistagged with a probability of 0.01 per jet [29, 30].
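The mistag weighting of the tt̄ sample can be applied per event; a simple sketch, assuming independent mistags (the closed-form expression is our assumption, the 0.01 rate is from the text):

```python
def mistag_weight(n_light_jets, p_mistag=0.01):
    """Probability that at least one of n light jets inside the b-tagger
    acceptance is mis-identified as a b-jet, assuming independent mistags.
    Used to weight 2 b-jet tt events into the >= 3 b-jet selection."""
    return 1.0 - (1.0 - p_mistag) ** n_light_jets
```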

4.3. Signal Significance and Reach

To estimate the signal reach as well as its significance, we choose a common mass window of 90 to 160 GeV in which to integrate the signal as well as the backgrounds. We correct for b-tagging efficiencies, which we assume to be 0.6 per b-jet. The resulting cross-sections are shown in Table 2.

Signal significances for the three different mass cases are then calculated for an integrated luminosity of 20 fb−1 and summarized in Table 3. The table also shows the branching ratios at the simulated parameter points, as well as the extrapolations to the branching ratios necessary for a discovery. From the table it is clear that the discovery reach is maximal for the intermediate charged Higgs mass. For smaller or larger m_H±, the limitations in phase space will reduce the branching ratios for H± → W± a₁ and t → bH±, respectively. In the former case this means that the standard decay channel will also be significant.
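Significance estimates of this kind can be sketched with a simple S/√B counting measure; the extrapolation to the branching fraction needed for discovery assumes the signal rate scales linearly with it. The numbers below are placeholders, not those of Tables 2 and 3:

```python
import math

def significance(s, b):
    """Naive counting significance S/sqrt(B)."""
    return s / math.sqrt(b)

def br_scale_for_discovery(s, b, target=5.0):
    """Factor by which the combined signal branching fraction would need to
    grow to reach the target significance, assuming the signal rate scales
    linearly with the branching fraction."""
    return target / significance(s, b)

# Placeholder cross-sections after all cuts (fb) and an integrated
# luminosity of 20 fb^-1; these are NOT the numbers of Tables 2 and 3.
lumi, sigma_s, sigma_b = 20.0, 5.0, 100.0
s, b = sigma_s * lumi, sigma_b * lumi
```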

Before ending this section we note that, similarly to the standard decay modes of the charged Higgs boson, it should be possible to use the spin correlations between the decay products of the two top quarks as a way of enhancing the signal [41–43].

5. Compatibility with Possible Higgs Signal

Recently the ATLAS and CMS experiments announced the combined results of the SM Higgs searches using 5 fb−1 of data from 2011 [44, 45]. In short they can be summarized as follows: the CMS experiment has ruled out a large range of masses at the 95% confidence level as expected, whereas in the low-mass region they have not been able to make any exclusion at all, even though they expected to be able to do so at the 95% confidence level; the ATLAS experiment has similarly ruled out several mass regions at the 95% confidence level, whereas they had expected to rule out a larger region.

Instead, both experiments have found an excess of events, in the regions around 126 GeV and 124 GeV for ATLAS and CMS, respectively, which when corrected for the so-called look-elsewhere effect and combining the different channels has a comparable statistical significance for each of the two experiments. Although not statistically significant, these results have stirred a lot of excitement in the high-energy physics community [24, 46–54]. It will most likely not be possible to draw any final conclusions about whether this is a true signal or not before the end of the 2012 run, which will give another ~20 fb−1 of data.

One of the most important properties of the possible signal at the LHC is that the rate in the γγ channel is similar to what is expected from the SM. Therefore we start by considering the would-be signal from the h₁ as well as the h₂ compared to what is expected in the SM for a Higgs boson (H_SM) with the same mass. Assuming that gluon-gluon fusion dominates the production, which we have verified is always the case in the scenarios we consider, this ratio is given by

R_γγ = [σ(gg → h_i) BR(h_i → γγ)] / [σ(gg → H_SM) BR(H_SM → γγ)] = [Γ(h_i → gg) BR(h_i → γγ)] / [Γ(H_SM → gg) BR(H_SM → γγ)],

where in the second equality, following for example [47, 50], we have made the implicit assumption that the differences in radiative corrections for the production and decay processes cancel in the ratio.

The results obtained for this quantity when using the same scan as in Section 3 are displayed in Figure 10. From the figure it is clear that if the possible signal seen at the LHC is the h₁, then R_γγ only becomes sizeable at the upper end of the accessible h₁ masses, and there appears to be an upper bound on m_h₁ in our scan. However, the difference in mass compared to the possible observation is so small that it should be considered to be within the theoretical uncertainty. (For example, including the full one-loop corrections and the two-loop ones from the top and bottom Yukawa couplings could push the mass a bit higher, in compliance with the possible signal. Similarly, increasing M_SUSY would also increase m_h₁.) In any case it is clear that it is hardly possible to have a light a₁ if it is the h₁ that has been seen at the LHC.

Next we turn to the possibility that it is the h₂ that has been seen by the LHC experiments. The results of the scan are also shown in Figure 10. As can be seen from the figure, the results are more promising in this case: there are points, also for small and intermediate tan β, that have R_γγ in the region indicated by the LHC experiments.

In order to explore this possibility more closely, we show in Figure 11 the results when fixing the a₁ mass to 11 GeV, as was done in Section 4, for the three charged Higgs masses considered there. As can be seen from the figure, for the intermediate charged Higgs mass it is possible to reach the required R_γγ. However, it should be noted that in this case, as also shown in the figure, the branching fraction for H± → W± a₁ is quite small, meaning that the standard decay channels can be used, even though with slightly reduced branching fractions.

The last logical possibility would be that it is the h₃ that has been observed by the CERN experiments. However, in the scenarios we consider it turns out that the corresponding R_γγ is always small.

The trends seen in Figure 11 can be understood on more general grounds from the difficulty of having a light a₁ and at the same time a large R_γγ. The first problem is that unless the triple Higgs coupling g_{h a₁a₁} is small, the decay h → a₁a₁ will become dominant. Looking at the structure of g_{h a₁a₁}, which for example can be found in [8], this means that the relevant combinations of couplings and mixing angles typically have to be small. Secondly, the decay into a₁a₁ will also dominate unless the corresponding reduced coupling given in Table 1 is small. There are essentially three ways to achieve this. If the singlet-doublet mixing is small, then the a₁ will be mainly singlet-like and decouple; however, the branching fraction for H± → W± a₁ will then also be small. The second possibility is that the doublet components of the observed state are small, but then the would-be signal would be mainly singlet-like and not produced in the first place. Finally, one combination of mixing-matrix elements could be small, but then the complementary combination would have to be large, giving an increased coupling to down-type fermions. All in all this means that it is difficult to have a light a₁ that is not decoupled and still have a large R_γγ, although it cannot be completely ruled out.

6. Summary and Conclusions

We have considered a well motivated class of supersymmetric extensions of the Standard Model with a nonminimal Higgs sector, namely the CP-conserving NMSSM. In these types of models the additional Higgs singlet can modify the phenomenology of the Higgs sector in many different ways. In this paper we have specifically addressed the possibility of having a light CP-odd Higgs boson a₁ close to, but still above, the bb̄ threshold. This in turn means that a light charged Higgs boson, with mass m_H± < m_t, can decay into W± a₁ in addition to the standard decay channels, thus invalidating the interpretations of charged Higgs boson searches that assume BR(H± → τν) = 1.

When investigating the viability of these types of scenarios we have found that the experimental constraints from direct searches are quite weak, even when taking the latest constraints from the LHC into account. The constraints from indirect searches in B-decays are more constraining but also more model dependent. Even when including the results from the most important ones, namely B → τν and B_s → μ⁺μ⁻, we still find an allowed region of parameter space with a light a₁. This is precisely the same region as the one where the decay H± → W± a₁ can be dominant.

The phenomenology of these types of scenarios is special in that, due to its low mass, the a₁ will decay into a single b-jet. Even so, we have shown that it is possible to reconstruct the decay H± → W± a₁ using standard jet finding algorithms when the W decays leptonically. This requires using the missing transverse momentum to calculate the four-momentum of the W, which can then be combined with the b-jet to give a mass peak at m_H±. The other top quark is assumed to decay hadronically, giving an additional handle to identify the events of interest. The most important background is thus the irreducible one from tt̄bb̄ production, but we have also taken into account the tt̄ background by considering the possibility of mistagging ordinary jets as b-jets.

Based on our study we find that, with an integrated luminosity of 20 fb−1, it should be possible to discover a charged Higgs boson in these types of scenarios as long as the combined branching fraction for the decay chain t → bH±, H± → W± a₁, a₁ → bb̄ is larger than 0.01.

Finally, we have also investigated the phenomenological consequences of the possible Higgs signal seen at the LHC for the types of scenarios we consider. We find that it is difficult to have a light a₁ that is not decoupled and at the same time obtain an SM-like combined production and decay rate into γγ for one of the CP-even Higgs bosons with mass ~125 GeV. As a consequence, we have not been able to find regions of parameter space where the decay H± → W± a₁ dominates that at the same time are compatible with the possible Higgs signal. This means that, irrespective of whether the possible Higgs signal is substantiated or not, the LHC experiments should be able to either discover or put very tight constraints on a light charged Higgs boson also in the NMSSM.

Note Added

On July 4, 2012 the ATLAS and CMS experiments announced the discovery of a new Higgs-like particle in the same mass region as the previous observations already cited in the text.

Acknowledgments

The authors thank Oscar Stål for helpful communications. Furthermore, they would like to thank Stefan Prestel for his kind advice and help, especially regarding the technical realization. This work is supported in part by the Swedish Research Council Grant 621-2011-5333.