Abstract

Cyber risk assessment requires defined and objective methodologies; otherwise, its results cannot be considered reliable. The lack of quantitative data can be dangerous: if the assessment is entirely qualitative, subjectivity will loom large in the process. Too much subjectivity in the risk assessment process can weaken the credibility of the assessment results and compromise risk management programs. On the other hand, obtaining an amount of quantitative data large enough to allow reliable extrapolations and predictions is often hard or even unfeasible. In this paper, we propose and study a quantitative methodology to assess the potential annualized economic loss of a company. In particular, our approach relies only on aggregated empirical data, which can be obtained from several sources. We also describe how the method can be applied to real companies, in order to customize the initial data and obtain reliable and specific risk assessments.

1. Introduction

The process of risk assessment and treatment is fundamental to the implementation of an effective cyber security program and plays a crucial role in national and international data protection regulations. A complete understanding of cyber risks is necessary to ensure that the security controls an organization has in place are sufficient to provide an appropriate level of protection against cyber threats.

However, defining reliable models for cyber risk exposure is still an open problem. Existing models [14] suffer from important limitations that, for example, hinder the development of the cyber insurance market [5]. First of all, cyber risk evaluation and the study of its related impact are mostly performed in qualitative ways, which are usually affected by errors and misrepresentations of the risk. They also exhibit several disadvantages, such as the approximate nature of the achieved results and the difficulty of performing a cost-benefit analysis [6]. Quantitative approaches, in turn, are usually based on scoring systems that associate a certain score with a technological/organizational context. The idea is commendable but, as will be clarified below, the way in which it is commonly implemented does not give a realistic measure of the cyber risk and the related impact. Reliable models for measuring cyber risk are not available or have significant limitations, such as the lack of generalization and the fact that most works consider only the analysis of past data to derive probabilistic models, while it is not clear how to obtain reliable estimates about future events [7, 8]. Moreover, some quantitative approaches, such as the well-known HTMA (How to Measure Anything) [9] and FAIR [10] methods, rely on a subjective evaluation of the likelihood of an event (in particular, of the probability of a successful attack due to a certain threat) given by a team of experts [11, 12]. These kinds of probabilities usually show some level of inaccuracy and should be replaced by more objective models. The impact of the set of considered threats is then measured in terms of economic loss, which is also subjectively estimated. Based on these premises, there is a clear need to improve the quantitative evaluation of an organization's cyber risk through dynamic monitoring of the attacks and vulnerabilities the organization is experiencing.

A recent study, focused on data breaches in IT and information security, was presented in [13]. However, it is clear that the whole panorama involves other sectors (for instance, business and finance companies) and other cyber threats. This means that, in order to obtain reliable representations of this wide scenario, more general analyses are required. A step towards the generalization of this work was taken in [14], where the authors consider the whole range of cyber risks and associated costs, by analyzing sufficiently large available datasets. This kind of approach is not new in the literature (see, for instance, [8, 15] for a study based on data breaches): through the analysis of a large amount of historical data, probabilistic models of the registered cyber crime events can be derived. However, as the authors of these papers clearly state, such an approach has some important limitations. Indeed, cyber crime is a dynamic and continuously evolving phenomenon: considering only past data is clearly not enough to obtain reliable estimates about future events.

For instance, let us suppose that a company wants to determine its risk exposure and eventually define possible strategies to increase its security against cyber crime. In such a case, relying on probabilistic models alone is unlikely to be enough, since specific countermeasures and the consequent attack strategies should be taken into account. One example of this process is given in [12], where the authors consider a dynamic model that describes possible interactions between a defender and an attacker. In such a model, each countermeasure is considered as a software update of an existing cyber security system and is characterized by an effectiveness score; the optimal strategy is thus obtained by taking into account the interplay between the defender and the attacker.

Previous works highlight that the overall cyber risk scenario, with its heterogeneous ensemble of possible players and events, is a phenomenon from which reliable probabilistic models are difficult to derive. In particular, applications to real case studies might strongly depend on specific aspects of the involved subjects. For instance, conducting a risk assessment with a qualitative approach may report a good level of controls for malware defense, but this does not provide evidence on whether these security measures are effective in counteracting malware attacks. As a consequence, inaccurate prioritization could result in valuable resources being spent on risk areas that are not very important and do not deserve such resources, and vice versa.

As mentioned above, one way to reduce the uncertainty in this scenario is to rely on expert opinions [11, 12]. This kind of approach is followed in HTMA [9], where each threat in a given set is characterized by a likelihood value and the corresponding impact. Basically, the likelihood is the probability of a successful attack due to each threat, while the impact expresses the subsequent economic loss. In particular, in the HTMA method, the impacts are estimated through interviews with experts who, for each threat, are asked for the 90% confidence range of the possible economic losses. Their answers are then used to define the random variable associated with the impact; a log-normal distribution is assumed for each of the considered threats.

In this paper, we use the HTMA approach but treat experts' opinions only as a starting point to be progressively and continuously improved through the acquisition of new and updated information on the organization's behavior against cyber threats. More precisely, we propose to exploit a combination of probabilistic techniques and objective data as an input to HTMA. Our goal is to define a methodology for fitting a probabilistic model to a real case study. The procedure we propose is substantially based on checking the effectiveness of the applied measures through a data-driven approach. As previous studies by the SANS Institute already outline [16], one of the key focuses of an effective cyber risk assessment is measuring the effectiveness of the implemented security controls. Indeed, as in most risk assessment methods, a risk matrix representing only impact and likelihood is a commonly used tool to assess cyber risk. According to previous approaches, these values are obtained through the analysis of a sufficiently large amount of historical data. However, such values hardly reflect the actual state of the particular entity under analysis. Indeed, reliably assessing risk exposure without taking into account the effectiveness of the applied measures is a hard task. For this purpose, our analysis considers the CIS 20 Critical Security Controls (Center for Internet Security, https://www.cisecurity.org/), but the same procedure can be applied to any other suitable set of security issues.

The paper is organized as follows. In Section 2, we briefly review the probabilistic model we consider, based on the HTMA approach. In Section 3, we describe how empirical data can be used to fit the model to an actual entity (for instance, a company). In Section 4, we show an application of this methodology to some real case scenarios. Finally, in Section 5, we draw some concluding remarks.

2. Probabilistic Model

In this section, we briefly describe the method we use for evaluating the cyber risk exposure of a company. We first present a generic model that fits our case study and give the basic notions that are fundamental for our analysis. We then briefly introduce the HTMA methodology and describe how it can be used to derive a quantitative measure of the risk exposure.

2.1. General Model

We consider a set of events $E_1, E_2, \ldots, E_n$, the $i$-th one with probability of occurrence $p_i$, for $i = 1, \ldots, n$; in this paper, we assume that the events are independent, which means that the occurrence of $E_i$ does not influence the occurrence of all the other events $E_j$, with $j \neq i$. More complex scenarios, in the presence of correlation between different events, will be analyzed in future work. Each event $E_i$ is also linked to a random variable $X_i$, which is associated with the impact and is described through a probability distribution $f_{X_i}$: each time the event $E_i$ occurs, it has an impact $x_i$, whose particular value depends on $f_{X_i}$. Then, we denote as $X = \sum_{i=1}^{n} X_i$ the random variable defined as the sum of the impacts of the considered events. Our goal is to characterize the statistical properties of $X$.

Actually, unless specific choices for the distributions of the $X_i$ are made, providing a closed form for the distribution of $X$ is unfeasible. However, we can proceed with numerical simulations and estimate its cumulative distribution function (CDF). More precisely, we can simulate $N$ different scenarios: in each scenario, we simulate the occurrences of the events $E_1, \ldots, E_n$, and for each occurred event, we randomly extract the corresponding impact, according to $f_{X_i}$. Let $x^{(j)}$ be the resulting total impact obtained in the $j$-th simulated scenario; then, the CDF of $X$ can be estimated as

$$\hat{F}_X(a) = \frac{1}{N} \sum_{j=1}^{N} \mathbb{1}\left\{ x^{(j)} \leq a \right\}. \tag{1}$$

It is clear that, in order to obtain reliable estimates via numerical simulations, the value of $N$ must be sufficiently large.

One crucial quantity in our analysis is the loss exceedance curve, defined as

$$\mathrm{LEC}(a) = P(X \geq a). \tag{2}$$

By definition, the value of $\mathrm{LEC}(a)$ corresponds to the probability that the total impact of the considered events is equal to or exceeds a threshold value $a$. If, for instance, we have $\mathrm{LEC}(a) = 0.2$ for some threshold $a$, then this means that the probability that the set of considered events results in a total impact $X \geq a$ is equal to 0.2. Also by definition, we have $\mathrm{LEC}(0) = 1$ and $\lim_{a \to \infty} \mathrm{LEC}(a) = 0$.
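As a minimal illustration of these definitions, the following Python sketch (our own; the placeholder totals stand in for the simulated values $x^{(j)}$) estimates the empirical CDF and the loss exceedance curve from an array of simulated total impacts.

```python
import numpy as np

def empirical_cdf(totals, a):
    """Estimate F_X(a) = P(X <= a) from simulated total impacts."""
    return np.mean(np.asarray(totals) <= a)

def loss_exceedance(totals, a):
    """Estimate LEC(a) = P(X >= a) from simulated total impacts."""
    return np.mean(np.asarray(totals) >= a)

# Placeholder totals (in M$) standing in for N = 100000 simulated scenarios.
rng = np.random.default_rng(0)
totals = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # stand-in for x^(j)

print(loss_exceedance(totals, 0.0))   # = 1 by definition (impacts are nonnegative)
print(loss_exceedance(totals, 1.0))   # probability of a total loss of at least 1 M$
```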

2.2. Cyber Risk Assessment Based on HTMA

In this section, we briefly describe the use of the HTMA method for assessing cyber risk. We consider a set of $n$ different cyber threats and, with reference to the notation introduced in the previous section, define $E_i$ as the event that the $i$-th threat has resulted in an economic loss for the company. Thus, $p_i$ is the probability of such an occurrence, for instance, in the time interval of one year.

Following the HTMA methodology, each economic impact is obtained on the basis of interviews: in particular, for each threat, experts are asked for the 90% confidence interval of the economic losses that their company might sustain in case the considered threat occurs. The impact of each cyber threat is then associated with a range of the form $[L_i, U_i]$, corresponding to the 90% confidence interval. In particular, as mentioned, the HTMA method assumes that each economic impact follows a log-normal distribution, whose underlying normal mean and standard deviation are obtained as

$$\mu_i = \frac{\ln(U_i) + \ln(L_i)}{2} \tag{3}$$

and

$$\sigma_i = \frac{\ln(U_i) - \ln(L_i)}{2 \cdot 1.645}, \tag{4}$$

respectively. Thus, starting from the likelihood values and the corresponding economic impact distributions, Monte Carlo simulations can be performed to obtain the loss exceedance curve $\mathrm{LEC}(a)$.
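As a small illustration of (3) and (4) (our sketch, with an arbitrary example interval), the log-normal parameters can be computed and sanity-checked as follows.

```python
import numpy as np
from scipy.stats import norm

Z90 = norm.ppf(0.95)  # ≈ 1.645, half-width of a 90% confidence interval in z-units

def lognormal_params(low, high):
    """Mean and standard deviation of the underlying normal distribution,
    given the 90% confidence interval [low, high] of the economic loss."""
    mu = (np.log(high) + np.log(low)) / 2.0
    sigma = (np.log(high) - np.log(low)) / (2.0 * Z90)
    return mu, sigma

# Example: a threat whose 90% confidence interval is [0.5 M$, 5 M$].
mu, sigma = lognormal_params(0.5, 5.0)

# Sanity check: about 90% of the sampled impacts should fall inside the interval.
rng = np.random.default_rng(1)
samples = rng.lognormal(mu, sigma, size=100_000)
print(np.mean((samples >= 0.5) & (samples <= 5.0)))  # ≈ 0.90
```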

In each simulated scenario, we consider all the identified threats. For each threat, we randomly sample a variable with continuous uniform distribution in $[0, 1]$. If such a value exceeds $1 - p_i$ (which happens with probability $p_i$), then the corresponding impact is sampled according to the corresponding log-normal distribution; otherwise, it is set to 0. Then, by computing the sum of all the impacts, we obtain the value of the total impact. If we consider a sufficiently large number of scenarios, then we can obtain a reliable estimate of the loss exceedance curve.
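The whole procedure can be sketched in a few lines of Python; the threat list below uses placeholder likelihoods and confidence intervals of ours, not the values used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated scenarios

# Placeholder threats: (likelihood p_i, 90% CI [L_i, U_i] of the loss, in M$).
threats = [
    (0.30, (0.10, 1.0)),
    (0.15, (0.50, 5.0)),
    (0.05, (1.00, 10.0)),
]

Z90 = 1.6449  # z-value delimiting a 90% confidence interval

totals = np.zeros(N)
for p, (low, high) in threats:
    mu = (np.log(high) + np.log(low)) / 2.0
    sigma = (np.log(high) - np.log(low)) / (2.0 * Z90)
    # The event occurs with probability p (equivalently, the uniform draw exceeds 1 - p).
    occurred = rng.uniform(size=N) < p
    impacts = rng.lognormal(mu, sigma, size=N)  # used only where the event occurred
    totals += np.where(occurred, impacts, 0.0)

# Empirical loss exceedance curve on a grid of thresholds (in M$).
grid = np.linspace(0.0, 10.0, 101)
lec = [(totals >= a).mean() for a in grid]
print(dict(zip(grid[::20], np.round(lec[::20], 3))))
```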

3. Cyber Risk Assessment Framework

The loss exceedance curve introduced in the previous section strongly depends on the company's assets and characteristics. In other words, in order to obtain a reliable estimate of the loss exceedance curve, we need to customize the likelihood and impact values to the actual organization we are considering. This operation is usually performed through surveys submitted to the company, in order to obtain a reliable overview of what the company is currently doing to prevent cyber threats. However, such answers alone are likely not enough to obtain a complete and accurate picture of the actual state of the company. As an example, the implementation of a particular strategy (e.g., having good anti-malware tools) does not mean that the strategy is indeed effective, since its effectiveness is influenced by many other factors.

In other words, if the stakeholders’ answers are evaluated through a more objective process, the result will be a more objective profiling of the company. This is our aim, which we pursue by introducing some data-driven key risk indicators (KRIs) into the HTMA approach. The model we use is described next, while its application to some practical case studies is reported in Section 4.

3.1. Data-Driven KRIs

Suppose that we can rely on a tool that monitors the company and returns a sufficient amount of quantitative evidence, such as:
(1) Malicious code/software activity (i.e., malware, ransomware, botnet evidence)
(2) Insecure/unencrypted/vulnerable protocol usage (e.g., P2P, vulnerable SSL, etc.)
(3) Deep web exposure (company targeted by criminals)
(4) Data breaches due to human errors, third parties, or hacking activity
(5) Software/infrastructure vulnerabilities
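As an illustration only, the evidence returned by such a tool could be organized as in the following sketch; the category names and counts are placeholders of ours, and the severity buckets anticipate the trivial/middle/critical distinction used in Section 4.2.

```python
from dataclasses import dataclass

@dataclass
class EvidenceCount:
    """Hypothetical container for the evidence returned by a monitoring tool."""
    category: str        # e.g., "malware_activity", "insecure_protocols", ...
    trivial: int = 0
    middle: int = 0
    critical: int = 0

evidence = [
    EvidenceCount("malware_activity", trivial=12, middle=3, critical=1),
    EvidenceCount("insecure_protocols", trivial=7, middle=2, critical=0),
    EvidenceCount("deep_web_exposure", trivial=1, middle=0, critical=0),
    EvidenceCount("data_breaches", trivial=0, middle=1, critical=0),
    EvidenceCount("vulnerabilities", trivial=25, middle=6, critical=2),
]
```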

Clearly, we could use such information to establish a well-defined set of KRIs reflecting the factors that negatively influence the effectiveness of the existing controls. For instance, the detection of malware incidents originating within the network of the organization is a clear indicator that some of the employees are not aware of phishing attacks and that the anti-malware tools used by the company are not sufficient, even if all due controls are implemented. In the same way, vulnerabilities or unnecessary/insecure services exposed on the Internet perimeter of the company are a clear indicator that the implemented controls are not effective. In both cases, the level of security is clearly not as high as claimed. Therefore, the answers to the survey should be mapped against a list of evidence concerning cyber incidents and technical vulnerabilities, obtained, for example, from a set of cyber intelligence sources, in order to associate an effectiveness score with each of the given answers. This way, one can get a more realistic picture of the company's cyber profile: in the case of a “bad” score, the controls implemented by the company and declared in the answers are not effective. In other words, the likelihood of the associated threats should be increased, since the probability of incurring related incidents is higher than expected. On the contrary, in the case of a “good” score, the stakeholders’ answers can be weighted positively accordingly.

More precisely, when considering the stakeholders’ answers, we propose to use a mapping between all the questions and the set of considered threats: this way, we can obtain a coverage score that in the end is used to calibrate the initial likelihood values according to the company profile. For this purpose, we first define the control coverage vector $\mathbf{g}$ as the ensemble of scores computed on the basis of the stakeholders’ answers. Basically, each entry $g_i$ is defined as the state of coverage against the $i$-th considered threat. In particular, we define a control as a series of atomic actions to protect the organization against internal and external threats. An example of control is “Continuous Vulnerability Management,” which, as per CIS v7, could be realized by carrying out operations like performing authenticated vulnerability scanning, deploying automated operating system patch management tools, etc. The more actions are performed, the higher the level of implementation of the control. For instance, an effective measurement of this control can be achieved by taking into consideration the exposure to software and infrastructure vulnerabilities; similarly, an effective measurement of the malware defenses (CSC 8) can be achieved by verifying the infection events originating within the network. The stakeholders’ answers can then be mapped into the set of controls; in particular, the answers can be collected in a vector $\mathbf{a}$, while the controls in a vector $\mathbf{c}$. Then, let $\mathbf{A}$ be a matrix with numbers of rows and columns equal to, respectively, the number of answers and the number of controls. We can write

$$\mathbf{c} = \mathbf{A}^T \mathbf{a}, \tag{5}$$

where the entries of $\mathbf{A}$ are all nonnegative. With this choice, the values in $\mathbf{c}$ correspond to weighted sums of the elements of $\mathbf{a}$. In particular, the entry in position $(i, j)$ of matrix $\mathbf{A}$, which we denote as $a_{i,j}$, expresses the impact of the $i$-th answer on the $j$-th control. These coefficients are normalized, in the sense that the sum of each column of $\mathbf{A}$ equals 1. If $a_{i,j} = 0$, this means that the action referred to in the $i$-th answer has no impact on the $j$-th control. The entries of $\mathbf{A}$ are determined on the basis of statistical analyses and evaluations of empirical data. We point out that the whole procedure is independent of both the number of collected answers and the number of analyzed controls. Indeed, choosing a different set of answers or different controls (or both) will just result in different dimensions and different entries for the matrix $\mathbf{A}$ (and, obviously, different lengths for the vectors $\mathbf{a}$ and $\mathbf{c}$), while the whole procedure will remain unchanged.

For each threat, we then combine the elements of $\mathbf{c}$, in order to obtain the elements of $\mathbf{g}$. Indeed, each control is associated with one or more threats: the formula we have used in our simulations is

$$\mathbf{g} = \mathbf{B}^T \mathbf{c}, \tag{6}$$

where $\mathbf{B}$ is another matrix in which the entries of each column sum to 1 and whose entries, analogously to those of $\mathbf{A}$, are evaluated after a statistical analysis performed on empirical data. The entry in position $(i, j)$ of $\mathbf{B}$, which we denote as $b_{i,j}$, expresses the impact of the $i$-th control on the $j$-th threat.

By combining (5) and (6), we obtain

$$\mathbf{g} = \mathbf{B}^T \mathbf{A}^T \mathbf{a} = (\mathbf{A}\mathbf{B})^T \mathbf{a}, \tag{7}$$

which shows that the entries of the control coverage vector depend linearly on the entries of the answer vector. It must be noticed that this matrix approach is rather classic and widely used in the literature [17, 18]. The main novelty of our analysis lies in the fact that the vector $\mathbf{g}$ is not used directly but is combined with another vector resulting from empirical data.
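Under the matrix convention used above, the mappings (5)-(7) can be sketched as follows; the dimensions and entries are random placeholders, since in practice $\mathbf{A}$ and $\mathbf{B}$ come from the statistical analysis of empirical data described above.

```python
import numpy as np

rng = np.random.default_rng(7)
n_answers, n_controls, n_threats = 20, 6, 9

def column_normalized(rows, cols, rng):
    """Random nonnegative matrix whose columns each sum to 1 (placeholder for A and B)."""
    m = rng.uniform(size=(rows, cols))
    return m / m.sum(axis=0, keepdims=True)

A = column_normalized(n_answers, n_controls, rng)   # answer -> control weights
B = column_normalized(n_controls, n_threats, rng)   # control -> threat weights

a = rng.uniform(size=n_answers)       # stakeholders' answers, e.g., scores in [0, 1]

c = A.T @ a                           # equation (5): state of each control
g = B.T @ c                           # equation (6): control coverage per threat
assert np.allclose(g, (A @ B).T @ a)  # equation (7): direct answers -> threats mapping
```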

Analogously to what was done for the control coverage vector, a set of measurements can be used to obtain another length-$n$ vector, which we call the effectiveness vector and denote as $\mathbf{e}$. The vectors $\mathbf{g}$ and $\mathbf{e}$ are then combined to obtain the effective coverage score, which is a length-$n$ vector $\mathbf{s}$ whose $i$-th entry is obtained as $s_i = g_i \cdot e_i$. It is clear that this operation corresponds to scaling the entries of the control coverage vector on the basis of the evidence found. Finally, the customized likelihoods $\tilde{p}_i$ are obtained by rescaling the initial likelihoods $p_i$ through a function of the effective coverage scores $s_i$. We point out that these expressions have been derived after the evaluation of available empirical data and extrapolations of the values from [19].

On the other hand, from a practical point of view, it is extremely useful to define a global indicator that measures the general cyber security posture of a company. We call this value the Security Control Score (SCS) and compute it as the average value of the entries of $\mathbf{s}$. Basically, a high SCS indicates substantial and effective investments in people and technology to protect the digital assets and a low exposure to costs deriving from cyber threats.
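Continuing the previous sketch, the effective coverage score and the SCS can be computed as follows. The final rescaling of the likelihoods is shown only schematically: the actual calibration function is derived from empirical data and [19] and is not reproduced here, so the decreasing map in the sketch is a placeholder of ours.

```python
import numpy as np

rng = np.random.default_rng(11)
n_threats = 9

g = rng.uniform(size=n_threats)   # control coverage vector (from the previous sketch)
e = rng.uniform(size=n_threats)   # effectiveness vector, derived from the monitored evidence

s = g * e                         # effective coverage score (entrywise scaling)
scs = s.mean()                    # Security Control Score

p0 = np.full(n_threats, 0.2)      # initial likelihoods from Table 1 (placeholder value)
# Placeholder calibration: the better the effective coverage, the lower the likelihood.
# The actual mapping used in the paper is calibrated on empirical data and [19].
p_custom = p0 * (1.0 - 0.5 * s)

print(round(scs, 3), np.round(p_custom, 3))
```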

4. Application to Real Case Scenarios

In this section, we describe the application of our model to some real case scenarios. Consistently with the existing literature, we first consider historical data to obtain reliable likelihood and impact values for a set of cyber threats. Then, we show a real case study, applying the procedure described in Section 3, that is, introducing the evidence resulting from the intelligence tool into the estimate.

4.1. Obtaining Initial Likelihood Values

In order to provide all the required inputs for the HTMA model, we need to define a set of cyber threats for which the likelihood and impact values can be reliably estimated. For this purpose, we rely on data reported in [19]; this choice is motivated by the fact that this report contains a large amount of information about events in 2017, so it offers a significant and recent picture of the current cyber crime panorama. According to [19], we consider nine different threats and the corresponding likelihood values; such values are listed in Table 1. An ID number is assigned to each threat, in order to simplify the notation.

For each one of the considered threats, starting from [19], we have determined the confidence ranges for the impacts, on the basis of the sector in which a company operates; such ranges are listed in Table 2. For instance, for a company operating in the industrial sector, the range of losses due to malware events goes from a minimum of 1.95 M$ to a maximum of 2.19 M$.

The distinction between operating sectors might not be enough to obtain reliable estimates of the economic losses: as [19] clearly shows, the company size is another aspect that must be taken into account. In particular, coherently with [19], we define the company size as a function of the number of seats (i.e., the number of employees). This dependence can be heuristically modelled through a coefficient $\alpha$, expressed in (8) as a function of the number of seats $s$, where the values of the coefficients in (8) have been obtained on the basis of extrapolations from data contained in the report ($a = 1.04 \cdot 10^{-12}$, $b = -6.54 \cdot 10^{-8}$, $c = 1.41 \cdot 10^{-3}$, $d = 1.815 \cdot 10^{7}$, $k = 2.125 \cdot 10^{-7}$, $l = 0.5838$, and $m = 6.398 \cdot 10^{-5}$).

The coefficient $\alpha$ can then be used to adjust the ranges listed in Table 2. In order to clarify this aspect, let us suppose that a company operating in the industrial sector has $\alpha = 0.1$: then, all the impacts reported in Table 2 must be multiplied by a coefficient equal to 0.1. For instance, the range for malware attacks becomes $[0.195\,\mathrm{M\$}, 0.219\,\mathrm{M\$}]$: these values correspond to the $L_i$ and $U_i$ that are used in (3) and (4).
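As a quick check of this example, the following lines scale the malware range of Table 2 by $\alpha = 0.1$ and derive the corresponding log-normal parameters through (3) and (4).

```python
import numpy as np

alpha = 0.1                   # size coefficient of the example company
low, high = 1.95, 2.19        # malware range for the industrial sector, in M$ (Table 2)

low_adj, high_adj = alpha * low, alpha * high   # 0.195 M$ and 0.219 M$

Z90 = 1.6449
mu = (np.log(high_adj) + np.log(low_adj)) / 2.0
sigma = (np.log(high_adj) - np.log(low_adj)) / (2.0 * Z90)
print(low_adj, high_adj, round(mu, 4), round(sigma, 4))
```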

The values computed in this way can then be used as reliable estimates of the likelihoods and impacts exploited in the HTMA methodology.

4.2. Case Studies

In order to validate our approach, we have run some simulations considering three different organizations, which we denote as $O_1$, $O_2$, and $O_3$, with the following common properties:
(i) There are 2000 workstations
(ii) Their business sector is industrial/manufacturing
(iii) Their annual revenue is about 350,000,000 $

It must be said, however, that the proposed approach is quite general and can equally be applied to different sectors. What usually changes when passing from one scenario to another is obviously the numerical values of the quantities involved, while the approach and the set of formal relationships at the basis of the model remain substantially unchanged.

An intelligence tool is assumed to be used to detect cyber evidence on the cyber perimeters of $O_1$, $O_2$, and $O_3$, in order to verify the effectiveness of the security controls they have implemented. Table 3 reports the number of pieces of evidence of the types listed in Section 3.1 monitored for the three organizations over a given period of time (e.g., 1 year), distinguished on the basis of their impact: trivial, middle, and critical. Eight cyber intelligence sources $S_k$, with $k = 1, \ldots, 8$, have been adopted. Suitably processed, the values in Table 3 allow us to determine the effectiveness vector $\mathbf{e}$. By combining it with the control coverage vector $\mathbf{g}$, we can compute the effective coverage score $\mathbf{s}$ and, finally, the Security Control Score, SCS. All these values are reported in Table 4. Finally, the corresponding ranges of the economic losses (90% confidence intervals) are listed in Table 5.

This is all we need to run numerical simulations, following the HTMA approach, for the three considered companies. The resulting loss exceedance curves are shown in Figure 1. These curves have been obtained by applying the theoretical approach discussed in Section 2.2. So, according to (2), each curve represents, for the specific organization it refers to, the probability that the economic loss is equal to or greater than the value of $a$ reported on the abscissa.

As clearly results from the figure, this company profiling might have serious consequences on the cyber risk assessment. For instance, suppose that the maximum loss that the three companies can sustain is equal to 1 M$. We see that the probabilities of exceeding this value are significantly different for the three companies, ranging from a minimum of approximately 0.05 to a maximum of approximately 0.75. These values might or might not be compliant with the profile and expectations of the organization; in the latter case, they should suggest the adoption of corrective actions to reduce the risk. On the other hand, the picture obtained this way is provisional, since it is expected to change, getting better or worse, in subsequent assessment campaigns.

5. Conclusion

In this paper, we have described a data-driven approach to assess cyber risk and associate a score with the cyber exposure of a company. Our model relies on the well-known HTMA approach but reduces the margin for subjectivity by introducing some quantitative key risk indicators. Its applicability in practice has been illustrated through some preliminary case studies taken from the industrial manufacturing world. Future work will concern the application of the proposed method in a variety of different sectors, with the aim of gathering practical evidence of the advantages it offers over previous methods.

The proposed model should allow overcoming the intrinsic limits of existing risk assessment approaches, which are based on estimates of the threats' occurrence probability and on the observation of past events, thus not guaranteeing adequate protection for the future. The proposed approach is also expected to provide companies and institutions with a new practical tool to assess their cyber risk exposure and to help define data protection policies for processes, systems, and infrastructures. The proposed solution is wide-ranging and applicable to different contexts, remaining versatile and usable by companies and institutions of different sizes and operating in different fields.

Data Availability

All data used in the paper come from cited references or are reported in the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.