Abstract
A new category of risk, known as operational risk, emerged in the mid-1990s. Though its regulatory treatment varies by institution—Basel II for banks and Solvency II for insurance companies—the underlying idea is the same. Firms pay close attention to operational risk because the exposure can be fatal, and it has therefore become one of the major risks of the financial sector. In this study, we define operational risk and describe its treatment for banks and insurance companies. Moreover, we discuss the different measurement criteria together with examples and applications that show how things work in practice.
1. Introduction
Operational risk has existed for longer than we might think, but the concept received little attention until 1995, when Barings Bank, one of the oldest banks in London, collapsed because of unauthorized speculative trading by one of its traders, Nick Leeson. A wide variety of definitions are used to describe operational risk, of which the following is just a sample (cf. Moosa [1, pages 87-88]). (i) All types of risk other than credit and market risk. (ii) The risk of loss due to human error or deficiencies in systems or controls. (iii) The risk that a firm’s internal practices, policies, and systems are not rigorous or sophisticated enough to cope with unexpected market conditions or human or technological errors. (iv) The risk of loss resulting from errors in the processing of transactions, breakdown in controls, and errors or failures in system support.
The Basel II Committee, however, defined operational risk as the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events (cf. BCBS, Definition of Operational Risk [2]). For example, an operational loss could be caused by an IT failure, transaction errors, or external events such as a flood, an earthquake, or a fire like the one at Crédit Lyonnais in May 1996, which resulted in extreme losses. Currently, the scarcity of operational risk loss data is a major issue, but as data sources become available, a collection of measurement methods will be progressively implemented.
In 2001, the Basel Committee started a series of surveys and statistics regarding operational risks that most banks encounter. The idea was to develop and correct measurements and calculation methods. Additionally, the European Commission also started preparing for the new Solvency II Accord, taking into consideration the operational risk for insurance and reinsurance companies.
Accordingly, and since the Basel and Solvency accords set forth many calculation criteria, our interest in this paper is to discuss the different measurement techniques for operational risk in financial companies.
We also present the associated mathematical and actuarial concepts as well as a numerical application of the Advanced Measurement Approach, covering the Loss Distribution Approach, Extreme Value Theory, and Bayesian updating techniques, and we propose more robust measurement models for operational risk.
At the end, we will point out the effects of the increased use of insurance against major operational risk factors and incorporate these in the performance analyses.
2. Laws and Regulations
Basel II cites three ways of calculating the capital charges required in the first pillar of operational risk. The three methods, in increasing order of sophistication, are as follows.(i)The Basic Indicator Approach (BIA). (ii)The Standardized Approach (SA). (iii)The Advanced Measurement Approach (AMA).
Regardless of the method chosen to measure the capital requirement for operational risk, the bank must prove that its measures are highly solid and reliable. Each of the three approaches has specific calculation criteria and requirements, as explained in the following sections.
2.1. Basic Indicator and Standardized Approaches
Banks using the BIA method have a minimum operational risk capital requirement equal to a fixed percentage of the average annual gross income over the past three years. Hence, the risk capital under the BIA approach for operational risk is given by
\[ K_{\mathrm{BIA}} = \frac{\alpha}{n}\sum_{i=1}^{n}\max(GI_i,\,0), \]
where \(GI_i\) stands for gross income in year \(i\), \(n\) is the number of the previous three years for which gross income is positive, and \(\alpha\) is set by the Basel Committee. The results of the first two Quantitative Impact Studies (QIS) conducted during the creation of the Basel Accord showed that, on average, 15% of the annual gross income was an appropriate fraction to hold as the regulatory capital.
Gross income is defined as the net interest income added to the net noninterest income. This figure should be gross of any provisions (unpaid interest), should exclude realized profits and losses from the sale of securities in the banking book, which is an accounting book that includes all securities that are not actively traded by the institution, and exclude extraordinary or irregular items.
No specific criteria for the use of the Basic Indicator Approach are set out in the Accord.
The Standardized Approach
In the Standardized Approach, banks’ activities are divided into 8 business lines: corporate finance, trading and sales, retail banking, commercial banking, payment & settlements, agency services, asset management, and retail brokerage. Within each business line, there is a specified general indicator that reflects the size of the banks’ activities in that area. The capital charge for each business line is calculated by multiplying gross income by a factor assigned to a particular business line, see Table 1.
As in the Basic Indicator Approach, the total capital charge is calculated as a three-year average over all positive gross income (GI) as follows:
\[ K_{\mathrm{SA}} = \frac{1}{3}\sum_{\text{years }1\text{-}3}\max\Big(\sum_{j=1}^{8}\beta_j\, GI_j,\ 0\Big). \]
The second QIS issued by the Basel Committee, covering the same institutions surveyed in the first study, resulted in 12%, 15%, and 18% as appropriate beta rates in calculating regulatory capital as a percentage of gross income.
Before tackling the third Basel approach (AMA), we give a simple example to illustrate the calculation for the first two approaches.
2.1.1. Example of the BIA and SA Calculations
In Table 2, we see the Basic and Standardized Approaches for the 8 business lines. The main difference between the BIA and the SA is that the former does not distinguish its income by business lines. As shown in the tables, we have the annual gross incomes related to year 3, year 2, and year 1. With the Basic Approach, we do not segregate the income by business lines, and therefore, we have a summation at the bottom. We see that three years ago, the bank had a gross income of around 132 million which then decreased to −2 million the following year and finally rose to 71 million. Moreover, the Basic Indicator Approach does not take into consideration negative gross incomes. So in treating the negatives, the −2 million was removed. To get our operational risk charge, we calculate the average gross income excluding negatives and we multiply it by an alpha factor of 15% set by the Basel Committee. We obtain a result of 15.23 million €.
Similar to the BIA, the Standardized Approach assigns a beta factor to each business line, as some are considered riskier in terms of operational risk than others. Hence, we have eight different factors ranging between 12 and 18 percent, as determined by the Basel Committee. For this approach, we calculate a weighted average of the gross income using the business line betas. Any negative yearly total is converted to zero before an average is taken over the three years. In this case, we end up with a capital charge of around 10.36 million €, and the mechanics of both calculations are sketched below.
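To make the two calculations concrete, the following Python sketch reproduces the mechanics just described. The beta factors and the 15% alpha are the Basel II values quoted above; the gross income figures passed to the functions are only the yearly totals of the example, since the detailed business-line breakdown of Table 2 is not reproduced here.

ALPHA = 0.15  # Basic Indicator Approach factor set by the Basel Committee
BETAS = {     # Standardized Approach beta factors per business line (Basel II)
    "corporate finance": 0.18, "trading and sales": 0.18,
    "retail banking": 0.12, "commercial banking": 0.15,
    "payment and settlement": 0.18, "agency services": 0.15,
    "asset management": 0.12, "retail brokerage": 0.12,
}

def bia_charge(annual_gross_income):
    """Average of the positive annual gross incomes, times alpha."""
    positive = [gi for gi in annual_gross_income if gi > 0]
    return ALPHA * sum(positive) / len(positive)

def sa_charge(gross_income_by_year):
    """Three-year average of the beta-weighted yearly totals, with negative
    yearly totals floored at zero before averaging."""
    yearly = [max(sum(BETAS[line] * gi for line, gi in year.items()), 0.0)
              for year in gross_income_by_year]
    return sum(yearly) / len(gross_income_by_year)

# BIA with the yearly totals of the example (in millions of euros): about 15.23
print(bia_charge([132, -2, 71]))

The function sa_charge expects, for each of the three years, a dictionary mapping each business line to its gross income; fed with the per-line figures of Table 2, it would return the 10.36 million € quoted above.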
2.1.2. The Capital Requirement under the Basic Indicator and Standardized Approaches
As depicted in the previous example, the capital charge relating to the Standardized Approach was lower than that of the Basic Approach. This, however, is not always the case, thus causing some criticism and raising questions such as why would a bank use a more sophisticated approach when the simpler one would cost them less?
In this section, we show that the capital charge can vary between the approaches. To start with, let \(GI_j = w_j\,GI\) and \(\sum_{j=1}^{8} w_j = 1\), where \(GI_j\) is the gross income related to business line \(j\), \(w_j\) is its share of total gross income, and \(GI\) is the total gross income.
Compiling these equations, we have \(K_{\mathrm{SA}} = \sum_{j=1}^{8}\beta_j w_j\, GI\) and \(K_{\mathrm{BIA}} = \alpha\, GI\), and consequently
\[ K_{\mathrm{BIA}} > K_{\mathrm{SA}} \iff \alpha > \sum_{j=1}^{8}\beta_j w_j. \]
Therefore, the BIA produces a higher capital charge than the SA under the condition that the alpha factor under the former is greater than the weighted average of the individual betas under the latter.
There is no guarantee that the condition will be satisfied, which means that moving from the BIA to the SA may or may not produce a lower capital charge (cf. Moosa [1]).
2.2. Capital Requirement Review
Several Quantitative Impact Studies (QIS) have been conducted for a better understanding of operational risk significance on banks and the potential effects of the Basel II capital requirements. During 2001 and 2002, QIS 2, QIS 2.5, and QIS 3 were carried out by the committee using data gathered across many countries. Furthermore, to account for national impact, a joint decision of many participating countries resulted in the QIS 4 being undertaken. In 2005, to review the Basel II framework, BCBS implemented QIS 5.
Some of these quantitative impact studies have been accompanied by operational Loss Data Collection Exercises (LDCE). The first two exercises conducted by the Risk Management Group of BCBS on an international basis are referred to as the 2001 LDCE and 2002 LDCE. These were followed by the national 2004 LDCE in the USA and the 2007 LDCE in Japan.
Detailed information on these analyses can be found on the BCBS web site: http://www.bis.org/bcbs/qis/.
Before analyzing the quantitative approaches, let us take a look at the minimum regulatory capital formula and definition (cf. Basel Committee on Banking Supervision [4]).
Total risk-weighted assets are determined by multiplying the capital requirements for market risk and operational risk by 12.5, a scaling factor determined by the Basel Committee, and adding the resulting figures to the sum of risk-weighted assets for credit risk. The Basel II Committee defines the minimum regulatory capital as 8% of the total risk-weighted assets, as shown in the formula below:
\[ \frac{\text{Total regulatory capital}}{\text{RWA}_{\text{credit}} + 12.5\times\big(K_{\text{market}} + K_{\text{operational}}\big)} \ge 8\%. \]
The Committee applies a scaling factor in order to broadly maintain the aggregate level of minimum capital requirements while also providing incentives to adopt the more advanced risk-sensitive approaches of the framework.
The Total Regulatory Capital has its own set of rules according to 3 tiers.(i)The first tier, also called the core tier, is the core capital including equity capital and disclosed reserves. (ii)The second tier is the supplementary capital which includes items such as general loss reserves, undisclosed reserves, and subordinated term debt. (iii)The third tier covers market risk, commodities risk, and foreign currency risk.
The Risk Management Group (RMG) has taken 12% of the current minimum regulatory capital as its starting point for calculating the basic and standardized approach.
The Quantitative Impact Study (QIS) survey requested banks to provide information on their minimum regulatory capital broken down by risk type (credit, market, and operational risk) and by business line. Banks were also asked to exclude any insurance and nonbanking activities from the figures. The survey covered the years 1998 to 2000.
Overall, more than 140 banks provided some information on the operational risk section of the QIS. These banks included 57 large, internationally active banks (called type 1 banks in the survey) and more than 80 smaller type 2 banks from 24 countries. The RMG used the data provided in the QIS to gain an understanding of the role of operational risk capital allocations in banks and their relationship to minimum regulatory capital for operational risk. These results are summarized in Table 3.
The results suggest that, on average, operational risk capital represents about 15 percent of overall economic capital, though there is some dispersion. Moreover, operational risk capital appears to represent a rather smaller share of minimum regulatory capital, just over 12% for the median.
These results suggest that a reasonable level of the overall operational risk capital charge would be about 12 percent of minimum regulatory capital. Therefore, a figure of 12% chosen by the Basel Committee for this purpose is not out of line with the proportion of internal capital allocated to operational risk for most banking institutions in the sample.
2.2.1. The Basic Indicator Approach
Under the BIA approach, regulatory capital for operational risk is calculated as a percentage of a bank’s gross income. The data reported in the QIS concerning banks’ minimum regulatory capital and gross income were used to calculate individual alphas for each bank for each year from 1998 to 2000 to validate the 12% level of minimum regulatory capital (cf. BCBS [5]).
The calculation was
\[ \alpha_{i,t} = \frac{0.12\times \mathrm{MRC}_{i,t}}{GI_{i,t}}. \]
Here, \(\mathrm{MRC}_{i,t}\) is the minimum regulatory capital for bank \(i\) in year \(t\) and \(GI_{i,t}\) is the gross income for bank \(i\) in year \(t\). Given these calculations, the results of the survey are reported in Table 4.
Table 4 presents the distribution in two ways—the statistics of all banks together and the statistics according to the two types of banks by size. The first three columns of the table contain the median, mean, and the weighted average of the values of the alphas (using gross income to weight the individual alphas). The median values range between 17% and 20% with higher values for type 2 banks. The remaining columns of the table present information about the dispersion of alphas across banks.
These results suggest that an alpha range of 17% to 20% would produce regulatory capital figures approximately consistent with an overall capital standard of 12% of minimum regulatory capital. However, after testing the application of this alpha range, the Basel Committee decided to reduce the factor to 15% because an alpha of 17 to 20 percent resulted in an excessive level of capital for many banks.
2.2.2. The Standardized Approach
As seen previously, the minimum capital requirement for operational risk under the Standardised Approach is calculated by dividing a bank’s operations into eight business lines. For each business line, the capital requirement will be calculated according to a certain percentage of gross income attributed for that business line.
The QIS data concerning the distribution of operational risk across business lines were used and, as with the Basic Approach, the baseline assumption was that the overall level of operational risk capital is 12% of minimum regulatory capital. Then, the business line capital was divided by business line gross income to arrive at a bank-specific beta for that business line, as shown in the following formula:
\[ \beta_{i,j} = \frac{0.12\times \mathrm{MRC}_i\times \mathrm{OpShare}_{i,j}}{GI_{i,j}}, \]
where \(\beta_{i,j}\) is the beta for bank \(i\) in business line \(j\), \(\mathrm{MRC}_i\) is the minimum regulatory capital for the bank, \(\mathrm{OpShare}_{i,j}\) is the share of bank \(i\)’s operational risk economic capital allocated to business line \(j\), and \(GI_{i,j}\) is the gross income in business line \(j\) for bank \(i\).
In the end, 30 banks reported data on both operational risk economic capital and gross income by business line, but only the banks that had reported activity in a particular business line were included in the line’s beta calculation (i.e., if a bank had activities related to six of the eight business lines, then it was included in the analysis for those six business lines).
The results of this analysis are displayed in Table 5.
The first three columns of the table present the median, mean and weighted average values of the betas for each business line, and the rest of the columns present the dispersion across the sample used for the study. As with the Basic Approach, the mean values tend to be greater than the median and the weighted average values, thus reflecting the presence of some large individual beta estimates in some of the business lines.
Additionally, the QIS ranked the betas according to the business lines with “1” representing the smallest beta and “8” the highest. Table 6 depicts this ranking, and we see that retail banking tends to be ranked low while trading & sales with agency services & custody tend to be ranked high.
Tables 5 and 6 show, in columns 4 to 9, the disparity of the “typical” beta by business line, and so we want to find out whether this dispersion allows us to separate the different beta values across business lines. Statistical tests of the equality of the means and the medians do not reject the null hypothesis that these figures are the same across the eight business lines.
This dispersion in the beta estimates could reflect differences in the calibration of banks’ internal economic capital measures. Additionally, banks may also be applying differing definitions of what constitutes operational risk loss and gross income, as these vary across jurisdictions. Given additional statistics and data, the Basel Committee decided to set the beta factors between 12% and 18% across the different business lines.
2.3. The Advanced Measurement Approach
With the Advanced Measurement Approach (AMA), the regulatory capital is determined by a bank’s own internal operational risk measurement system according to a number of quantitative and qualitative criteria set forth by the Basel Committee. However, the use of these approaches must be approved and verified by the national supervisor.
The AMA is based on the collection of loss data for each event type. Each bank is to measure the required capital based on its own loss data using the holding period and confidence interval determined by the regulators (1 year and 99.9%).
The capital charge calculated under the AMA is initially subjected to a floor set at 75% of that under the Standardized Approach, at least until the development of measurement methodologies is examined. In addition, the Basel II Committee decided to allow the use of insurance coverage to reduce the capital required for operational risk, but this allowance does not apply to the SA and the BIA.
A bank intending to use the AMA should demonstrate the accuracy of its internal models within the Basel II risk cells (eight business lines × seven risk types, shown in Table 7) relevant to the bank and satisfy criteria including the following. (i) The use of internal data, relevant external data, scenario analyses, and factors reflecting the business environment and internal control systems. (ii) Scenario analyses based on expert opinion. (iii) The risk measure used for the capital charge should correspond to a 99.9% confidence level for a one-year holding period. (iv) Diversification benefits are allowed if dependence modelling is approved by the regulator. (v) Capital reduction due to insurance is capped at 20%.
The relative weight of each source and the combination of sources are decided by the banks themselves; Basel II does not provide a regulatory model.
The application of the AMA is, in principle, open to any proprietary model, but the methodologies have converged over the years and thus specific standards have emerged. As a result, most AMA models can now be classified into the following. (i)Loss Distribution Approach (LDA). (ii)Internal Measurement Approach (IMA). (iii)Scenario-Based AMA (sbAMA). (iv)Scorecard Approach (SCA).
2.3.1. The Loss Distribution Approach (LDA)
The Loss Distribution Approach (LDA) is a parametric technique primarily based on historic observed internal loss data (potentially enriched with external data). Established on concepts used in actuarial models, the LDA consists of separately estimating a frequency distribution for the occurrence of operational losses and a severity distribution for the economic impact of the individual losses. The implementation of this method can be summarized by the following steps (see Figure 1). (1)Estimate the loss severity distribution. (2)Estimate the loss frequency distribution. (3)Calculate the capital requirement. (4)Incorporate the experts’ opinions.

For each business line and risk category, we establish two distributions (cf. Dahen [6]): one related to the frequency of the loss events for the time interval of one year (the loss frequency distribution), and the other related to the severity of the events (the loss severity distribution).
To establish these distributions, we look for mathematical models that best describe the two distributions according to the data and then we combine the two using Monte Carlo simulation to obtain an aggregate loss distribution for each business line and risk type. Finally, by summing all the individual VaRs calculated at 99.9%, we obtain the capital required by Basel II.
We start by defining some technical aspects before demonstrating the LDA (cf. Maurer [7]).
Definition 2.1 (Value at Risk OpVaR). The capital charge is the 99.9% quantile of the aggregate loss distribution. So with \(N\) as the random number of events, the total loss is \( L = \sum_{i=1}^{N} X_i \), where \(X_i\) is the \(i\)th loss amount. The capital charge would then be
\[ K = F_L^{-1}(99.9\%) = \inf\{l : \mathbb{P}(L \le l) \ge 99.9\%\}. \]
Definition 2.2 (OpVaR unexpected loss). This is the Value at Risk OpVaR decomposed into the expected loss and the unexpected loss. Here, the capital charge corresponds to the unexpected loss only:
\[ K = F_L^{-1}(99.9\%) - \mathbb{E}[L]. \]
Definition 2.3 (OpVaR beyond a threshold). The capital charge in this case would be the 99.9% quantile of the total loss distribution defined with a threshold \(H\), that is, of \( L = \sum_{i=1}^{N} X_i\,\mathbf{1}_{\{X_i \ge H\}} \). The three previous quantities are calculated using Monte Carlo simulation.
For the LDA method, which expresses the aggregate loss for each business line/event type cell as the sum of individual losses, the distribution function of the aggregate loss, noted \(G\), is a compound distribution (cf. Frachot et al. [8]).
So the capital-at-risk (CaR) for business line \(i\) and event type \(j\) corresponds to the \(\alpha\) quantile of \(G_{ij}\) as follows:
\[ \mathrm{CaR}_{ij}(\alpha) = G_{ij}^{-1}(\alpha) = \inf\{x : G_{ij}(x) \ge \alpha\}, \]
and, as with the second definition explained previously, the CaR for the cell \((i,j)\) is equal to the sum of the expected loss (EL) and the unexpected loss (UL):
\[ \mathrm{CaR}_{ij}(\alpha) = \mathrm{EL}_{ij} + \mathrm{UL}_{ij}(\alpha). \]
Finally, by summing all the capital charges \(\mathrm{CaR}_{ij}(\alpha)\), we get the aggregate CaR across all business lines and event types (see Figure 2):
\[ \mathrm{CaR}(\alpha) = \sum_{i=1}^{8}\sum_{j=1}^{7}\mathrm{CaR}_{ij}(\alpha). \]
The Basel Committee fixed \(\alpha = 99.9\%\) to obtain a realistic estimation of the capital required. However, the problem of correlation remains an issue, as it is unrealistic to assume that the losses are not correlated; for this purpose, Basel II authorised each bank to take correlation into consideration when calculating operational risk capital using its own internal measures.
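As a rough illustration of this aggregation, the Python sketch below simulates a compound Poisson-lognormal loss for each cell and sums the cell-level 99.9% quantiles; the cell parameters are purely hypothetical, and the straight summation corresponds to the conservative case in which no correlation benefit is claimed.

import numpy as np

rng = np.random.default_rng(1)

# hypothetical (lambda, mu, sigma) parameters for two business line / event type cells
cells = {("retail banking", "external fraud"): (25, 8.0, 1.5),
         ("trading and sales", "execution errors"): (10, 9.0, 1.8)}

def cell_var(lam, mu, sigma, alpha=0.999, n_years=50_000):
    """Simulate n_years of annual compound losses and return the alpha quantile."""
    counts = rng.poisson(lam, size=n_years)
    annual = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])
    return np.quantile(annual, alpha)

# aggregate CaR: sum of the cell-level capital charges (no diversification benefit)
car = sum(cell_var(*params) for params in cells.values())
print(f"Aggregate CaR at 99.9%: {car:,.0f}")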

2.3.2. Internal Measurement Approach (IMA)
The IMA method (cf. BCBS [2]) gives individual banks discretion in the use of their internal loss data, while the method to calculate the required capital is uniformly set by supervisors. In implementing this approach, supervisors would impose quantitative and qualitative standards to ensure the integrity of the measurement approach, data quality, and the adequacy of the internal control environment.
Under the IM approach, capital charge for the operational risk of a bank would be determined using the following. (i)The bank’s activities are categorized into a number of business lines, and a broad set of operational loss types is defined and applied across business lines.(ii)Within each business line/event-type combination, the supervisor specifies an exposure indicator (EI) which is a substitute for the amount of risk of each business line’s operational risk exposure.(iii)In addition to the exposure indicator, for each business line/loss type combination, banks measure, based on their internal loss data, a parameter representing the probability of loss event (PE) as well as a parameter representing the loss given that event (LGE). The product of EI*PE*LGE is used to calculate the Expected Loss (EL) for each business line/loss type combination.(iv)The supervisor supplies a factor for each business line/event type combination, which translates the expected loss (EL) into a capital charge. The overall capital charge for a particular bank is the simple sum of all the resulting products.
Let us reformulate the points mentioned above by calculating the expected loss for each business line, so that for a business line \(i\) and an event type \(j\), the capital charge is defined as
\[ K_{ij} = \mathrm{EL}_{ij}\times\gamma_{ij}\times \mathrm{RPI}_{ij}, \]
where \(\mathrm{EL}_{ij}\) represents the expected loss, \(\gamma_{ij}\) is the scaling factor, and \(\mathrm{RPI}_{ij}\) is the Risk Profile Index.
The Basel Committee on Banking Supervision proposes that the bank estimate the expected loss as follows:
\[ \mathrm{EL}_{ij} = \mathrm{EI}_{ij}\times \mathrm{PE}_{ij}\times \mathrm{LGE}_{ij}, \]
where \(\mathrm{EI}_{ij}\) is the exposure indicator, \(\mathrm{PE}_{ij}\) is the probability of an operational risk event, and \(\mathrm{LGE}_{ij}\) is the loss given event.
The Committee proposes to use the risk profile index \(\mathrm{RPI}_{ij}\) as an adjustment factor to capture the difference between the tail of the bank’s loss distribution and that of the industry-wide loss distribution. The idea is to capture the leptokurtic properties of the bank’s loss distribution and then to transform the exogenous scaling factor \(\gamma_{ij}\) into an internal scaling factor \(\lambda_{ij}\) such that
\[ K_{ij} = \mathrm{EL}_{ij}\times\gamma_{ij}\times \mathrm{RPI}_{ij} = \mathrm{EL}_{ij}\times\lambda_{ij}. \]
By definition, the RPI of the industry loss distribution is one. If the bank’s loss distribution has a fatter tail than the industry loss distribution, its RPI is larger than one. So two banks with the same expected loss may have different capital charges because they do not have the same risk profile index.
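As a purely hypothetical numerical illustration of these mechanics (the figures below are illustrative, not calibrated values), take an exposure indicator of 100 M€, a probability of event of 2%, a loss given event of 5%, a supervisory scaling factor of 2, and an RPI of 1.25:
\[ \mathrm{EL} = \mathrm{EI}\times \mathrm{PE}\times \mathrm{LGE} = 100\times 0.02\times 0.05 = 0.10\ \text{M€}, \qquad K = \mathrm{EL}\times\gamma\times \mathrm{RPI} = 0.10\times 2\times 1.25 = 0.25\ \text{M€}. \]
A bank with the same expected loss but an RPI of 1 would hold only 0.20 M€ for this cell, which is exactly the effect the risk profile index is meant to capture.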
2.3.3. Scorecard Approach (SCA)
The Scorecards approach (http://www.fimarkets.com/pages/risque_operationnel.php) incorporates the use of a questionnaire which consists of a series of weighted, risk-based questions. The questions are designed to focus on the principal drivers and controls of operational risk across a broad range of applicable operational risk categories, which may vary across banks. The questionnaire is designed to reflect the organization’s unique operational risk profile by the following.(i)Designing organization-specific questions that search for information about the level of risks and quality of controls.(ii)Calibrating possible responses through a range of “unacceptable” to “effective” to “leading practice.”(iii)Applying customized question weightings and response scores aligned with the relative importance of individual risks to the organization. These can vary significantly between banks (due to business mix differences) and may also be customized along business lines within an organization. Note that scoring of response options will often not be linear.
The Basel Committee did not set out any mathematical equation for this method, but banks working with it have proposed a formula of the form
\[ K_{\mathrm{SCA}} = \mathrm{EI}_{ij}\times\omega_{ij}\times \mathrm{RS}_{ij}, \]
where \(\mathrm{EI}_{ij}\) is the exposure indicator, \(\mathrm{RS}_{ij}\) the risk score, and \(\omega_{ij}\) the scale factor.
2.3.4. Scenario-Based AMA (sbAMA)
Risk is defined as the combination of severity and frequency of potential loss over a given time horizon and is linked to the evaluation of scenarios. Scenarios are potential future events. Their evaluation involves answering two fundamental questions: firstly, what is the potential frequency of a particular scenario occurring and secondly, what is its potential loss severity?
The scenario-based AMA (http://www.newyorkfed.org/newsevents/events/banking/2003/con0529c.pdf) (or sbAMA) shares with the LDA the idea of combining two dimensions (frequency and severity) to calculate the aggregate loss distribution used to obtain the OpVaR. Banks, given their activities and their control environment, should build scenarios describing potential operational risk events. Experts are then asked to give opinions on the probability of occurrence (i.e., frequency) and on the potential economic impact should the events occur (i.e., severity); however, human judgment of probabilistic measures is often biased, and a major challenge with this approach is to obtain sufficiently reliable estimates from experts. The relevant point in the sbAMA is that information is only fed into a capital computation model if it is essential to the operational risk profile to answer the “what-if” questions in the scenario assessment. Furthermore, the overall sbAMA process must be supported by a sound and structured organisational framework and by an adequate IT infrastructure. The sbAMA comprises six main steps, which are illustrated in Figure 3. The outcome of the sbAMA will be statistically compatible with that arising from the LDA so as to enable a statistical combination technique. The most adequate technique to combine LDA and sbAMA is Bayesian inference, which requires experts to set the parameters of the loss distribution (see Figure 3 for illustration).

2.4. Solvency II Quantification Methods
Solvency II imposes a capital charge for operational risk that is calculated either with the standard formula given by the regulators or with an internal model validated by the competent authorities.
For the enterprises that have difficulties running an internal model for operational risk, the standard formula can be used for the calculation of this capital charge.
The European Insurance and Occupational Pensions Authority (EIOPA), previously known as the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), tests the standard formulas in markets through the use of surveys and questionnaires called Quantitative Impact Studies (QIS). The QIS allow the committee to adjust and develop the formulas in response to the observations and difficulties encountered by the enterprises.
2.4.1. Standard Formula Issued by QIS5
The Solvency Capital Requirement (SCR) reflects an insurance or reinsurance company’s ability to absorb significant losses through its own basic funds. It is defined as the company’s Value-at-Risk at a 99.5% confidence level over a one-year period, and this calibration is applied to each individual risk module so that the different modules of the standard formula are quantified in a consistent way. Additionally, the correlation coefficients are set to reflect potential dependencies in the distributions’ tails (see Table 8). The breakdown of the SCR is shown in Figure 4.

The Basic Solvency Capital Requirement is obtained by aggregating the individual risk modules,
\[ \mathrm{BSCR} = \sqrt{\sum_{i,j}\mathrm{Corr}_{i,j}\times \mathrm{SCR}_i\times \mathrm{SCR}_j}. \]
In relation to previous surveys, respondents suggested the following. (i) The operational risk charge should be calculated as a percentage of the BSCR or the SCR. (ii) The operational risk charge should be more sensitive to operational risk management. (iii) The operational risk charge should be based on entity-specific operational risk sources, the quality of the operational risk management process, and the internal control framework. (iv) Diversification benefits and risk mitigation techniques should be taken into consideration. In view of the above, EIOPA has considered the following (cf. CEIOPS [9]). (i) The calibration of operational risk factors for the standard formula has been revised to be more consistent with the assessment obtained from internal models. (ii) A zero floor for all technical provisions has been explicitly introduced to avoid an undue reduction of the operational risk SCR. (iii) The Basic SCR is not a sufficiently reliable aggregate measure of operational risk, and a minimum level of granularity is desirable in the design of the formula. After additional analysis and reports, EIOPA recommends the final factors shown in Table 9.
Before going into the formula, let us define some notation (cf. CEIOPS [10]). (i) \(TP_{\text{life}}\) is the life insurance obligations. For the purpose of this calculation, technical provisions should not include the risk margin and should be without deduction of recoverables from reinsurance contracts and special purpose vehicles. (ii) \(TP_{\text{nl}}\) is the total nonlife insurance obligations excluding obligations under nonlife contracts which are similar to life obligations, including annuities. For the purpose of this calculation, technical provisions should not include the risk margin and should be without deduction of recoverables from reinsurance contracts and special purpose vehicles. (iii) \(TP_{\text{life-ul}}\) is the life insurance obligations where the investment risk is borne by the policyholders. For the purpose of this calculation, technical provisions should not include the risk margin and should be without deduction of recoverables from reinsurance contracts and special purpose vehicles. (iv) \(pEarn_{\text{life}}\) is the earned premium during the 12 months prior to the previous 12 months for life insurance obligations, without deducting premiums ceded to reinsurance. (v) \(pEarn_{\text{life-ul}}\) is the earned premium during the 12 months prior to the previous 12 months for life insurance obligations where the investment risk is borne by the policyholders, without deducting premiums ceded to reinsurance. (vi) \(Earn_{\text{life-ul}}\) is the earned premium during the previous 12 months for life insurance obligations where the investment risk is borne by the policyholders, without deducting premiums ceded to reinsurance. (vii) \(Earn_{\text{life}}\) is the earned premium during the previous 12 months for life insurance obligations, without deducting premiums ceded to reinsurance. (viii) \(Earn_{\text{nl}}\) is the earned premium during the previous 12 months for nonlife insurance obligations, without deducting premiums ceded to reinsurance. (ix) \(Exp_{\text{ul}}\) is the amount of annual expenses incurred during the previous 12 months in respect of life insurance where the investment risk is borne by the policyholders. (x) \(\mathrm{BSCR}\) is the basic SCR. Finally, the standard formula is
\[ \mathrm{SCR}_{\mathrm{Op}} = \min\big(0.30\,\mathrm{BSCR},\ Op\big) + 0.25\, Exp_{\text{ul}}, \]
where
\[ Op = \max\big(Op_{\text{premiums}},\ Op_{\text{provisions}}\big), \]
the two components being computed from the earned premiums and the technical provisions defined above, using the factors shown in Table 9.
3. Quantitative Methodologies
A wide variety of risks exist, thus necessitating their regrouping in order to categorize and evaluate their threats for the functioning of any given business. The concept of a risk matrix, coined by Richard Prouty (1960), allows us to highlight which risks can be modeled. Experts have used this matrix to classify various risks according to their average frequency and severity as seen in Figure 5.

There are in total four general categories of risk. (i)Negligible risks: with low frequency and low severity, these risks are insignificant as they do not impact the firm very strongly.(ii)Marginal risks: with high frequency and low severity, though the losses are not substantial individually, they can create a setback in aggregation. These risks are modeled by the Loss Distribution Approach (LDA) which we discussed earlier.(iii)Catastrophic risks: with low frequency and high severity, the losses are rare but have a strong negative impact on the firm and consequently, the reduction of these risks is necessary for a business to continue its operations. Catastrophic risks are modeled using the Extreme Value Theory and Bayesian techniques.(iv)Impossible: with high frequency and high severity, the firm must ensure that these risks fall outside possible business operations to ensure financial health of the corporation. Classifying the risks as per the matrix allows us to identify their severity and frequency and to model them independently by using different techniques and methods. We are going to see in the following sections the different theoretical implementation and application of different theories and models regarding operational risk.
3.1. Risk Measures
Some of the most frequent questions concerning risk management in finance involve extreme quantile estimation. This corresponds to determining the value a given variable exceeds with a given (low) probability. A typical example of such a measure is the Value-at-Risk (VaR). Other less frequently used measures are the expected shortfall (ES) and the return level (cf. Gilli and Kellezi [11]).
3.1.1. VaR Calculation
VaR is a measure of the risk of loss on a specific portfolio of financial assets: it is the threshold value such that the probability that the mark-to-market loss on the portfolio over the given time horizon exceeds this value equals the given probability level. VaR can then be defined as the \(q\)th quantile of the loss distribution \(F\):
\[ \mathrm{VaR}_q = F^{-1}(q), \]
where \(F^{-1}\) is the quantile function, defined as the inverse of the distribution function \(F\). For internal risk control purposes, most financial firms compute a 5% VaR over a one-day holding period.
3.1.2. Expected Shortfall
The expected shortfall is an alternative to VaR that is more sensitive to the shape of the loss distribution’s tail. The expected shortfall at level \(q\) is the expected loss on the portfolio in the worst \(100(1-q)\%\) of the cases:
\[ \mathrm{ES}_q = \mathbb{E}\big[X \mid X \ge \mathrm{VaR}_q\big]. \]
3.1.3. Return Level
Let \(H\) be the distribution of the maxima observed over successive nonoverlapping periods of equal length. The return level \(R_k\) is the level expected to be exceeded, on average, only once in a sequence of \(k\) such periods.
Thus, \(R_k\) is a quantile of the distribution function \(H\):
\[ R_k = H^{-1}\Big(1 - \frac{1}{k}\Big). \]
As this event occurs on average only once every \(k\) periods, we can say that \(\mathbb{P}(M > R_k) = 1/k\), where \(M\) denotes the period maximum.
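All three measures can be estimated directly from simulated or historical losses. The Python sketch below does so on an arbitrary simulated lognormal sample; the parameters, the block length, and the 10-period return horizon are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=1.42, sigma=2.38, size=100_000)  # illustrative loss sample

q = 0.999
var_q = np.quantile(losses, q)             # VaR: q-th quantile of the loss distribution
es_q = losses[losses >= var_q].mean()      # expected shortfall: mean loss beyond the VaR

# return level: level exceeded on average once every k periods,
# estimated from the maxima of successive nonoverlapping blocks of equal length
block_maxima = losses.reshape(1000, 100).max(axis=1)
k = 10
return_level = np.quantile(block_maxima, 1 - 1 / k)

print(var_q, es_q, return_level)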
3.2. Illustration of the LDA Method
Even a cursory look at the operational risk literature reveals that measuring and modeling aggregate loss distributions are central to operational risk management. Since the daily business operations have considerable risk, quantification in terms of an aggregate loss distribution is an important objective. A number of approaches have been developed to calculate the aggregate loss distribution.
We begin this section by examining the severity distribution, the frequency distribution function, and finally the aggregate loss distribution.
3.2.1. Severity of Loss Distributions
Fitting a probability distribution to data on the severity of loss arising from an operational risk event is an important task in any statistically based modeling of operational risk. The observed data to be modeled may either consist of actual values recorded by business line or may be the result of a simulation. In fitting a probability model to empirical data, the general approach is to first select a basic class of probability distributions and then find values for the distributional parameters that best match the observed data.
Following is an example of the Beta and Lognormal Distributions.
The standard Beta distribution is best used when the severity of loss is expressed as a proportion. Given a continuous random variable \(x\) such that \(0 \le x \le 1\), the probability density function of the standard beta distribution is given by
\[ f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, \qquad \text{where } B(\alpha,\beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}. \]
The parameters \(\alpha > 0\) and \(\beta > 0\) control the shape of the distribution.
The mean of the beta distribution is given by \(\alpha/(\alpha+\beta)\). In our example, we will be working with lognormal distributions (see Figure 6). A lognormal distribution is a probability distribution of a random variable whose logarithm is normally distributed. So if \(X\) is a random variable with a normal distribution, then \(Y = \exp(X)\) has a lognormal distribution. Likewise, if \(Y\) is lognormally distributed, then \(X = \ln(Y)\) is normally distributed.

The probability density function of a lognormal distribution is
\[ f(x;\mu,\sigma) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\!\Big(-\frac{(\ln x - \mu)^2}{2\sigma^2}\Big), \qquad x > 0, \]
where \(\mu\) and \(\sigma\) are called the location and scale parameters, respectively. So for a lognormally distributed variable \(X\), \(\mathbb{E}[X] = \exp(\mu + \sigma^2/2)\) and \(\mathrm{Var}(X) = \big(\exp(\sigma^2) - 1\big)\exp(2\mu + \sigma^2)\).
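Because the lognormal parameters are simply the mean and standard deviation of the log-losses, maximum likelihood fitting is immediate. The short Python sketch below fits \(\mu\) and \(\sigma\) to a sample and recovers the implied moments; the sample itself is simulated purely for illustration.

import numpy as np

rng = np.random.default_rng(3)
sample = rng.lognormal(1.42, 2.38, size=5_000)   # stand-in for observed loss severities

log_x = np.log(sample)
mu_hat = log_x.mean()            # MLE of the location parameter
sigma_hat = log_x.std(ddof=0)    # MLE of the scale parameter

mean_hat = np.exp(mu_hat + sigma_hat**2 / 2)                             # E[X]
var_hat = (np.exp(sigma_hat**2) - 1) * np.exp(2 * mu_hat + sigma_hat**2)  # Var(X)
print(mu_hat, sigma_hat, mean_hat, var_hat)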
Statistical and Graphical Tests
There are numerous graphical and statistical tests for assessing the fit of a postulated severity of a loss probability model to empirical data. In this section, we focus on four of the most general tests: probability plots, Q-Q Plots, the Kolmogorov-Smirnov goodness of fit test, and the Anderson-Darling goodness of fit test. In discussing the statistic tests, we shall assume a sample of observations on the severity of loss random variable .
Furthermore, we will be testing (i) \(H_0\): the sample comes from the postulated probability distribution, against (ii) \(H_1\): the sample does not come from the postulated probability distribution.
(1) Probability Plot
A popular way of checking a model is by using probability plots (http://www.itl.nist.gov/div898/handbook/eda/section3/probplot.htm/). To do so, the data are plotted against a theoretical distribution in such a way that the points should form approximately a straight line. Departures from this straight line indicate departures from the specified distribution.
The probability plot is used to answer the following questions.(i)Does a given distribution provide a good fit to the data? (ii)Which distribution best fits my data? (iii)What are the best estimates for the location and scale parameters of the chosen distribution?
(2) Q-Q Plots
Quantile-Quantile Plots (Q-Q Plots) (http://www.itl.nist.gov/div898/handbook/eda/section3/qqplot.htm/) are used to determine whether two samples come from the same distribution family. They are scatter plots of quantiles computed from each sample, with a line drawn between the first and third quartiles. If the data falls near the line, it is reasonable to assume that the two samples come from the same distribution. The method is quite robust, regardless of changes in the location and scale parameters of either distribution.
The Quantile-Quantile Plots are used to answer the following questions.(i)Do two data sets come from populations with a common distribution? (ii)Do two data sets have common location and scale parameters? (iii)Do two data sets have similar distributional shapes? (iv)Do two data sets have similar tail behavior?
(3) Kolmogorov-Smirnov Goodness of Fit Test
The Kolmogorov-Smirnov test statistic is the largest absolute deviation between the cumulative distribution function of the sample data and the cumulative distribution function of the postulated probability distribution, over the range of the random variable:
\[ D = \max_x \big|F_n(x) - F(x)\big| \]
over all \(x\), where \(F_n(x)\) is the cumulative distribution function of the sample data and \(F(x)\) is the cumulative distribution function of the fitted distribution. The Kolmogorov-Smirnov test relies on the fact that the value of the sample cumulative distribution function is asymptotically normally distributed. Hence, the test is distribution-free in the sense that the critical values do not depend on the specific probability distribution being tested.
(4) Anderson-Darling Goodness of Fit Test
The Anderson-Darling test statistic is given by
\[ A^2 = -n - \frac{1}{n}\sum_{i=1}^{n}(2i-1)\Big[\ln F(x_i) + \ln\big(1 - F(x_{n+1-i})\big)\Big], \]
where \(x_1 \le \dots \le x_n\) are the sample data ordered by size. This test is a modification of the Kolmogorov-Smirnov test which is more sensitive to deviations in the tails of the postulated probability distribution. This added sensitivity is achieved by making use of the specific postulated distribution in calculating the critical values. Unfortunately, this extra sensitivity comes at the cost of having to calculate critical values for each postulated distribution.
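In practice these tests are available in standard statistical libraries. The Python sketch below draws a Q-Q plot and runs a Kolmogorov-Smirnov test of a fitted lognormal against a simulated severity sample; note that when the parameters are estimated from the same data, the nominal KS critical values are optimistic, so p-values are often recomputed by simulation, as done later for truncated data.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.lognormal(1.42, 2.38, size=1_000)      # stand-in for observed severities

# fit a lognormal (location fixed at 0) and test the fit
shape, loc, scale = stats.lognorm.fit(sample, floc=0)
ks_stat, p_value = stats.kstest(sample, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")

# Anderson-Darling applied to the log-losses against the normal distribution
ad_result = stats.anderson(np.log(sample), dist="norm")
print(ad_result.statistic, ad_result.critical_values)

# Q-Q plot of the log-losses against the normal distribution
stats.probplot(np.log(sample), dist="norm", plot=plt)
plt.show()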
3.2.2. Loss Frequency Distribution
The key object in frequency of loss modeling is a discrete random variable \(N\) that represents the number of operational risk events observed over a given period. These events occur with some probability \(p\).
Many frequency distributions exist, such as the binomial, negative binomial, and geometric, but we are going to focus on the Poisson distribution in particular for our illustration. To do so, we start by explaining this distribution.
The probability mass function of the Poisson distribution is given by
\[ \mathbb{P}(N = k) = \frac{e^{-\lambda}\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \dots, \]
where \(\lambda > 0\); \(\lambda\) is the mean and \(\sqrt{\lambda}\) is the standard deviation (see Figure 7).

Estimation of the parameter \(\lambda\) can be carried out by maximum likelihood; the maximum likelihood estimator is simply the sample mean of the observed event counts.
All too often, a particular frequency of loss distribution is chosen for no reason other than the risk manager’s familiarity with it. A wide number of alternative distributions are always available, each generating a different pattern of probabilities. It is important, therefore, that the distribution be chosen with appropriate attention to the degree to which it fits the empirical data. The choice of distribution can be based either on a visual inspection of the fitted distribution against the actual data or on a formal statistical test such as the Chi-squared goodness of fit test. For the Chi-squared goodness of fit test, the null hypothesis is \(H_0\): the loss frequency data follow the postulated probability distribution.
The test statistic is calculated by dividing the data into \(K\) categories and is defined as
\[ \chi^2 = \sum_{k=1}^{K}\frac{(O_k - E_k)^2}{E_k}, \]
where \(E_k\) is the expected number of events in category \(k\) determined by the frequency of loss probability distribution, \(O_k\) is the observed number of events, and \(K\) is the number of categories.
The test statistic is a measure of how different the observed frequencies are from the expected frequencies. It has a Chi-squared distribution with \(K - p - 1\) degrees of freedom, where \(p\) is the number of parameters that need to be estimated.
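As an illustration, the Python sketch below fits a Poisson parameter by maximum likelihood to simulated weekly loss counts and compares observed and expected frequencies with the chi-squared statistic; the grouping of the upper tail into one category and the ten-year sample length are illustrative choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
counts = rng.poisson(2, size=520)          # stand-in for 10 years of weekly loss counts

lam_hat = counts.mean()                    # maximum likelihood estimate of lambda

k_max = 6                                  # group counts >= k_max into a single category
observed = np.array([(counts == k).sum() for k in range(k_max)]
                    + [(counts >= k_max).sum()])
probs = ([stats.poisson.pmf(k, lam_hat) for k in range(k_max)]
         + [1 - stats.poisson.cdf(k_max - 1, lam_hat)])
expected = np.array(probs) * counts.size

chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 1                # categories - 1 - number of estimated parameters
p_value = stats.chi2.sf(chi2, dof)
print(chi2, dof, p_value)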
3.2.3. Aggregate Loss Distribution
Even though in practice we may not have access to a historical sample of aggregate losses, it is possible to create sample values that represent aggregate operational risk losses given the severity and frequency of a loss probability model. In our example, we took the Poisson(2) and Lognormal(1.42,2.38) distributions as the frequency and severity distributions, respectively. Using the frequency and severity of loss data, we can simulate aggregate operational risk losses and then use these simulated losses for the calculation of the operational risk capital charge.
The simplest way to obtain the aggregate loss distribution is to collect data on frequency and severity of losses for a particular operational risk type and then fit frequency and severity of loss models to the data. The aggregate loss distribution then can be found by combining the distributions for severity and frequency of operational losses over a fixed period such as a year.
Let us explain this more formally. Suppose \(N\) is a random variable representing the number of OR events between time \(t\) and \(t+\delta\) (\(\delta\) is usually taken as one year), with associated probability mass function \(p(N)\), defined as the probability that exactly \(N\) losses are encountered during the time interval, and let \(X\) be a random variable representing the amount of loss arising from a single type of OR event, with associated severity of loss probability density function \(f_X(x)\). Assuming that the frequency of events is independent of the severity of events, the total loss from the specific type of OR event over the interval \([t, t+\delta]\) is
\[ L = X_1 + X_2 + \dots + X_N. \]
The probability distribution function of \(L\) is a compound probability distribution:
\[ G(x) = \begin{cases} \displaystyle\sum_{n=1}^{\infty} p(n)\, F^{n\otimes}(x), & x > 0,\\[2mm] p(0), & x = 0, \end{cases} \]
where \(G(x)\) is the probability that the aggregate amount of losses is at most \(x\), \(F\) is the distribution function of \(X\), \(\otimes\) is the convolution operator on distribution functions, and \(F^{n\otimes}\) is the \(n\)-fold convolution of \(F\) with itself.
The problem is that for most distributions, cannot be evaluated exactly and it must be evaluated numerically using methods such as Panjer’s recursive algorithm or Monte Carlo simulation.
(a) Panjer’s Recursive Algorithm
If the frequency of loss probability mass function can be written in the form (cf. McNeil et al. [12, page 480])
\[ p(k) = p(k-1)\Big(a + \frac{b}{k}\Big), \qquad k = 1, 2, \dots, \]
where \(a\) and \(b\) are constants, Panjer’s recursive algorithm can be used.
The recursion is given by
\[ g(x) = p(1)\,f(x) + \int_0^x\Big(a + \frac{b\,y}{x}\Big) f(y)\, g(x-y)\, dy, \qquad x > 0, \]
where \(g\) is the probability density function of \(L\).
The Poisson, binomial, negative binomial, and geometric distributions all satisfy this form. For example, if our frequency of loss follows the Poisson distribution seen above,
\[ p(k) = \frac{e^{-\lambda}\lambda^{k}}{k!}, \]
then \(a = 0\) and \(b = \lambda\).
A limitation of Panjer’s algorithm is that only discrete probability distributions are valid. This shows that our severity of loss distribution, which is generally continuous, must be made discrete before it can be used. Another much larger drawback to the practical use of this method is that the calculation of convolutions is extremely long and it becomes impossible as the number of losses in the time interval under consideration becomes large.
(b) Monte Carlo Method
The Monte Carlo simulation is the simplest and often the most direct approach. It involves the following steps (cf. Dahen [6]). (1) Choose a severity of loss and a frequency of loss probability model. (2) Generate the number of losses \(n\), daily or weekly, from the frequency of loss distribution. (3) Generate \(n\) losses \(X_1, \dots, X_n\) from the loss severity distribution. (4) Repeat steps 2 and 3 for 365 days (for daily losses) or 52 weeks (for weekly losses), summing all the generated losses to obtain \(S\), the annual loss. (5) Repeat steps 2 to 4 many times (at least 5,000) to obtain the annual aggregate loss distribution. (6) The VaR is calculated by taking the 99.9th percentile of the aggregate loss distribution.
Now, focusing on our example with Lognormal(1.42, 2.38) as the severity distribution and Poisson(2) as the frequency distribution, we apply the Monte Carlo procedure to calculate the VaR corresponding to the operational risk for a specific risk type (say, internal fraud).
To explain the example, we took the Poisson and Lognormal as the weekly loss frequency and severity distributions, respectively. For the aggregate loss distribution, we generate each week a number of losses \(n\) from the Poisson distribution and \(n\) losses from the Lognormal distribution; by summing those losses and repeating the same steps over the 52 weeks of the year, we obtain one annual total loss \(S\).
In the end, we repeat the same steps 100,000 times and obtain the aggregate loss distribution (see Figure 8), on which we calculate the Value at Risk at 99.9%.
The programming was done using Matlab software and it resulted in the output and calculations as shown in Table 10.
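The paper’s own computation was done in Matlab; a minimal Python sketch of the same Monte Carlo aggregation is given below, using the Poisson(2) weekly frequency and Lognormal(1.42, 2.38) severity of the example, 100,000 simulated years, and the 99.9% quantile. Being a fresh simulation, it will not reproduce the Table 10 figures exactly.

import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 1.42, 2.38        # lognormal severity parameters from the example
lam = 2                       # Poisson weekly frequency parameter
weeks, n_years = 52, 100_000

annual_losses = np.empty(n_years)
for i in range(n_years):
    counts = rng.poisson(lam, size=weeks)                        # weekly numbers of losses
    annual_losses[i] = rng.lognormal(mu, sigma, size=counts.sum()).sum()

el = annual_losses.mean()                                        # expected annual loss
op_var = np.quantile(annual_losses, 0.999)                       # OpVaR at 99.9%
print(f"EL = {el:,.0f}   OpVaR(99.9%) = {op_var:,.0f}   UL = {op_var - el:,.0f}")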

3.3. Treatment of Truncated Data
Generally, not all operational losses are declared. Databases are recorded starting from a threshold of a specific amount (e.g., 5,000€). This phenomenon, if not properly addressed, may create unwanted biases of the aggregate loss since the parameter estimation regarding the fitted distributions would be far from reality.
In this section, we will discuss the various approaches used in dealing with truncated data.
Data are said to be truncated when observations that fall within a given set are excluded. Data are left-truncated when observations below a specific value are excluded, which means that neither the frequency nor the severity of such observations has been recorded (cf. Chernobai et al. [13, 14]).
In general, there are four different kinds of approaches that operational risk managers apply to estimate the parameters of the frequency and severity distributions in the absence of data due to truncation.
Approach 1. For this first approach, the missing observations are ignored and the observed data are treated as a complete data set in fitting the frequency and severity distributions. This approach leads to the highest biases in parameter estimation. Unfortunately, this is also the approach used by most practitioners.
Approach 2. The second approach is divided into two steps (see Figure 9). (i) Similar to the first approach, unconditional distributions are fitted to the severity and frequency data. (ii) The frequency parameter is adjusted according to the estimated fraction of the data lying above the threshold \(H\).

In the end, the adjusted frequency distribution parameter is expressed by
\[ \hat\lambda_{\mathrm{adj}} = \frac{\hat\lambda_{\mathrm{obs}}}{1 - \hat F(H)}, \]
where \(\hat\lambda_{\mathrm{adj}}\) represents the adjusted (complete data) parameter estimate, \(\hat\lambda_{\mathrm{obs}}\) is the observed frequency parameter estimate, and \(\hat F(H)\) is the estimated severity distribution function evaluated at the threshold \(H\).
Approach 3. This approach is different from the previous approaches since the truncated data is explicitly taken into account in the estimation of the severity distribution to fit conditional severity and unconditional frequency (see Figure 10).
The density of the left-truncated severity distribution is
\[ f_{\mathrm{trunc}}(x) = \frac{f(x)}{1 - F(H)}, \qquad x \ge H, \]
and zero otherwise.

Approach 4. The fourth approach is deemed the best in application as it combines the second and third procedures: the severity distribution is estimated conditionally as in Approach 3, and the frequency parameter is then adjusted with the formula \(\hat\lambda_{\mathrm{adj}} = \hat\lambda_{\mathrm{obs}}/(1 - \hat F(H))\) of Approach 2, \(\hat F\) now being the conditionally fitted severity distribution.
In modelling operational risk, this is the only relevant approach out of the four proposed as it addresses both the severity and the frequency of a given distribution.
3.3.1. Estimating Parameters Using MLE
The MLE method can then be applied to estimate our parameters. To demonstrate, let us define \(x_1, \dots, x_n\) as the losses exceeding the threshold \(H\), so the conditional likelihood can be written as
\[ L(\theta) = \prod_{i=1}^{n}\frac{f_\theta(x_i)}{1 - F_\theta(H)}, \]
and the log-likelihood is
\[ \ell(\theta) = \sum_{i=1}^{n}\ln f_\theta(x_i) - n\ln\big(1 - F_\theta(H)\big). \]
When losses are truncated, the observed frequency distribution has to be adjusted to account for the nondeclared losses. For each period \(t\), let us define \(m_t\) as the number of estimated losses below the threshold, which has to be added to \(n_t\), the observed number of losses, so that the adjusted number of losses is \(n_t^{\mathrm{adj}} = n_t + m_t\).
To reiterate, the ratio between the number of losses below the threshold, \(m_t\), and the observed number of losses, \(n_t\), is equal to the ratio between the left and right severity functions:
\[ \frac{m_t}{n_t} = \frac{\hat F(H)}{1 - \hat F(H)}, \]
where \(\hat F\) is the fitted (truncation-adjusted) cumulative distribution function with parameters estimated using MLE. Finally, we have
\[ n_t^{\mathrm{adj}} = n_t + m_t = \frac{n_t}{1 - \hat F(H)}. \]
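The conditional fit and the frequency adjustment can be sketched in Python as follows: the severity parameters are estimated by maximizing the left-truncated lognormal likelihood, and the observed Poisson intensity is then scaled up by the estimated probability mass below the threshold. The threshold, the simulated data, and the ten-period observation window are illustrative assumptions.

import numpy as np
from scipy import stats, optimize

H = 5_000.0                                     # reporting threshold
rng = np.random.default_rng(11)
full = rng.lognormal(8.0, 1.5, size=2_000)      # "true" losses, for illustration only
obs = full[full >= H]                           # only losses above the threshold are recorded

def neg_cond_loglik(params):
    """Negative conditional log-likelihood of a lognormal truncated at H."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    logf = stats.norm.logpdf(np.log(obs), mu, sigma) - np.log(obs)   # lognormal log-density
    log_tail = stats.norm.logsf(np.log(H), mu, sigma)                # log(1 - F(H))
    return -(logf.sum() - obs.size * log_tail)

res = optimize.minimize(neg_cond_loglik,
                        x0=[np.log(obs).mean(), np.log(obs).std()],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x

# frequency adjustment: lambda_adj = lambda_obs / (1 - F(H))
lam_obs = obs.size / 10.0                       # e.g., average observed losses per period over 10 periods
tail_prob = stats.norm.sf((np.log(H) - mu_hat) / sigma_hat)
lam_adj = lam_obs / tail_prob
print(mu_hat, sigma_hat, lam_adj)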
3.3.2. Kolmogorov-Smirnov Test Adapted for Left Truncated Data
The Kolmogorov-Smirnov (KS) test measures the absolute value of the maximum distance between the empirical and the fitted distribution functions and puts equal weight on each observation. Because of the truncation, the KS test has to be adapted (cf. Chernobai et al. [13, 14]).
For that, let us assume the random variables \(X_1, \dots, X_n\) are iid, following the unknown probability distribution \(F\).
The related null hypothesis is \(H_0\): \(F\) has the cumulative distribution \(\hat F^{*}\), where \(\hat F^{*}(x) = \big(\hat F(x) - \hat F(H)\big)/\big(1 - \hat F(H)\big)\) for \(x \ge H\) is the fitted severity distribution conditioned on exceeding the threshold \(H\).
Let us note \(y_j = \hat F^{*}(x_{(j)})\) for the ordered observations \(x_{(1)} \le \dots \le x_{(n)}\), so that the adapted statistic is
\[ KS_{\mathrm{obs}} = \sqrt{n}\,\sup_{x \ge H}\big|F_n(x) - \hat F^{*}(x)\big| = \sqrt{n}\,\max\Big(\max_j\big(\tfrac{j}{n} - y_j\big),\ \max_j\big(y_j - \tfrac{j-1}{n}\big)\Big), \]
where \(F_n\) is the empirical distribution function of the observed (truncated) sample.
The associated \(p\) value is then calculated using Monte Carlo simulation.
3.4. Working with Extremes for Catastrophic Risks
“If things go wrong, how wrong can they go?” is a particular question which one would like to answer (cf. Gilli and Kellezi [11]).
Extreme Value Theory (EVT) is a branch of statistics that characterises the tail behavior of a distribution without tying the analysis down to a single parametric family fitted to the whole distribution. The theory was pioneered by Leonard Henry Caleb Tippett, an English physicist and statistician, and was codified by Emil Julius Gumbel, a German mathematician, in 1958. We use it to model the rare phenomena that lie outside the range of available observations.
The theory’s importance has been heightened by a number of publicised catastrophic incidents related to operational risk.(i)In February 1995, the Singapore subsidiary of Barings, a long-established British bank, lost about $1.3 billion because of the illegal activity of a single trader, Nick Leeson. As a result, the bank collapsed and was subsequently sold for one pound. (ii)At Daiwa Bank, a single trader, Toshihide Igushi, lost $1.1 billion in trading over a period of 11 years. These losses only became known when Iguchi confessed his activities to his managers in July 1995.
In all areas of risk management, we should take into account extreme event risk, which is characterized by low frequency and high severity.
In financial risk, we calculate the daily Value-at-Risk for market risk and we determine the required risk capital for credit and operational risks. As with insurance risks, we build reserves for products which offer protection against catastrophic losses.
Extreme Value Theory can also be used in hydrology and structural engineering, where failure to take proper account of extreme values can have devastating consequences.
Now, back to our study, operational risk data appear to be characterized by two attributes: the first one, driven by high-frequency low impact events, constitutes the body of the distribution and refers to expected losses; the second one, driven by low-frequency high-impact events, constitutes the tail of the distribution and refers to unexpected losses. In practice, the body and the tail of data do not necessarily belong to the same underlying distribution or even to distributions belonging to the same family.
Extreme Value Theory appears to be a useful approach to investigating large losses, mainly because of its double property of focusing the analysis only on the tail area (hence reducing the disturbance from small- and medium-sized data) and of treating the large losses with an approach as scientifically grounded as the one the Central Limit Theorem provides for the analysis of high-frequency low-impact losses.
We start by briefly exploring the theory.
EVT is applied to real data in two related ways. The first approach deals with the maximum (or minimum) values that the variable takes in successive periods, for example, months or years. These observations constitute the extreme events, also called block (or per-period) maxima. At the heart of this approach is the “three-type theorem” (Fisher and Tippett, 1928), which states that only three types of distributions can arise as limiting distributions of extreme values in random samples: the Weibull, the Gumbel, and the Fréchet distributions. This result is important as the asymptotic distribution of the maxima always belongs to one of these three distributions, regardless of the original distribution.
Therefore, the majority of the distributions used in finance and actuarial sciences can be divided into these three categories as follows, according to the weight of their tails (cf. Smith [15]). (i) Light-tail distributions with finite moments and tails, converging to the Weibull curve (Beta, Weibull). (ii) Medium-tail distributions for which all moments are finite and whose cumulative distribution functions decline exponentially in the tails, like the Gumbel curve (Normal, Gamma, Lognormal). (iii) Heavy-tail distributions, whose cumulative distribution functions decline with a power in the tails, like the Fréchet curve (Student-t, Pareto, Log-Gamma, Cauchy).
The second approach to EVT is the Peaks Over Threshold (POT) method, tailored for the analysis of data bigger than the preset high thresholds. The severity component of the POT method is based on the Generalised Pareto Distribution (GPD). We discuss the details of these two approaches in the following segments.
3.4.1. Generalized Extreme Value Distribution: Basic Concepts
Suppose $X_1, X_2, \ldots, X_n$ are independent random variables, identically distributed with common distribution function $F$ (with finite mean $\mu$ and variance $\sigma^2$), and let $S_n = X_1 + \cdots + X_n$ and $M_n = \max(X_1, \ldots, X_n)$.
We have the following two theorems (cf. Smith [15]).
Theorem 3.1 (Central Limit Theorem). One has
$$\lim_{n\to\infty} P\left(\frac{S_n-n\mu}{\sigma\sqrt{n}}\le x\right)=\Phi(x),\qquad x\in\mathbb{R},$$
where $\Phi$ is the distribution function of the standard normal distribution,
$$\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-y^{2}/2}\,dy.$$
Theorem 3.2 (Fisher-Tippett). If there exist suitable normalising constants $c_n>0$, $d_n\in\mathbb{R}$, and some nondegenerate distribution function $H$ such that
$$P\left(\frac{M_n-d_n}{c_n}\le x\right)=F^{n}(c_n x+d_n)\longrightarrow H(x),\qquad n\to\infty,$$
then $H$ belongs to one of the three standard extreme value distributions (see Figure 11) (cf. Gilli and Kellezi [11]): (i)Gumbel:
$$\Lambda(x)=\exp\left(-e^{-x}\right),\qquad x\in\mathbb{R};$$
(ii)Fréchet:
$$\Phi_{\alpha}(x)=\begin{cases}0, & x\le 0,\\ \exp\left(-x^{-\alpha}\right), & x>0,\end{cases}\qquad \alpha>0;$$
(iii)Weibull:
$$\Psi_{\alpha}(x)=\begin{cases}\exp\left(-(-x)^{\alpha}\right), & x\le 0,\\ 1, & x>0,\end{cases}\qquad \alpha>0.$$
Jenkinson and von Mises generalised the three functions by the following distribution function:
$$H_{\xi}(x)=\begin{cases}\exp\left(-(1+\xi x)^{-1/\xi}\right) & \text{if } \xi\neq 0,\\ \exp\left(-e^{-x}\right) & \text{if } \xi=0,\end{cases}$$
where $1+\xi x>0$; a three-parameter family is obtained by defining $H_{\xi,\mu,\sigma}(x):=H_{\xi}\left((x-\mu)/\sigma\right)$ for a location parameter $\mu\in\mathbb{R}$ and a scale parameter $\sigma>0$.
The case $\xi>0$ corresponds to the Fréchet distribution with $\alpha=1/\xi$, the case $\xi<0$ to the Weibull distribution with $\alpha=-1/\xi$, and the limit $\xi\to 0$ to the Gumbel distribution.

3.5. Block Maxima Method
As we have seen previously, observations in the block maxima method are grouped into successive blocks and the maxima within each block are selected. The theory states that the limit law of the block maxima belongs to one of the three standard extreme value distributions mentioned before.
To use the block-maxima method, a succession of steps needs to be followed. First, the sample must be divided into blocks of equal length. Next, the maximum (or minimum) value in each block should be collected. Then, we fit the generalized extreme value distribution, and finally, we compute the point and interval estimates for the return level $R_k$.
Determining the Return Level
The standard generalized extreme value distribution is the limiting distribution of normalized extrema. Given that in practice we do not know the true distribution of the returns and, as a result, we have no idea about the norming constants $c_n$ and $d_n$, we use the three-parameter specification of the generalized extreme value distribution
$$H_{\xi,\mu,\sigma}(x)=\exp\left(-\left(1+\xi\frac{x-\mu}{\sigma}\right)^{-1/\xi}\right),\qquad x\in D,$$
where $D=\{x:1+\xi(x-\mu)/\sigma>0\}$.
The two additional parameters $\mu$ and $\sigma$ are the location and the scale parameters representing the unknown norming constants. The log-likelihood function that we maximize with respect to the three unknown parameters is
$$\ell(\xi,\mu,\sigma\mid x_1,\ldots,x_m)=\sum_{i=1}^{m}\log h(x_i),$$
where
$$h(x)=\frac{1}{\sigma}\left(1+\xi\frac{x-\mu}{\sigma}\right)^{-1/\xi-1}\exp\left(-\left(1+\xi\frac{x-\mu}{\sigma}\right)^{-1/\xi}\right)$$
is the probability density function if $1+\xi(x-\mu)/\sigma>0$ and $\xi\neq 0$. If $\xi=0$, the function is
$$h(x)=\frac{1}{\sigma}\exp\left(-\frac{x-\mu}{\sigma}\right)\exp\left(-\exp\left(-\frac{x-\mu}{\sigma}\right)\right).$$
As defined before, the return level $R_k$ is the level we expect to be exceeded only once every $k$ periods (e.g., years):
$$R_k=H_{\xi,\mu,\sigma}^{-1}\left(1-\frac{1}{k}\right)=\mu+\frac{\sigma}{\xi}\left(\left(-\log\left(1-\frac{1}{k}\right)\right)^{-\xi}-1\right).$$
Substituting the parameters $\xi$, $\mu$, and $\sigma$ by their estimates, we get
$$\hat{R}_k=\hat{\mu}+\frac{\hat{\sigma}}{\hat{\xi}}\left(\left(-\log\left(1-\frac{1}{k}\right)\right)^{-\hat{\xi}}-1\right).$$
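As an illustration of these steps, the following Python sketch (not the paper's own Matlab/R code) fits the GEV distribution to a purely hypothetical series of block maxima and computes a $k$-year return level; note that scipy parameterizes the GEV with shape $c=-\xi$.

```python
# A minimal sketch, assuming hypothetical annual maxima; not the paper's own code.
import numpy as np
from scipy.stats import genextreme

# Illustrative block (annual) maxima of operational losses.
annual_maxima = np.array([1.2e6, 0.8e6, 2.5e6, 1.1e6, 3.9e6,
                          1.7e6, 0.9e6, 2.2e6, 5.4e6, 1.5e6])

# Maximum likelihood fit; scipy's shape c equals -xi.
c, mu, sigma = genextreme.fit(annual_maxima)
xi = -c

# k-year return level = the (1 - 1/k) quantile of the fitted GEV.
k = 20
return_level = genextreme.ppf(1 - 1 / k, c, loc=mu, scale=sigma)
print(f"xi = {xi:.3f}, mu = {mu:,.0f}, sigma = {sigma:,.0f}")
print(f"{k}-year return level = {return_level:,.0f}")
```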
3.5.1. Generalized Pareto Distribution
The Generalized Pareto (GP) Distribution has a distribution function with two parameters,
$$G_{\xi,\sigma}(x)=\begin{cases}1-\left(1+\dfrac{\xi x}{\sigma}\right)^{-1/\xi} & \text{if } \xi\neq 0,\\ 1-e^{-x/\sigma} & \text{if } \xi=0,\end{cases}$$
where $\sigma>0$, and where $x\ge 0$ when $\xi\ge 0$ and $0\le x\le -\sigma/\xi$ when $\xi<0$.
The value of $\xi$ determines the type of distribution: for $\xi<0$, the model gives the type II Pareto distribution; for $\xi=0$, we get the exponential distribution; for $\xi>0$, we get a reparameterised Pareto distribution.
For $\xi<1$, we have the following formula, which we use to calculate the mean:
$$E[X]=\frac{\sigma}{1-\xi}.$$
For $\xi<\tfrac{1}{2}$, we calculate the variance as
$$\mathrm{Var}[X]=\frac{\sigma^{2}}{(1-\xi)^{2}(1-2\xi)}.$$
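As a quick numerical check (not from the paper), the sketch below compares these closed-form moments with the moments returned by scipy's genpareto for illustrative values $\xi=0.3$ and $\sigma=2$.

```python
# A minimal sketch comparing the GPD moment formulas with scipy; illustrative values.
from scipy.stats import genpareto

xi, sigma = 0.3, 2.0   # hypothetical shape (< 1/2) and scale
mean, var = genpareto.stats(xi, loc=0, scale=sigma, moments="mv")

print(mean, sigma / (1 - xi))                          # both approx. 2.857
print(var, sigma**2 / ((1 - xi)**2 * (1 - 2 * xi)))    # both approx. 20.408
```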
3.5.2. Excess Loss Distribution
Excess losses are defined as those losses that exceed a threshold. So, given a threshold value for large losses, the excess loss technique can be applied to determine the amount of provisions needed to provide a reserve for large losses. We consider a distribution function $F$ of a random variable $X$ which describes the behavior of the operational risk data in a certain business line (BL). We are interested in estimating the distribution function $F_u$ of the values of $X$ above a certain threshold $u$ (cf. Medova and Kyriacou [16]). The distribution $F_u$ is called the conditional excess distribution function and is formally defined as
$$F_u(y)=P(X-u\le y\mid X>u),\qquad 0\le y\le x_F-u,$$
where $x_F\le\infty$ is the right endpoint of $F$. We verify that $F_u$ can be written in terms of $F$ as
$$F_u(y)=\frac{F(u+y)-F(u)}{1-F(u)}.$$
For a large class of underlying distribution functions $F$, the conditional excess distribution function $F_u(y)$, for a large $u$, is approximated by
$$F_u(y)\approx G_{\xi,\sigma}(y),\qquad u\to\infty,$$
where $G_{\xi,\sigma}$ is the Generalized Pareto Distribution.
We will now derive analytical expressions for $\mathrm{VaR}_q$ and $\mathrm{ES}_q$. First, we write $F(x)$, for $x>u$, as
$$F(x)=(1-F(u))\,G_{\xi,\sigma}(x-u)+F(u).$$
Then, we estimate $F(u)$ by $(n-N_u)/n$, where $n$ is the total number of observations and $N_u$ the number of observations above the threshold $u$. So we have
$$\hat{F}(x)=\frac{N_u}{n}\left(1-\left(1+\frac{\hat{\xi}}{\hat{\sigma}}(x-u)\right)^{-1/\hat{\xi}}\right)+\left(1-\frac{N_u}{n}\right),$$
which simplifies to
$$\hat{F}(x)=1-\frac{N_u}{n}\left(1+\frac{\hat{\xi}}{\hat{\sigma}}(x-u)\right)^{-1/\hat{\xi}}.$$
Inverting the last equation for a given confidence level $q$, we have
$$\mathrm{VaR}_q=u+\frac{\hat{\sigma}}{\hat{\xi}}\left(\left(\frac{n}{N_u}(1-q)\right)^{-\hat{\xi}}-1\right).$$
For the calculation of the expected shortfall, we notice that
$$\mathrm{ES}_q=\mathrm{VaR}_q+E\left[X-\mathrm{VaR}_q\mid X>\mathrm{VaR}_q\right].$$
Since the excesses over the threshold follow a GPD and $\hat{\xi}<1$ is the shape parameter, the mean excess above $\mathrm{VaR}_q$ is $\left(\hat{\sigma}+\hat{\xi}(\mathrm{VaR}_q-u)\right)/(1-\hat{\xi})$, and we can immediately conclude that
$$\mathrm{ES}_q=\frac{\mathrm{VaR}_q}{1-\hat{\xi}}+\frac{\hat{\sigma}-\hat{\xi}u}{1-\hat{\xi}}.$$
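These tail estimators translate directly into code. The following Python sketch (with purely illustrative threshold, parameter estimates, and sample counts) evaluates $\mathrm{VaR}_q$ and $\mathrm{ES}_q$ for a fitted GPD.

```python
# A minimal sketch of the VaR/ES formulas above; all inputs are illustrative.
def gpd_var_es(u, xi, sigma, n, n_u, q):
    """VaR_q and ES_q from a GPD fitted to the excesses over threshold u."""
    # VaR_q = u + (sigma/xi) * (((n/N_u) * (1 - q))**(-xi) - 1)
    var_q = u + (sigma / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
    # ES_q = VaR_q / (1 - xi) + (sigma - xi * u) / (1 - xi), valid for xi < 1
    es_q = var_q / (1 - xi) + (sigma - xi * u) / (1 - xi)
    return var_q, es_q

var_999, es_999 = gpd_var_es(u=100_000, xi=0.4, sigma=35_000,
                             n=5_000, n_u=120, q=0.999)
print(f"VaR 99.9% = {var_999:,.0f}, ES 99.9% = {es_999:,.0f}")
```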
3.5.3. The Peak over Threshold
The POT method considers observations exceeding a given high threshold. As an approach, it has increased in popularity as it uses data more efficiently than the block maxima method. However, the choice of a threshold can pose a problem.
To use the peaks-over-threshold method, we first select the threshold $u$. Then, we fit the Generalised Pareto Distribution function to the exceedances above $u$. Next, we compute the point and interval estimates for the Value-at-Risk and the expected shortfall (cf. Medova and Kyriacou [16]).
Selection of the Threshold
While the threshold should be high, we need to keep in mind that with a higher threshold, fewer observations are left for the estimation of the parameters of the tail distribution function.
So it is better to select the threshold manually, using a graphical tool to help us with the selection. We define the sample mean excess plot by the points
$$\left\{\left(x_{(k)},\,e_n\left(x_{(k)}\right)\right):k=1,\ldots,n\right\},$$
where $e_n(u)$ is the sample mean excess function defined as
$$e_n(u)=\frac{\sum_{i=1}^{n}\left(x_i-u\right)^{+}}{\sum_{i=1}^{n}\mathbf{1}_{\{x_i>u\}}},$$
that is, the sum of the excesses over the threshold $u$ divided by the number of observations exceeding $u$, and where $x_{(1)}\le x_{(2)}\le\cdots\le x_{(n)}$ represent the observations in increasing order.
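For illustration, the Python sketch below (using simulated lognormal losses rather than real data) draws the sample mean excess plot; a threshold is commonly chosen where the plot becomes roughly linear.

```python
# A minimal sketch of the sample mean excess plot; the losses are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
losses = np.sort(rng.lognormal(mean=10, sigma=2, size=2000))

thresholds = losses[:-1]  # candidate thresholds x_(1) <= ... <= x_(n-1)
mean_excess = [np.mean(losses[losses > u] - u) for u in thresholds]

plt.plot(thresholds, mean_excess, ".")
plt.xlabel("threshold u")
plt.ylabel("sample mean excess e_n(u)")
plt.title("Sample mean excess plot")
plt.show()
```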
Fitting the GPD Function to the Exceedances over $u$
As defined in the previous sections, the distribution of the observations above the threshold in the right tail (and below the threshold in the left tail) should be a generalized Pareto distribution. The best method to estimate the distribution's parameters is maximum likelihood estimation, explained below.
For a sample of exceedances $y=(y_1,\ldots,y_n)$, the log-likelihood function for the GPD is the logarithm of the joint density of the observations:
$$L(\xi,\sigma\mid y)=\begin{cases}-n\log\sigma-\left(\dfrac{1}{\xi}+1\right)\displaystyle\sum_{i=1}^{n}\log\left(1+\dfrac{\xi}{\sigma}y_i\right) & \text{if } \xi\neq 0,\\[2ex] -n\log\sigma-\dfrac{1}{\sigma}\displaystyle\sum_{i=1}^{n}y_i & \text{if } \xi=0.\end{cases}$$
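In practice, this maximization is usually delegated to standard software. The sketch below (simulated data, with the threshold taken as the empirical 95% quantile, both assumptions for illustration) fits the GPD to the excesses by maximum likelihood via scipy.

```python
# A minimal sketch: GPD maximum likelihood fit to the excesses over a threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=10, sigma=2, size=5000)   # simulated losses

u = np.quantile(losses, 0.95)        # illustrative threshold choice
excesses = losses[losses > u] - u

# loc is fixed at 0 because the excesses y = x - u are modelled directly.
xi, _, sigma = genpareto.fit(excesses, floc=0)
print(f"u = {u:,.0f}, xi = {xi:.3f}, sigma = {sigma:,.0f}")
```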
3.6. Bayesian Techniques in Operational Risk
The ideas behind Bayesian theory are easily applicable to operational risk, especially in the early days of measurement when data was not available. While Bayes (1763), an English clergyman and statistician, developed his theory long ago, it has recently enjoyed a renaissance amongst academics due to advances in computational techniques to solve complex problems and formulas.
Under the new regulations of Basel II and Solvency II, many financial institutions have adopted a Loss Distribution Approach (LDA) to estimate their operational risk capital charge. Bayesian inference provides a methodical way to combine internal data, expert opinions, and relevant external data. The main idea is as follows. We start with external market data, which determines a prior estimate. This estimate is then modified by integrating internal observations and expert opinions, leading to a posterior estimate. Risk measures are then calculated from this posterior knowledge.
3.6.1. The Bayesian Approach: Internal Data, External Data, and Expert Opinion
The Basel Committee has mentioned explicitly that (cf. BCBS [17], paragraph 675): “A bank must use scenario analysis of expert opinion in conjunction with external data to evaluate its exposure to high-severity events. This approach draws on the knowledge of experienced business managers and risk management experts to derive reasoned assessments of plausible severe losses. For instance, these expert assessments could be expressed as parameters of an assumed statistical loss distribution.”
As mentioned earlier, the Basel Committee has defined an operational risk matrix of 56 risk cells (eight business lines by seven event types). Each of these risk cells leads financial institutions to model a loss frequency and a loss severity distribution. Let us focus on one risk cell at a time.
After choosing the corresponding frequency and severity distributions, the managers estimate the necessary parameters. Let $\theta$ refer to the company's risk profile, which could correspond to the location, scale, or shape of the severity distribution. While $\theta$ needs to be estimated from available internal information, the problem is that a small amount of internal data does not lead to a robust estimate of $\theta$. Therefore, the estimate needs to take other sources into account, namely external data and expert opinions.
For that, the risk profile $\theta$ is treated as the realization of a random vector $\Theta$ which is calibrated by the use of external data from market information. $\Theta$ is therefore a random vector with a known distribution, and the best prediction of our company-specific risk profile is based on a transformation of the external knowledge represented by this random vector. The distribution of $\Theta$ is called the prior distribution.
To explore this aspect further, before assessing any expert opinion and before any internal data study, all companies have the same prior distribution, generated from market information only. Company-specific operational risk events $X$ and expert opinions $\delta$ are gathered over time. As a result, these observations influence our judgment of the prior distribution, and an adjustment has to be made to our company-specific parameter vector (see Table 11). Clearly, the more data we have on $X$ and $\delta$, the better the prediction of our vector and the less credibility we give to the market. So, in a way, the observations $X$ and the expert opinions $\delta$ transform the market prior risk profile into a conditional distribution of $\Theta$ given $X$ and $\delta$, denoted by $\pi(\theta\mid X,\delta)$ (cf. Lambrigger et al. [18]).
We denote by $\pi(\theta)$ the unconditional parameter density and by $\pi(\theta\mid X,\delta)$ the conditional parameter density, also called the posterior density, and we assume that observations and expert opinions are conditionally independent and identically distributed (i.i.d.) given $\theta$, so that
$$h_1(X\mid\theta)=\prod_{k=1}^{K}f_1(X_k\mid\theta),\qquad h_2(\delta\mid\theta)=\prod_{m=1}^{M}f_2(\delta_m\mid\theta),$$
where $f_1$ and $f_2$ are the marginal densities of a single observation and of a single expert opinion, respectively.
Bayes' theorem gives the posterior density of $\Theta$:
$$\pi(\theta\mid X,\delta)=\frac{h_1(X\mid\theta)\,h_2(\delta\mid\theta)\,\pi(\theta)}{c},$$
where $c$ is a normalizing constant not depending on $\theta$. In the end, the company-specific parameter can be estimated by the posterior mean $\hat{\theta}=E[\Theta\mid X,\delta]$.
3.6.2. A Simple Model
Let loss severities be distributed according to a lognormal-normal-normal model, for example. Given this model, we hold the following assumptions to be true (cf. Lambrigger et al. [3]). (i)Market profile: let $\Theta$ be normally distributed with mean $\mu_0$ and standard deviation $\sigma_0$, estimated from external sources, that is, market data. (ii)Internal data: consider the losses $X_1,\ldots,X_K$ of a given institution, conditional on $\Theta=\theta$, to be i.i.d. lognormally distributed, $X_k\mid\theta\sim\mathrm{LN}(\theta,\sigma^{2})$, where the scale $\sigma$ is assumed to be known. That is, $f_1(\cdot\mid\theta)$ corresponds to the density of an $\mathrm{LN}(\theta,\sigma^{2})$ distribution. (iii)Expert opinion: suppose we have $M$ experts with opinions $\delta_m$ around the parameter $\theta$, $\delta_m\mid\theta\sim\mathcal{N}(\theta,\tau^{2})$, $m=1,\ldots,M$, where $\tau$ is the standard deviation denoting expert uncertainty. That is, $f_2(\cdot\mid\theta)$ corresponds to the density of an $\mathcal{N}(\theta,\tau^{2})$ distribution.
Moreover, we assume expert opinion and internal data to be conditionally independent given a risk profile .
We adjust the market profile to the individual company's profile by taking internal data and expert opinion into consideration, transforming the distribution so that it becomes company specific. The market parameters $\mu_0$ and $\sigma_0$ of the prior distribution are estimated from external data (e.g., by maximum likelihood or the method of moments).
Under the model assumptions, we have the credibility weighted average theorem. Given the data $X$ and the expert opinions $\delta$, the posterior distribution of $\Theta$ is a normal distribution $\mathcal{N}(\hat{\mu},\hat{\sigma}^{2})$ with parameters
$$\hat{\mu}=\omega_1\mu_0+\omega_2\bar{X}+\omega_3\bar{\delta},\qquad \hat{\sigma}^{2}=\left(\frac{1}{\sigma_0^{2}}+\frac{K}{\sigma^{2}}+\frac{M}{\tau^{2}}\right)^{-1},$$
where $\bar{X}=\frac{1}{K}\sum_{k=1}^{K}\log X_k$, $\bar{\delta}=\frac{1}{M}\sum_{m=1}^{M}\delta_m$, and the credibility weights are given by $\omega_1=\hat{\sigma}^{2}/\sigma_0^{2}$, $\omega_2=\hat{\sigma}^{2}K/\sigma^{2}$, and $\omega_3=\hat{\sigma}^{2}M/\tau^{2}$.
The theorem provides a consistent and unified method to combine the three mentioned sources of information by weighting the internal observations, the relevant external data, and the expert opinion according to their credibility. If a source of information is not believed to be very plausible, it is given a smaller corresponding weight, and vice versa. As expected, the weights $\omega_1$, $\omega_2$, and $\omega_3$ add up to 1.
This theorem not only gives us the company's expected risk profile, represented by $\hat{\mu}$, but also the full posterior distribution $\mathcal{N}(\hat{\mu},\hat{\sigma}^{2})$ of the risk profile, allowing us to quantify the risk and its corresponding uncertainty.
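To make the credibility weighting concrete, here is a minimal Python sketch of the posterior mean and weights under the lognormal-normal-normal model; every parameter value, loss, and expert opinion below is an illustrative placeholder, not a figure from the example that follows.

```python
# A minimal sketch of the credibility weighted average; all inputs are illustrative.
import numpy as np

mu0, sigma0 = 10.0, 1.5     # market (prior) mean and standard deviation of Theta
sigma = 2.0                 # known scale of the lognormal internal losses
tau = 1.0                   # expert uncertainty

log_losses = np.log([25e3, 40e3, 12e3, 95e3, 30e3])   # log X_k, internal data
experts = np.array([10.5, 11.0])                      # expert opinions delta_m

K, M = len(log_losses), len(experts)
post_var = 1.0 / (1 / sigma0**2 + K / sigma**2 + M / tau**2)   # sigma_hat^2

w1 = post_var / sigma0**2       # credibility weight of the market prior
w2 = post_var * K / sigma**2    # credibility weight of the internal data
w3 = post_var * M / tau**2      # credibility weight of the expert opinions

post_mean = w1 * mu0 + w2 * log_losses.mean() + w3 * experts.mean()
print(f"weights: {w1:.3f}, {w2:.3f}, {w3:.3f} (sum = {w1 + w2 + w3:.3f})")
print(f"posterior mean = {post_mean:.3f}, posterior std = {post_var**0.5:.3f}")
```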
3.6.3. Illustration of the Bayesian Approach
Assume that a bank models its risk according to the lognormal-normal-normal model and the three assumptions mentioned above, with a known scale parameter $\sigma$, external (market) parameters $\mu_0$ and $\sigma_0$, and an expert opinion $\delta$ with uncertainty $\tau$; the numerical values are summarized in Table 13. The observations of the internal operational risk losses, sampled from a lognormal distribution, are given in Table 12.
So to reiterate, we have the parameters that are shown in Table 13.
Now we can calculate the estimate and the credibility weights using the formulas given previously (as shown in Table 14).
In the end, we compare the classical maximum likelihood estimator, the estimator without expert opinion (the SW estimator), and the Bayes estimator, as shown in Figure 12.

Figure 12 shows that the Bayesian approach behaves more stably around the true value of the parameter, even when only a few data points are available, which is not the case with the MLE and SW estimators.
In this example, we see that combining the internal data with external (market) information and expert opinions stabilizes and smooths our estimators, so that they behave better than the MLE and the no-expert-opinion estimators. This shows the value of the Bayesian approach for estimating the parameters and calculating the capital requirement for operational risk under Basel II or Solvency II.
3.7. Application to a Legal Events Database
To check and understand the concepts, let us apply them to an exercise using four distributions: Exponential, Lognormal, Weibull, and Pareto.
Table 15 shows a legal event database depicting four years of losses. The units are €.
All the tables and figures were generated using Matlab and R software.
An initial analysis calculates the average, standard deviation, skewness, and kurtosis of the database and shows that the database is leptokurtic, as the kurtosis is greater than 3 (see Table 16). Given this heavy tail, it is a good idea to start by testing the database against the candidate distributions below.
3.7.1. Some Probability Distributions
We will be applying the four distributions—Exponential, Lognormal, Weibull, and Pareto—to the database in an attempt to fit and estimate the parameter of the distributions. But before doing that, let us take a quick look at the four types of distributions.
(a) Exponential Distribution
We say that $X$ has an exponential distribution with parameter $\lambda>0$ if it has a PDF of the form
$$f(x)=\lambda e^{-\lambda x},\qquad x\ge 0.$$
The expected value and variance of an exponentially distributed random variable $X$ with rate parameter $\lambda$ are given by
$$E[X]=\frac{1}{\lambda},\qquad \mathrm{Var}[X]=\frac{1}{\lambda^{2}}.$$
The cumulative distribution function is
$$F(x)=1-e^{-\lambda x},\qquad x\ge 0,$$
and the moment estimator for the one-parameter case is simply
$$\hat{\lambda}=\frac{1}{\bar{X}},$$
where $\bar{X}$ is the sample mean.
(b) Lognormal Distribution
If $X$ is a random variable with a normal distribution, then $Y=e^{X}$ has a log-normal distribution. Likewise, if $Y$ is lognormally distributed, then $\log Y$ is normally distributed.
The probability density function (PDF) of a log-normal distribution is
$$f(x;\mu,\sigma)=\frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}\right),\qquad x>0,$$
where $\mu$ and $\sigma$ are called the location and scale parameters, respectively. So if $X$ is a lognormally distributed variable, then $E[X]=e^{\mu+\sigma^{2}/2}$ and $\mathrm{Var}[X]=\left(e^{\sigma^{2}}-1\right)e^{2\mu+\sigma^{2}}$.
(c) Weibull Distribution
The Weibull distribution is a continuous probability distribution. It is named after Waloddi Weibull who described it in detail in 1951, although it was first identified by Fréchet in 1927 and first applied by Rosin and Rammler in 1933 to describe the size distribution of particles. This is the distribution that has received the most attention from researchers in the past quarter century.
The probability density function (PDF) of a Weibull random variable is
$$f(x;k,\lambda)=\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^{k}},\qquad x\ge 0,$$
with shape parameter $k>0$ and scale parameter $\lambda>0$. The cumulative distribution function (CDF) is given by
$$F(x)=1-e^{-(x/\lambda)^{k}},\qquad x\ge 0.$$
The mean and variance of a Weibull random variable can be expressed as
$$E[X]=\lambda\,\Gamma\left(1+\frac{1}{k}\right),\qquad \mathrm{Var}[X]=\lambda^{2}\left[\Gamma\left(1+\frac{2}{k}\right)-\Gamma^{2}\left(1+\frac{1}{k}\right)\right].$$
(d) Pareto Distribution
The Pareto distribution was named after the economist Vilfredo Pareto, who formulated an economic law (Pareto's Law) dealing with the distribution of income over a population. The Pareto distribution is defined by the following functions:
CDF: $F(x)=1-\left(\dfrac{x_m}{x}\right)^{\alpha}$, $x\ge x_m$, $\alpha>0$;
PDF: $f(x)=\dfrac{\alpha x_m^{\alpha}}{x^{\alpha+1}}$, $x\ge x_m$, $\alpha>0$.
A few well-known properties are
$$E[X]=\frac{\alpha x_m}{\alpha-1}\quad(\alpha>1),\qquad \mathrm{Var}[X]=\frac{\alpha x_m^{2}}{(\alpha-1)^{2}(\alpha-2)}\quad(\alpha>2).$$
3.7.2. Output Analysis
The four distributions have been fitted to the database and the parameters were estimated by maximum likelihood. A Q-Q plot was also drawn to see how well each distribution fits the data, and the Kolmogorov-Smirnov test was carried out to compare the fitted distributions to the actual data.
As we will see in the outputs generated in Table 17, the best model is the Lognormal as it does not differ much from the data set. However, we also observe that none of these models deal very well with the largest of events, which confirms that we need to apply extreme value theory.
As we have seen before, a Q-Q plot is a plot of the quantiles of two distributions against each other. The pattern of points in the plot is used to compare the two distributions.
Now, while graphing the Q-Q plots to see how the distributions fit the data, the results show that the Lognormal, Weibull, and Pareto distributions are the best models, since the points of those three distributions in the plot approximately lie on the straight line $y=x$, as seen in Figure 13.

Nevertheless, the Kolmogorov-Smirnov test clearly shows that the Lognormal distribution gives the best fit, being the distribution for which the null hypothesis that the data come from that continuous distribution is most strongly supported (see Table 18).
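A sketch of this fitting-and-testing exercise in Python (using scipy instead of the Matlab/R code used for the paper, and a simulated loss vector in place of the Table 15 data) is shown below.

```python
# A minimal sketch: fit four candidate distributions by MLE and run KS tests.
# The loss vector is simulated; the paper uses the legal-event losses of Table 15.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
losses = rng.lognormal(mean=9, sigma=1.8, size=300)

candidates = {
    "exponential": stats.expon,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "pareto": stats.pareto,
}

for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)            # MLE with location fixed at 0
    ks_stat, p_value = stats.kstest(losses, dist.cdf, args=params)
    print(f"{name:12s} KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```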
3.8. LDA and Application of Extreme Value Theory
As seen in previous sections, the Loss Distribution Approach has many appealing features since it is expected to be much more risk sensitive. It is necessary to remember that VaR is calculated for a specific level of confidence and a given period of time, assuming normal conditions, which means that VaR does not include all aspects of risks. So one cannot estimate or predict losses due to extreme movements, such as losses encountered in major companies throughout the years (see Table 19). For that, Extreme Value Theory is applied to characterize the tail behavior and model the rare phenomena that lie outside the range of available observations.
In this section, we are going to take an internal loss database related to external fraud for a particular business line of retail banking and apply to it the Loss Distribution Approach and calculate the VaR by using the Extreme Value Theory.
3.8.1. Application to an Internal Loss Data
Our internal database was provided by a local Lebanese bank; the bank defines a reportable incident as any unusual event, operational in nature, which caused or had the potential to cause damage to the bank, whether tangibly or not, in readily measurable form (with financial impact, even in the bank’s favor), or as an estimate (in economic or opportunity cost terms). In simple terms, operational risk events are anything that went wrong or that could go wrong.
Hence, given our data we were able to compute the Severity and Frequency Distributions related to retail banking business line and external fraud event type (see Figure 14).

As such, by using the Monte Carlo method treated in Section 3.2.3(b) with the fitted Poisson frequency distribution and Lognormal severity distribution, we obtained our aggregated annual loss with the density function shown in Figure 15.

Our Value at Risk calculated is shown in Table 20.
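For reference, a minimal Python sketch of this Monte Carlo aggregation is given below; the Poisson and Lognormal parameters are illustrative placeholders rather than the values fitted to the bank's data.

```python
# A minimal sketch of the LDA Monte Carlo aggregation; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
lam = 25                  # hypothetical Poisson frequency parameter
mu, sigma = 8.5, 1.6      # hypothetical Lognormal severity parameters

annual_losses = np.array([
    rng.lognormal(mu, sigma, size=rng.poisson(lam)).sum()
    for _ in range(n_sims)
])

for q in (0.95, 0.99, 0.999):
    print(f"VaR {q:.1%} = {np.quantile(annual_losses, q):,.0f}")
```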
(a) Application of the Extreme Value Theory
Now, by applying the Extreme Value Theory explained in Section 3.4 for the excess loss distribution and by setting our upper threshold at a high quantile of the loss data, we can obtain a more robust Value-at-Risk calculation that reflects the tail risk in a more precise manner.
Fitting the Generalized Pareto Distribution to the excesses over the threshold and calculating the related VaR and Expected Shortfall with the formulas of Section 3.5.2, we obtain the results shown in Table 21.
Yet, if the calibration of the severity parameters ignores external data, then the severity distribution will likely be biased towards low-severity losses, since internal losses are typically lower than those recorded in industry-wide databases. As such, the LDA would be more accurate if internal and external data were merged in the calibration process; this point is illustrated by Frachot and Roncalli [19] for mixing internal and external data in managing operational risk and by Dahen and Dionne [20] for scaling the severity and frequency of external operational loss data.
4. Insurance Covering Operational Risks
The role that insurance plays in diminishing the financial impact of operational losses of a bank is highly important. The transfer of a risk to an insurer can contribute to better performance by preventing critical situations and covering a variety of losses. The Basel Committee has accepted that insurance can be used as a tool to reduce the financial impact of operational risks for banks, meaning that a specific type of insurance against operational risks can lead to a lower level of minimum capital allocated to a particular risk category.
While the purchase of insurance covering operational risks is still in its early stages of development, it would allow banks to replace operational risk by counterparty risk.
All the insurance policies, clauses, and types were given by a Lebanese bank.
4.1. Insurance Policies and Coverage
Many types and categories of insurance can be purchased, each with specific clauses and a price depending on the demands of the insured customer. In the following, we explain the different types of insurance policies with their respective coverage and exclusions.
4.1.1. Bankers Blanket Bond Insurance
Intended for banks and other institutions that are engaged in providing financial services, the policy indemnifies the assured for direct monetary losses due to loss, damage, and misplacement during the policy period (which is usually one year).
Scope of Cover
(i)Clause 1 (Infidelity of Employees) covers loss of property due to dishonest or fraudulent acts of one or more employees of the insured resulting in unethical financial gains. (ii)Clause 2 (On Premises) covers loss of the insured's or the customers' property on the insured's premises due to theft, burglary, damage, destruction, or misplacement. (iii)Clause 3 (In Transit) covers loss or damage to property from any cause while in transit, either in the custody of the assured's employees or in the custody of any security company or its vehicles, but excluding property in mail and property subject to amounts recoverable from a security company under the latter's own insurance. (iv)Clause 4 (Forged Cheques et al.) covers loss due to forgery or fraudulent alteration of any financial instrument or payment made on the above basis. (v)Clause 5 (Counterfeit Currency) covers the insured's loss due to acceptance in good faith of any counterfeit or fraudulently altered currency or coins. (vi)Clause 6 (Damage to Offices and Contents) covers loss or damage suffered to all contents owned by the assured in their offices (excluding electronic equipment) due to theft, robbery, hold-up, vandalism, and so forth.
Limit of Indemnity
As per sums agreed by both parties according to nature, size, and volume of business handled by the insured in all their offices and branches. Usually specifies amounts for every loss under the insuring clauses and sometimes on an aggregate or overall annual basis. (i) US$ any one loss in respect of Infidelity of Employees. (ii) US$ any one loss in respect of On Premises. (iii) US$ any one loss in respect of In Transit. (iv) US$ any one loss in respect of Forgery or Alteration.(v) US$ any one loss in respect of Counterfeited Currency. (vi) US$ any one loss in respect of Offices and Contents. (vii) US$ any one loss in respect of Securities or Written Instruments. (viii) US$ any one loss in respect of Books of Accounts and Records. (ix) US$ any one loss in respect of Legal Fees.
All in excess of:
US$ each loss, however, reducing to: (i) US$ every loss in respect of insuring In transit, Offices and Contents, and Legal Fees. (ii) US$ on aggregate in respect of insuring Counterfeited Currency.
Premium Rating
A sum rated on the basis of the agreed amounts and limits of indemnity, deductibles, the insured's claims history, and so forth.
Exclusions
Loss or damage due to war risks, and so forth. Loss not discovered during the policy period. Acts or defaults of directors. Shortage, cashier's error, or omissions.
4.1.2. Directors and Officers Liability Insurance
The following insurance covers are applied solely for claims first made against an insured during the period and reported to the insurer as required by the policy.
Management Liability
(i)Individuals: the insurer shall pay the loss of each insured person due to any wrongful act. (ii)Outside Entity Directors: the insurer shall pay the loss of each outside entity director due to any wrongful act. (iii)Company Reimbursement: if a company pays the loss of an insured person due to any wrongful act of the insured person, the insurer will reimburse the company for such loss.
Special Excess Protection for Nonexecutive Directors
The insurer will pay the nonindemnifiable loss of each and every nonexecutive director due to any wrongful act when the limit of liability, all other applicable insurance, and all other indemnification for loss have all been exhausted.
Exclusions
The insurer shall not be liable to make any payment under any extension or in connection with any claim arising from the following. (i)A wrongful act intended to secure profits, gains, or advantages to which the insured was not legally entitled. (ii)The intentional administration of fraud. (iii)Bodily injury, sickness, disease, death or emotional distress, or damage to, destruction of, or loss of use of any property.
Limit of Liability
US$—Aggregate (i)Per nonexecutive director special excess limit: separate excess aggregate limit for each nonexecutive director of the policyholder US$ each. (ii)Investigation: of the limit of liability under the insurance covers of Company Reimbursement, Management Liability, and of the per nonexecutive director special excess limit.
4.1.3. Political Violence Insurance
This kind of policy indemnifies the insured for the net loss of any one occurrence, up to but not exceeding the policy limit, against the following.(i)Physical loss or damage to the insured's buildings and contents directly caused by one or more of the following perils occurring during the policy period: (1)Act of Terrorism; (2)Sabotage; (3)Riots, Strikes, and/or Civil Commotion; (4)Malicious Damage; (5)Insurrection, Revolution, or Rebellion; (6)War and/or Civil War. (ii)Expenses incurred by the insured in the removal of debris directly caused by any one or more of the Covered Causes of Loss.
Exclusions
(i)Loss or damage arising directly or indirectly from nuclear detonation, nuclear reaction, radiation, or radioactive contamination. (ii)Loss or damage directly or indirectly caused by seizure, confiscation, nationalization, requisition, detention, and legal or illegal occupation of any property insured. (iii)Any loss arising from war between any two or more of the following: China, France, Russia, United States of America, and the United Kingdom. (iv)Loss or damage arising directly or indirectly through electronic means including computer hacking or viruses. (v)Loss or damage arising directly or indirectly from theft, robbery, house-breaking, and mysterious or unexplained disappearance of property insured.
Limitations
(i)In respect of loss or damage suffered under this extension, the underwriters' maximum liability shall never be more than the Business Interruption Policy Limit (if applicable), or the Policy Limit (if applicable) where this Policy Limit is a combined amount for losses arising from both physical loss or physical damage and Business Interruption, for any one occurrence. (ii)To clarify, when a business interruption policy limit applies to losses suffered under this extension, it shall apply to the aggregate of all claims by all insureds and in respect of all insured locations hereunder, and underwriters will have no liability in excess of the business interruption policy limit whether insured losses are sustained by all of the insureds or any one or more of them, or whether insured losses are sustained at any one or more of the insured locations. (iii)With respect to loss under this extension resulting from damage to or destruction of film, tape, disc, drum, cell, and other magnetic recording or storage media for electronic data processing, the length of time for which underwriters will be liable hereunder will not exceed thirty consecutive calendar days or the time required, with due diligence and dispatch exercised, to reproduce the data thereon from duplicates or from originals of the previous generation, whichever is less; or the length of time that would be required to rebuild, repair, or reinstate such property, but not exceeding twelve calendar months, whichever is greater.
4.2. Electronic and Computer Crime Policy
This kind of policy covers electronic and computer crimes related to the following.
Computer Systems
Loss due to the fraudulent preparation, modification, or input of electronic data into computer systems, a service bureau's computer system, an electronic funds transfer system, or a customer communication system.
Electronic Data, Electronic Media, and Electronic Instruction
(i)Losses due to the fraudulent modification of electronic data or software programs within computer systems. (ii)Losses due to robbery, burglary, larceny, or theft of electronic data or software programs. (iii)Losses due to the acts of a hacker causing damage or destruction to electronic data or software programs. (iv)Losses due to damage or destruction of electronic data or software programs using computer virus.
Electronic Communications
Loss due to the transfer of funds as a result of unauthorized and fraudulent electronic communications from customers, a clearing house, custodians, or financial institutions.
Insured’s Service Bureau Operations
Loss due to a customer transferring funds as a result of fraudulent entries of data whilst the insured is acting as a service bureau for customers.
Electronic Transmissions
Loss due to the transfer of funds on the faith of any unauthorized and fraudulent customer voice-initiated funds transfer instructions.
Customer Voice-Initiated Transfers
Loss due to the transfer of funds on the faith of any unauthorized and fraudulent customer voice-initiated funds transfer instructions.
Extortion
Loss by a third party who has gained unauthorized access into the insured’s computer systems threatening to cause the transfer of funds, disclosure of confidential security codes to third parties, or damage to electronic data or software programs.
Limit of Indemnity
US$ any one loss and in the aggregate for all clauses. The amount of the deductible under this policy for each and every loss is in excess of US$.
4.2.1. Plastic Card Insurance Policy
This kind of policy indemnifies the insured against losses sustained through the alteration, modification, or forgery of any Visa Electron Card, Bankernet, Visa, or Master Card issued by the insured or issued on its behalf, and resulting from cards that have been lost, stolen, or misused by an unauthorized person.
Exclusions
The policy does not cover the following. (i)Loss for which the assured obtained reimbursement from its cardholder, any financial institution, plastic card association, or clearing house representing the assured. (ii)Loss not discovered during the policy period. (iii)Loss which arises directly or indirectly by reason of or in connection with war, invasion, act of foreign enemy, hostilities, or civil war. (iv)Loss resulting from the issue of any plastic card to guarantee the cashing of any cheque. (v)Loss resulting wholly or partially, directly or indirectly from any fraudulent or dishonest act performed alone or with others, by an officer, director, or employee of the assured or by any organization that authorizes, clears, manages, or interchanges transactions for the assured.
Limit of Indemnity
US$—per card per year. US$—in the annual aggregate for all cards.
Deductible
US$ in the annual aggregate.
4.3. Advantages of Insuring Operational Risks
In general, the role of insurance is to transfer the financial impact of a risk from one entity to another. However, transferring risk is not the same as controlling it, as we do not avoid, prevent, or reduce the actual risk itself. Nevertheless, insurance as a risk reduction tool helps the bank to absorb or mitigate the loss by buying a policy related to operational risk, for which the bank pays an insurance premium in exchange for a guarantee of compensation in the event of the materialization of a certain risk. This means that insuring against operational risks enables a bank to eliminate or reduce large fluctuations of cash flow caused by high and unpredictable operational losses. By doing so, the bank benefits by improving income and increasing its market value, allowing it to avoid severe situations that would lead to insolvency. (i)A variety of factors influence banks to purchase insurance to cover operational risks: the size of a bank matters, as smaller banks have lower equity and free cash flows, making them more vulnerable to losses from operational risks. Large banks, in contrast, have the resources to manage their operational risks, though they also purchase insurance policies to protect themselves from major losses, especially ones that would affect investors' confidence or have extreme negative effects. (ii)The time horizon also has its effect: the extent to which a bank can cover the immediate expense of an insurance premium in exchange for a benefit that may materialize only in the long run depends on the time horizon over which the bank is willing to pay premiums for a risk that may or may not happen. (iii)The better the rating, the lower the cost of refinancing: banks with a very good rating can opt to finance losses by contracting credits rather than insurance. However, such a bank might suffer when it incurs considerable losses that were not insured, causing restrictions in its access to financing.
4.4. Basel II Views
The Basel II Committee (cf. BCBS [21]) stated that any effort to improve risk management should be viewed independently of the capital requirement, and hence that insurance should not affect the required minimum capital. However, many bankers and insurers believe that insurance should be treated as an instrument for reducing the required minimum capital for operational risk. The problem here lies in determining how much of the insured amount should be deducted from the level of required capital.
Moreover, the Basel Committee is against the use of insurance to optimize the capital required for operational risk for banks that use either the Basic Indicator Approach or the Standardized Approach, but a bank using the AMA is allowed to consider the risk mitigating impact of insurance in the measuring of operational risk used for regulatory minimum capital requirements. The recognition of insurance mitigation is limited to 20% of the total operational risk capital charge.
In addition to this, the insurance policy must have an initial term of at least one year. For policies with a residual term of less than one year, the bank must make appropriate haircuts reflecting the declining term of the policy, up to a full 100% haircut for policies with a residual term of 90 days or less. Additionally, the insurance policy should not have exclusions or limitations based upon regulatory action or for the receiver or liquidator of a failed bank.
The insurance coverage must be explicitly mapped to the actual operational risk exposure of the bank and have a minimum claims paying ability rating of A as shown in Table 22 (cf. BCBS [22]).
4.5. Capital Assessment under Insurance on Operational Losses
In this section, we discuss insurance coverage and its effects on operational losses. Individual operational losses are insured with an external insurer under an excess of loss (XL) contract. So to include insurance contracts in the operational risk model, we take into consideration many other factors such as deductibles and policy limit (cf. Bazzarello et al. [23]).
Let us consider $X_{i,j}$ as the $j$th loss drawn from the severity distribution in year $i$, and $N_i$ as the number of losses in year $i$ drawn from the frequency distribution. Then, for an excess-of-loss contract with individual deductible $d$ and individual policy limit $m$, the insurance recovery for the individual loss $X_{i,j}$ would be
$$R(X_{i,j})=\min\bigl(\max(X_{i,j}-d,\,0),\,m\bigr),\qquad i=1,\ldots,n,\; j=1,\ldots,N_i,$$
where $n$ is the number of simulated years of annual losses.
On an annual basis, if we set the aggregated deductible as $D$, the aggregated policy limit as $M$, and we let $L_i=\sum_{j=1}^{N_i}X_{i,j}$ be the $i$th annual loss, then the annual recovered loss can be rewritten as
$$R_i=\min\Bigl(\max\Bigl(\sum_{j=1}^{N_i}R(X_{i,j})-D,\,0\Bigr),\,M\Bigr).$$
Hence, the net annual loss would result in
$$L_i^{\mathrm{net}}=L_i-R_i$$
(a numerical sketch of this recovery calculation is given after the following list). Adhering to the Basel II standards for AMA, we take into consideration the following (cf. BCBS [22]): (i)Appropriate haircuts. (ii)Payment uncertainty. (iii)Counterparty risk.
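A minimal numerical sketch of this recovery calculation (with illustrative deductibles, limits, and simulated losses) is given below.

```python
# A minimal sketch of the excess-of-loss recovery; all amounts are illustrative.
import numpy as np

def annual_recovery(losses, d, m, D, M):
    """Per-loss layer (deductible d, limit m), then annual layer (D, M)."""
    per_loss = np.clip(np.asarray(losses, dtype=float) - d, 0.0, m)
    return float(min(max(per_loss.sum() - D, 0.0), M))

year_losses = [120_000, 30_000, 450_000, 75_000]   # simulated losses of one year
R = annual_recovery(year_losses, d=50_000, m=250_000, D=0.0, M=500_000)
net_annual_loss = sum(year_losses) - R
print(f"recovery = {R:,.0f}, net annual loss = {net_annual_loss:,.0f}")
```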
4.5.1. Appropriate Haircuts
For policies with a residual term of less than one year, the bank must make appropriate haircuts reflecting the declining residual term of the policy, up to a full 100% haircut for policies with a residual term of 90 days or less. Accounting for the haircuts, the recovered annual loss can be written as
$$R_i^{h}=\alpha_i\,R_i,$$
where $\alpha_i\in[0,1]$ is the haircut factor reflecting the residual term of the policy in year $i$ (equal to 1 for a residual term of one year or more and to 0 for a residual term of 90 days or less).
4.5.2. Payment Uncertainty
Payment uncertainty occurs when the insuring party cannot commit to its contractual obligations on a timely basis. To account for such deviations from full recovery, we use $\beta\in[0,1]$, the average recovery rate, to discount the insurance payments. It can be estimated from internal data as
$$\hat{\beta}=\frac{\text{amounts actually recovered from the insurer}}{\text{amounts claimed}}.$$
We then integrate this factor into our calculation of the recovered annual loss:
$$R_i^{h,\beta}=\beta\,\alpha_i\,R_i.$$
4.5.3. Counterparty Risk
Counterparty risk occurs when the insurance company fails to fulfill its payment obligations.
To model this particular risk, let us consider $p$ as the insurer's probability of default and $\gamma$ as the recovery rate on the insurance claim given default. So if $n$ is the number of simulated years containing annual losses, then full insurance recoveries can be obtained for only about $(1-p)\,n$ years, as we expect coverage only when the insurer is in good financial health. Now, for the remaining years, insurance recoveries must be discounted using the factor $\gamma$, according to the formula
$$R_i^{h,\beta,\gamma}=\begin{cases}\gamma\,\beta\,\alpha_i\,R_i & \text{if } i\in D,\\ \beta\,\alpha_i\,R_i & \text{if } i\in D^{c},\end{cases}$$
where $D$ is the set of simulated years in which the insurer has defaulted and $D^{c}$ is the set of simulated years in which the insurer has not defaulted.
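Putting the three adjustments together, the following Python sketch applies the haircut, the average recovery rate, and the counterparty-default discount to an annual recovery $R_i$; the linear proration of the haircut between 90 days and one year, like all the numbers used, is an assumption made for illustration only.

```python
# A minimal sketch of the three adjustments; values and the default flag are illustrative.
def adjusted_recovery(R_i, residual_days, beta, gamma, insurer_defaulted):
    # Appropriate haircut: full credit at one year or more, none at 90 days or less,
    # and (as an assumption here) a linear proration in between.
    if residual_days >= 365:
        alpha = 1.0
    elif residual_days <= 90:
        alpha = 0.0
    else:
        alpha = residual_days / 365.0
    recovery = alpha * beta * R_i       # haircut and payment-uncertainty discount
    if insurer_defaulted:               # counterparty risk: discount by gamma
        recovery *= gamma
    return recovery

print(adjusted_recovery(R_i=345_000, residual_days=200, beta=0.9,
                        gamma=0.4, insurer_defaulted=False))
```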
5. Conclusion
Until recently, credit risk and market risk were perceived as the two biggest sources of risk for financial institutions. However, as seen in this paper, the weight of operational risk has risen considerably: operational risk is no longer just another type of risk but holds a significant position in risk assessment, as the many banking failures of the last 20 years have demonstrated.
Operational risk quantification is a challenging task both in terms of its calculation as well as in its organization. Regulatory requirements (Basel II for banks and Solvency II for insurance companies) are put in place to ensure that financial institutions mitigate this risk and many calculation criteria have been developed, ranging from the Basic to Standardized to the Advanced Measurement Approach.
This paper has defined operational risk by presenting the different theories and approaches for financial institutions wishing to model operational risk. While a standardized formula is widely applied by banks and insurance companies, applying more complex approaches and theories such as the Loss Distribution, Extreme Value Theory, or Bayesian techniques may present more robust analyses and framework to model this risk.
Additionally, with the use of insurance, a percentage of the risk carried by a bank or a financial institution can be transferred to the insurance company. Thus, we can say that an insurance policy plays an important role in decreasing the financial impact of operational losses and can therefore contribute to a better performance by covering a variety of potential operational losses. The Basel Committee recognizes this potential and has accordingly allowed a reduction in the required minimum capital for a particular risk category for any institution possessing an insurance policy. In the long run, this regulation would allow banks to replace operational risk with counterparty risk.