In the Fundamental Review of the Trading Book (FRTB), the latest regulation on minimum capital requirements for market risk, one of the major changes is the replacement of the Incremental Risk Charge (IRC) with the Default Risk Charge (DRC). The DRC measures only default risk and does not consider rating migration risk. The second change is that the DRC, contrary to the IRC, now includes equity assets. This paper studies DRC modeling under the Internal Model Approach (IMA) and the regulatory conditions that every DRC component must respect. The FRTB defines the DRC measure as a Value at Risk (VaR) over a one-year horizon at the 99.9% quantile. We use a multifactor adjustment to measure the DRC and compare it with the Monte Carlo model to assess how well the approach fits. We then define concentration in the DRC and propose two methods to quantify the concentration risk: the Ad Hoc and Add-On methods. Finally, we study the behavior of the DRC with respect to the concentration risk.

1. Introduction

Since 2013, the Basel Committee has led work on a new regulation to implement a more consistent regulatory market risk capital framework. This project is known as the Fundamental Review of the Trading Book (FRTB). It replaces the Basel II text, International Convergence of Capital Measurement and Capital Standards. The complete version was prepared by the Basel Committee [1] under the title "Minimum Capital Requirements for Market Risk." This regulation is summarized in four streams.

The first stream concerns the boundary between the trading and the banking books. It aims to improve the visibility of products that carry market risk exposure. Banks are required to list all desks of the trading book and to define the link between positions held for trading purposes and the regulatory trading book. The regulatory purpose is to reduce arbitrage between the trading and banking books.

The second stream concerns rebuilding the Internal Models Approach (IMA) and covers all internal market risk models developed by banks. The regulator changes all market risk measurements. First, the market Value at Risk (VaR) and the Stressed Value at Risk (SVaR), both computed over a 10-day horizon at a 99% confidence level, are replaced by the Expected Shortfall (ES) at a 97.5% confidence level, computed over a 10-day base horizon and scaled across different liquidity horizons (10, 20, 40, 60, 120 days). The Basel formula is given as follows:

$$\mathrm{ES} = \sqrt{\mathrm{ES}_{T}(P)^{2} + \sum_{j \geq 2}\left(\mathrm{ES}_{T}(P, j)\,\sqrt{\frac{LH_{j} - LH_{j-1}}{T}}\right)^{2}},$$

where $T = 10$ days and the liquidity horizons are $LH_j = 10, 20, 40, 60, 120$ days. ES is computed sequentially by liquidity horizon and asset class. Figure 1 gives an example of the calculation.
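The liquidity-horizon cascade can be sketched in a few lines. The sketch below assumes the standard Basel aggregation (the base 10-day ES combined with the ES of buckets of increasing liquidity horizon, each scaled by the square root of the incremental horizon); the function name and inputs are illustrative.

```python
import numpy as np

def frtb_es(es_base, es_buckets, horizons=(10, 20, 40, 60, 120), T=10):
    """Aggregate Expected Shortfall across liquidity horizons (Basel cascade).

    es_base    : ES of the full portfolio at the 10-day base horizon.
    es_buckets : ES restricted to risk factors with liquidity horizon
                 >= horizons[j], for j = 1..4 (base bucket excluded).
    """
    terms = [es_base ** 2]
    for j, es_j in enumerate(es_buckets, start=1):
        # scale each bucket by sqrt of its incremental liquidity horizon
        terms.append((es_j * np.sqrt((horizons[j] - horizons[j - 1]) / T)) ** 2)
    return float(np.sqrt(sum(terms)))
```

With all long-horizon buckets empty, the aggregate collapses to the base 10-day ES, which gives a quick sanity check of the scaling.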

Second, banks must distinguish between modellable and non-modellable risk factors (NMRFs). NMRFs must be quantified using a stress scenario and aggregated with zero correlation. Third, the Comprehensive Risk Measure must be processed according to the standardized approach. Finally, the Incremental Risk Charge is replaced by the Default Risk Charge; we detail the regulatory requirements for its modeling below. At this stage, recall that rating migration risk is removed to keep only default risk, and that the equity scope is added.

The third stream concerns improving model adequacy and backtesting. In this stream, the regulator has established two levels of VaR backtesting: 97.5% and 99%. The regulator also adds the profit and loss (P&L) attribution as a new test, based on two ratios. The first is the mean of the unexplained daily P&L, defined as the difference between the risk-theoretical P&L and the hypothetical P&L, divided by the standard deviation of the hypothetical daily P&L. The second is the ratio between the variances of the unexplained daily P&L and the hypothetical daily P&L. The first ratio must stay within [−10%, +10%]; the second must lie between 0% and 20%. This test aims to bring the market P&L distribution closer to the theoretical P&L distribution, since VaR backtesting focuses on the distribution tail alone. Figure 2 shows an example of the unexplained mean and variance.
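As a minimal sketch of this test (function and variable names are illustrative), the two ratios can be computed from daily risk-theoretical and hypothetical P&L series:

```python
import numpy as np

def pla_ratios(rtpl, hpl):
    """FRTB P&L attribution test ratios.

    rtpl : daily risk-theoretical P&L series
    hpl  : daily hypothetical P&L series
    Returns (mean ratio, variance ratio); the first must stay within
    [-10%, +10%], the second within [0%, 20%].
    """
    rtpl, hpl = np.asarray(rtpl, float), np.asarray(hpl, float)
    unexplained = rtpl - hpl                  # unexplained daily P&L
    mean_ratio = unexplained.mean() / hpl.std()
    var_ratio = unexplained.var() / hpl.var()
    return mean_ratio, var_ratio
```

Note that a perfectly hedged portfolio (hypothetical P&L identically zero) makes both denominators vanish, which is exactly the limitation of the test discussed in the text.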

Nevertheless, one limitation of these ratios is that a perfectly hedged portfolio has a hypothetical daily P&L variance of zero. Both metrics then diverge to infinity, and the conclusions of the test must be analyzed with care.

The final stream concerns rebuilding the Standardized Approach (SA). The new SA is based on the Sensitivities Method and covers the trading book's nonsecuritization and securitization exposures. The regulator also defines an SA for the DRC and a residual risk add-on for risks not captured by the other metrics. Banks use a linear approach based on Delta and Vega sensitivities and a nonlinear approach for instruments with curvature (e.g., options, for ES computing). The DRC SA applies risk weights, by obligor rating, to the Jump to Default (JTD) to calculate the DRC.

We now focus on the FRTB guidelines for modeling the DRC under the IMA. The regulator defines default risk as the direct or indirect loss arising from an obligor's default. This risk is measured by a VaR over a one-year horizon at a 99.9% confidence level. The computing frequency is weekly, and the DRC capital requirement is equal to the following:

$$\mathrm{DRC} = \max\left(\mathrm{DRC}_{\text{last}},\ \frac{1}{12}\sum_{i=1}^{12}\mathrm{DRC}_{t-i}\right),$$

that is, the larger of the most recent weekly DRC measure and the average of the measures over the previous 12 weeks.

Four components must be calibrated and modeled to implement the loss function. The first component is the obligors' correlation. The regulator allows using credit spreads or historical listed equity prices. These historical data must cover at least 10 years and include the stressed period defined in the ES model. The chosen liquidity horizon is one year, with a minimum of 60 days for equities. These data must give a higher correlation for portfolios including short and long positions, whereas a low correlation is assigned to portfolios containing only long exposures. Next, the obligor's default must be modeled using two types of systematic factors, from which the model correlation is deduced. Finally, the correlation must be measured over the one-year liquidity horizon.

The second component is the Probability of Default (PD). The FRTB defines conditions and priorities for PD estimation. The first two conditions are as follows: (1) market-implied PDs are not allowed, and (2) all default probabilities are floored at 0.03%. The Internal Ratings-Based (IRB) PDs are the first choice when a validated model exists; otherwise, a model must be developed following the IRB methodology. Historical market PDs should therefore not be used for calibration. Institutions must base their estimates on historical default data covering at least a five-year observation period. Banks may also use external ratings provided by rating agencies (e.g., S&P, Fitch, or Moody's) to estimate PDs; in this case, they must define a priority ranking.

The third component is the Loss Given Default (LGD) model. The LGD model must capture the correlation between recovery and the systematic factors. The model must be calibrated on IRB data if the institution already has a homologated model, and the historical data must be relevant enough to give accurate estimates. All LGDs are floored at zero, and external LGDs may be used, subject to a defined ranking choice.

The final component is the Jump to Default (JTD) model. The JTD model must capture each obligor's long and short positions. The asset set must contain credit (i.e., sovereign and corporate) and equity exposures. For credit assets, this measure can be defined as a function of the LGD and the Exposure at Default (EAD). For equities, it must measure the P&L when default occurs, knowing that the LGD equals 100% for equity assets; the model prices equity derivatives with the stock price set to zero. For derivative products with multiple underlyings, the nonlinear JTD must integrate multiple obligor defaults. For such products, a linear approach, such as a sensitivities-based approach by obligor default, may be used subject to supervisor approval.

A few studies present frameworks to model the DRC. The first, by Laurent et al. [2], uses the Hoeffding decomposition to explain the loss function. The second, by Wilkens and Predescu [3], proposes a complete framework to build the DRC model. However, both use the multifactor Merton model (a structural approach), and neither studies the concentration risk issue.

In this paper, we study DRC modeling under the Internal Model Approach (IMA) and the regulatory conditions that every DRC component must respect. The FRTB defines the DRC measure as a Value at Risk (VaR) over a one-year horizon at the 99.9% quantile. We use the multifactor adjustment to measure the DRC and compare it with the Monte Carlo model to study the fit of this approach. We then define concentration in the DRC and propose an approach to quantify the concentration risk. We finally study the behavior of the DRC with respect to the concentration risk.

2. Mathematical Modeling of the DRC

2.1. Obligor Default Model

The FRTB requires two types of systematic factors to simulate obligor default. We run a Principal Component Analysis (PCA) to select the set of systematic factors and their types. The PCA gives four systematic factors per obligor, of two types: (1) global factors and (2) sectorial factors. The first set consists of one global factor and two global asset-type factors, (1) sovereign and (2) corporate; the second type contains regional and industry factors. Two approaches allow default modeling: (1) the structural model and (2) the intensity model. In this study, we use the Merton [4] model with multiple factors and consider a portfolio of obligors containing credit (i.e., sovereign and corporate) and equity positions. The return variable of an obligor $i$ is written as follows:

$$X_i = \beta_i Z^{T} + \sqrt{1 - \beta_i \Omega \beta_i^{T}}\,\varepsilon_i,$$

where $Z$ is the vector of systematic factors with correlation matrix $\Omega$, each following $\mathcal{N}(0,1)$; the loading vector $\beta_i$ gives the correlation between the obligor and the systematic factors; and $\varepsilon_i$ represents the specific risk, independent and identically distributed $\mathcal{N}(0,1)$ across obligors and independent of all systematic factors. The scaling of the specific term keeps $\mathrm{Var}(X_i) = 1$.
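The multifactor Merton return can be simulated directly. The sketch below assumes independent, standardized systematic factors (the orthogonalization performed next ensures this); names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_returns(beta, n_sims):
    """Simulate obligor asset returns X = beta.Z + sqrt(1 - |beta|^2) * eps
    under the multifactor Merton model with orthogonal standard normal
    systematic factors.

    beta : (n_obligors, n_factors) loading matrix, rows with |beta_i| < 1.
    Returns an (n_sims, n_obligors) array of returns with unit variance.
    """
    n_obl, n_fac = beta.shape
    Z = rng.standard_normal((n_sims, n_fac))       # systematic factors
    eps = rng.standard_normal((n_sims, n_obl))     # idiosyncratic risk
    spec = np.sqrt(1.0 - np.sum(beta**2, axis=1))  # specific-risk weight
    return Z @ beta.T + eps * spec
```

The specific-risk weight is chosen so that each simulated return keeps unit variance, matching the normalization constraint above.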

The initial choice of systematic factors does not give an independent set structure. However, we can run the Gram–Schmidt algorithm to get orthogonal sets before calibrating the model correlations. We fix the global systematic factor and orthogonalize each axis of the asset-type set against the global one. Each new axis is defined as follows:

$$\tilde{Z}_j = \frac{Z_j - \langle Z_j, Z_0\rangle\, Z_0}{\sqrt{1 - \langle Z_j, Z_0\rangle^{2}}},$$

where $Z_0$ is the global factor and $\langle\cdot,\cdot\rangle$ denotes the empirical correlation of the standardized series.

We do the same for regions, adding the orthogonal projections onto the global and asset-type factors, and finally proceed for industries by adding the projections onto the previous sets. We center and standardize at each orthogonalization step to keep centered, unit-variance variables.
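The cascading orthogonalization on historical factor series, re-standardizing after each step, can be sketched as follows (column ordering and names are illustrative):

```python
import numpy as np

def gram_schmidt_standardized(factors):
    """Orthogonalize factor time series (columns) via Gram-Schmidt,
    re-standardizing after each step so every factor stays centered
    with unit variance, as the calibration requires.

    factors : (n_obs, n_factors) array; column 0 is the global factor,
              later columns are orthogonalized against all earlier ones.
    """
    F = np.asarray(factors, dtype=float).copy()
    F = (F - F.mean(axis=0)) / F.std(axis=0)          # standardize all columns
    for j in range(1, F.shape[1]):
        for k in range(j):
            # remove the projection of column j onto column k
            F[:, j] -= (F[:, j] @ F[:, k]) / (F[:, k] @ F[:, k]) * F[:, k]
        F[:, j] = (F[:, j] - F[:, j].mean()) / F[:, j].std()
    return F
```

After the transform, the empirical correlation matrix of the factors is the identity, which is what the subsequent correlation calibration assumes.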

We use the new systematic factors in the following calculations. The implied correlation between obligors can then be deduced by

$$\Sigma = B \Omega B^{T} + I - \mathrm{diag}\left(B \Omega B^{T}\right),$$

where $\Sigma$ is the obligors' implied correlation matrix, $\Omega$ is the systematic-factor intracorrelation matrix, $B$ is the matrix of correlation loadings between the obligors and the systematic factors, $B^{T}$ is its transpose, and $I$ is the identity matrix.
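A sketch assembling the implied obligor correlation matrix from the loading matrix and the factor intracorrelation matrix — the off-diagonal entries come from the loadings and the diagonal is set to one; this construction is our reading of the elided formula:

```python
import numpy as np

def implied_correlation(B, Omega):
    """Implied obligor correlation matrix.

    B     : (n_obligors, n_factors) factor-loading matrix.
    Omega : (n_factors, n_factors) systematic-factor correlation matrix.
    Off-diagonal entries are beta_i Omega beta_j^T; diagonal entries are 1.
    """
    S = B @ Omega @ B.T
    return S + np.eye(len(S)) - np.diag(np.diag(S))
```

With a single standardized factor and loadings 0.5 and 0.4, the implied pairwise correlation is simply 0.5 × 0.4 = 0.2.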

We consider a set of 1,481 issuers with a 10-year spread history. Our population contains 69 sovereigns across six regions and 11 industries. We therefore build a set of systematic factors that is orthogonal by subsets, which produces blocks of zeros in the factor intracorrelation matrix.

The relevance of our model can be measured by comparing the implied correlation with the historical correlation. We use the log returns of the historical data and denote the Pearson historical correlation between obligors accordingly. First, we plot the density functions of the implied and historical correlations to see whether the implied distribution fits the historical one. Figure 3 shows that the implied density is close to the historical density; the two look very similar.

However, the plot of the densities is not enough to measure fit. We can also use the same ratios defined for the P&L attribution to compare the means and standard deviations of the two distributions; these ratios equal 2.26% and 12.14%, respectively. To complete this analysis, we build a confidence interval on the historical correlation using the Fisher transformation:

$$F(\rho) = \frac{1}{2}\ln\left(\frac{1+\rho}{1-\rho}\right) = \operatorname{arctanh}(\rho).$$

Since $F(\hat{\rho})$ is approximately Gaussian with standard error $1/\sqrt{n-3}$, we directly deduce the confidence interval at the chosen confidence level. We then compute, pair-wise, the percentage of implied correlations falling inside this confidence interval to assess the model accuracy; the result tells us which share of the population is inside the confidence interval. We observe that the quality of the model correlation is sensitive to how the systematic factors are built, and we could use this to rebuild the set that gives the implied correlation closest to the observed one. The drawback of this approach, however, is that it is computationally very expensive.
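A minimal sketch of the Fisher-transform confidence interval for a sample correlation (standard library only; the function name is illustrative):

```python
from statistics import NormalDist
import math

def fisher_ci(r, n, alpha=0.05):
    """Confidence interval for a Pearson correlation via the Fisher
    z-transform F(r) = arctanh(r): F(r_hat) is approximately normal
    with standard error 1/sqrt(n - 3), so the interval is mapped back
    through tanh."""
    z = math.atanh(r)
    half = NormalDist().inv_cdf(1 - alpha / 2) / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

For example, a sample correlation of 0.5 over roughly 100 observations gives an interval of about (0.34, 0.63) at the 95% level.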

After this calibration, the obligor default is defined under the Merton model as follows:

$$D_i = \mathbb{1}_{\{X_i \leq B_i\}}.$$

In other words, the obligor $i$ defaults when its return falls below the barrier $B_i = \Phi^{-1}(\mathrm{PD}_i)$, where $\mathrm{PD}_i$ represents its probability of default. We use the Standard & Poor's (S&P) PDs with a floor of 0.03%, as specified in the FRTB. Table 1 shows the one-year probability of default by rating and category.

Hence, the default probability conditional on the systematic factors under this model is equal to

$$\mathrm{PD}_i(Z) = \Phi\left(\frac{\Phi^{-1}(\mathrm{PD}_i) - \beta_i Z^{T}}{\sqrt{1 - \beta_i \beta_i^{T}}}\right),$$

where $\beta_i$ is the obligor's row of the loading matrix $B$, $Z^{T}$ is the transposed systematic factor vector, and the factors are now orthogonal and standardized.
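The conditional default probability can be sketched directly (orthogonal, standardized factors assumed; names are illustrative):

```python
from statistics import NormalDist
import math

def conditional_pd(pd_i, beta_i, z):
    """Default probability of obligor i conditional on the systematic
    factors Z = z under the multifactor Merton model:
        PD_i(z) = Phi((Phi^-1(PD_i) - beta_i . z) / sqrt(1 - |beta_i|^2)).
    """
    N = NormalDist()
    b = N.inv_cdf(pd_i)                               # default barrier
    load = sum(bk * zk for bk, zk in zip(beta_i, z))  # systematic part
    spec = math.sqrt(1.0 - sum(bk * bk for bk in beta_i))
    return N.cdf((b - load) / spec)
```

With zero loadings the conditional PD reduces to the unconditional one, and an adverse factor draw (negative z with positive loading) pushes the conditional PD up, as expected.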

2.2. LGD Model

The LGD computation depends on the recovery rate. However, the FRTB guidelines require dependency between the recovery rate and the systematic factors ("The model must incorporate the dependence of the recovery on the systemic risk factors" [1], p. 62). Hence, we must use models that allow this dependency. Several models have been developed in this direction. For example, Michael [5] proposed an exponential function between the recovery rate and the systematic factors. In another approach, Hull and White [6] suggested an exponential function between the recovery and default rates; this model indirectly links the LGD to the systematic factors because the default rate is a function of them. In this paper, we opt for a similar model based on the default rate, assuming the following relation between the LGD and the default probability conditional on the systematic factors:

We use IRB data to calibrate the model parameters, in conformance with the FRTB regulation. This calibration can be done by asset class, so we define one parameter set for sovereign and one for corporate obligors. It could also be done by seniority; however, we keep the sovereign and corporate subdivisions in our case, taking the following values for calibration:

Given these values, we find our parameters as follows:

Thus, we consider the following transformation:

We can calculate the distribution and the density of this transformed variable; the calculations lead to the following results:

Therefore, we can deduce the recovery rate distribution, since the recovery is a function of this variable:


We then deduce the expectation and the variance of the recovery as follows:

For small values of the parameter, we can use a first-order approximation. We then have a closed-form formula for the recovery expectation and variance:

2.3. JTD Model

The standardized approach defines a long and a short JTD for the same obligor, which are aggregated to get the gross JTD. In this approach, the JTD is a function of the LGD, the notional amount, and the P&L. Therefore, we have the following equations:

$$\mathrm{JTD}_{\mathrm{long}} = \max\left(\mathrm{LGD}\cdot\mathrm{notional} + \mathrm{P\&L},\ 0\right), \qquad \mathrm{JTD}_{\mathrm{short}} = \min\left(\mathrm{LGD}\cdot\mathrm{notional} + \mathrm{P\&L},\ 0\right),$$

where P&L = market value − notional and LGD = 25%, 75%, and 100%, respectively, for covered bonds, senior debts, and nonsenior debts.
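As a sketch, the gross JTD per position follows directly; the sign convention (positive notional for long, negative for short) is an assumption of the illustration:

```python
def gross_jtd(notional, market_value, lgd):
    """Gross Jump-to-Default under the standardized approach:
        long  exposure: JTD = max(LGD * notional + P&L, 0)
        short exposure: JTD = min(LGD * notional + P&L, 0)
    with P&L = market value - notional. A positive notional denotes a
    long position, a negative one a short position (illustrative
    convention)."""
    pnl = market_value - notional
    jtd = lgd * notional + pnl
    return max(jtd, 0.0) if notional >= 0 else min(jtd, 0.0)
```

For a senior bond (LGD = 75%) held at par, the JTD is simply 75% of the notional; a deeply discounted long position can floor at zero.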

We could keep the same formulas, replacing the fixed LGD with our LGD model. However, the other option is to compute the EAD as in the banking book. The JTD is then expressed by asset type (i.e., credit and equity), in both mono- and multi-underlying contexts. We describe the formula for each asset type as follows, where the horizon equals one year, the obligor's underlying price (i.e., the stock price in the case of equity), the obligor's value function (i.e., the aggregate position over mono- and multi-underlyings), and the sensitivity weight define the underlying context:

Note that we use the simplification suggested by the FRTB, which requires supervisor validation. The JTD of an obligor is given by

Figure 4 gives the exposure density of the portfolio used in this paper.

2.4. DRC Model

We now have all components to define the loss function, given by the following equation:

This quantity can be computed using Monte Carlo simulation to generate the loss distribution. We denote by $M$ the number of simulations and consider the sampled paths $m = 1, \ldots, M$. As the DRC is a VaR at 99.9% over a one-year horizon, it can be estimated as follows:

After ordering all simulated losses decreasingly, the Monte Carlo estimate of the DRC is the loss of the path at rank $\lceil M(1 - 99.9\%)\rceil$:
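The estimator can be sketched as follows (the helper name is illustrative; the rank is rounded to avoid floating-point artifacts in $M(1-q)$):

```python
import numpy as np

def monte_carlo_drc(losses, q=0.999):
    """DRC as the empirical 99.9% quantile of simulated one-year losses.

    losses : array of M simulated portfolio losses.
    Sorts decreasingly and returns the loss at rank round(M * (1 - q)).
    """
    L = np.sort(np.asarray(losses, dtype=float))[::-1]
    k = max(int(round(len(L) * (1.0 - q))), 1)   # rank of the tail path
    return L[k - 1]
```

With 10,000 equally likely losses 1, …, 10000, the 99.9% quantile picks the 10th-largest loss.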

Figure 5 shows the loss distribution density of the portfolio.

This approach is straightforward and gives good results for a large enough M. However, it is time-consuming for large portfolios, and it neither quantifies the cost of concentration risk nor indicates whether that risk has been captured. First, we describe the loss induced by the systematic risk factors as follows:

This equality holds because the JTD is Z-measurable. Therefore, we consider the following transformation:

By substituting the loss function, we get the following result:

We apply a Monte Carlo simulation to this expression to compute the distribution and the VaR, because the model contains more than one factor. However, we must relate our model to the one-factor model to get a direct computation of the VaR. Michael [5] defined this relation using an aggregate systematic factor as follows, where the weights are chosen to maximize the correlation between the aggregate factor and the original systematic factors. Furthermore, we can rewrite the obligor default variable accordingly:

For the rest of this study, we redefine the recovery rate as a function of the aggregate factor. Given these results, the loss function under the one-factor model becomes the following:

We find the appropriate weights by solving the following optimization problem, which minimizes the L1 norm between the DRC value under the Monte Carlo approach and the one-factor systematic quantile:

In our case, the quantile of the systematic loss for the optimized weights is equal to

Hence, the remaining part of the DRC is the difference between the Monte Carlo DRC and the systematic loss, which equals 1,465,919. This quantity represents 23.9% of the DRC and will be approximated using the adjustment.

It remains to compute the correlation and concentration effects since the FRTB guideline specifies that the model must reflect the name concentration risk and the sectorial one by asset class (“The model must reflect the effect of issuer and market concentrations, as well as concentrations that can arise within and across product classes during stressed conditions” [1], p. 62).

3. Concentration Risk under DRC

3.1. Concentration Adjustment

We use an adjustment to capture the concentration part, defining the loss-function adjustment accordingly. Using these notations, the adjustment is defined as follows:

By applying the second-order Taylor expansion [7] and substituting, we get

By computing the first- and second-derivative terms, we find the following results [8], in which the density function of the systematic factor appears:

Since the systematic loss is a deterministic and decreasing function of the systematic factor, we can substitute and get the following result:

The first-order derivative term equals zero, leaving only the second order. We then have

Therefore, the first and second derivatives of the conditional loss with respect to the systematic factor are given by

We use the variance decomposition (law of total variance), splitting the variance of the loss into the variance of its conditional expectation and the expectation of its conditional variance:

$$\mathrm{Var}(L) = \mathrm{Var}\left(\mathbb{E}[L \mid Z]\right) + \mathbb{E}\left[\mathrm{Var}(L \mid Z)\right].$$

The first term gives the correlation effects between issuers and sectors; it indirectly carries the sector correlation, since the implied correlation depends on the intrasectorial correlation. The second term integrates the name concentration (i.e., specific) risk and is known as the Granularity Adjustment (GA). We then get the following, where the individual loss functions appear:
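The decomposition can be illustrated numerically on a toy homogeneous one-factor portfolio (all parameters below are illustrative, not those of the paper): the total loss variance splits into the variance of the conditional expectation (the systematic/correlation part) and the mean conditional variance (the granularity part), since defaults are independent conditional on the factor.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
N = NormalDist()

# Toy homogeneous portfolio: 50 obligors, PD = 2%, one factor, loading 0.4.
n_obl, pd0, beta, M = 50, 0.02, 0.4, 100_000
thr = N.inv_cdf(pd0)                                    # default barrier
Z = rng.standard_normal(M)                              # systematic factor paths
p_z = np.array([N.cdf((thr - beta * z) / np.sqrt(1 - beta**2)) for z in Z])
losses = rng.binomial(n_obl, p_z)                       # L | Z ~ Bin(n, p(Z)), unit EAD

var_total = losses.var()
var_systematic = (n_obl * p_z).var()                    # Var(E[L | Z])
granularity = (n_obl * p_z * (1 - p_z)).mean()          # E[Var(L | Z)]
# var_total is approximately var_systematic + granularity (law of total variance)
```

The granularity term shrinks relative to the systematic term as the number of obligors grows, which is why it is precisely the name-concentration contribution.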

We compute the covariance between the individual losses of two issuers as follows:

Therefore, the first term is equal to the following, where the bivariate normal cumulative distribution function appears together with the implied correlation between two issuers conditional on the systematic factors. Its derivative with respect to the systematic factor is then equal to

The second term gives the name-concentration part of the adjustment. We can compute it knowing that the individual losses are independent conditional on the systematic factors:

By computing the individual variance of loss, we get

By substitution, the result is

The derivative with respect to the systematic factor is then equal to

Given these results, we can rewrite the adjustment as the sum of two quantities: the first represents the correlation effect and the sectorial concentration, while the second represents the name concentration. We then have

The DRC approximation is calculated by the following formula:

However, the systematic loss is a monotonically decreasing function of the systematic factor. This property leads to

Thus, the calculations give the following results:

This approximation shows that the DRC is a sum of the systematic, specific, and correlation loss contributions. The relative error with respect to the Monte Carlo approach is 1.6%. The granularity term thus gives us a handle on the concentration risk effects. In the next section, we propose two approaches for determining concentration risk: the first uses a concentration ratio (Ad Hoc), and the second employs the granularity adjustment (Add-On).

3.2. DRC and Concentration

The IMA guidelines impose that the model must capture concentration risk effects. Since there are two types of concentration, we must ensure that the DRC increases with both the name and the sector concentrations. For this, we define a concentration ratio that provides this property for the name concentration.

However, building this ratio is not as straightforward as in the loan book, where all exposures are positive. In the DRC, we have long and short EADs: the former increase the concentration, whereas the latter should decrease it. Thus, the first step is to split the EAD issuers into two subsets, the first containing the long exposures and the second the short exposures:

We now define the share for each subset as follows:

The concentration ratio is a function of these shares, and many such ratios exist. We use the Herfindahl–Hirschman Index (HHI) [9], computing one index for the long positions and another for the short positions:

$$\mathrm{HHI} = \sum_{i} s_i^{2},$$

where $s_i$ denotes the exposure share of issuer $i$ within the subset.

Therefore, the concentration ratio of the global portfolio is defined as follows:

This ratio satisfies the concentration properties by construction [10], and we compute it directly for our portfolio:
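A sketch of the long/short HHI computation follows. How the two sub-indices are combined into the global ratio is elided in the text, so the exposure-weighted average below is purely an illustrative assumption:

```python
import numpy as np

def hhi(exposures):
    """Herfindahl-Hirschman Index of positive exposures:
    HHI = sum_i s_i^2 with s_i the exposure shares."""
    e = np.asarray(exposures, dtype=float)
    s = e / e.sum()
    return float((s ** 2).sum())

def portfolio_hhi(eads):
    """Split signed EADs into long and short subsets and compute an HHI
    for each; the global figure shown is an exposure-weighted average,
    an illustrative choice only."""
    e = np.asarray(eads, dtype=float)
    longs, shorts = e[e > 0], -e[e < 0]
    h_long = hhi(longs) if longs.size else 0.0
    h_short = hhi(shorts) if shorts.size else 0.0
    w = longs.sum() / (longs.sum() + shorts.sum())
    return h_long, h_short, w * h_long + (1 - w) * h_short
```

Four equal exposures give an HHI of 0.25, while a single exposure gives 1, the fully concentrated case.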

We conclude that the portfolio concentration under the HHI measure is very small. To study the concentration effect, we can increase it by raising the long EADs and reducing the short EADs. However, the impact cannot be significant as long as the specific contribution remains minimal in both the Monte Carlo and the GA approaches.

Therefore, the second step is to order the EAD issuers by their distance to default. We define a decreasing order for the long and the short EADs. This lets us see which EADs contribute most to the DRC in the Monte Carlo approach, and their contribution weights in the GA model. We now have all the tools to verify that the DRC model captures the name concentration. We stress the portfolio by increasing the largest long EADs and decreasing the smallest ones. The first impact appears on the HHI, which automatically increases under this measure. It remains to verify that the same effect appears in the Monte Carlo DRC and in the GA. For that, we sort the EADs decreasingly and use the transfer principle to increase the concentration. We then compute the Monte Carlo DRC and the GA to study the behavior of the concentration effect. Figure 6 shows that the DRC increases with the HHI, which proves that the model has captured the name concentration.

We observe the same behavior for the GA in Figure 7.

The DRC behavior with respect to the sector concentration can be studied through the intrasectorial correlations. We increase these correlations and recompute the Monte Carlo DRC to verify whether it rises; we can also check whether the GA increases. Figures 8 and 9 show that the DRC increases with the sectorial concentration driven by the intrasectorial correlation.

4. Conclusion

In this paper, we implement an approach for DRC modeling that conforms to the FRTB guidelines. First, we describe the regulatory requirements the model must satisfy. The DRC model needs four components: (1) PD, (2) recovery, (3) JTD, and (4) the loss function; we address the modeling and calibration issues for each of them. We also describe the Monte Carlo approach to compute the DRC VaR. Nevertheless, this approach gives neither the concentration risk contribution nor its impact on the DRC model. We suggest a multifactor adjustment to fix this issue, since the model must include multiple systematic factors. Furthermore, we propose an HHI ratio adapted to portfolios with long and short positions to measure the name concentration. We then compute the evolution of the DRC and of the GA with respect to the HHI measure and conclude that the model captures the concentration risk, since the DRC increases with the concentration. Regarding the sector concentration risk, we conclude that the DRC increases with the intrasectorial correlation. All of these results show that the model includes this component and verifies the regulatory requirement.

However, this approach relies on assumptions that may carry model risk. The first assumption places us in the Merton environment, and the second uses the Gaussian copula. We suggest using other copulas, such as the Student or Gumbel copula, to study the impact of the second assumption on the results. For the first assumption, we suggest replacing the structural approach with the intensity approach and repeating the study to see whether the results remain the same.

Data Availability

The input.zip data used to support the findings of this study are included within the supplementary information file(s) at the following Google Drive link: https://drive.google.com/file/d/1O2e74mFDlPiJyNphRZlqXjIxCdG8yNVz/view?usp=sharing. All data are in CSV files, and only those with the link can access them.

Conflicts of Interest

The author declares that there are no conflicts of interest.


BSUNIVERS CONSULTING is a French company specializing in consulting and IT in Finance. The firm provides support to financial institutions with their quantitative modeling challenges related to financial market behavior and to investment strategies. The firm uses advanced mathematical modeling tools including AI.