
Risk Measurement by G-Expected Shortfall

Academic Editor: Mostafa M. A. Khater
Received: 11 Dec 2020; Revised: 29 Mar 2021; Accepted: 09 Apr 2021; Published: 21 Apr 2021

Abstract

G-expected shortfall (G-ES), a new type of worst-case expected shortfall (ES), is defined to measure risk under the infinitely many distributions induced by volatility uncertainty. Compared with extant notions of the worst-case ES, the G-ES can be computed using an explicit formula at low computational cost. We also conduct backtests for the G-ES. The empirical analysis demonstrates that the G-ES is a reliable risk measure.

1. Introduction

The Basel Committee on Banking Supervision publicly released the new market risk framework, the “Fundamental Review of the Trading Book” (FRTB), on January 14, 2016, to address the shortcomings of the prior market risk capital framework, Basel 2.5, and to design a minimum capital standard that applies to market risk more uniformly across jurisdictions. The FRTB suggests using expected shortfall (ES) at the 97.5% confidence level to replace the 10-day value-at-risk (VaR) and stressed VaR at the 99% confidence level, because ES is a coherent risk measure that satisfies all of the axioms proposed in Artzner et al. [1] and gives greater weight to tail risk. However, ES is more sensitive than VaR to estimation errors in the distribution. If there is no good model for the tail of the distribution, then the ES value may be quite misleading; that is, the accuracy of the ES estimate depends heavily on the accuracy of the tail modelling. An alternative is to consider the worst-case ES.

This paper presents a new and simple method to calculate the worst-case ES when facing the infinitely many distributions induced by volatility uncertainty. We employ a newly developed probability theory, the G-expectation (G-normal distribution) framework established by Peng [2], to define the worst-case ES, which we call the G-ES. The G-expectation is a sublinear expectation, that is, the supremum of a set of linear expectations, and the G-normal distribution is a distribution defined under this sublinear expectation. Hence, the quantiles of the G-normal distribution and the averages of their tails are natural candidates for the worst-case VaR and ES. We explain the advantages of using the theory of sublinear expectation to characterize the worst-case risk exposure in detail in Section 2.

The G-ES can be easily backtested since it has an explicit formula, whereas the ES is usually difficult to backtest when the model is uncertain, although significant progress has been made in this direction. For instance, Du and Escanciano [3] adapt the conditional backtests (Christoffersen [4] and Berkowitz et al. [5]) and the unconditional backtest (Kupiec [6]) for VaR to the ES.

Our contribution is threefold: (1) we provide an explicit formula to compute the G-ES, making it easy to conduct a backtest; (2) the G-ES method can be applied to high-dimensional portfolio risk management; (3) dynamic backtests are conducted for worldwide indexes, in which the G-ES performs robustly and reliably.

Many papers investigate the worst-case ES. Under a moment-cone uncertainty set, Natarajan et al. [7] define the worst-case conditional VaR (CVaR, an equivalent concept to ES) using a semidefinite programming method that generalizes the worst-case VaR introduced by Ghaoui et al. [8] to the worst-case CVaR. Zhu and Fukushima [9] study the worst-case CVaR under mixture distribution uncertainty, box uncertainty, and ellipsoidal uncertainty; an explicit formula is not given, but the problem can be solved efficiently by numerical methods. Compared with Zhu and Fukushima [9] and Natarajan et al. [7], the G-ES has an explicit formula with a low computational cost. We assume that the mean and volatility vary within bounded intervals and that the return follows the G-normal distribution, which is a worst-case distribution induced by mean and volatility uncertainty. This kind of worst-case distribution covers not only a set of normal distributions but also some other distributions, although we cannot specify all of them. In previous work, Pei et al. [10] show that the G-normal distribution covers some bimodal skew-normal distributions.

Chen et al. [11] and Natarajan et al. [12] study the worst-case CVaR for positions with uncertain distributions, but with given means and variances, and obtain closed-form expressions. However, their worst-case CVaR often heavily overestimates risks. The empirical tests of nine international indexes from 2007 to 2018 suggest that the G-ES is robust and reliable most of the time.

The rest of this paper is organized as follows. In Section 2, we explain the basics of the G-normal distribution and the G-ES. We provide the definition and several properties of the G-ES in Section 3. The empirical backtests are conducted in Section 4. Section 5 concludes the paper.

2. G-Normal Distribution

2.1. Background

Peng [13–15] established G-stochastic analysis, a path analysis that extends the classical Wiener analysis to a framework of sublinear expectation on the event space, the space of all -valued continuous paths starting from the origin, equipped with a uniform norm on compact subsets. Notions such as the G-normal distribution, G-Brownian motion, and G-expectation were introduced (see the Appendix or Peng’s review paper [16] and book [2]). The representation theorem for the G-expectation [17] indicates that the G-expectation naturally induces a weakly compact set of probabilities. The G-Brownian motion is a martingale under each of these probability measures [18], and there exists a unique adapted process such that, almost surely (a.s.), the G-Brownian motion is the stochastic integral of this process with respect to a standard Brownian motion under the corresponding linear expectation. There are infinitely many probability measures in this set, as each such process induces one probability measure. In this paper, we aim to identify the worst-case ES in this setting of ambiguous volatility.
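The two displayed formulas referred to above are not legible in the extracted text. As a hedged reconstruction consistent with the cited results [17, 18] (the exact notation should be checked against the originals), they read

    \hat{\mathbb{E}}[X] \;=\; \max_{P \in \mathcal{P}} E_P[X],
    \qquad
    B_t \;=\; \int_0^t \sigma_s \, dW_s \quad P\text{-a.s.},

where \mathcal{P} is the induced set of probability measures, W is a standard Brownian motion under P, and \sigma is the unique adapted (volatility) process associated with P.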

A G-normal distributed random variable is characterized by the solution of a Hamilton–Jacobi–Bellman (HJB) equation. Let , where is bounded and Lipschitz-continuous. In particular, . Peng [2] shows that a random variable is G-normal distributed if is the viscosity solution of the following partial differential equation (PDE): where is the Hessian matrix of , that is, , and tr denotes the trace; denotes the space of symmetric matrices; represents the set of all possible covariance matrices, which is a given bounded, closed, and convex subset of . PDE (3) is the so-called G-heat equation. If is a singleton, then it reduces to the classical heat equation. When , the volatility belongs to an interval , where and , and the mean is . In this case, we denote .
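PDE (3) itself is not reproduced in the extracted text. A sketch of the standard formulation in Peng [2], which the paragraph above describes, is

    \partial_t u - G(D_x^2 u) = 0, \qquad u(0, x) = \varphi(x),
    \qquad
    G(A) = \tfrac{1}{2} \sup_{Q \in \Theta} \operatorname{tr}(AQ),

and, in the one-dimensional case with \Theta = [\underline{\sigma}^2, \overline{\sigma}^2],

    G(a) = \tfrac{1}{2}\bigl(\overline{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\bigr).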

2.2. The Worst-Case Distribution

Note that obtaining the best-case distribution function and the worst-case distribution function (given a confidence level and for the same position, the VaR obtained under the worst-case distribution is always greater than or equal to the one obtained under the best-case distribution, which is why we refer to them as the best- and worst-case distributions) is equivalent to solving PDE (3) with an initial condition of indicator type. We find that the “similar solutions” method (Bluman and Cole [19]) works for PDE (3) with initial condition or (see Pei et al. [10]). We assume that follows a 1-dimensional G-normal distribution with . Pei et al. [10] show that where denotes the distribution function of the standard normal distribution. When , the best- and worst-case distribution functions are the limits of those in (4) and (5), respectively, as converges to zero. It is easy to check that the functions and are indeed distribution functions of some random variables, with the corresponding best/worst-case density functions:
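Formulas (4) and (5) are not legible in the extracted text. As an assumption to be checked against Pei et al. [10], the best- and worst-case distribution functions of a centered G-normal variable with volatility interval [\underline{\sigma}, \overline{\sigma}] take a two-piece normal form of the type

    F_{\mathrm{best}}(x) =
    \begin{cases}
      \dfrac{2\overline{\sigma}}{\overline{\sigma}+\underline{\sigma}}\,\Phi\!\left(\dfrac{x}{\overline{\sigma}}\right), & x \le 0,\\[2mm]
      1 - \dfrac{2\underline{\sigma}}{\overline{\sigma}+\underline{\sigma}}\left(1-\Phi\!\left(\dfrac{x}{\underline{\sigma}}\right)\right), & x > 0,
    \end{cases}
    \qquad
    F_{\mathrm{worst}}(x) =
    \begin{cases}
      \dfrac{2\underline{\sigma}}{\overline{\sigma}+\underline{\sigma}}\,\Phi\!\left(\dfrac{x}{\underline{\sigma}}\right), & x \le 0,\\[2mm]
      1 - \dfrac{2\overline{\sigma}}{\overline{\sigma}+\underline{\sigma}}\left(1-\Phi\!\left(\dfrac{x}{\overline{\sigma}}\right)\right), & x > 0,
    \end{cases}

with densities proportional to \varphi(x/\overline{\sigma}) on the fat-tail side and \varphi(x/\underline{\sigma}) on the thin-tail side, normalized by 2/(\overline{\sigma}+\underline{\sigma}). This form is consistent with the qualitative description below (the best-case distribution has a fat left tail and dominates every normal distribution with volatility in the interval).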

A close look reveals that .

The best/worst-case distributions are more skewed than the normal distribution. Figure 1 compares the best/worst-case density functions with a family of normal density functions whose volatility varies within . The figure suggests that uncertain volatility results in a skewed density function. For example, the best-case density function has a fat left tail, so that the best-case distribution function is greater than that of any normal distribution with volatility in . We use the best-case density function to calculate the G-ES for return data and the worst-case density function to calculate the G-ES for loss data. It is worth noting that the best- (worst-) case distribution function is not the pointwise maximum (minimum) of the set of normal distribution functions, which indicates that the underlying set of distributions includes not only normal distributions but also distributions that are not normal. Pei et al. [10] show that this set may include some bimodal skew-normal distributions.

The following proposition calculates the skewness and kurtosis of the worst-case density function .

Proposition 1. Let be a random variable with density function . Then, (i) its mean is and its standard deviation is ; (ii) its skewness is and its kurtosis is , where . Moreover, both the skewness and the kurtosis are decreasing with respect to k.

Proof. Let be the linear expectation under which has the worst-case density . From equation (7), we can get ; then, the standard deviation is .
For (ii), by direct calculation, the skewness is . The kurtosis is . Define and . Then, taking the derivative of , we have , where , , and for . Similarly, , where , , and for . Consequently, Proposition 1 holds true.
The variable k measures the uncertainty of the volatility. Figure 2 demonstrates that as the uncertainty of the volatility decreases (k increases), the skewness and kurtosis decrease. When k equals one, the distribution reduces to a normal distribution with skewness 0 and kurtosis 3. Hence, the worst-case distribution has the flexibility to fit data by adjusting its skewness and kurtosis. The worst-case distribution offers substantially larger skewness and kurtosis than the normal distribution. However, the skewness and kurtosis of the worst-case distribution remain below those of the Gumbel distribution, which has skewness 1.14 and kurtosis 5.4. Thus, we obtain a distribution whose skewness and kurtosis lie between those of the normal and Gumbel distributions.
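As a quick numerical check of Proposition 1 (under the two-piece normal reconstruction of the worst-case density sketched above, which is an assumption rather than the paper's exact formula), the following Python snippet computes the mean, standard deviation, skewness, and kurtosis by quadrature; for volatility bounds such as 0.84 and 1.16, the skewness and kurtosis indeed fall between the normal values (0, 3) and the Gumbel values (1.14, 5.4).

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    def worst_case_density(x, sig_lo, sig_hi):
        # Assumed two-piece normal reconstruction: thin left tail (sig_lo), fat right tail (sig_hi).
        c = 2.0 / (sig_lo + sig_hi)
        return c * (norm.pdf(x / sig_lo) if x <= 0 else norm.pdf(x / sig_hi))

    def moments(sig_lo, sig_hi):
        f = lambda x: worst_case_density(x, sig_lo, sig_hi)
        mean = quad(lambda x: x * f(x), -np.inf, np.inf)[0]
        cm = lambda k: quad(lambda x: (x - mean) ** k * f(x), -np.inf, np.inf)[0]
        var = cm(2)
        return mean, np.sqrt(var), cm(3) / var ** 1.5, cm(4) / var ** 2

    mean_, sd, skew, kurt = moments(0.84, 1.16)   # illustrative volatility bounds
    print(f"mean={mean_:.4f}, sd={sd:.4f}, skewness={skew:.4f}, kurtosis={kurt:.4f}")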

3. G-ES

ES is the average value of the losses exceeding the VaR at a given confidence level (Rockafellar and Uryasev [20, 21]). It estimates how much the tail loss exceeds the VaR given the distribution of the investment over a fixed period. Let be a random variable that represents the possible loss and let be its distribution function. For a given confidence level , the traditional ES is defined as , where is the -quantile of the given distribution. However, for a financial position, future distributions are often not precisely known, even if we have a past distribution in hand. We now define the G-ES, which incorporates this distribution uncertainty.
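The displayed definition above is not legible in the extracted text; the standard formulation from Rockafellar and Uryasev [20, 21] that it refers to is

    \mathrm{ES}_{\alpha}(X) \;=\; \frac{1}{1-\alpha}\int_{\alpha}^{1} \mathrm{VaR}_u(X)\, du,
    \qquad
    \mathrm{VaR}_u(X) \;=\; \inf\{x \in \mathbb{R} : F_X(x) \ge u\},

for a confidence level \alpha \in (0, 1).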

3.1. Definition of G-ES

Let be the set of probability measures induced by processes valued in . Since we consider the worst-case scenario and it only makes sense to consider nonnegative risk exposures, we employ the first part of (5) to calculate the G-ES. The following lemma for the G-VaR comes from Pei et al. [10].

Lemma 1. Assume that follows a 1-dimensional G-normal distribution, that is, . Then the G-VaR is given by the following explicit formula:

Similar to the traditional ES, we can define the G-ES as the average of the tail G-VaRs.

Definition 1. Assume that . For a confidence level , the G-ES is defined as

By Lemma 1, the G-ES has the following closed form.

Theorem 1. Assume . We obtain the G-ES for in the following closed form:

Proof. From equation (17) and definition (18), for , we have

The proof is complete.
From equation (19), we see that the G-ES can be calculated from just a few parameters, avoiding the computation of a whole set of G-VaRs. This simple operability could be appealing to the financial industry.
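The closed form (19) is not legible in the extracted text. As a hedged sketch (not the paper's formula), the G-ES can always be evaluated numerically as the average of tail quantiles of the assumed worst-case distribution from Section 2.2; the helper g_es below is hypothetical and would be replaced by formula (19).

    import numpy as np
    from scipy.stats import norm

    def worst_case_quantile(p, sig_lo, sig_hi):
        # Quantile of the assumed two-piece normal worst-case distribution (zero mean).
        w = sig_lo / (sig_lo + sig_hi)   # probability mass to the left of zero
        if p <= w:
            return sig_lo * norm.ppf(p * (sig_lo + sig_hi) / (2.0 * sig_lo))
        return sig_hi * norm.ppf(1.0 - (1.0 - p) * (sig_lo + sig_hi) / (2.0 * sig_hi))

    def g_es(sig_lo, sig_hi, alpha=0.975, n_grid=2000):
        # G-ES as the average of tail G-VaRs over u in (alpha, 1), evaluated on a midpoint grid.
        h = (1.0 - alpha) / n_grid
        u = alpha + h * (np.arange(n_grid) + 0.5)
        return float(np.mean([worst_case_quantile(p, sig_lo, sig_hi) for p in u]))

    print(g_es(0.0084, 0.0116))   # illustrative daily volatility bounds of 0.84% and 1.16%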
Similar to the traditional ES, it is easy to see that G-ES has the following properties.

Proposition 2. Given a confidence level , the G-ES satisfies the following axioms: (1) monotonicity: if ; (2) translation invariance: for ; (3) homogeneity: for .

Proof. The three properties are direct consequences of definition (18).

3.2. G-ES for a Portfolio

In this section, we show how to calculate the G-ES for a portfolio. Our results show that doing so is equivalent to computing the weighted volatility of the portfolio, which dramatically simplifies the computation.

Proposition 3. Assume that follows a d-dimensional G-normal distribution. Then, for each , ,

where with and .

Proof. By Peng [2], for a -dimensional G-normal distributed random variable , for each vector , follows a 1-dimensional G-normal distribution. Consequently, Proposition 3 holds true.
Note that the dimension of the portfolio can be several hundred or even several thousand, because calculating the G-ES of a portfolio is simply equivalent to finding the weighted volatility bounds and . Given the set of possible covariance matrices, it is not costly to determine and by a suitable search algorithm. In particular, the following corollary states a result for a portfolio with given boundary covariance matrices.

Corollary 1. Let and be two d-dimensional nonnegative definite matrices such that is also nonnegative definite. Assume that follows a d-dimensional G-normal distribution with covariance matrix C satisfying (for two matrices and , means is nonnegative definite). Then, for each vector , the G-ES of the portfolio is

where , within which represents the trace of the matrix.

Proof. From Proposition 3, we obtain with and . Then, by Guo et al. [22], we have

where , are the eigenvalues of , with being the Cholesky decomposition. It is easy to verify that is a nonnegative definite matrix, so and . As for , we have

where , are the eigenvalues of . Since is a nonpositive definite matrix, we have , ; therefore, . Consequently, Corollary 1 holds true.

Corollary 2. Let be a nonnegative definite matrix, and assume that follows a two-dimensional G-normal distribution with covariance matrix C satisfying , . Then, for a given , .
Corollary 2 states the subadditivity of the G-ES. In particular, we can take ; that is, .

Proof of Corollary 2. Since are 2-dimensional G-normal distributed, and are 1-dimensional G-normal distributed random variables. Without loss of generality, we assume , , and the matrix . By the condition , we obtain . From Corollary 1, we obtain , where , . Observing that , and using the explicit formula of the G-ES, we obtain

The proof is complete.
We now present a numerical example (the computations were run on a ThinkPad T450 with a 2.4 GHz Intel Core i7-5500U CPU and 8 GB of memory). Given the number of assets, we randomly generate two nonnegative definite matrices and as the upper and lower bounds of the covariance matrix, such that their difference is nonnegative definite. Table 1 reports the weighted volatility bounds, the G-ES at the 97.5% confidence level, and the time needed to calculate the G-ES given the bounds of the volatility matrices. As we can see, it takes only a few seconds to calculate the G-ES for a 5,000-dimensional portfolio (the reported time does not include the generation of the volatility matrices). A close look also shows that as the number of assets increases, the G-ES decreases, which is in line with the basic principle of portfolio diversification. A sketch of this experiment is given after Table 1.


Table 1
Number of assets                     10        100       500       1000      5000
Weighted volatility lower bound      0.0433    0.0128    0.0056    0.0039    0.0017
Weighted volatility upper bound      0.2492    0.1525    0.0983    0.0819    0.0548
G-ES (97.5% confidence level)        0.1304    0.0525    0.0219    0.0162    0.0072
CPU time (seconds)                   0.0008    0.0013    0.0125    0.0792    5.5610
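A minimal sketch of the experiment behind Table 1, under the assumption (consistent with the ordering condition in Corollary 1) that the extreme weighted variances over all admissible covariance matrices are attained at the two bound matrices; the g_es helper from the previous sketch is hypothetical and stands in for formula (19).

    import time
    import numpy as np

    def random_psd(d, scale, rng):
        # Generate a random nonnegative definite matrix.
        a = rng.standard_normal((d, d)) * scale
        return a @ a.T / d

    def weighted_vol_bounds(w, c_low, c_up):
        # If C_low <= C <= C_up in the nonnegative definite order, then
        # w'C_low w <= w'C w <= w'C_up w, so the bounds are attained at the bound matrices.
        return np.sqrt(w @ c_low @ w), np.sqrt(w @ c_up @ w)

    rng = np.random.default_rng(0)
    d = 1000                                      # number of assets
    c_low = random_psd(d, 0.05, rng)
    c_up = c_low + random_psd(d, 0.10, rng)       # c_up - c_low is nonnegative definite
    w = np.full(d, 1.0 / d)                       # equally weighted portfolio
    t0 = time.time()
    sig_lo, sig_hi = weighted_vol_bounds(w, c_low, c_up)
    es = g_es(sig_lo, sig_hi, alpha=0.975)        # hypothetical helper from the previous sketch
    print(f"sigma_low={sig_lo:.4f}, sigma_up={sig_hi:.4f}, G-ES={es:.4f}, time={time.time() - t0:.4f}s")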

Remark 1. Empirically, we can also find the two boundary matrices and . Suppose there are covariance matrices , . If one of these covariance matrices for some satisfies () for all , then we set (, respectively). Otherwise, we define as the zero matrix, and , where for , and for , and . It is easy to check that, for each n, is a diagonally dominant symmetric real matrix, and its main diagonal elements are , , . By Ye [23], for .
Another way to obtain the bounds and is by the perturbation of a prediction matrix C. For instance, with C in hand, we can define and as the bounds, where .

4. Backtest

In this section, we test the robustness of the G-ES empirically. We use daily loss data for nine global market indexes: the CSI300, SHSCI, and SZSCI from China; the S&P500, DJIA, and NASDAQ from the US; and the CAC40, FTSE100, and DAX from Europe. The sample period is from 1 January 2007 to 31 December 2018. The loss data are normalized to have zero mean. We use the exponentially weighted moving average (EWMA; see Hull [24]) model to predict the daily volatility.
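A minimal sketch of the EWMA volatility forecast (Hull [24]) used here; the smoothing parameter lambda = 0.94 is the conventional RiskMetrics choice and is an assumption, since the text does not report the value actually used.

    import numpy as np

    def ewma_volatility(losses, lam=0.94):
        # One-step-ahead EWMA volatility forecasts for a zero-mean daily loss series.
        losses = np.asarray(losses, dtype=float)
        var = np.empty(len(losses))
        var[0] = np.var(losses)                  # initialize with the sample variance
        for t in range(1, len(losses)):
            var[t] = lam * var[t - 1] + (1.0 - lam) * losses[t - 1] ** 2
        return np.sqrt(var)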

Peng et al. [25] obtain the upper/lower volatility bounds by dividing the data into several windows and then taking the maximum volatility as the upper bound and the minimum as the lower bound. They use window sizes of 250, 500, and 1000 days to make predictions. However, even with a 250-day window, the resulting G-VaR is insensitive to changes over time. In particular, with a 1000-day window, the G-VaR graph for the S&P500 index is nearly a straight line for the four years from 2009 to 2012.

To avoid this insensitivity, for the G-ES at the 97.5% confidence level, we multiply the daily volatility by 1.16 and 0.84 to obtain the upper and lower bounds of the volatility, respectively. We tested several multipliers and chose these two. The empirical calibration below shows that we can obtain a reliable G-ES in this way, not only for the Chinese indexes but also for the US and European indexes. However, there is no consensus on how to determine the volatility ambiguity, because different sample sizes or different partitions of the data may yield different volatility bounds, so we must search for a suitable choice. We first tune the multipliers on the CSI300 and then apply the best results (1.16/0.84) to the other indexes. We find that the G-ES performs robustly and reliably most of the time. Furthermore, for different confidence levels, we should adopt different levels of volatility ambiguity, that is, different multipliers. Due to limited space, we show only several ES series at the 97.5% confidence level for the CSI300, S&P500, and CAC40 over the years 2007 to 2012. Figure 3 shows that the G-ES behaves similarly to the Gumbel-ES (the formula for the Gumbel-ES is , where and are the location and scale parameters, respectively, and is the confidence level), while the G-VaR is close to the normal-ES at the same confidence level. A close look indicates that the G-ES is clearly more robust than the G-VaR. Interestingly, Figures 4 and 5 show the same pattern.
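In code, the bound construction described above amounts to the following sketch (reusing the hypothetical ewma_volatility and g_es helpers from the earlier sketches, and assuming a daily loss array named losses is available).

    sigma = ewma_volatility(losses)                    # daily EWMA volatility forecasts
    sigma_up, sigma_down = 1.16 * sigma, 0.84 * sigma  # volatility ambiguity bounds
    g_es_series = np.array([g_es(lo, hi, alpha=0.975) for lo, hi in zip(sigma_down, sigma_up)])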

To confirm the reliability of the G-ES, we first apply a nonparametric method of Acerbi and Szekely [26] to backtest the ES models. Then we conduct a comparative backtest following Nolde and Ziegel [27].

Let represent the percentage loss in day (). These losses are distributed according to a real but unknown distribution and forecasted by a predictive distribution . We assume that the random variables are independent of each other. Let be the indicator function of a VaR violation.

We now describe this test, which is derived from the representation of the ES as an unconditional expectation (for the G-ES, the relevant distribution is the worst-case distribution), and which suggests the following test statistic:

The null and alternative hypotheses are

respectively, where and denote the tail distributions of when , and and denote the values of the risk measures when . Let and be the conditional expectations under hypotheses and , respectively. Similar to Acerbi and Szekely [26], it is easy to obtain and . To calculate the p value of a realization, we need to simulate the distribution under the null hypothesis. The first step is to simulate , , independently. Second, we calculate for each . We then estimate the p value by , where is a suitably large number of scenarios. Given a significance level , the null hypothesis is not rejected if . When , we conclude that the model underestimates the shortfall risk and hence does not pass the test. According to Acerbi and Szekely [26], if , then the model overestimates the shortfall risk.
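The statistic and the simulation step above are garbled in the extracted text. The sketch below implements a standard Acerbi–Szekely unconditional (Z2-type) statistic for a loss series together with a Monte Carlo p value; simulating scenarios from a normal predictive model is purely illustrative, and the sign and rejection conventions should be checked against [26].

    import numpy as np

    def es_test_statistic(losses, var_forecasts, es_forecasts, tail=0.025):
        # Z2-type statistic: expectation approximately 0 when the ES forecasts are correct.
        losses = np.asarray(losses, dtype=float)
        hits = losses > np.asarray(var_forecasts)           # VaR violations
        return float(np.mean(losses * hits / (tail * np.asarray(es_forecasts))) - 1.0)

    def mc_p_value(realized_z, sigma_forecasts, var_forecasts, es_forecasts,
                   tail=0.025, n_sim=5000, seed=0):
        # Monte Carlo p value: share of simulated statistics at least as large as the realized one,
        # simulating losses from an illustrative normal predictive model.
        rng = np.random.default_rng(seed)
        sims = np.empty(n_sim)
        for i in range(n_sim):
            sim_losses = rng.standard_normal(len(sigma_forecasts)) * sigma_forecasts
            sims[i] = es_test_statistic(sim_losses, var_forecasts, es_forecasts, tail)
        return float(np.mean(sims >= realized_z))   # small p: realized tail losses exceed the model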

Table 2 shows the p values of the test for each model. A larger p value corresponds to a larger value of the corresponding ES. A model does not pass the test if its p value is less than 0.05 or greater than 0.95; otherwise, that is, if the p value lies in the interval (0.05, 0.95), the model passes the test. As we can see, the G-ES passes the test for all nine international indexes, while the Gumbel-ES does not pass the test for the SZSCI, CAC40, and DAX indexes because it overestimates the shortfall risks. The normal-ES does not pass the test in any case because it underestimates the shortfall risks.


Table 2
Index      Normal-ES          G-ES               Gumbel-ES
CSI300     0.0000 (fail)      0.3908 (pass)      0.9400 (pass)
SHSCI      0.0000 (fail)      0.2280 (pass)      0.7396 (pass)
SZSCI      0.0000 (fail)      0.5204 (pass)      0.9517 (fail)
S&P500     0.0000 (fail)      0.0666 (pass)      0.5298 (pass)
DJIA       0.0000 (fail)      0.1081 (pass)      0.6617 (pass)
NASDAQ     0.0000 (fail)      0.1684 (pass)      0.7588 (pass)
CAC40      0.0000 (fail)      0.9338 (pass)      0.9997 (fail)
FTSE100    0.0000 (fail)      0.4543 (pass)      0.9484 (pass)
DAX        0.0000 (fail)      0.8887 (pass)      0.9994 (fail)

Note: “fail” means that the corresponding ES model does not pass the test (p value below 0.05 or above 0.95), while “pass” means that it passes the test.

Remark 2. We also apply the backtesting methodology of McNeil and Frey [28] to the three ES models for a 12-year sample period of the nine worldwide indexes. Given a significance level , normal-ES does not pass the test for any of the nine international indexes. The G-ES passes the test only for the DAX index, while the Gumbel-ES passes most tests for the international indexes except for the S&P500 index.
However, as shown in Figures 3–5 and Table 2, the G-ES is close to, and outperforms, the Gumbel-ES. Moreover, Acerbi and Szekely [26] indicate that the test method of McNeil and Frey [28], alone or in its variations, is not a valid backtest for ES models because the accuracy of the VaR forecast significantly affects the outcome of this test. This is also supported by the findings in Roccioletti (2016, Chap. 5, Sect. 5.3) [29].
Next, we conduct the comparative backtests introduced by Nolde and Ziegel [27]. We consider the pair . Let be the set of natural numbers and and be two sequences of predictions of , which are referred to as the internal model and the standard model, respectively. Let be a consistent scoring function for . Then, we say S-dominates if .
We now apply these comparative backtests. For a consistent scoring function and a data set, the sample average of the scores is defined as follows:

Given a confidence level , Table 3 reports the sample averages of the two consistent scoring functions, along with the corresponding model rankings, where the two scoring functions are defined as

Combining the outcomes of Part A and Part B of Table 3, we see that in most cases the rankings obtained from the two consistent scoring functions coincide, the exceptions being the SZSCI, DJIA, and NASDAQ indexes. In general, for the Chinese and American stock indexes, the scoring functions rank the G-ES as the best or second-best performing model. For the European stock indexes, the two scoring functions agree that the G-ES is the best forecaster. For all nine indexes, the normal-ES shows the worst performance according to both scoring functions.
As a complement to the model rankings in Table 3, we now compare the internal model with a given standard model using the test method proposed in Fissler et al. [30] and Nolde and Ziegel [27]. Define

Then the comparative backtesting hypotheses can be formulated as

We define

Given a significance level , we use the test statistic

where represents a heteroscedasticity- and autocorrelation-consistent estimator of the asymptotic variance, . We then obtain an asymptotic level- test of if we reject the null hypothesis when , and of if we reject the null hypothesis when . Based on the outcomes of the tests of and , we say that the internal model fails the comparative backtest if is rejected (i.e., it lies in the red region of Figure 6). The internal model passes the backtest if is rejected (i.e., it lies in the green region of Figure 6). The internal model needs further investigation if neither nor can be rejected (i.e., it lies in the yellow region of Figure 6).
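The displayed definitions are garbled in the extracted text. The sketch below shows a generic Diebold–Mariano-type comparison of two models' realized scores with a simple Newey–West variance estimator, in the spirit of the test statistic described above; the consistent scoring function itself is taken as a precomputed input, and the three-zone decision rule follows the traffic-light description in the text.

    import numpy as np
    from scipy.stats import norm

    def newey_west_variance(d, n_lags=None):
        # Heteroscedasticity- and autocorrelation-consistent variance of the score differences.
        d = np.asarray(d, dtype=float) - np.mean(d)
        n = len(d)
        if n_lags is None:
            n_lags = int(np.floor(n ** (1.0 / 3.0)))
        var = np.dot(d, d) / n
        for k in range(1, n_lags + 1):
            gamma = np.dot(d[k:], d[:-k]) / n
            var += 2.0 * (1.0 - k / (n_lags + 1.0)) * gamma
        return var

    def comparative_backtest(scores_internal, scores_standard, eta=0.05):
        # Consistent scores are negatively oriented: a lower average score is better.
        d = np.asarray(scores_internal) - np.asarray(scores_standard)
        n = len(d)
        t_stat = np.sqrt(n) * np.mean(d) / np.sqrt(newey_west_variance(d))
        if t_stat <= norm.ppf(eta):
            return "pass (green)", t_stat        # internal model significantly better
        if t_stat >= norm.ppf(1.0 - eta):
            return "fail (red)", t_stat          # internal model significantly worse
        return "inconclusive (yellow)", t_stat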
Figure 6 displays the traffic-light matrices for the three ES models from 1 January 2007 to 31 December 2018. Due to limited space, we only show the outcomes for the CSI300, S&P500, and CAC40 indexes. Along the vertical axis, we consider hypothetical “standard” models, with the investigated “internal” models displayed along the horizontal axis. The red (green) cells correspond to situations in which the comparative backtest is failed (passed), while yellow cells indicate cases where there is no conclusive evidence to pass or fail the comparative backtest. The left and right parts of Figure 6 correspond to the two scoring functions, respectively. For the CSI300 and S&P500 indexes, the two scoring functions agree that the normal-ES is the worst forecaster (i.e., it fails the comparative backtests against all the other models), and they cannot distinguish the performance of the G-ES from that of the Gumbel-ES at the given significance level. For the CAC40 index, the G-ES outperforms the Gumbel-ES under both scoring functions. In particular, one of the scoring functions is better at discriminating between models than the other.


Table 3
Part A
Index      Normal-ES       G-ES              Gumbel-ES
CSI300     0.2192 (3)      0.2148 (1)        0.2150 (2)
SHSCI      0.2148 (3)      0.2100* (2)       0.2100** (1)
SZSCI      0.2313 (3)      0.2271 (1)        0.2276 (2)
S&P500     0.1775 (3)      0.1731† (2)       0.1731‡ (1)
DJIA       0.1696 (3)      0.1663 (1)        0.1665 (2)
NASDAQ     0.1868 (3)      0.1826 (1)        0.1829 (2)
CAC40      0.1866 (3)      0.1849 (1)        0.1860 (2)
FTSE100    0.1701 (3)      0.1682 (1)        0.1683 (2)
DAX        0.1849 (3)      0.1829 (1)        0.1830 (2)

Part B
Index      Normal-ES       G-ES              Gumbel-ES
CSI300     −3.0563 (3)     −3.1223 (1)       −3.1221 (2)
SHSCI      −3.0845 (3)     −3.1617 (2)       −3.1676 (1)
SZSCI      −2.9043 (3)     −2.9747 (2)       −2.9768 (1)
S&P500     −3.4161 (3)     −3.5173 (2)       −3.5290 (1)
DJIA       −3.5224 (3)     −3.6046 (2)       −3.6122 (1)
NASDAQ     −3.3176 (3)     −3.4112 (2)       −3.4166 (1)
CAC40      −3.3588 (3)     −3.4034 (1)       −3.3933 (2)
FTSE100    −3.5474 (3)     −3.5995 (1)       −3.5967 (2)
DAX        −3.3801 (3)     −3.4264 (1)       −3.4226 (2)

Note. Before rounding off, *the number is 0.21004, **the number is 0.21002, †the number is 0.17310, and ‡the number is 0.17307.

5. Conclusion

We have presented a simple method, the G-ES, to measure shortfall risk by incorporating volatility uncertainty. We have extended the G-ES to compute the risk of a portfolio at low cost, either through closed-form formulas or through simple numerical computation. The dimension of the portfolio can be several hundred or more. The empirical tests show that the G-ES performs well under the test statistic introduced by Acerbi and Szekely [26] and under the comparative backtests following Nolde and Ziegel [27]. Compared with extant ES models, the G-ES is robust and reliable.

Appendix

A. Basic Knowledge about G-Expectation

In this section, we recall some basic knowledge about Peng’s G-stochastic calculus. Readers are referred to [2] for more information.

We denote by the collection of symmetric matrices and the positive-semidefinite elements of . Let denote the space of all -valued continuous paths with , and let denote the Borel -algebra of . Let be a linear space of real functions defined on such that if , then for each , where denotes the linear space of locally Lipschitz functions satisfying , , for some , depending on . is considered as a space of “random variables”. In this case, is called an -dimensional random vector, denoted by .

Definition A.1. A sublinear expectation on is a functional satisfying the following properties: for all , we have (a) monotonicity: if , then ; (b) constant preserving: ; (c) subadditivity: ; (d) positive homogeneity: .

Definition A.2. Let and be two -dimensional random vectors defined on the sublinear expectation spaces . They are called identically distributed, denoted by , if

Definition A.3. In a sublinear expectation space , a random vector is said to be independent of another random vector under if for each test function we have

Definition A.4 (G-normal distribution). A -dimensional random vector in a sublinear expectation space is called G-normal distributed if for each we have

where is an independent copy of .
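The defining relation in Definition A.4 is not legible in the extracted text; the standard characterization in Peng [2] is

    a X + b \bar{X} \;\overset{d}{=}\; \sqrt{a^2 + b^2}\, X \quad \text{for all } a, b \ge 0,

where \bar{X} is an independent copy of X.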

Remark A.1. It is easy to check that . The so-called “G” is related to and is defined by

Assume follows a normal distribution. For each , define , where is bounded and Lipschitz-continuous. Peng [2] shows that is G-normal distributed if is the viscosity solution of the following HJB equation: