Abstract

The conditional tail expectation (CTE) is an important actuarial risk measure and a useful tool in financial risk assessment. Under the classical assumption that the second moment of the loss variable is finite, the asymptotic normality of the nonparametric CTE estimator has already been established in the literature. The noted result, however, is not applicable when the loss variable follows a distribution with infinite second moment, which is a frequent situation in practice. With the help of extreme-value methodology, in this paper we offer a solution to the problem by suggesting a new CTE estimator, which is applicable when losses have finite means but infinite variances.

1. Introduction

One of the most important actuarial risk measures is the conditional tail expectation (CTE) (see, e.g., [1]), which is the average amount of loss given that the loss exceeds a specified quantile. Hence, the CTE provides a measure of the capital needed due to the exposure to the loss, and thus serves as a risk measure. Not surprisingly, therefore, the CTE continues to receive increased attention in the actuarial and financial literature, where we also find its numerous extensions and generalizations (see, e.g., [28], and references therein). We next present basic notation and definitions.

Let $X$ be a loss random variable with cumulative distribution function (cdf) $F$. Usually, the cdf $F$ is assumed to be continuous and defined on the entire real line, with negative loss interpreted as gain. We also assume the continuity of $F$ throughout the present paper. The CTE of the risk or loss $X$ is then defined, for every $t \in (0,1)$, by
$$\mathrm{CTE}_F(t) = \mathbf{E}\left[X \mid X > Q(t)\right],$$
where $Q$ is the quantile function corresponding to the cdf $F$. Since the cdf $F$ is continuous, we easily check that
$$\mathrm{CTE}_F(t) = \frac{1}{1-t}\int_t^1 Q(s)\,ds.$$
Naturally, the CTE is unknown since the cdf $F$ is unknown. Hence, it is desirable to establish statistical inferential results such as confidence intervals for $\mathrm{CTE}_F(t)$ with specified confidence levels and margins of error. We shall next show how to accomplish this task, initially assuming the classical moment condition $\mathbf{E}[X^2] < \infty$. Namely, suppose that we have independent random variables $X_1, X_2, \ldots$, each with the cdf $F$, and let $X_{1:n} \le \cdots \le X_{n:n}$ denote the order statistics of $X_1, \ldots, X_n$. It is natural to define an empirical estimator of $\mathrm{CTE}_F(t)$ by the formula
$$\mathrm{CTE}_n(t) = \frac{1}{1-t}\int_t^1 Q_n(s)\,ds,$$
where $Q_n$ is the empirical quantile function, which is equal to the order statistic $X_{i:n}$ for all $s \in ((i-1)/n,\, i/n]$ and for all $i = 1, \ldots, n$. The asymptotic behavior of the estimator $\mathrm{CTE}_n(t)$ has been studied by Brazauskas et al. [9], and we next formulate their most relevant result for our paper as a theorem.
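For readers who wish to experiment numerically, the empirical estimator can be sketched in a few lines of Python. This is our own illustration, not part of the paper; the function name and the handling of the partially covered first cell of the empirical quantile function are our choices.

```python
import numpy as np

def empirical_cte(sample, t):
    """Empirical CTE: (1/(1-t)) * integral of the empirical quantile
    function Q_n over (t, 1), where Q_n(s) = X_{i:n} on ((i-1)/n, i/n]."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    j = int(np.floor(t * n))          # index of the cell of Q_n containing t
    w = np.zeros(n)
    w[j:] = 1.0 / n                   # full cells above the level t
    w[j] = (j + 1) / n - t            # partially covered first cell
    return float(w @ x) / (1.0 - t)
```

For $t = 0$ this reduces to the sample mean, which is consistent with the discussion of the CLT case later in the paper.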

Theorem 1.1. Assume that $\mathbf{E}[X^2] < \infty$. Then for every $t \in (0,1)$, we have the asymptotic normality statement $\sqrt{n}\,(\mathrm{CTE}_n(t) - \mathrm{CTE}_F(t)) \to_d N(0, \sigma^2(t))$ when $n \to \infty$, where the asymptotic variance $\sigma^2(t)$ is given by the formula
$$\sigma^2(t) = \frac{1}{(1-t)^2}\int_t^1\int_t^1 \left(\min(x,y) - xy\right)\,dQ(x)\,dQ(y).$$
The assumption $\mathbf{E}[X^2] < \infty$ is, however, quite restrictive, as the following example shows. Suppose that $F$ is the Pareto cdf with index $\gamma > 0$, that is, $1 - F(x) = x^{-1/\gamma}$ for all $x \ge 1$. Let us focus on the case $\gamma \in (0,1)$, because when $\gamma \ge 1$, then $\mathrm{CTE}_F(t) = \infty$ for every $t \in (0,1)$. Theorem 1.1 covers only the values $\gamma \in (0, 1/2)$ in view of the assumption $\mathbf{E}[X^2] < \infty$. When $\gamma \in [1/2, 1)$, we have $\mathbf{E}[X^2] = \infty$ but, nevertheless, $\mathrm{CTE}_F(t)$ is well defined and finite since $\mathbf{E}[X] < \infty$. Analogous remarks hold for other distributions with Pareto-like tails, and we shall indeed work with such general distributions in this paper.
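For the Pareto example just discussed, the CTE admits a closed form that is convenient for checking estimators in simulations. The sketch below is our illustration (function names are our own); it encodes the Pareto quantile function and the resulting CTE under the parametrization used in this paper.

```python
def pareto_quantile(s, gamma):
    # Quantile function of the Pareto cdf F(x) = 1 - x**(-1/gamma), x >= 1:
    # Q(s) = (1 - s)**(-gamma).
    return (1.0 - s) ** (-gamma)

def pareto_cte(t, gamma):
    # CTE_F(t) = (1/(1-t)) * integral_t^1 Q(s) ds = Q(t) / (1 - gamma),
    # which is finite exactly when gamma < 1 (finite mean), even though
    # the variance is infinite for gamma in (1/2, 1).
    assert 0.0 < gamma < 1.0, "finite CTE requires a finite mean"
    return pareto_quantile(t, gamma) / (1.0 - gamma)
```

Note that the CTE always exceeds the corresponding quantile, as an average of losses above that quantile must.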

Namely, recall that the cdf $F$ is regularly varying at infinity with index $-1/\gamma$ if
$$\lim_{x\to\infty}\frac{1-F(xz)}{1-F(x)} = z^{-1/\gamma}$$
for every $z > 0$. This class includes a number of popular distributions such as Pareto, generalized Pareto, Burr, Fréchet, Student, and so forth, which are known to be appropriate models for fitting large insurance claims, fluctuations of prices, log-returns, and so forth (see, e.g., [10]). In the remainder of this paper, therefore, we restrict ourselves to this class of distributions. For more information on the topic and, generally, on extreme value models and their manifold applications, we refer to the monographs by Beirlant et al. [11], Castillo et al. [12], de Haan and Ferreira [13], and Resnick [14].

The rest of the paper is organized as follows. In Section 2 we construct an alternative, called “new”, CTE estimator by utilizing an extreme value approach. In Section 3 we establish the asymptotic normality of the new CTE estimator and illustrate its performance with a little simulation study. The main result, which is Theorem 3.1 stated in Section 3, is proved in Section 4.

2. Construction of a New CTE Estimator

We have already noted that the “old” estimator $\mathrm{CTE}_n(t)$ does not yield asymptotic normality (in the classical sense) beyond the condition $\mathbf{E}[X^2] < \infty$. Indeed, this follows by setting $t = 0$, in which case $\mathrm{CTE}_n(0)$ becomes the sample mean of $X_1, \ldots, X_n$, and thus the asymptotic normality of $\mathrm{CTE}_n(0)$ is equivalent to the classical Central Limit Theorem (CLT). Similar arguments show that the finite second moment is necessary for the asymptotic normality (in the classical sense) of $\mathrm{CTE}_n(t)$ at any fixed “level” $t \in (0,1)$. Indeed, note that the asymptotic variance $\sigma^2(t)$ in Theorem 1.1 is finite only if $\mathbf{E}[X^2] < \infty$.

For this reason, we next construct an alternative CTE estimator, which takes into account the different asymptotic properties of moderate and high quantiles in the case of heavy-tailed distributions. Hence, from now on we assume that $\gamma \in (1/2, 1)$. Before indulging in construction details, we first formulate the new CTE estimator:
$$\widetilde{\mathrm{CTE}}_n(t) = \frac{1}{1-t}\int_t^{1-k/n} Q_n(s)\,ds + \frac{(k/n)\,X_{n-k:n}}{(1-t)\left(1-\widehat{\gamma}_n\right)},$$
where we use the simplest yet useful and powerful Hill's [15] estimator
$$\widehat{\gamma}_n = \frac{1}{k}\sum_{i=1}^{k}\log X_{n-i+1:n} - \log X_{n-k:n}$$
of the tail index $\gamma$. The integers $k = k_n$ are such that $k \to \infty$ and $k/n \to 0$ when $n \to \infty$, and we note at the outset that their choice presents a challenging task. In Figures 1 and 2, we illustrate the performance of the new estimator $\widetilde{\mathrm{CTE}}_n(t)$ with respect to the sample size $n$, with the integers $k$ chosen according to the method proposed by Cheng and Peng [16]. Note that when the index $\gamma$ increases through the values considered in panels (a)–(d), the vertical axes of the panels also increase, which reflects the fact that the larger the $\gamma$ gets, the more erratic the “new” and “old” estimators become. Note also that the empirical (i.e., “old”) estimator underestimates the theoretical $\mathrm{CTE}_F(t)$, which is a well-known phenomenon (see [17]).
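As a concrete illustration of the construction, Hill's estimator and the resulting CTE estimator can be coded as follows. This is a sketch under our own reading of the formulas above, with hypothetical function names; we assume $t < 1 - k/n$ so that the empirical integral and the tail term do not overlap.

```python
import numpy as np

def hill(sample, k):
    """Hill's estimator of the tail index gamma from the k largest
    order statistics: mean of log X_{n-i+1:n} - log X_{n-k:n}, i = 1..k."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    assert 0 < k < n
    return float(np.mean(np.log(x[n - k:])) - np.log(x[n - k - 1]))

def new_cte(sample, t, k):
    """Sketch of the extreme-value-adjusted CTE estimator: empirical
    quantile integral over (t, 1 - k/n) plus the Weissman-type tail
    contribution (k/n) X_{n-k:n} / (1 - gamma_hat), all divided by 1 - t."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    assert t < 1.0 - k / n, "the level t must lie below the tail threshold"
    g = hill(x, k)
    j = int(np.floor(t * n))
    w = np.zeros(n)
    w[j:n - k] = 1.0 / n              # full cells of Q_n inside (t, 1 - k/n)
    w[j] = (j + 1) / n - t            # partially covered first cell
    tail = (k / n) * x[n - k - 1] / (1.0 - g)
    return (float(w @ x) + tail) / (1.0 - t)
```

The choice of $k$ is deliberately left to the caller, reflecting the point made above that selecting $k$ is itself a challenging task.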

We have based the construction of $\widetilde{\mathrm{CTE}}_n(t)$ on the recognition that one should estimate moderate and high quantiles differently when the underlying distribution is heavy-tailed. For this, we first recall that the high quantile is, by definition, equal to $Q(1-s)$ for sufficiently small $s > 0$. For an estimation theory of high quantiles in the case of heavy-tailed distributions we refer to, for example, Weissman [18], Dekkers and de Haan [19], Matthys and Beirlant [20], Gomes et al. [21], and references therein. We shall use the Weissman estimator
$$\widehat{Q}(1-s) = X_{n-k:n}\left(\frac{k}{ns}\right)^{\widehat{\gamma}_n}, \qquad s \downarrow 0,$$
of the high quantile $Q(1-s)$. Then we write $\mathrm{CTE}_F(t)$ as a sum of two terms, defined together with their respective empirical estimators as follows:

Simple integration gives the formula
$$\frac{1}{1-t}\int_{1-k/n}^{1}\widehat{Q}(s)\,ds = \frac{(k/n)\,X_{n-k:n}}{(1-t)\left(1-\widehat{\gamma}_n\right)}.$$
Consequently, the sum of the two empirical estimators is an estimator of $\mathrm{CTE}_F(t)$, and this is exactly the estimator $\widetilde{\mathrm{CTE}}_n(t)$ introduced above. We shall investigate the asymptotic normality of the new estimator in the next section, accompanied by an illustrative simulation study.
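The Weissman extrapolation used in the construction can likewise be sketched in a few lines; this is our own illustration, with hypothetical argument names.

```python
def weissman_quantile(sorted_sample, k, s, gamma_hat):
    """Weissman's estimator of the high quantile Q(1 - s) for small s:
    Q_hat(1 - s) = X_{n-k:n} * (k / (n * s)) ** gamma_hat,
    extrapolating beyond the threshold X_{n-k:n} via the tail index."""
    n = len(sorted_sample)
    return sorted_sample[n - k - 1] * (k / (n * s)) ** gamma_hat
```

At $s = k/n$ the estimator returns the threshold $X_{n-k:n}$ itself, so the extrapolation joins the empirical quantile function continuously at the threshold.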

3. Main Theorem and Its Practical Implementation

We start this section by noting that Hill's estimator has been thoroughly studied, improved, and generalized in the literature. For example, weak consistency of $\widehat{\gamma}_n$ has been established by Mason [22] assuming only that the underlying distribution is regularly varying at infinity. Asymptotic normality of $\widehat{\gamma}_n$ has been investigated under various conditions by a number of researchers, including Csörgő and Mason [23], Beirlant and Teugels [24], and Dekkers et al. [25]; see also the references therein.

The main theoretical result of this paper, which is Theorem 3.1 below, establishes the asymptotic normality of the new CTE estimator $\widetilde{\mathrm{CTE}}_n(t)$. To formulate the theorem, we need to introduce an assumption that ensures the asymptotic normality of Hill's estimator $\widehat{\gamma}_n$. Namely, the cdf $F$ satisfies the generalized second-order regular variation condition with second-order parameter $\rho \le 0$ (see [26, 27]) if there exists a function $A(x)$ which does not change its sign in a neighbourhood of infinity and is such that, for every $z > 0$,
$$\lim_{x\to\infty}\frac{1}{A(x)}\left(\frac{1-F(xz)}{1-F(x)} - z^{-1/\gamma}\right) = z^{-1/\gamma}\,\frac{z^{\rho/\gamma}-1}{\rho/\gamma}. \tag{3.1}$$
When $\rho = 0$, the ratio on the right-hand side of (3.1) is interpreted as $\log z$. For statistical inference concerning the second-order parameter $\rho$, we refer, for example, to Peng and Qi [28], Gomes et al. [21], and Gomes and Pestana [29]. Furthermore, in the formulation of Theorem 3.1, we shall also use an auxiliary function expressed in terms of the tail index $\gamma$.

Theorem 3.1. Assume that the cdf $F$ satisfies condition (3.1) with $\gamma \in (1/2, 1)$. Then for any sequence of integers $k = k_n$ such that $k \to \infty$ and $k/n \to 0$ when $n \to \infty$, we have the asymptotic normality statement (3.2) for any fixed $t \in (0,1)$, where the asymptotic variance $\sigma^2$ is given by the formula

The asymptotic variance $\sigma^2$ does not depend on $t$, unlike the variance $\sigma^2(t)$ of Theorem 1.1. This is not surprising because the heaviness of the right-most tail of $F$ makes the asymptotic behaviour of the tail part of the estimator dominate the classical CLT-type behaviour of the middle part, for any fixed $t$. This in turn implies that, under the conditions of Theorem 3.1, statement (3.2) is equivalent to the same statement in the case $t = 0$. The latter statement concerns estimating the mean of a heavy-tailed distribution. Therefore, we can view Theorem 3.1 as a consequence of Peng [30], and at the same time we can view the results of Peng [30] as a consequence of Theorem 3.1 by setting $t = 0$ in it. Despite this equivalence, in Section 4 we give a proof of Theorem 3.1 for the sake of completeness. Our proof, however, is crucially based on a powerful technique called the Vervaat process (see [31–33] for details and references).

To discuss the practical implementation of Theorem 3.1, we first fix a significance level $\alpha \in (0,1)$ and use the classical notation $z_{\alpha/2}$ for the $(1-\alpha/2)$-quantile of the standard normal distribution $N(0,1)$. Given a realization of the random variables $X_1, \ldots, X_n$ (e.g., claim amounts), which follow a cdf $F$ satisfying the conditions of Theorem 3.1, we construct a level $1-\alpha$ confidence interval for $\mathrm{CTE}_F(t)$ as follows. First, we choose an appropriate number $k$ of extreme values. Since Hill's estimator has in general a substantial variance for small $k$ and a considerable bias for large $k$, we search for a $k$ that balances the two shortcomings, which is indeed a well-known hurdle when estimating the tail index. To resolve this issue, several procedures have been suggested in the literature, and we refer to, for example, Dekkers and de Haan [34], Drees and Kaufmann [35], Danielsson et al. [36], Cheng and Peng [16], Neves and Fraga Alves [37], Gomes et al. [38], and references therein. In our current study, we employ the method of Cheng and Peng [16] for choosing an appropriate value of the “parameter” $k$. Having computed Hill's estimator $\widehat{\gamma}_n$ for the chosen $k$, we then compute the corresponding values of the CTE estimator and of the asymptotic variance, and denote them by $\widetilde{\mathrm{CTE}}_n(t)$ and $\widehat{\sigma}^2$, respectively. Finally, using Theorem 3.1 we arrive at the following $(1-\alpha)$-confidence interval for $\mathrm{CTE}_F(t)$:

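The normal quantile $z_{\alpha/2}$ and the resulting two-sided interval can be computed with the Python standard library alone. The helper below is a generic sketch of ours; the standard error would be supplied from the asymptotic variance of Theorem 3.1 evaluated at the Hill estimate, which we do not re-derive here.

```python
from statistics import NormalDist

def normal_ci(point, se, alpha=0.05):
    """Two-sided (1 - alpha)-confidence interval point +/- z_{alpha/2} * se,
    where z_{alpha/2} is the (1 - alpha/2)-quantile of N(0, 1)."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return point - z * se, point + z * se
```

For example, `normal_ci(cte_hat, se_hat, 0.05)` returns the familiar 95% interval.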
To illustrate the performance of this confidence interval, we have carried out a small-scale simulation study based on the Pareto cdf $F(x) = 1 - x^{-1/\gamma}$, $x \ge 1$, for two values of the tail index $\gamma$ and two confidence levels. We have generated independent replicates of three samples of increasing sizes $n$. For every simulated sample, we have obtained the estimate $\widetilde{\mathrm{CTE}}_n(t)$. Then we have calculated the arithmetic averages over the values from the repetitions, with the absolute error (error) and the root mean squared error (rmse) of the new estimator reported in Tables 1 and 2. In the tables, we have also reported the confidence intervals (3.4) with their lower and upper bounds, coverage probabilities, and lengths.
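A miniature version of such a study can be set up by inverse-transform sampling from the Pareto distribution. The sketch below is our own, with arbitrary parameter choices; it compares the averaged empirical CTE with the closed-form Pareto value $Q(t)/(1-\gamma)$.

```python
import random

def pareto_sample(n, gamma, rng):
    # Inverse-transform sampling: Q(U) = (1 - U)**(-gamma) for U uniform(0,1).
    return [(1.0 - rng.random()) ** (-gamma) for _ in range(n)]

def average_empirical_cte(gamma=0.75, t=0.9, n=1000, reps=100, seed=12345):
    """Averages, over `reps` simulated Pareto samples, the mean of the
    losses above the empirical t-quantile, and returns it together with
    the true value Q(t) / (1 - gamma)."""
    rng = random.Random(seed)
    true_cte = (1.0 - t) ** (-gamma) / (1.0 - gamma)
    acc = 0.0
    for _ in range(reps):
        x = sorted(pareto_sample(n, gamma, rng))
        j = int(t * n)
        acc += sum(x[j:]) / (n - j)   # mean of losses above the empirical quantile
    return acc / reps, true_cte
```

With $\gamma = 0.75$ the loss variance is infinite, so individual replications fluctuate wildly, which is exactly the regime the new estimator is designed for.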

We note emphatically that the above coverage probabilities and lengths of confidence intervals can be improved by employing more precise but, naturally, considerably more complex estimators of the tail index. Such estimators are described in the monographs by Beirlant et al. [11], Castillo et al. [12], de Haan and Ferreira [13], and Resnick [14]. Since the publication of these monographs, numerous journal articles have appeared on the topic. Our aim in this paper, however, is to present a simple yet useful result that highlights how much Actuarial Science and developments in Mathematical Statistics, Probability, and Stochastic Processes are interrelated, and thus benefit from each other.

4. Proof of Theorem 3.1

We start the proof of Theorem 3.1 with the decomposition

where

We shall show below that there are Brownian bridges $B_n$ such that

Assuming for the time being that statements (4.3) and (4.4) hold, we next complete the proof of Theorem 3.1. To simplify the presentation, we use the following notation:

Hence, we have the asymptotic representation

The sum is a centered Gaussian random variable. To calculate its asymptotic variance, we establish the following limits:

Summing up the right-hand sides of the above six limits, we obtain $\sigma^2$, whose expression in terms of the parameter $\gamma$ is given in Theorem 3.1. Finally, since the empirical normalizing factor converges in probability to its theoretical counterpart (see, e.g., the proof of the corresponding corollary in [39]), the classical Slutsky lemma completes the proof of Theorem 3.1. Of course, we are still left to verify statements (4.3) and (4.4), which constitute the contents of the following two subsections.

4.1. Proof of Statement (4.3)

If the cdf $F$ were continuously differentiable, then statement (4.3) would follow easily from the proof of the corresponding theorem in [39]. We do not assume differentiability and thus a new proof is required, which is crucially based on the Vervaat process (see [31–33], and references therein)

Hence, for all sufficiently large $n$ (recall that $t \in (0,1)$ is fixed), we have that

It is well known (see [31–33]) that the Vervaat process is nonnegative and admits a convenient upper bound. Since the cdf $F$ is continuous by assumption, we therefore have that

where $e_n$ denotes the uniform empirical process, which for large $n$ behaves like a Brownian bridge $B_n$. Note also that, with the just introduced notation, the integral on the right-hand side of (4.9) takes a form convenient for our purposes. Hence,

We shall next replace the empirical process $e_n$ by an appropriate Brownian bridge $B_n$ in the first integral on the right-hand side of (4.11) with an asymptotically negligible error term, and we shall also show that the second and third summands on the right-hand side of (4.11) are asymptotically negligible. The replacement of $e_n$ by $B_n$ can be accomplished using, for example, the corollary on page 48 of Csörgő et al. [40], which states that, on an appropriately constructed probability space and for an appropriate parameter range, we have that

This result is applicable in the current situation since we can always place our original problem into the required probability space, because our main results are “in probability”. Furthermore, the assumptions imposed on $k$ ensure the conditions required there. Hence, statement (4.12) implies that

Changing the variables of integration and using standard properties of the quantile function, we obtain that

The main term on the right-hand side of (4.14) is the desired Gaussian contribution. We shall next show that the right-most summand of (4.13) converges to $0$ when $n \to \infty$.

Changing the variable of integration and then integrating by parts, we obtain the bound

We want to show that the right-hand side of bound (4.15) converges to $0$ when $n \to \infty$. For this, we first note that

Next, with the notation introduced above, we have that

when $n \to \infty$, where the convergence to $0$ follows from Result 1 in the Appendix of Necir and Meraghni [39]. Taking statements (4.15)–(4.17) together, we conclude that the right-most summand of (4.13) converges to $0$ when $n \to \infty$.

Consequently, in order to complete the proof of statement (4.3), we are left to show that the second and third summands on the right-hand side of (4.11) are asymptotically negligible. The third summand is negligible due to the growth conditions imposed on $k$. Hence, we are only left to show that the second summand on the right-hand side of equation (4.11) is negligible as well, for which we shall show that

To prove statement (4.18), we first note that

The first summand on the right-hand side of bound (4.19) is asymptotically negligible due to statement (4.12). The second summand on the right-hand side of bound (4.19) is negligible due to a statement on page 49 of Csörgő et al. [40] (see the displayed bound just below it therein). Hence, to complete the proof of statement (4.18), we need to check that

Observe that, for each $n$, the quantity in (4.20) is equal in distribution to its analogue written in terms of the uniform empirical quantile function, and the two underlying processes are equal in distribution as well. Hence, statement (4.20) is equivalent to

From the Glivenko-Cantelli theorem, the uniform empirical quantile function converges uniformly almost surely, which suffices in view of our choice of $k$. Moreover, we know from a theorem and remark of Wellner [41] that

from which we conclude that

Since the function involved is slowly varying at zero, using Potter's inequality (see the 5th assertion of the proposition on page 367 of de Haan and Ferreira [13]), we obtain that

for any sufficiently small positive exponent. In view of (4.23), the right-hand side of (4.24) is asymptotically negligible, which implies statement (4.21) and thus finishes the proof of statement (4.3).

4.2. Proof of Statement (4.4)

The proof of statement (4.4) is similar to that of the corresponding theorem in Necir et al. [42], though some adjustments are needed since we are now concerned with the CTE risk measure. We therefore present the main blocks of the proof together with pinpointed references to Necir et al. [42] for specific technical details.

We start the proof with the auxiliary function that was already used in the formulation of Theorem 3.1. Since the cdf $F$ is continuous, $F(X)$ is a uniform on the interval $(0,1)$ random variable, and so the mean of the transformed variable can be computed explicitly. Hence,

and so we have

We next show that the right-most term in (4.26) converges to $0$ when $n \to \infty$. For this reason, we first rewrite the term as follows:

The right-hand side of (4.27) converges to $0$ (see the notes on page 149 of Necir et al. [42]) due to the second-order condition (3.1), which can equivalently be rewritten as

for every $z > 0$. Hence, in order to complete the proof of statement (4.4), we need to check that

With Hill's estimator written in the form

we proceed with the proof of statement (4.29) as follows. Furthermore, we have that

Arguments on page 156 of Necir et al. [42] imply that the first term on the right-hand side of (4.32) is asymptotically negligible, and a note on page 157 of Necir et al. [42] supplies the remaining bound. Hence, the first term on the right-hand side of (4.32) is of the required order. Analogous considerations using bound (2.5) instead of (2.4) on page 156 of Necir et al. [42] imply that the first term on the right-hand side of (4.31) is of the required order as well. Hence, in summary, we have that

We now need to connect the right-hand side of (4.33) with the Brownian bridges $B_n$. To this end, we first convert the $X$-based order statistics into uniform-on-$(0,1)$ order statistics. For this we recall that $F(X)$ is uniformly distributed on $(0,1)$, and thus each order statistic $X_{i:n}$ is equal in distribution to $Q(U_{i:n})$, where $U_{1:n} \le \cdots \le U_{n:n}$ are the order statistics of independent uniform random variables. Consequently,

Next we choose a sequence of Brownian bridges $B_n$ (see pages 158-159 in [42] and references therein) such that the following two asymptotic representations hold:

Using these two statements on the right-hand side of (4.34) and also keeping in mind that $\widehat{\gamma}_n$ is a consistent estimator of $\gamma$ (see [22]), we have that

Dividing both sides of equation (4.36) by the appropriate normalizing sequence, we arrive at (4.29). This completes the proof of statement (4.4), and of Theorem 3.1 as well.

Acknowledgments

Our work on the revision of this paper has been considerably influenced by constructive criticism and suggestions by three anonymous referees and the editor in charge of the manuscript, Edward Furman, and we are indebted to all of them. Results of the paper were first announced at the 44th Actuarial Research Conference at the University of Wisconsin, Madison, Wisconsin, July 30–August 1, 2009. The authors are grateful to participants of this most stimulating conference, organized by the Society of Actuaries, for generous feedback. The research has been partially supported by grants from the Society of Actuaries (SOA) and the Natural Sciences and Engineering Research Council (NSERC) of Canada.