Journal of Probability and Statistics


Research Article | Open Access


J. Qi, M. Rekkas, A. Wong, "Inference for the Difference of Two Independent KS Sharpe Ratios under Lognormal Returns", Journal of Probability and Statistics, vol. 2020, Article ID 6751574, 6 pages, 2020. https://doi.org/10.1155/2020/6751574

Inference for the Difference of Two Independent KS Sharpe Ratios under Lognormal Returns

Academic Editor: Dejian Lai
Received: 01 Mar 2020
Revised: 31 Jul 2020
Accepted: 21 Sep 2020
Published: 10 Oct 2020

Abstract

A higher-order likelihood-based asymptotic method to obtain inference for the difference between two KS Sharpe ratios when gross returns of an investment are assumed to be lognormally distributed is proposed. Theoretically, our proposed method has third-order, $O(n^{-3/2})$, distributional accuracy, whereas conventional methods for inference have only first-order, $O(n^{-1/2})$, distributional accuracy. Using an example, we show how discordant confidence interval results can be depending on the methodology used. We demonstrate the accuracy of our proposed method through simulation studies.

1. Introduction

Let $P_t$ be the price of an investment at time $t$, and assume this investment does not pay out dividends. The net return $R_t$ of this investment between time $t-1$ and time $t$ is given as

$$R_t = \frac{P_t - P_{t-1}}{P_{t-1}} = x_t - 1,$$

where $x_t = P_t/P_{t-1}$ is the gross return or relative price of that investment. Moreover, let

$$r_t = \log x_t = \log P_t - \log P_{t-1}$$

denote the log return at time $t$, which represents the continuously compounded return. One of the most common measures used to assess an investment’s performance is the Sharpe ratio. This ratio, proposed by Sharpe [1], takes the form

$$SR = \frac{E(R_t) - R_f}{\sqrt{\operatorname{Var}(R_t)}},$$

where $R_f$ is a risk-free return. $SR$ then measures the excess expected return, or risk premium, relative to its volatility.

When data are given in terms of relative prices rather than returns, Knight and Satchell [2] propose the following extension to the Sharpe ratio:

$$SR_{KS} = \frac{E(x_t)}{\sqrt{\operatorname{Var}(x_t)}}.$$

This extension is known as the KS Sharpe ratio.

In the statistical literature, it is common to assume that the log returns (i.e., the $r_t$) are identically and independently distributed as normal with mean $\mu$ and variance $\sigma^2$. Equivalently, the $x_t$ are identically and independently distributed as lognormal with mean and variance given by

$$E(x_t) = e^{\mu + \sigma^2/2} \quad \text{and} \quad \operatorname{Var}(x_t) = e^{2\mu + \sigma^2}\left(e^{\sigma^2} - 1\right),$$

respectively. Under this assumption, the Sharpe ratio and the KS Sharpe ratio can be written as follows:

$$SR = \frac{e^{\mu + \sigma^2/2} - 1 - R_f}{e^{\mu + \sigma^2/2}\sqrt{e^{\sigma^2} - 1}} \quad \text{and} \quad SR_{KS} = \frac{1}{\sqrt{e^{\sigma^2} - 1}}.$$
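As a quick numerical check of the expressions above, the following sketch (our own helper functions, not part of the paper) computes the lognormal moments and the two ratios from $(\mu, \sigma^2)$:

```python
import math

def lognormal_moments(mu, sigma2):
    """Mean and variance of the gross return x_t = e^{r_t} when r_t ~ N(mu, sigma2)."""
    mean = math.exp(mu + sigma2 / 2)
    var = math.exp(2 * mu + sigma2) * (math.exp(sigma2) - 1)
    return mean, var

def sharpe_ratio(mu, sigma2, rf=0.0):
    """Sharpe ratio of the net return R_t = x_t - 1 under lognormal gross returns."""
    m, v = lognormal_moments(mu, sigma2)
    return (m - 1 - rf) / math.sqrt(v)

def ks_sharpe(sigma2):
    """KS Sharpe ratio; under lognormality it is a function of sigma2 alone."""
    return 1.0 / math.sqrt(math.exp(sigma2) - 1)
```

Note that `ks_sharpe(sigma2)` coincides with $E(x_t)/\sqrt{\operatorname{Var}(x_t)}$ for any $\mu$, since the factor $e^{\mu + \sigma^2/2}$ cancels from the ratio.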

Liu et al. [3] applied the standard likelihood method to obtain inference for the Sharpe ratio with independent data, and Ji et al. [4] extended the methodology to obtain inference for this ratio with autocorrelated return data. Fu et al. [5] relaxed the distributional assumption and applied the adjusted empirical likelihood ratio method to obtain inference for the Sharpe ratio. Despite this work on the Sharpe ratio, there do not appear to be any statistical studies of the KS Sharpe ratio apart from the original paper by Knight and Satchell [2], in which they obtained the uniformly minimum variance unbiased estimator of the ratio.

As the KS Sharpe ratio depends only on $\sigma^2$, one-sample exact inference is straightforward. However, inference for either the difference or the ratio of two independent KS Sharpe ratios is complicated, and no exact analysis is available. In this paper, we propose to use a higher-order likelihood-based asymptotic method to obtain inference for the difference between two independent KS Sharpe ratios. The proposed method provides an important indicator to practitioners who are interested in comparing the performance of two assets. We compare our method to two standard asymptotic methods through an example and simulations. While the example shows the differences between the three methods, the simulation studies demonstrate the extreme accuracy of our proposed method. Accuracy is important as sample sizes can be limited in practice. The proposed method is straightforward and easy to use in practice. The methodology can also be applied to obtain inference for the ratio of two independent KS Sharpe ratios.

2. Likelihood-Based Inference

In this section, we review first-order likelihood-based inferential procedures. We then propose a modification to the first-order procedures such that the resulting inference procedure has third-order accuracy. Let $y = (y_1, \ldots, y_n)$ be a sample from a population with probability density function $f(y; \theta)$, where $\theta$ is a $p$-dimensional parameter. Let $\psi = \psi(\theta)$ be the scalar parameter of interest and $\lambda$ be the $(p-1)$-dimensional nuisance parameter. As defined by Kalbfleisch [6], the log-likelihood function for $\theta$ based on the observed sample is

$$\ell(\theta) = \ell(\theta; y) = \sum_{i=1}^{n} \log f(y_i; \theta) + a,$$

where $a$ is an arbitrary additive constant. Reid [7] notes that the value of the likelihood function is only meaningful in relative terms. The likelihood function contains almost everything that a model has to say about the observed data. Among the possible values of $\theta$, the overall maximum likelihood estimator (mle) of $\theta$,

$$\hat{\theta} = \arg\max_{\theta} \ell(\theta),$$

maximizes the likelihood function regardless of the value of the additive constant $a$. Hence, without loss of generality, we set $a$ to be zero. Moreover, let

$$j_{\theta\theta'}(\hat{\theta}) = -\left.\frac{\partial^2 \ell(\theta)}{\partial\theta\,\partial\theta'}\right|_{\theta = \hat{\theta}}$$

be the observed information matrix evaluated at the mle.

2.1. First-Order Methods

With the regularity conditions stated in Cox and Hinkley [8], the asymptotic mean and variance of the mle are given as $\theta$ and $j^{-1}_{\theta\theta'}(\hat{\theta})$. Hence, for a large sample size $n$, by the multivariate central limit theorem, the limiting distribution of $\hat{\theta}$ is the $p$-dimensional multivariate normal distribution with mean $\theta$ and variance $j^{-1}_{\theta\theta'}(\hat{\theta})$. Thus, $\hat{\theta}$ is asymptotically distributed as $N_p(\theta, j^{-1}_{\theta\theta'}(\hat{\theta}))$ with first-order accuracy or, equivalently, with a rate of convergence $O(n^{-1/2})$. By applying the delta method, we have

$$q = q(\psi) = \frac{\hat{\psi} - \psi}{\sqrt{\widehat{\operatorname{Var}}(\hat{\psi})}}, \qquad (10)$$

where

$$\widehat{\operatorname{Var}}(\hat{\psi}) = \psi_{\theta'}(\hat{\theta})\, j^{-1}_{\theta\theta'}(\hat{\theta})\, \psi_{\theta}(\hat{\theta}),$$

and $\psi_{\theta}(\theta) = \partial\psi(\theta)/\partial\theta$ is the derivative of $\psi(\theta)$ with respect to $\theta$. The statistic $q$ in (10) is asymptotically distributed as a standard normal distribution with first-order accuracy. A $(1-\alpha)100\%$ confidence interval for $\psi$ is thus given by

$$\left(\hat{\psi} - z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\psi})},\; \hat{\psi} + z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\psi})}\right),$$

where $z_{\alpha/2}$ is the $(1-\alpha/2)100$th percentile of the standard normal distribution. The statistic $q$ is referred to as the Wald statistic or the standardized mle statistic. Jobson and Korkie [9] applied this method to obtain approximate inference for the Sharpe ratio.
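The Wald construction above can be sketched in a few lines; the function names here are ours, and the gradient and inverse observed information would come from the model at hand:

```python
import math

Z_975 = 1.959963984540054  # 97.5th percentile of the standard normal

def delta_var(grad, inv_info):
    """Delta-method variance: psi_theta' * j^{-1} * psi_theta for a p-vector gradient."""
    p = len(grad)
    return sum(grad[i] * inv_info[i][j] * grad[j]
               for i in range(p) for j in range(p))

def wald_ci(psi_hat, var_psi, z=Z_975):
    """First-order (Wald) confidence interval: psi_hat +/- z * sd."""
    sd = math.sqrt(var_psi)
    return psi_hat - z * sd, psi_hat + z * sd
```

The interval is symmetric around $\hat{\psi}$ by construction, which is one source of the asymmetric tail error rates seen in the simulations below.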

Another commonly applied asymptotic method is based on the likelihood ratio method. Since $\psi$ is a scalar parameter of interest, we have

$$r = r(\psi) = \operatorname{sgn}(\hat{\psi} - \psi)\left[2\left(\ell(\hat{\theta}) - \ell(\hat{\theta}_\psi)\right)\right]^{1/2}, \qquad (13)$$

where

$$\hat{\theta}_\psi = \arg\max_{\theta:\, \psi(\theta) = \psi} \ell(\theta)$$

denotes the constrained mle (i.e., the mle of $\theta$ for a given $\psi$). The statistic $r$ is likewise asymptotically distributed as standard normal with first-order accuracy. A $(1-\alpha)100\%$ confidence interval for $\psi$ is

$$\left\{\psi : |r(\psi)| \le z_{\alpha/2}\right\}.$$

The statistic $r^2$ is the Wilks statistic, or log-likelihood ratio statistic, and $r$ is referred to as the signed log-likelihood ratio statistic. Reid [7] provided a detailed review of the asymptotic distributions of $q$ and $r$. In practice, the Wald statistic is used more frequently than the signed log-likelihood ratio statistic due to its simplicity in calculation: the determination of the constrained mle required for the signed log-likelihood ratio statistic can be a much more difficult task. However, theoretically, the advantage of the signed log-likelihood ratio statistic lies in its invariance to reparameterization. The Wald statistic does not possess this property, and its results will vary depending on the parameterization used.
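Inverting $|r(\psi)| \le z_{\alpha/2}$ generally requires numerical root finding. As a self-contained illustration (a toy exponential-mean model of our own, chosen because its constrained mle is trivial), the interval endpoints can be located by bisection:

```python
import math

Z_975 = 1.959963984540054  # 97.5th percentile of the standard normal

def signed_lr_exponential(data, psi):
    """Signed log-likelihood ratio r(psi) for an exponential model with mean psi.
    l(psi) = -n*log(psi) - sum(y)/psi, maximized at psi_hat = ybar."""
    n, s = len(data), sum(data)
    psi_hat = s / n
    ell = lambda p: -n * math.log(p) - s / p
    w = max(2.0 * (ell(psi_hat) - ell(psi)), 0.0)
    return math.copysign(math.sqrt(w), psi_hat - psi)

def _cross(rfun, inside, outside, z, iters=100):
    """Bisect between a point inside the interval (|r| <= z) and one outside (|r| > z)."""
    for _ in range(iters):
        mid = 0.5 * (inside + outside)
        if abs(rfun(mid)) <= z:
            inside = mid
        else:
            outside = mid
    return inside

def lr_ci(data, z=Z_975):
    """Confidence interval {psi : |r(psi)| <= z} for the exponential mean."""
    r = lambda p: signed_lr_exponential(data, p)
    psi_hat = sum(data) / len(data)
    lo_out, hi_out = psi_hat, psi_hat
    while abs(r(lo_out)) <= z:
        lo_out *= 0.5   # walk down until outside the interval
    while abs(r(hi_out)) <= z:
        hi_out *= 2.0   # walk up until outside the interval
    return _cross(r, psi_hat, lo_out, z), _cross(r, psi_hat, hi_out, z)
```

For models with nuisance parameters, such as the KS Sharpe ratio problem in Section 3, `signed_lr_exponential` would be replaced by a function that maximizes the log-likelihood subject to $\psi(\theta) = \psi$ at each trial value.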

2.2. Third-Order Method

Many methods have emerged in recent years that improve upon the accuracy of the signed log-likelihood ratio statistic; see Skovgaard [10] for a detailed overview. In particular, three major improvements, all reviewed in Skovgaard [10], were proposed by Bartlett [11], Barndorff-Nielsen [12, 13], and Fraser and Reid [14]. Bartlett [11] proposed the Bartlett correction method, a method that has fourth-order accuracy and is applicable to any vector parameter of interest. Except in special cases, however, the Bartlett correction factor is extremely difficult to calculate. Barndorff-Nielsen [12, 13] proposed the modified signed log-likelihood ratio method. This method has third-order accuracy and is applicable only to a scalar parameter of interest. It further requires the existence of an ancillary statistic, which may not exist, and even if it does exist, it may not be unique. Fraser and Reid [14] generalized the modified signed log-likelihood ratio method such that it is applicable to any model and does not require an ancillary statistic. The method achieves third-order accuracy. In this paper, we apply Fraser and Reid’s method to obtain inference for the difference of two independent KS Sharpe ratios under lognormal returns.

Barndorff-Nielsen [12, 13] defined the modified signed log-likelihood ratio statistic for a scalar parameter $\psi$ to be

$$r^* = r^*(\psi) = r - \frac{1}{r}\log\left(\frac{r}{Q}\right), \qquad (16)$$

where $r$ is the signed log-likelihood ratio statistic defined in (13), and $Q$ is a special statistic that depends on the existence of an ancillary statistic. Fraser and Reid [14] showed that, for the natural exponential family model with $\varphi$ as the canonical parameter and $\psi$ a component of $\varphi$, $Q = q$, the standardized mle statistic defined in (10). Fraser and Reid [14] further extended their methodology to the exponential family model where $\psi$ is not a component of the canonical parameter. Their approach takes the following steps:
(1) Obtain the canonical parameter $\varphi = \varphi(\theta)$.
(2) The statistic $r$ as defined in (13) remains unchanged, as the signed log-likelihood ratio statistic is invariant to reparameterization.
(3) Obtain the standardized mle statistic in the canonical parameter scale as

$$Q = \operatorname{sgn}(\hat{\psi} - \psi)\,\big|\chi(\hat{\theta}) - \chi(\hat{\theta}_\psi)\big|\left\{\frac{\big|j_{\varphi\varphi'}(\hat{\theta})\big|}{\big|j_{(\lambda\lambda')}(\hat{\theta}_\psi)\big|}\right\}^{1/2}, \qquad (17)$$

where $\chi(\theta) = \psi_{\varphi'}(\hat{\theta}_\psi)\,\varphi(\theta)\big/\big\|\psi_{\varphi'}(\hat{\theta}_\psi)\big\|$, $\psi_{\varphi'}(\hat{\theta}_\psi)$ is the derivative of $\psi(\theta)$ with respect to $\varphi$ evaluated at the constrained mle, and the observed information determinants are recalibrated in the $\varphi$ scale using $\varphi_{\theta'}(\hat{\theta})$, the derivative of $\varphi(\theta)$ with respect to $\theta$ evaluated at the overall mle.

Fraser and Reid [14] showed that $r^*$ is asymptotically distributed as standard normal with third-order distributional accuracy. Brazzale et al. [15] have a collection of examples where they apply this third-order method and demonstrate the extreme accuracy of the method even when the sample size is small. In what follows, we apply the above methods to obtain confidence intervals for the difference of two independent KS Sharpe ratios.
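Once $r$ and $Q$ are available, computing $r^*$ and its normal-approximation p-value is immediate; a minimal sketch (our own function names, using the standard normal CDF via the error function):

```python
import math

def modified_slr(r, q):
    """Barndorff-Nielsen's r* = r - (1/r) log(r/Q); requires r and Q nonzero, same sign."""
    return r - math.log(r / q) / r

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(r, q):
    """One-sided p-value P(R* > r*) based on the standard normal limit of r*."""
    return 1.0 - norm_cdf(modified_slr(r, q))
```

When $Q = r$, the correction term vanishes and $r^* = r$; the closer $Q$ is to $r$, the smaller the higher-order adjustment.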

3. Inference for the Difference of Two Independent KS Sharpe Ratios

As discussed in Section 1, with $r_t$ IID normally distributed with mean $\mu$ and variance $\sigma^2$, or alternatively, $x_t$ IID lognormally distributed, the KS Sharpe ratio is a function of $\sigma^2$ alone. It is well-known that $n\hat{\sigma}^2/\sigma^2$ is distributed as $\chi^2_{n-1}$, where $\hat{\sigma}^2$ is the mle of $\sigma^2$. Exact confidence intervals for $\sigma^2$ can therefore be obtained. Since the KS Sharpe ratio is a one-to-one function of $\sigma^2$, exact confidence intervals for $SR_{KS}$ can also be obtained. However, inference for the difference between two independent KS Sharpe ratios is not as straightforward. Exact results are not available. In this section, we apply the likelihood-based methods discussed to this particular inferential problem.
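Concretely, an exact interval for $\sigma^2$ maps directly into one for $SR_{KS}$. In the sketch below (our own function names), the $\chi^2_{n-1}$ quantiles would be supplied by any statistics library (e.g., `scipy.stats.chi2.ppf`); since the map is decreasing in $\sigma^2$, the endpoints swap:

```python
import math

def sigma2_ci(sigma2_hat, n, chi2_lo, chi2_hi):
    """Exact CI for sigma^2 from n*sigma2_hat/sigma^2 ~ chi2_{n-1}.
    chi2_lo and chi2_hi are the alpha/2 and 1-alpha/2 quantiles of chi2_{n-1}."""
    return n * sigma2_hat / chi2_hi, n * sigma2_hat / chi2_lo

def ks_sharpe_ci(sigma2_lo, sigma2_hi):
    """SR_KS = (e^{sigma^2}-1)^{-1/2} is decreasing in sigma^2, so endpoints flip."""
    return (1.0 / math.sqrt(math.exp(sigma2_hi) - 1.0),
            1.0 / math.sqrt(math.exp(sigma2_lo) - 1.0))
```

For instance, with $n = 12$ the 2.5th and 97.5th percentiles of $\chi^2_{11}$ are approximately 3.816 and 21.920.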

Consider two independent investments with log returns $r_{1t}$, $t = 1, \ldots, n_1$, from a normal distribution with mean $\mu_1$ and variance $\sigma_1^2$. The $r_{2t}$, $t = 1, \ldots, n_2$, are drawn from a normal distribution with mean $\mu_2$ and variance $\sigma_2^2$. The parameter of interest is the difference between the two KS Sharpe ratios:

$$\psi = SR_{KS,1} - SR_{KS,2} = \frac{1}{\sqrt{e^{\sigma_1^2} - 1}} - \frac{1}{\sqrt{e^{\sigma_2^2} - 1}},$$

where $\theta = (\mu_1, \sigma_1^2, \mu_2, \sigma_2^2)$. The resultant log-likelihood function is

$$\ell(\theta) = -\frac{n_1}{2}\log\sigma_1^2 - \frac{1}{2\sigma_1^2}\sum_{t=1}^{n_1}\left(r_{1t} - \mu_1\right)^2 - \frac{n_2}{2}\log\sigma_2^2 - \frac{1}{2\sigma_2^2}\sum_{t=1}^{n_2}\left(r_{2t} - \mu_2\right)^2.$$

The overall mle is

$$\hat{\theta} = \left(\hat{\mu}_1, \hat{\sigma}_1^2, \hat{\mu}_2, \hat{\sigma}_2^2\right), \quad \hat{\mu}_j = \bar{r}_j = \frac{1}{n_j}\sum_{t=1}^{n_j} r_{jt}, \quad \hat{\sigma}_j^2 = \frac{1}{n_j}\sum_{t=1}^{n_j}\left(r_{jt} - \bar{r}_j\right)^2, \quad j = 1, 2.$$

The inverse of the observed information evaluated at the mle, $j^{-1}_{\theta\theta'}(\hat{\theta})$, is used as an approximation to the variance of $\hat{\theta}$. By applying the delta method, we are able to estimate the variance of $\hat{\psi}$ as

$$\widehat{\operatorname{Var}}(\hat{\psi}) = \psi_{\theta'}(\hat{\theta})\, j^{-1}_{\theta\theta'}(\hat{\theta})\, \psi_{\theta}(\hat{\theta}) = \frac{e^{2\hat{\sigma}_1^2}\,\hat{\sigma}_1^4}{2 n_1\left(e^{\hat{\sigma}_1^2} - 1\right)^3} + \frac{e^{2\hat{\sigma}_2^2}\,\hat{\sigma}_2^4}{2 n_2\left(e^{\hat{\sigma}_2^2} - 1\right)^3}.$$

The statistic $\hat{\psi}$ is asymptotically distributed as normal with mean $\psi$ and approximate variance $\widehat{\operatorname{Var}}(\hat{\psi})$, so the Wald statistic $q$ can be obtained from (10). This can be viewed as an extension of the method discussed by Jobson and Korkie [9].
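The delta-method variance can be coded directly, using $\partial SR_{KS}/\partial\sigma^2 = -\frac{1}{2}e^{\sigma^2}(e^{\sigma^2}-1)^{-3/2}$ and the asymptotic variance $2\sigma_j^4/n_j$ of $\hat{\sigma}_j^2$ (a sketch with our own function names):

```python
import math

def dks_dsigma2(s2):
    """Derivative of SR_KS = (e^{s2}-1)^{-1/2} with respect to sigma^2."""
    return -0.5 * math.exp(s2) * (math.exp(s2) - 1.0) ** -1.5

def var_psi_hat(s2_1, n1, s2_2, n2):
    """Delta-method variance of psi_hat = SR_KS1_hat - SR_KS2_hat,
    using Var(sigma2_hat_j) ~ 2*sigma_j^4/n_j."""
    t1 = dks_dsigma2(s2_1) ** 2 * 2.0 * s2_1 ** 2 / n1
    t2 = dks_dsigma2(s2_2) ** 2 * 2.0 * s2_2 ** 2 / n2
    return t1 + t2
```

Each term simplifies to $e^{2\sigma_j^2}\sigma_j^4/\bigl(2n_j(e^{\sigma_j^2}-1)^3\bigr)$, matching the displayed formula.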

Moreover, it can be shown that the constrained mle is

$$\hat{\theta}_\psi = \left(\bar{r}_1, \tilde{\sigma}_1^2, \bar{r}_2, \tilde{\sigma}_2^2\right),$$

where $\tilde{\sigma}_1^2$ and $\tilde{\sigma}_2^2$ maximize $\ell(\theta)$ subject to the constraint $\psi(\theta) = \psi$.

The constrained variance estimates $\tilde{\sigma}_1^2$ and $\tilde{\sigma}_2^2$ do not have closed-form solutions and must be obtained numerically. With this information, $r$ can be obtained from (13). Confidence intervals for $\psi$ based on the first-order methods can likewise be obtained. We note that the model is an exponential family model with the canonical parameter given by

$$\varphi(\theta) = \left(\frac{\mu_1}{\sigma_1^2},\; -\frac{1}{2\sigma_1^2},\; \frac{\mu_2}{\sigma_2^2},\; -\frac{1}{2\sigma_2^2}\right).$$

Hence, $Q$ can be obtained from (17), and the modified signed log-likelihood ratio statistic $r^*$ can be obtained from (16). Confidence intervals for $\psi$ based on the third-order method can thus be obtained.

3.1. Example

To illustrate our proposed method, consider the data reported in Table 1. This table records the monthly relative returns of Tesla, Inc. (TSLA) and Netflix, Inc. (NFLX) downloaded from Yahoo Finance for the period January 2019 to January 2020. Our interest is in comparing the performance of the two stocks using the difference between the two KS Sharpe ratios. In terms of the underlying assumptions, we note that a Shapiro–Wilk test of normality for TSLA and NFLX gives p values of 0.7107 and 0.0664, respectively, so normality of the log returns is not rejected at the 5% level. A correlation test (p value = 0.1177) between the two series suggests the two return series are independent. The p value from a Durbin–Watson test of serial correlation is 0.8834 for TSLA and 0.3769 for NFLX, which suggests no serial correlation.


Table 1: Monthly relative returns of TSLA and NFLX, January 2019 to January 2020.

TSLA        NFLX

1.0418865   1.0547865
0.8748906   0.9956995
0.8528907   1.0392080
0.7757342   0.9264317
1.2068481   1.0700303
1.0812226   0.8793150
0.9337776   0.9094709
1.0676388   0.9110468
1.3074272   1.0739481
1.0476947   1.0948123
1.2678972   1.0283163
1.3551502   1.0771085

Table 2 records the 95% confidence intervals for the difference between the two KS Sharpe ratios calculated using the three different likelihood methods discussed in this paper. While the intervals all suggest NFLX is the preferred stock, they do not produce quantitatively similar results.


Table 2: 95% confidence intervals for the difference between the two KS Sharpe ratios.

Method                        95% confidence interval

Wald                          (−13.3134, −1.6099)
Signed log-likelihood ratio   (−12.4562, −1.9002)
Proposed                      (−13.0647, −1.4752)
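The point estimate underlying these intervals can be reproduced from the Table 1 data (a sketch under the lognormal model of Section 3; only the estimate $\hat{\psi}$, not the intervals, is computed here):

```python
import math

tsla = [1.0418865, 0.8748906, 0.8528907, 0.7757342, 1.2068481, 1.0812226,
        0.9337776, 1.0676388, 1.3074272, 1.0476947, 1.2678972, 1.3551502]
nflx = [1.0547865, 0.9956995, 1.0392080, 0.9264317, 1.0700303, 0.8793150,
        0.9094709, 0.9110468, 1.0739481, 1.0948123, 1.0283163, 1.0771085]

def sigma2_mle(gross):
    """mle of sigma^2 from the log returns r_t = log(x_t)."""
    r = [math.log(x) for x in gross]
    rbar = sum(r) / len(r)
    return sum((ri - rbar) ** 2 for ri in r) / len(r)

def ks_sharpe(s2):
    return 1.0 / math.sqrt(math.exp(s2) - 1.0)

# difference of the estimated KS Sharpe ratios, TSLA minus NFLX
psi_hat = ks_sharpe(sigma2_mle(tsla)) - ks_sharpe(sigma2_mle(nflx))
```

The resulting estimate is roughly −7.4, consistent with the reported intervals: TSLA's larger log-return variance gives it the smaller KS Sharpe ratio.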

3.2. Simulation Results

The results from the abovementioned example show how the three methodologies can result in quite different confidence intervals. In this section, simulation studies are carried out to assess the performance of these methods. Extensive simulation studies were performed, but only a selection of the results is reported in this paper; the presented results are representative of the findings of all simulations conducted. For each combination of sample sizes and parameter settings considered, 10,000 Monte Carlo replications are performed. For each generated sample, the 95% confidence interval for the difference of the KS Sharpe ratios is calculated. The performance of a method is judged using the following three criteria:
(1) The central coverage probability (CP): the proportion of samples for which the true difference of the KS Sharpe ratios falls within the 95% confidence interval
(2) The lower tail error rate (LE): the proportion of samples for which the true difference of the KS Sharpe ratios falls below the lower limit of the 95% confidence interval
(3) The upper tail error rate (UE): the proportion of samples for which the true difference of the KS Sharpe ratios falls above the upper limit of the 95% confidence interval
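The simulation design can be sketched as follows; this stripped-down version (our own code) covers only the Wald method, and the parameter values used in the test are illustrative rather than the settings of the paper:

```python
import math
import random

Z_975 = 1.959963984540054  # 97.5th percentile of the standard normal

def ks_sharpe(s2):
    return 1.0 / math.sqrt(math.exp(s2) - 1.0)

def var_psi_hat(s2_1, n1, s2_2, n2):
    # delta-method variance of psi_hat, using Var(sigma2_hat) ~ 2*sigma^4/n
    d = lambda s: -0.5 * math.exp(s) * (math.exp(s) - 1.0) ** -1.5
    return d(s2_1) ** 2 * 2 * s2_1 ** 2 / n1 + d(s2_2) ** 2 * 2 * s2_2 ** 2 / n2

def simulate_wald(n1, n2, mu1, s2_1, mu2, s2_2, reps=10000, seed=1):
    """Estimate CP, LE, and UE for the 95% Wald interval by Monte Carlo."""
    rng = random.Random(seed)
    psi_true = ks_sharpe(s2_1) - ks_sharpe(s2_2)
    cp = le = ue = 0
    for _ in range(reps):
        x = [rng.gauss(mu1, math.sqrt(s2_1)) for _ in range(n1)]
        y = [rng.gauss(mu2, math.sqrt(s2_2)) for _ in range(n2)]
        xbar, ybar = sum(x) / n1, sum(y) / n2
        s1_hat = sum((v - xbar) ** 2 for v in x) / n1   # mle of sigma_1^2
        s2_hat = sum((v - ybar) ** 2 for v in y) / n2   # mle of sigma_2^2
        psi_hat = ks_sharpe(s1_hat) - ks_sharpe(s2_hat)
        half = Z_975 * math.sqrt(var_psi_hat(s1_hat, n1, s2_hat, n2))
        if psi_true < psi_hat - half:
            le += 1   # true value below the lower limit
        elif psi_true > psi_hat + half:
            ue += 1   # true value above the upper limit
        else:
            cp += 1   # interval covers the true value
    return cp / reps, le / reps, ue / reps
```

The signed log-likelihood ratio and proposed methods would slot into the same loop, replacing only the interval construction step.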

The nominal value for the CP criterion is 0.95. Moreover, since the three methods considered in this paper have the same limiting standard normal distribution, the nominal values for the LE and UE criteria are 0.025 and 0.025, respectively. Tables 3–5 record the simulation results.


Table 3: Simulation results.

Setting   Method                        CP       LE       UE

          Wald                          0.9368   0.0179   0.0453
          Signed log-likelihood ratio   0.9115   0.0165   0.0720
          Proposed                      0.9495   0.0226   0.0279

          Wald                          0.9337   0.0191   0.0472
          Signed log-likelihood ratio   0.9191   0.0203   0.0606
          Proposed                      0.9488   0.0217   0.0295

          Wald                          0.9267   0.0281   0.0452
          Signed log-likelihood ratio   0.9191   0.0286   0.0523
          Proposed                      0.9416   0.0304   0.0280


Table 4: Simulation results.

Setting   Method                        CP       LE       UE

          Wald                          0.9416   0.0248   0.0336
          Signed log-likelihood ratio   0.9325   0.0244   0.0431
          Proposed                      0.9498   0.0238   0.0264

          Wald                          0.9394   0.0296   0.0310
          Signed log-likelihood ratio   0.9343   0.0322   0.0335
          Proposed                      0.9501   0.0255   0.0244

          Wald                          0.9366   0.0345   0.0289
          Signed log-likelihood ratio   0.9318   0.0376   0.0306
          Proposed                      0.9452   0.0282   0.0266


Table 5: Simulation results.

Setting   Method                        CP       LE       UE

          Wald                          0.9429   0.0350   0.0221
          Signed log-likelihood ratio   0.9383   0.0363   0.0254
          Proposed                      0.9511   0.0258   0.0231

          Wald                          0.9402   0.0376   0.0222
          Signed log-likelihood ratio   0.9357   0.0427   0.0216
          Proposed                      0.9488   0.0265   0.0247

          Wald                          0.9432   0.0366   0.0202
          Signed log-likelihood ratio   0.9387   0.0413   0.0200
          Proposed                      0.9488   0.0243   0.0269

The central coverage of our proposed method is closer to the nominal 95% level than that of both the Wald and signed log-likelihood ratio methods, even for small sample sizes. Furthermore, our proposed method has extremely accurate and nearly symmetric tail error rates. The tail error rates produced by the other two methods are markedly asymmetric, although they improve as the sample size increases, indicating that these two statistics are converging to their limiting standard normal distribution. From all our additional simulation studies, we conclude that our proposed method is clearly superior and thus recommended for empirical applications.

4. Conclusion

We considered lognormally distributed returns and used the KS Sharpe ratio as a measure of an asset’s expected return relative to its volatility. We proposed a third-order likelihood-based method for the difference between two such ratios. Numerical studies verified the extreme accuracy obtained by the proposed method. When returns are assumed to be lognormally distributed, we advocate the use of our proposed method. It is both easy to implement and extremely accurate regardless of the size of the samples.

Data Availability

The dataset used in the example is from a public domain (downloaded from Yahoo Finance). The other numerical examples in the submitted paper are based on simulation studies, which are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. W. F. Sharpe, “Mutual fund performance,” The Journal of Business, vol. 39, no. 1, pp. 119–138, 1966.
  2. J. Knight and S. Satchell, “A re-examination of Sharpe’s ratio for log-normal prices,” Applied Mathematical Finance, vol. 12, no. 1, pp. 87–100, 2005.
  3. Y. Liu, M. Rekkas, and A. Wong, “Inference for the Sharpe ratio using a likelihood-based approach,” Journal of Probability and Statistics, vol. 2012, Article ID 878561, 24 pages, 2012.
  4. Q. Ji, M. Rekkas, and A. Wong, “Highly accurate inference on the Sharpe ratio for autocorrelated return data,” Journal of Statistical and Econometric Models, vol. 7, no. 1, pp. 21–55, 2018.
  5. Y. Fu, H. Wang, and A. Wong, “Adjusted empirical likelihood method in the presence of nuisance parameters with application to the Sharpe ratio,” Entropy, vol. 20, pp. 313–330, 2018.
  6. J. D. Kalbfleisch, Probability and Statistical Inference Volume 2: Statistical Inference, Springer, Berlin, Germany, 1985.
  7. N. Reid, “Likelihood inference,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 5, pp. 517–525, 2010.
  8. D. R. Cox and D. V. Hinkley, Theoretical Statistics, CRC Press, Boca Raton, FL, USA, 1979.
  9. J. D. Jobson and B. M. Korkie, “Performance hypothesis testing with the Sharpe and Treynor measures,” The Journal of Finance, vol. 36, no. 4, pp. 889–908, 1981.
  10. I. M. Skovgaard, “Likelihood asymptotics,” Scandinavian Journal of Statistics, vol. 28, no. 1, pp. 3–32, 2001.
  11. M. S. Bartlett, “Properties of sufficiency and statistical tests,” Proceedings of the Royal Society of London, vol. 160, pp. 268–282, 1937.
  12. O. E. Barndorff-Nielsen, “Inference on full or partial parameters based on the standardized signed log likelihood ratio,” Biometrika, vol. 73, no. 2, pp. 307–322, 1986.
  13. O. E. Barndorff-Nielsen, “Approximate interval probabilities,” Journal of the Royal Statistical Society: Series B, vol. 52, no. 3, pp. 485–496, 1990.
  14. D. A. S. Fraser and N. Reid, “Ancillaries and third order significance,” Utilitas Mathematica, vol. 7, pp. 33–53, 1995.
  15. A. R. Brazzale, A. C. Davison, and N. Reid, Applied Asymptotics: Case Studies in Small-Sample Statistics, Cambridge University Press, Cambridge, UK, 2007.

Copyright © 2020 J. Qi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
