
A corrigendum for this article has been published.

Advances in Decision Sciences
Volume 2012 (2012), Article ID 150303, 9 pages
Research Article

Comparison of Some Tests of Fit for the Inverse Gaussian Distribution

1School of Mathematical and Physical Sciences, University of Newcastle, NSW 2308, Australia
2Centre for Statistical and Survey Methodology, School of Mathematics and Applied Statistics, University of Wollongong, NSW 2522, Australia
3Department of Mathematical Modelling, Statistics and Bioinformatics, 9000 Gent, Belgium

Received 27 April 2012; Revised 18 July 2012; Accepted 26 July 2012

Academic Editor: Shelton Peiris

Copyright © 2012 D. J. Best et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper gives an empirical investigation of some tests of goodness of fit for the inverse Gaussian distribution.

1. Introduction

The inverse Gaussian distribution is an important statistical model for the analysis of positive data. See, for example, Seshadri [1]. In its standard form the distribution, denoted IG(μ, λ), depends on the shape parameter λ > 0 and the mean μ > 0. Its probability density function is

f(x; μ, λ) = √(λ/(2πx³)) exp{−λ(x − μ)²/(2μ²x)}, x > 0.
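For numerical work the density can be coded directly from this formula. A minimal sketch in Python (the function names and the crude integration check are ours, not from the paper):

```python
import math

def ig_pdf(x, mu, lam):
    """Density of IG(mu, lam): sqrt(lam/(2*pi*x^3)) * exp(-lam*(x-mu)^2/(2*mu^2*x))."""
    if x <= 0.0:
        return 0.0
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

def ig_pdf_mass(mu, lam, upper=200.0, n=200_000):
    """Midpoint-rule integral of the density over (0, upper); should be near 1."""
    h = upper / n
    return sum(ig_pdf((i + 0.5) * h, mu, lam) for i in range(n)) * h
```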

Let X₁, …, Xₙ be a sequence of independent observations. We wish to test H0: the data are distributed as IG(μ, λ) for some μ > 0 and λ > 0 against the alternative that H0 does not hold.

The maximum likelihood estimators are μ̂ = X̄ = Σᵢ Xᵢ/n and λ̂ = n / Σᵢ (1/Xᵢ − 1/X̄). Put φ = λ/μ, with estimator φ̂ = λ̂/μ̂.
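In code these estimators are one-liners; a sketch (function name ours):

```python
def ig_mle(xs):
    """ML estimates for IG(mu, lam): mu_hat is the sample mean and
    1/lam_hat is the average of (1/x_i - 1/mean)."""
    n = len(xs)
    mu_hat = sum(xs) / n
    lam_hat = n / sum(1.0 / x - 1.0 / mu_hat for x in xs)
    return mu_hat, lam_hat
```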

We consider the following test of fit statistics: (i) the smooth test statistic of Ducharme [2], (ii) the Laplace transform test statistic of Henze and Klar [3], (iii) the empirical likelihood ratio test statistic of Vexler et al. [4], (iv) the traditional Anderson-Darling test statistic A², and (v) conventional smooth tests following Rayner et al. [5].

We strongly suggest that p values for all tests be found using the parametric bootstrap. Smooth test statistics for many distributions, and bootstrap p values for their tests of fit, are available online.
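The parametric bootstrap recipe is: estimate the parameters, compute the test statistic, then repeatedly resample from the fitted distribution, refitting each time, and report the proportion of resampled statistics at least as extreme. A generic sketch (all names are ours; the toy exponential model in the note below is only for illustration):

```python
import random

def bootstrap_pvalue(xs, statistic, fit, sample, B=1000, seed=1):
    """Parametric bootstrap p value for a test that rejects for large values.

    statistic(xs, theta)  -- test statistic evaluated at parameter estimate theta
    fit(xs)               -- parameter estimate computed from the data
    sample(n, theta, rng) -- n draws from the fitted model
    """
    rng = random.Random(seed)
    theta_hat = fit(xs)
    t_obs = statistic(xs, theta_hat)
    exceed = 0
    for _ in range(B):
        ys = sample(len(xs), theta_hat, rng)
        if statistic(ys, fit(ys)) >= t_obs:   # refit on each bootstrap sample
            exceed += 1
    return (exceed + 1) / (B + 1)  # add-one correction avoids a zero p value
```

For example, with a toy exponential model one can take fit to be the sample mean, sample to draw via rng.expovariate(1/mean), and statistic to be max(xs)/theta.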

2. Definitions

2.1. Ducharme’s Smooth Test Statistic

The usual approach to constructing a smooth test, as outlined, for example, in Rayner et al. [5], produces inconsistent and less powerful tests here; see Ducharme [2, page 279] and Henze and Klar [3, Tables 1 and 2]. Ducharme [2] suggests testing H0 above by first transforming the data, and states that the transformed variable has a random walk distribution. His statistic is the sum of squares of two components that are, under H0, asymptotically independent and asymptotically standard normal, so that the statistic is asymptotically χ² distributed with two degrees of freedom. Its null distribution thus does not depend on the parameters, unlike those of the Henze and Klar statistic (see Section 2.2) and the Anderson-Darling statistic A². Ducharme [2, page 279] notes that this dependence implies a loss of power for some values the parameters can take. Explicit formulas for the two components are given in Ducharme [2].

A positive feature of smooth tests is that their components can often shed light on how the data differ from the hypothesised distribution. This is somewhat less evident with Ducharme’s test given that he transforms the data. Another positive feature of smooth tests is that their components often give highly focused tests with good power. Ducharme’s test has components that are likely to fulfil this role.

2.2. Henze and Klar’s Laplace Transform Statistic

This is defined using the exponentially scaled complementary error function erfce(x) = exp(x²) erfc(x), in which erfc is the complementary error function. Note that our definition differs from that of Henze and Klar [3, page 428] in one term, as we believe a typographical error was made in their paper. The statistic compares the empirical Laplace transform of suitably scaled data with the Laplace transform of the fitted inverse Gaussian distribution. Tests based on the empirical Laplace transform have produced powerful tests for other distributions, and so it is useful to compare the Henze and Klar statistic with other recently suggested tests.
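Computing erfce naively as exp(x²)·erfc(x) overflows once x exceeds about 26; SciPy's erfcx evaluates the same quantity stably. A sketch, assuming SciPy is available:

```python
import math
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x), evaluated stably

def erfce_naive(x):
    """Direct evaluation; math.exp(x*x) overflows once x*x exceeds about 709."""
    return math.exp(x * x) * math.erfc(x)
```

For moderate x the two routes agree; for large x only erfcx works, approaching 1/(x√π).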

2.3. The Statistic of Vexler et al.

Order the observations so that x(1) ≤ ⋯ ≤ x(n). The statistic of Vexler et al. [4] is a density-based empirical likelihood ratio: sample-entropy density estimates formed from the spacings x(i+m) − x(i−m) are compared with the fitted inverse Gaussian density, and the result is minimized over window sizes m with 1 ≤ m ≤ n^(1−δ), where δ can be taken to be 0.5. Observe that, for small window sizes, the statistic can take an infinite value when there are tied data. Vexler et al. [4] do not appear to note this. For the Poisson alternative in Table 1 the statistic is often infinite. Choi and Kim [6] show that an entropy-type statistic like this has good power for the Laplace distribution, and so it is of interest to see how the entropy method works with a skewed distribution.
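As a sketch of how such a statistic behaves with ties, here is a density-based empirical-likelihood ratio of the same general type; this is our illustrative reconstruction, not necessarily the exact definition of Vexler et al. Spacing-based density estimates 2m/(n(x(i+m) − x(i−m))) are compared with a fitted density, minimizing over the window size m:

```python
import math

def entropy_el_log_ratio(xs, pdf, delta=0.5):
    """Log empirical-likelihood ratio of the general entropy type (our sketch).

    Spacing-based density estimates are compared with the fitted density `pdf`;
    the window size m runs over 1 <= m <= n**(1 - delta).  A zero spacing
    (tied data) makes the estimate, and possibly the statistic, infinite.
    """
    xs = sorted(xs)
    n = len(xs)
    best = math.inf
    for m in range(1, max(1, int(n ** (1.0 - delta))) + 1):
        total = 0.0
        for i in range(n):
            spacing = xs[min(i + m, n - 1)] - xs[max(i - m, 0)]
            if spacing <= 0.0:
                total = math.inf   # tied observations give a zero spacing
                break
            total += math.log(2.0 * m / (n * spacing)) - math.log(pdf(xs[i]))
        best = min(best, total)
    return best
```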

Table 1: Powers (in percentages) of goodness of fit tests for the inverse Gaussian distribution for two choices of the parameters, (a) and (b).
2.4. The Anderson-Darling Statistic

Again order the data from smallest to largest to obtain x(1) ≤ ⋯ ≤ x(n) and take z_i = F(x(i); μ̂, λ̂), where F is the distribution function for the IG distribution. Then the Anderson-Darling statistic is

A² = −n − (1/n) Σᵢ₌₁ⁿ (2i − 1){ln z_i + ln(1 − z_(n+1−i))}.

The Anderson-Darling statistic has stood the test of time as a useful general option for tests of fit for many distributions. Have newer tests improved on its power performance?
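A direct coding of A², together with the standard closed form for the inverse Gaussian distribution function F(x; μ, λ) = Φ(√(λ/x)(x/μ − 1)) + e^(2λ/μ) Φ(−√(λ/x)(x/μ + 1)), is sketched below (function names ours):

```python
import math

def norm_cdf(z):
    """Standard normal distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def ig_cdf(x, mu, lam):
    """Closed-form IG(mu, lam) distribution function.
    Note: exp(2*lam/mu) can overflow when lam/mu is very large."""
    if x <= 0.0:
        return 0.0
    a = math.sqrt(lam / x)
    return norm_cdf(a * (x / mu - 1.0)) + \
        math.exp(2.0 * lam / mu) * norm_cdf(-a * (x / mu + 1.0))

def anderson_darling(xs, cdf):
    """A^2 = -n - (1/n) * sum_i (2i-1) [ln z_i + ln(1 - z_(n+1-i))]."""
    zs = sorted(cdf(x) for x in xs)
    n = len(zs)
    s = sum((2 * i + 1) * (math.log(zs[i]) + math.log(1.0 - zs[n - 1 - i]))
            for i in range(n))
    return -n - s / n
```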

2.5. Conventional Smooth Test Third and Fourth Components

Henze and Klar [3] consider the test based on the conventional second-order component and show it has poor power for some alternatives. Ducharme [2] notes that these conventional smooth tests, discussed, for example, in Rayner et al. [5], can be inconsistent. However we decided to include tests based on the third- and fourth-order components in our comparisons.

In general the rth-order component is V̂_r = Σⱼ₌₁ⁿ h_r(X_j; μ̂, λ̂)/√n, in which h_r is an orthonormal polynomial of degree r on the inverse Gaussian distribution. The polynomials h₃ and h₄ can be found by Gram-Schmidt orthonormalization of 1, x, x², x³, x⁴ with respect to the IG(μ, λ) density, using the moments of the distribution.
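Rather than transcribing coefficient formulas, the orthonormal polynomials can be generated numerically: form the Gram (moment) matrix of 1, x, …, x^r under the fitted density, take its Cholesky factor L, and read the coefficients of h_0, …, h_r from inv(L). A sketch assuming NumPy and SciPy (names ours):

```python
import math
import numpy as np
from scipy.integrate import quad

def ig_pdf(x, mu, lam):
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

def orthonormal_coeffs(mu, lam, degree):
    """Column r of the result holds the coefficients (in 1, x, ..., x^r)
    of the degree-r orthonormal polynomial h_r on IG(mu, lam)."""
    moments = [quad(lambda x, k=k: x ** k * ig_pdf(x, mu, lam), 0.0, np.inf)[0]
               for k in range(2 * degree + 1)]
    gram = np.array([[moments[i + j] for j in range(degree + 1)]
                     for i in range(degree + 1)])
    L = np.linalg.cholesky(gram)   # gram = L @ L.T
    return np.linalg.inv(L).T      # rows of inv(L) orthonormalize the monomials

def component(xs, mu, lam, r):
    """r-th order smooth component: sum_j h_r(x_j) / sqrt(n)."""
    coef = orthonormal_coeffs(mu, lam, r)[:, r]
    n = len(xs)
    return sum(sum(c * x ** k for k, c in enumerate(coef)) for x in xs) / math.sqrt(n)
```

For a quick check, the degree-1 polynomial must be (x − μ)/σ with σ² = μ³/λ, and the component vanishes on data lying exactly at the mean.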

The parameters μ and λ can be estimated by maximum likelihood (ML) using the estimators of Section 1, giving components based on μ̂ and λ̂. We also looked at versions of the third- and fourth-order components in which the parameters are estimated using method of moments (MOM) estimators; since the IG(μ, λ) distribution has mean μ and variance μ³/λ, these are μ̃ = X̄ and λ̃ = X̄³/S², where S² is the sample variance.

As indicated previously, smooth tests can indicate in terms of moments how data and the hypothesised distribution differ. This feature, and good power in previous studies for testing for other distributions, prompted us to include conventional smooth tests in our comparisons.

3. Sizes and Powers

In this section wherever possible we have used IMSL routines to generate random deviates. Calculations were done using double precision arithmetic and FORTRAN code. For the inverse Gaussian, random deviates were found as in Michael et al. [7].
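The generator of Michael et al. [7] uses a transformation with multiple roots: square a standard normal, solve for the smaller root, and accept it with probability μ/(μ + x), otherwise return μ²/x. A sketch in Python:

```python
import math
import random

def ig_deviate(mu, lam, rng):
    """One IG(mu, lam) random deviate (Michael, Schucany, and Haas, 1976)."""
    nu = rng.gauss(0.0, 1.0) ** 2                      # chi-squared(1) variate
    x = mu + mu * mu * nu / (2.0 * lam) - \
        (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * nu + (mu * nu) ** 2)
    if rng.random() <= mu / (mu + x):                  # choose between the two roots
        return x
    return mu * mu / x
```

Large samples should have mean near μ and variance near μ³/λ.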

We examine a similar range of alternatives to that given by Henze and Klar [3] so that comparisons can be made with the other statistics in their Table 1(a). We note that (i) the probability density function given there for the lognormal alternative requires a minor correction, and (ii) one alternative is the standard exponential distribution. Note also that in Henze and Klar [3, Table 1] several of the tabulated statistics are equivalent.

It appears that the tests based on the Laplace transform and the Anderson-Darling statistic generally do well, while the test of Vexler et al. is only competitive for the symmetric uniform alternative. The smooth tests based on the third- and fourth-order components with ML estimation, like the second-order test in Henze and Klar [3], were not competitive. The MOM-based versions were generally even less competitive. This is unfortunate as these components help describe the data and this facility is not available with the other tests. All powers were calculated using the parametric bootstrap.

The alternatives in Table 1 are defined in Henze and Klar [3], although our notation differs slightly from theirs. There is, however, one exception, and that is the Poisson-type alternative POI(3), which has probability function e^(−θ)θ^x/x! with θ = 3 here; if a random value is zero we replace it by a small positive value, since zero is outside the support of the inverse Gaussian. This alternative was suggested by the comment in Henze and Klar [3] that for the shelf life data they examine, the Anderson-Darling and the other EDF statistics give much stronger evidence against the null hypothesis than the empirical Laplace transform statistics. A feature of the shelf life data was that there were tied observations, not something one would expect from an inverse Gaussian distribution. The POI(3) alternative gives parametric bootstrap simulated samples with many ties and, as can be seen in Table 1, the power of the Anderson-Darling test is much greater than those of the other tests. The test of Vexler et al. classifies infinite values as rejections of the null hypothesis. We have no explanation as to why the Anderson-Darling test is quite powerful for tied data when the null hypothesis specifies an inverse Gaussian distribution.

In Table 1(a) our powers for the Laplace transform and Anderson-Darling tests are very similar to those obtained by Henze and Klar [3]. Table 1(b) gives powers for the same alternatives as Table 1(a) but with a different value of the shape parameter, as this choice is commonly used in practice. The relative performance of the tests in Tables 1(a) and 1(b) is similar. The traditional smooth tests are sometimes particularly poor in Table 1(b).

4. The Approach to the Asymptotic χ² Distribution

An advantage of the smooth test statistics and their components is that under the null hypothesis they have asymptotic χ² distributions. Thus for larger sample sizes approximate p values can be found using the χ² distribution. However Table 2 indicates that, to give actual test sizes close to the nominal 5%, a sample size of 200 might be needed for some of the statistics, while an even greater sample size might be needed for others. This ties in with the suggestion, made in Section 1, to use the parametric bootstrap.

Table 2: Empirical null 95% points of the smooth test statistics and their components for a range of sample sizes.

We did not expect the conventional smooth test statistics to be asymptotically χ² distributed; see, for example, Rayner et al. [5, Section 9.3]. As an illustration of this, Table 2 shows that the empirical 95% points of one of the components do not converge to 3.84, the 95% point of the χ² distribution with one degree of freedom. Because of the MOM estimators used, the numerator of this component can be written in terms of sample moments; dividing its square by its variance, which can be found by the delta method, gives a statistic that should be asymptotically χ² with one degree of freedom. No powers for this rescaled statistic are shown in Table 1 as they are similar to those already given.

5. Examples

(i) Failure Times
Proschan [8] has given failure times for air conditioning in Boeing 720 jets. For jet number 7912 the 30 times were 23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95. Does the inverse Gaussian provide a good model for these times? We find μ̂ = 59.6 and λ̂ = 13.8 approximately. One of the tests has p value 0.152, while two others have p values 0.0007 and 0.012. If the significance level is 0.05, say, it makes a difference which test is used: the latter two tests are significant but the first is not. If the level is 0.01 only the test with p value 0.0007 is significant. The remaining statistics and their p values were also computed. We see that the tests based on the third- and fourth-order components are significant at the 5% level; the latter suggests the lack of fit is due to kurtosis differences between the model and the data. See Figure 1. Observe that in Figures 1 and 2 the height of the histogram bars is class frequency divided by (number of observations × class width), and that this height is labelled “density” so as to be on the same scale as the probability density curve. Figure 1 uses MOM estimators for this curve.
Aside from that, we note that in Henze and Klar [3, Table 3] the value 3.7 should be 3.0. This does not affect the conclusions of Henze and Klar for this data set.

Figure 1: Air conditioner failure times in Boeing jets.
Figure 2: Storm precipitation at Jug bridge, MD, USA.

(ii) Precipitation at Jug Bridge
Ang and Tang [9, page 266] consider precipitation, in inches, from storms at the Jug bridge in MD, USA; their data are listed there. Figure 2 indicates the inverse Gaussian fit is marginal: one of the tests gives a p value of 0.09, and p values for the other statistics were also computed. As the test based on the third-order component is most critical of the IG hypothesis, the data may be more symmetric than the fitted inverse Gaussian. The inverse Gaussian curve in Figure 2 uses ML estimators.
In passing we note that the exponential distribution with parameter 0.463 does not provide a good fit to the data: when testing for an exponential distribution we find an approximate p value of 0.03. Visual inspection of Figure 2 might otherwise have suggested that the exponential model was appropriate.

6. Conclusion

The tests based on the Laplace transform and the Anderson-Darling statistic do well in the power comparisons, while the test based on the fourth-order component indicates possible kurtosis differences from the inverse Gaussian distribution in the failure time example. For the precipitation data the test based on the third-order component is most critical of the fit of the model. In fact, apart from the MOM-based smooth tests, all of the tests studied here had something to recommend them: reasonable power or interpretability. No test was uniformly superior to the others.


The authors thank a referee for a number of constructive comments.


  1. V. Seshadri, The Inverse Gaussian Distribution: Statistical Theory & Applications, Springer, New York, NY, USA, 1998.
  2. G. R. Ducharme, “Goodness-of-fit tests for the inverse Gaussian and related distributions,” Test, vol. 10, no. 2, pp. 271–290, 2001.
  3. N. Henze and B. Klar, “Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform,” Annals of the Institute of Statistical Mathematics, vol. 54, no. 2, pp. 425–444, 2002.
  4. A. Vexler, G. Shan, S. Kim, W. M. Tsai, L. Tian, and A. D. Hutson, “An empirical likelihood ratio based goodness-of-fit test for inverse Gaussian distributions,” Journal of Statistical Planning and Inference, vol. 141, no. 6, pp. 2128–2140, 2011.
  5. J. C. W. Rayner, O. Thas, and D. J. Best, Smooth Tests of Goodness of Fit: Using R, Wiley, Singapore, 2nd edition, 2011.
  6. B. Choi and K. Kim, “Testing goodness-of-fit for Laplace distribution based on maximum entropy,” Statistics, vol. 40, no. 6, pp. 517–531, 2006.
  7. J. R. Michael, W. R. Schucany, and R. W. Haas, “Generating random variates using transformations with multiple roots,” American Statistician, vol. 30, no. 2, pp. 88–90, 1976.
  8. F. Proschan, “Theoretical explanation of observed decreasing failure rate,” Technometrics, vol. 5, no. 3, pp. 375–383, 1963.
  9. A. Ang and W. H. Tang, Probability Concepts in Engineering, Wiley, New York, NY, USA, 2nd edition, 2007.