Abstract and Applied Analysis

Volume 2014 (2014), Article ID 396875, 9 pages

http://dx.doi.org/10.1155/2014/396875

## A Variance Shift Model for Detection of Outliers in the Linear Measurement Error Model

^{1}Department of Statistics, Shahid Chamran University, Ahvaz, Iran
^{2}Department of Biostatistics, Tarbiat Modares University, Tehran, Iran
^{3}Department of Statistics, Islamic Azad University, Science and Research Branch, Fars, Iran

Received 27 May 2014; Accepted 5 August 2014; Published 14 September 2014

Academic Editor: Allan Peterson

Copyright © 2014 Babak Babadi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present a variance shift model for a linear measurement error model using the corrected likelihood of Nakamura (1990). This model assumes that a single outlier arises from an observation with an inflated variance. The corrected likelihood ratio and score test statistics are proposed to determine whether the ith observation has an inflated variance. A parametric bootstrap procedure is used to obtain the empirical distributions of the test statistics, and a simulation study shows the performance of the proposed tests. Finally, a real data example is given for illustration.

#### 1. Introduction

Outliers are observations that appear inconsistent with the rest of the data set and can have a profoundly destructive influence on statistical analysis. To detect such observations in linear models, different approaches have been suggested, among them the case-deletion and variance shift models. The first approach is based on the assumption that outliers result from a shift in the mean of the contaminated observations (see Barnett and Lewis [1] or Weisberg [2]), while the second assumes that an outlier arises from an error term with an inflated variance (Cook and Weisberg [3]). Cook et al. [4] indicated that the maximum likelihood estimates of the outlier position under the two methods can differ, unless the largest absolute studentized residual corresponds to the largest absolute residual. Using residual maximum likelihood (REML), Thompson [5] showed that the residual variance and outlier position are the same under both methods.

In linear mixed models, the case-deletion method, the variance shift outlier model, and related diagnostics have been studied widely. Christensen et al. [6] presented case-deletion diagnostics for both fixed effects and variance components. Banerjee and Frees [7] introduced case-deletion diagnostics for fixed effects and random subject effects in linear longitudinal models. Xuping and Bocheng [8] presented a unified diagnostic method for linear mixed models based upon the joint likelihood given by Robinson [9]; they showed that the parameter estimates under the case-deletion method are equivalent to those under a mean shift outlier model. Haslett and Dillane [10] proved a "delete = replace" identity in linear models and applied it to deletion diagnostics for estimators of variance components. Zewotir and Galpin [11] provided computationally inexpensive routine diagnostic tools for fixed effects, random effects, and variance components. Li et al. [12] considered subset deletion diagnostics for fixed effects, random effects, and one variance component in varying coefficient mixed models. Gumedze et al. [13] extended the variance shift outlier model (VSOM) to the linear mixed model.

In linear regression models, the independent variables are often subject to nonnegligible errors, and it is then more appropriate to consider measurement error models (see Fuller [14] and Stefanski [15]). In measurement error models, however, the ordinary maximum likelihood (ML) estimates lose consistency. To correct the bias in parameter estimation, a method is available in which the score function itself is corrected for measurement errors. This method is based on the corrected log-likelihood of Nakamura [16] (see also Giménez and Bolfarine [17] for more discussion).

Previous work on diagnostic methods for measurement error models includes Kelly [18] and Wellman and Gunst [19]. Zhong et al. [20] obtained case-deletion and mean shift outlier models for linear measurement error models based upon the corrected likelihood of Nakamura [16]. Rasekh [21] studied multiple outlier detection in multivariate functional measurement error models based on a suitable definition of standardized residuals. Giménez and Galea [22] studied influence measures on corrected score estimators in functional heteroscedastic measurement error models, and Giménez and Patat [23] developed a local influence study of functional comparative calibration models with replicated data.

In this paper, we concentrate on the variance shift model of Cook et al. [4] for the linear measurement error model, using the corrected likelihood [16]. In Section 2, we present the basis of the corrected score method and obtain the estimates of the parameters of the model. In Section 3, a variance shift model for the linear measurement error model is derived and the joint corrected maximum likelihood estimates are characterized. In Section 4, we develop the likelihood ratio and score test statistics; furthermore, a parametric bootstrap procedure is used to generate the empirical distributions of these statistics. In Section 5, a simulation study verifying the performance of the proposed test statistics is reported, and finally an illustrative example is given in Section 6.

#### 2. Corrected Log-Likelihood of Measurement Error Models

Consider the linear measurement error model

y = Xβ + ε,  W = X + U,  (1)

where ε ~ N_n(0, σ²I_n), X is an n × p matrix of unobservable regressors, β is a p × 1 vector of unknown parameters, and σ² is the unknown common variance. The matrix W is the observed value of X, contaminated with the measurement error matrix U, whose rows are independently normally distributed with mean zero and covariance matrix Λ. Furthermore, ε and U are independent, Λ is a p × p matrix of known values with nonnegative diagonal elements (Fuller [14]), and I_n is the n × n identity matrix. Model (1) is known as the functional linear measurement error model.

The log-likelihood of (β, σ²) based on the unobservable X is given by

l(β, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) (y − Xβ)'(y − Xβ).

If we replace X by W without accounting for the measurement errors, then the ML estimates are in general not consistent. To correct for the effects of measurement errors on parameter estimation, we use the corrected score method proposed by Nakamura [16]. This method proposes a corrected log-likelihood l*(β, σ²) which satisfies

E[l*(β, σ²) | X] = l(β, σ²),

where the conditional mean is taken with respect to the distribution of W given X. For model (1), the appropriate corrected log-likelihood suggested by Nakamura [16] is

l*(β, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) {(y − Wβ)'(y − Wβ) − nβ'Λβ}.

By solving the equations ∂l*/∂β = 0 and ∂l*/∂σ² = 0, the corrected score estimates of β and σ², respectively, are given by (see [16])

β̂ = (W'W − nΛ)⁻¹ W'y,  σ̂² = (1/n) {(y − Wβ̂)'(y − Wβ̂) − nβ̂'Λβ̂}.
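As a minimal numerical sketch (not the authors' code), the corrected score estimates can be computed directly from closed forms of this type. The data-generating values below are illustrative, and the formulas assume Nakamura-type corrections with a known measurement error covariance Λ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a functional linear measurement error model:
#   y = X @ beta + eps,  W = X + U   (all settings are illustrative)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(2.0, 1.0, size=n)])
beta_true = np.array([1.0, 0.5])
sigma2_true = 0.25
Lam = np.diag([0.0, 0.1])          # known measurement error covariance (intercept error-free)
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2_true), size=n)
W = X + rng.multivariate_normal(np.zeros(p), Lam, size=n)

# Corrected score estimates (Nakamura-type closed forms, assumed here):
#   beta_hat   = (W'W - n*Lam)^{-1} W'y
#   sigma2_hat = [(y - W beta_hat)'(y - W beta_hat) - n beta_hat' Lam beta_hat] / n
A = W.T @ W - n * Lam
beta_hat = np.linalg.solve(A, W.T @ y)
r = y - W @ beta_hat
sigma2_hat = (r @ r - n * beta_hat @ Lam @ beta_hat) / n

# Naive OLS on W for comparison; its slope is attenuated by the measurement error
beta_ols = np.linalg.solve(W.T @ W, W.T @ y)
```

On simulated data of this kind the naive OLS slope computed from W is biased toward zero, while the corrected estimate remains approximately unbiased.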

#### 3. A Variance Shift Model in the Linear Measurement Error Model

Suppose the ith observation is suspected of having an inflated error variance. A variance shift model for this observation takes the form

y = Xβ + d_i δ + ε,  W = X + U,  (6)

where d_i is an n × 1 vector with value 1 in the ith element and zero elsewhere and δ is an unknown random coefficient of the form δ ~ N(0, ωσ²). Model (6) can be considered as a linear mixed measurement error model in which δ is a random effect with variance ωσ², and the covariance matrix of the data is

Cov(y) = σ²(I_n + ω d_i d_i') = σ²V_ω,

where V_ω = I_n + ω d_i d_i', so that Var(y_i) = σ²(1 + ω) and Var(y_j) = σ², for j ≠ i.
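Writing V_ω = I_n + ω d_i d_i' for the scaled covariance matrix of the data, and noting that d_i'd_i = 1, the Sherman-Morrison formula gives the inverse and determinant of this rank-one update in closed form; these standard identities underlie the simplifications used in the derivations that follow:

```latex
V_\omega^{-1} = \left( I_n + \omega\, d_i d_i^{\top} \right)^{-1}
              = I_n - \frac{\omega}{1+\omega}\, d_i d_i^{\top},
\qquad
\lvert V_\omega \rvert = 1 + \omega .
```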

The log-likelihood and the corrected log-likelihood for this model are given by

l(β, σ², ω) = −(n/2) log(2πσ²) − (1/2) log|V_ω| − (1/(2σ²)) (y − Xβ)'V_ω⁻¹(y − Xβ),

l*(β, σ², ω) = −(n/2) log(2πσ²) − (1/2) log|V_ω| − (1/(2σ²)) {(y − Wβ)'V_ω⁻¹(y − Wβ) − tr(V_ω⁻¹) β'Λβ},  (8)

respectively, which satisfy E[l*(β, σ², ω) | X] = l(β, σ², ω). Now, for ω fixed, the corrected score estimates of β and σ² are obtained by differentiating the corrected log-likelihood of the variance shift model given in (8) with respect to β and σ² [24]. Then we have

β̂_ω = (W'V_ω⁻¹W − tr(V_ω⁻¹)Λ)⁻¹ W'V_ω⁻¹y,  (10)

σ̂²_ω = (1/n) {(y − Wβ̂_ω)'V_ω⁻¹(y − Wβ̂_ω) − tr(V_ω⁻¹) β̂_ω'Λβ̂_ω},  (11)

respectively. In the following theorem we derive asymptotic expressions for β̂_ω and σ̂²_ω as functions of β̂ and σ̂², given no outliers.

Theorem 1. *For model (6), we have, asymptotically,

β̂_ω ≈ β̂ − [ω/(1 + ω(1 − m_ii))] (W'W − nΛ)⁻¹ w_i r_i,

σ̂²_ω ≈ σ̂² [1 − ω(1 − m_ii) t_i² / (n(1 + ω(1 − m_ii)))],  (12)

where m_ii is the ith diagonal element of M = W(W'W − nΛ)⁻¹W', r_i = y_i − w_i'β̂ is the ith residual, and t_i = r_i/(σ̂(1 − m_ii)^{1/2}) is the ith studentized residual of the model [25].*

*Proof. *Substituting V_ω⁻¹ = I_n − (ω/(1+ω)) d_i d_i' and tr(V_ω⁻¹) = n − ω/(1+ω) into the estimate of β given in (10), and using the asymptotic limits collected in the appendix, the two terms on the right-hand side simplify. Multiplying the resulting matrix expression out and simplifying, the stated expression for β̂_ω is derived. Next, substituting β̂_ω into the estimate of σ² given in (11) and simplifying in the same way, σ̂²_ω is obtained.

In the rest of the paper, we define β̃ = lim_{ω→∞} β̂_ω and σ̃² = lim_{ω→∞} σ̂²_ω. It is obvious that for ω = 0, β̂_0 = β̂ and σ̂²_0 = σ̂², and for ω → ∞ we have

β̃ = β̂ − (1 − m_ii)⁻¹ (W'W − nΛ)⁻¹ w_i r_i,  σ̃² = σ̂² (1 − t_i²/n).

*Remark 2. *β̃ and σ̃² are the corrected score estimates of β and σ², respectively, in a mean shift outlier model for the ith observation, given by [20, 25].

Now the corrected log-likelihood (8) evaluated at β̂_ω and σ̂²_ω is, except for an additive constant, proportional to

l_p(ω) = −(n/2) log σ̂²_ω − (1/2) log(1 + ω).

Using a Taylor series expansion, an approximation to l_p(ω) analogous to the one given by Cook et al. [4] can be obtained. The existence of a value of ω, say ω̂_i, over the range ω ≥ 0 that maximizes l_p(ω) was proved by Cook et al. [4].
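To illustrate how a maximizer over ω can be located numerically, the following sketch maximizes a profile corrected log-likelihood over a grid of ω values. The closed form used for the profiled variance inside `profile_loglik` is an approximate expression assumed for illustration, and the inputs `t2` (squared studentized residual) and `m` (leverage-type diagonal element) are hypothetical values, not taken from any data set:

```python
import numpy as np

def profile_loglik(omega, n, sigma2_hat, t2, m):
    """Profile corrected log-likelihood in omega (up to an additive constant).

    Uses an assumed approximate closed form for the profiled variance:
        s2(omega) = sigma2_hat * (1 - omega*(1-m)*t2 / (n*(1 + omega*(1-m))))
    t2 and m play the roles of a squared studentized residual and a
    leverage diagonal; both are hypothetical here.
    """
    s2_w = sigma2_hat * (1.0 - omega * (1.0 - m) * t2 / (n * (1.0 + omega * (1.0 - m))))
    return -0.5 * n * np.log(s2_w) - 0.5 * np.log(1.0 + omega)

# Grid search over omega >= 0 for one suspect observation (illustrative values)
n, sigma2_hat, t2, m = 50, 1.0, 9.0, 0.1
grid = np.linspace(0.0, 50.0, 5001)
lp = profile_loglik(grid, n, sigma2_hat, t2, m)
omega_hat = grid[np.argmax(lp)]
```

A golden-section search or a Newton step could replace the grid, but a grid over ω ≥ 0 is robust when the profile is nearly flat.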

#### 4. Analogue of Likelihood Ratio Test and Score Test Statistics

In this section we derive analogues of the likelihood ratio test and score test statistics for testing the null hypothesis that the ith observation is not unusual (ω = 0) against the alternative that it has an inflated variance (ω > 0).

##### 4.1. Corrected Likelihood Ratio Test

Let l_p(ω̂_i) and l_p(0) be the profile corrected log-likelihood evaluated at ω = ω̂_i and at ω = 0 (the null hypothesis), respectively. The corrected likelihood ratio test statistic is defined as

CLRT_i = 2 {l_p(ω̂_i) − l_p(0)}.

##### 4.2. Score Test Statistic

In order to derive the score test statistic, we only require the estimates of the parameters under the null hypothesis. In the following theorem, based on the observed information matrix, we obtain this test statistic under the hypothesis ω = 0.

Theorem 3. *The score test statistic Sc_i for the ith observation, based on the observed information matrix for testing ω = 0, is given by

Sc_i = U_i(θ̂)² [I⁻¹(θ̂, 0)]_ωω,

where θ̂ = (β̂, σ̂²), U_i(θ̂) is the corrected score ∂l*/∂ω evaluated at (θ̂, 0), and [I⁻¹(θ̂, 0)]_ωω is the lower right corner element of the inverse of the corrected observed information matrix.*

*Proof. *Let I(θ, ω) be the corrected observed information matrix of l* for θ = (β, σ²) and ω; then the score test statistic (see [26]) for testing ω = 0 against ω > 0 is Sc_i = U_i(θ̂)² [I⁻¹(θ̂, 0)]_ωω, where [I⁻¹(θ̂, 0)]_ωω is the lower right corner element of the inverse of I(θ̂, 0). Substituting β̂, σ̂², and ω = 0 into the elements of this matrix and simplifying, Sc_i reduces to an explicit function of the studentized residual t_i, and the result is achieved.

Because the null hypothesis ω = 0 lies on the boundary of the parameter space, the standard asymptotic chi-square theory does not apply in this case; under such boundary conditions the limiting distribution is typically a mixture of chi-square distributions (Self and Liang [27]). Therefore, a parametric bootstrap procedure can be used to approximate the distributions of the likelihood ratio and score test statistics (see Section 4.3 and [13]).

##### 4.3. Empirical Distribution

Based on Gumedze et al. [13], the following parametric bootstrap procedure for the test statistics CLRT_i and Sc_i can be used to derive their empirical distributions under the hypothesis that no outliers exist in the observations:

*Step 1.* Fit model (1) to the data and calculate the estimates β̂, σ̂², and X̂ (see Zare and Rasekh [28]), where X̂ is the estimate of the unobserved regressor matrix X.

*Step 2a.* Generate a new data vector from

y* = X̂β̂ + ε*,  W* = X̂ + U*,

where ε* is randomly generated as N_n(0, σ̂²I_n) and U* is randomly generated with independent rows N_p(0, Λ). Fit model (1) to (y*, W*).

*Step 2b.* Compute the test statistic (CLRT_i or Sc_i) for i = 1, …, n by fitting a variance shift model to the simulated data for each observation in turn, and save the order statistics of the set {CLRT_i} or {Sc_i}.

*Step 3.* Repeat Steps 2a and 2b B times, for B acceptably large, for example, B = 10000. Therefore, an empirical distribution of size B is generated for each order statistic.

*Step 4.* Calculate the 100(1 − α)th percentile of each order statistic for a test of level α.

The percentile of the kth order statistic can be considered as a threshold for the kth largest value of the test statistic from the original data; if the k largest values of the test statistic from the original data all exceed their respective thresholds, then it is concluded that these observations are all outliers.
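Steps 1-4 can be sketched as follows. This is a simplified illustration, not the paper's implementation: squared scaled residuals stand in for the CLRT and score statistics, W is used as a proxy for the unobserved X in Step 2a, and all numerical settings are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def corrected_fit(y, W, Lam):
    """Corrected score estimates for model (1) (Nakamura-type forms assumed)."""
    n = len(y)
    beta = np.linalg.solve(W.T @ W - n * Lam, W.T @ y)
    r = y - W @ beta
    s2 = (r @ r - n * beta @ Lam @ beta) / n
    return beta, max(s2, 1e-8)   # guard against a negative corrected variance

def per_obs_stat(y, W, Lam):
    """Per-observation outlier statistic. Squared scaled residuals stand in
    for the paper's CLRT / score statistics, purely to illustrate the loop."""
    beta, s2 = corrected_fit(y, W, Lam)
    return (y - W @ beta) ** 2 / s2

# Observed data (illustrative, generated under the null of no outliers)
n, p = 60, 2
Lam = np.diag([0.0, 0.05])
X = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, size=n)])
W = X + rng.multivariate_normal(np.zeros(p), Lam, size=n)
y = X @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.5, size=n)

# Step 1: fit the null model
beta0, s20 = corrected_fit(y, W, Lam)

# Steps 2a-3: bootstrap the order statistics of the per-observation statistics.
# W stands in for the unobserved X when generating new data (an approximation).
B = 200
order_stats = np.empty((B, n))
for b in range(B):
    y_star = W @ beta0 + rng.normal(0.0, np.sqrt(s20), size=n)
    W_star = W + rng.multivariate_normal(np.zeros(p), Lam, size=n)
    order_stats[b] = np.sort(per_obs_stat(y_star, W_star, Lam))

# Step 4: 95th percentile of each order statistic gives per-rank thresholds
thresholds = np.percentile(order_stats, 95, axis=0)
observed = np.sort(per_obs_stat(y, W, Lam))
flagged = observed > thresholds   # k-th largest statistic vs its own threshold
```

In practice B would be much larger (e.g., 10000) and the per-observation statistic would be the CLRT or score statistic from the fitted variance shift model.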

#### 5. Simulation Study

A parametric bootstrap simulation study is carried out to demonstrate the empirical performance of the proposed test statistics in terms of the probability of a type I error and the power for a single shifted unit.

The response variable is generated from model (1), with a design matrix whose first column is a vector all of whose elements are 1's, and with W generated in accordance with the measurement error structure. We consider several combinations for the simulation: sample sizes of 50 or 100, together with different choices of the regression parameters, the error variance, and the measurement error covariance. The simulation study was conducted using statistical software, and the codes are available from the second author upon request.

For each simulated data set, the CLRT and the score test statistics were calculated for the first observation; the choice of the first observation was arbitrary. To generate an empirical distribution of the test statistic under the null hypothesis, 1100 data sets were simulated from the fitted model, using the corrected estimates of β, σ², and X from the original fit. The probability of a type I error for a given test statistic was estimated as the number of data sets for which the test statistic exceeded the 95th percentile of the empirical distribution, divided by 1100 [13].

The CLRT and the score test statistics were computed for a variance shift model for the first observation of each simulated data set, and the 95th percentiles of the empirical distribution of each test statistic were used as threshold values for the statistics observed on the original data sets. The empirical probabilities of type I error, based on thresholds derived from the empirical distribution under the null hypothesis, are reported for the corrected likelihood ratio and score test statistics (Tables 1, 2, and 3). The results in these tables indicate that, in general, the type I error probabilities of both the CLRT and the score test statistics are close to the nominal value of 0.05.

In order to assess the relative sensitivity of the CLRT and score test statistics, we introduce the shift values 1, 3, and 5 for the first observation; again, for each combination of parameters, 1100 data sets are generated from model (6) with δ = 1, 3, or 5, where d_1 is an n × 1 vector with value 1 in the first element and zero elsewhere. The CLRT, the score test statistic, and their empirical distributions were calculated as for the type I error probabilities above, and the power of the test statistics was thereby derived. The results in Tables 1–3 show that, in general, the power of the CLRT and score test statistics increases as the displacement δ increases. Moreover, the power of the test statistics also increases with the sample size. These tables also show that the CLRT and score test statistics are nearly identical in empirical type I error probability and power.

#### 6. Example: Concrete Compressive Strength Data

These data were given by Wellman and Gunst [19] and contain compressive strength measurements of 41 samples of concrete. It was desired to use a linear regression model to predict the compressive strength of concrete 28 days after pouring from the strength measurements taken two days after pouring. Zhong et al. [20] analyzed this data set using the linear measurement error model with a specified Λ; the zero diagonal element in Λ corresponds to the constant term, a predictor variable measured without error. They indicated that sample 21 exhibits a strong influence on the fitted model. Here we consider a variance shift measurement error model for this data set.

Figures 1, 2, and 3 show plots of the squared studentized residuals of the data under model (1), the estimates of the variance shift parameter ω, and the estimated variance under model (6), versus case numbers, respectively. From these figures it is obvious that case 21 stands out as a possible outlier, with relatively large values of the squared studentized residual and ω̂ and a small estimated variance.

Next, the corrected likelihood ratio and score test statistics were calculated for each observation under model (6), and then 10000 simulated data sets were generated from the fitted model under the null hypothesis ω = 0. In each simulation, a variance shift model was fitted for each observation, and the test statistics were sorted and used to generate the empirical distribution of the order statistics for each test [13]. Figures 4 and 5 give plots of the test statistics from the real data and the 95th percentiles from the empirical distributions of the first, second, and third largest values for each test statistic. These figures show that the statistic for observation 21 is larger than the 95th percentile of the distribution of the corresponding order statistic.

Finally, the fitted lines of the measurement error model and of the variance shift measurement error model for case 21 are shown in Figure 6.

#### 7. Conclusions

We extended the variance shift model to the linear measurement error model based on the corrected likelihood of Nakamura [16]. We derived approximate estimates of the parameters of the proposed model under the variance shift and showed that, as the variance shift parameter tends to infinity, these estimates coincide with those obtained from a mean shift outlier model. We also proposed a corrected likelihood ratio test and derived the score test statistic for testing whether an observation stands out as a possible outlier, and it was shown that the score test statistic is a function of the studentized residuals of the model. The performance of both the corrected likelihood ratio and score test statistics was studied using a parametric bootstrap simulation, and it was found that the power of both test statistics increases with the variance shift or the sample size.

#### Appendix

We assume that, as n tends to infinity, the limit of n⁻¹W'W exists; the existence of this limit is assured in Lee and Nelder [29]. Since E(U) = 0 and the rows of U have covariance matrix Λ, by the law of large numbers it is easy to get n⁻¹U'U → Λ and n⁻¹X'U → 0. Consequently, we have n⁻¹W'W → lim n⁻¹X'X + Λ. The remaining limits used in the proof of Theorem 1 follow from these facts together with a Taylor series expansion.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. V. Barnett and T. Lewis, *Outliers in Statistical Data*, John Wiley & Sons, New York, NY, USA, 1978.
2. S. Weisberg, *Applied Linear Regression*, John Wiley & Sons, New York, NY, USA, 1980.
3. R. D. Cook and S. Weisberg, *Residuals and Influence in Regression*, Chapman & Hall, London, UK, 1982.
4. R. D. Cook, N. Holschuh, and S. Weisberg, "A note on an alternative outlier model," *Journal of the Royal Statistical Society, Series B*, vol. 44, no. 3, pp. 370–376, 1982.
5. R. Thompson, "A note on restricted maximum likelihood estimation with an alternative outlier model," *Journal of the Royal Statistical Society, Series B*, vol. 47, no. 1, pp. 53–55, 1985.
6. R. Christensen, L. M. Pearson, and W. Johnson, "Case-deletion diagnostics for mixed models," *Technometrics*, vol. 34, no. 1, pp. 38–45, 1992.
7. M. Banerjee and E. W. Frees, "Influence diagnostics for linear longitudinal models," *Journal of the American Statistical Association*, vol. 92, no. 439, pp. 999–1005, 1997.
8. Z. Xuping and W. Bocheng, "Influence analysis on linear models with random effects," *Applied Mathematics*, vol. 14, no. 2, pp. 169–176, 1999.
9. G. K. Robinson, "That BLUP is a good thing: the estimation of random effects," *Statistical Science*, vol. 6, no. 1, pp. 15–51, 1991.
10. J. Haslett and D. Dillane, "Application of 'delete = replace' to deletion diagnostics for variance component estimation in the linear mixed model," *Journal of the Royal Statistical Society, Series B*, vol. 66, no. 1, pp. 131–143, 2004.
11. T. Zewotir and J. S. Galpin, "Influence diagnostics for linear mixed models," *Journal of Data Science*, vol. 3, pp. 153–177, 2005.
12. Z. Li, W. Xu, and L. Zhu, "Influence diagnostics and outlier tests for varying coefficient mixed models," *Journal of Multivariate Analysis*, vol. 100, no. 9, pp. 2002–2017, 2009.
13. N. F. Gumedze, S. J. Welham, B. J. Gogel, and R. Thompson, "A variance shift model for detection of outliers in the linear mixed model," *Computational Statistics & Data Analysis*, vol. 54, no. 9, pp. 2128–2144, 2010.
14. W. A. Fuller, *Measurement Error Models*, John Wiley & Sons, New York, NY, USA, 1987.
15. L. A. Stefanski, "Unbiased estimation of a nonlinear function of a normal mean with application to measurement error models," *Communications in Statistics: Theory and Methods*, vol. 18, no. 12, pp. 4335–4358, 1989.
16. T. Nakamura, "Corrected score function for errors-in-variables models: methodology and application to generalized linear models," *Biometrika*, vol. 77, no. 1, pp. 127–137, 1990.
17. P. Giménez and H. Bolfarine, "Corrected score functions in classical error-in-variables and incidental parameter models," *Australian Journal of Statistics*, vol. 39, no. 3, pp. 325–344, 1997.
18. G. E. Kelly, "The influence function in the errors in variables problem," *The Annals of Statistics*, vol. 12, no. 1, pp. 87–100, 1984.
19. J. M. Wellman and R. F. Gunst, "Influence diagnostics for linear measurement error models," *Biometrika*, vol. 78, no. 2, pp. 373–380, 1991.
20. X.-P. Zhong, B.-C. Wei, and W.-K. Fung, "Influence analysis for linear measurement error models," *Annals of the Institute of Statistical Mathematics*, vol. 52, no. 2, pp. 367–379, 2000.
21. A. R. Rasekh, "Local influence in measurement error models with ridge estimate," *Computational Statistics & Data Analysis*, vol. 50, no. 10, pp. 2822–2834, 2006.
22. P. Giménez and M. Galea, "Influence measures on corrected score estimators in functional heteroscedastic measurement error models," *Journal of Multivariate Analysis*, vol. 114, pp. 1–15, 2013.
23. P. Giménez and M. L. Patat, "Local influence for functional comparative calibration models with replicated data," *Statistical Papers*, vol. 55, pp. 431–454, 2014.
24. K. Zare, A. Rasekh, and A. A. Rasekhi, "Estimation of variance components in linear mixed measurement error models," *Statistical Papers*, vol. 53, no. 4, pp. 849–863, 2012.
25. K. Zare and A. Rasekh, "Diagnostic measures for linear mixed measurement error models," *SORT*, vol. 35, no. 2, pp. 125–144, 2011.
26. D. R. Cox and D. V. Hinkley, *Theoretical Statistics*, Chapman & Hall, London, UK, 1974.
27. S. G. Self and K.-Y. Liang, "Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions," *Journal of the American Statistical Association*, vol. 82, no. 398, pp. 605–610, 1987.
28. K. Zare and A. Rasekh, "Residuals and leverages in the linear mixed measurement error models," *Journal of Statistical Computation and Simulation*, vol. 84, no. 7, pp. 1427–1443, 2014.
29. Y. Lee and J. A. Nelder, "Hierarchical generalized linear models," *Journal of the Royal Statistical Society, Series B*, vol. 58, no. 4, pp. 619–678, 1996.