Journal of Probability and Statistics

Research Article | Open Access

Volume 2017 | Article ID 2170816 | 8 pages | https://doi.org/10.1155/2017/2170816

Robust Group Identification and Variable Selection in Regression

Ali Alkenani and Tahir R. Dikheel

Academic Editor: Aera Thavaneswaran
Received: 16 Sep 2017
Accepted: 03 Dec 2017
Published: 20 Dec 2017

Abstract

The elimination of insignificant predictors and the combination of predictors with indistinguishable coefficients are the two issues raised in the search for the true model. Pairwise Absolute Clustering and Sparsity (PACS) achieves both goals. Unfortunately, PACS is sensitive to outliers because it relies on the least-squares loss function, which is known to be very sensitive to unusual data. In this article, the sensitivity of PACS to outliers is studied. Robust versions of PACS (RPACS) are proposed by replacing the least-squares loss with MM-estimation and the nonrobust weights in PACS with robust weights based on robust correlations instead of Pearson's correlation. A simulation study and two real data applications are used to assess the effectiveness of the proposed methods.

1. Introduction

The latest developments in data aggregation have generated huge numbers of variables. Such large amounts of data pose a challenge to most standard statistical methods. In many regression problems the number of variables is huge, and many of these variables are irrelevant. Variable selection (VS) is the process of selecting the significant variables for use in model construction, and it is an important step in statistical analysis. Statistical procedures for VS aim to improve the model's prediction and provide interpretable models while retaining computational efficiency. VS techniques such as stepwise selection and best subset regression may suffer from instability [1]. To tackle the instability problem, regularization methods have been used to carry out VS. They have become increasingly popular because they supply a tool with which VS is carried out during the process of estimating the coefficients of the model; examples include the LASSO [2], SCAD [3], the elastic-net [4], the fused LASSO [5], the adaptive LASSO [6], the group LASSO [7], OSCAR [8], the adaptive elastic-net [9], and the MCP [10].

Searching for the correct model raises two matters: the exclusion of insignificant predictors and the combination of predictors with indistinguishable coefficients (IC) [11]. The approaches above can remove insignificant predictors but may fail to merge predictors with IC. Pairwise Absolute Clustering and Sparsity (PACS, [11]) achieves both goals. Moreover, PACS is an oracle method for simultaneous group identification and VS.

Unfortunately, PACS is sensitive to outliers because it relies on the least-squares loss function, which is known to be very sensitive to unusual data. In this article, the sensitivity of PACS to outliers has been studied. Robust versions of PACS (RPACS) have been proposed by replacing the least-squares loss with MM-estimation and the nonrobust weights in PACS with robust weights based on robust correlations instead of Pearson's correlation. RPACS can simultaneously estimate the regression parameters and select the significant predictors, while being robust to the existence of possible outliers.

The rest of this article proceeds as follows. Section 2 briefly reviews PACS. The robust extension of PACS is detailed in Section 3. Simulation studies under different settings are presented in Section 4. In Section 5, the proposed robust PACS is applied to two real datasets. Finally, Section 6 concludes with a discussion.

2. A Brief Review of PACS

Under the linear regression model setup, let $y=(y_1,\ldots,y_n)^T$ be the centered response and let $x_j=(x_{1j},\ldots,x_{nj})^T$, $j=1,\ldots,p$, be the standardized predictors, so that $\sum_{i=1}^{n}y_i=0$, $\sum_{i=1}^{n}x_{ij}=0$, and $\sum_{i=1}^{n}x_{ij}^2=1$. Sharma et al. [11] proposed the oracle method PACS for simultaneous group identification and VS. PACS has less computational cost than the OSCAR approach. In PACS, equality of coefficients is attained by adding penalties to the pairwise differences and pairwise sums of coefficients. The PACS estimates are the minimizers of

$$\frac{1}{2}\sum_{i=1}^{n}\Big(y_i-\sum_{j=1}^{p}x_{ij}\beta_j\Big)^2+\lambda\Big[\sum_{j=1}^{p}w_j|\beta_j|+\sum_{j<k}w_{jk(-)}|\beta_k-\beta_j|+\sum_{j<k}w_{jk(+)}|\beta_k+\beta_j|\Big],\tag{1}$$

where $\lambda$ is the regularization parameter and the $w$'s are nonnegative weights.

The penalty in (1) consists of $\sum_{j}w_j|\beta_j|$, which encourages sparseness, and $\sum_{j<k}w_{jk(-)}|\beta_k-\beta_j|$ and $\sum_{j<k}w_{jk(+)}|\beta_k+\beta_j|$, which encourage equality of coefficients. The second term of the penalty encourages coefficients with the same sign to be set equal, while the third term encourages coefficients with opposite signs to be set equal in magnitude.
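To make the structure of (1) concrete, the following minimal Python sketch evaluates the PACS objective for a candidate coefficient vector. The function and argument names are ours for illustration, and the minimization itself (which [11] carries out with a dedicated algorithm) is not shown.

```python
import numpy as np

def pacs_objective(beta, X, y, lam, w, w_minus, w_plus):
    """Evaluate the PACS objective (1) at a candidate coefficient vector.

    w       : (p,) weights on |beta_j|             (sparsity term)
    w_minus : (p, p) weights on |beta_k - beta_j|  (same-sign grouping)
    w_plus  : (p, p) weights on |beta_k + beta_j|  (opposite-sign grouping)
    Only the upper triangle (j < k) of w_minus / w_plus is used.
    """
    loss = 0.5 * np.sum((y - X @ beta) ** 2)
    penalty = np.sum(w * np.abs(beta))
    p = len(beta)
    for j in range(p):
        for k in range(j + 1, p):
            penalty += w_minus[j, k] * abs(beta[k] - beta[j])
            penalty += w_plus[j, k] * abs(beta[k] + beta[j])
    return loss + lam * penalty
```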

Choosing appropriate adaptive weights is very important for PACS to be an oracle procedure. Consequently, Sharma et al. [11] suggested adaptive PACS weights that incorporate correlations, given by

$$w_j=|\tilde{\beta}_j|^{-1},\qquad w_{jk(-)}=(1-r_{jk})^{-1}|\tilde{\beta}_k-\tilde{\beta}_j|^{-1},\qquad w_{jk(+)}=(1+r_{jk})^{-1}|\tilde{\beta}_k+\tilde{\beta}_j|^{-1},\tag{2}$$

where $\tilde{\beta}$ is a consistent estimator of $\beta$, such as the ordinary least squares (OLS) estimates or other shrinkage estimates like ridge regression estimates, and $r_{jk}$ is Pearson's correlation between the pair of predictors $x_j$ and $x_k$.

Sharma et al. [11] suggest using ridge estimates as the initial estimates for the $\beta$'s to obtain weights that perform well in studies with collinear predictors.
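The sketch below computes the adaptive weights of (2) from a ridge initial fit and the Pearson correlation matrix; scikit-learn's Ridge is used for convenience, and the eps guard against division by zero is our addition.

```python
import numpy as np
from sklearn.linear_model import Ridge

def adaptive_pacs_weights(X, y, alpha=1.0, eps=1e-8):
    """Adaptive weights of (2): ridge initial estimates plus Pearson correlations."""
    beta0 = Ridge(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    r = np.corrcoef(X, rowvar=False)              # Pearson correlations r_jk

    w = 1.0 / np.maximum(np.abs(beta0), eps)      # w_j = |beta0_j|^{-1}
    diff = np.abs(beta0[None, :] - beta0[:, None])
    sums = np.abs(beta0[None, :] + beta0[:, None])
    w_minus = 1.0 / (np.maximum(1.0 - r, eps) * np.maximum(diff, eps))
    w_plus = 1.0 / (np.maximum(1.0 + r, eps) * np.maximum(sums, eps))
    return w, w_minus, w_plus
```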

3. Robust PACS

3.1. Methodology of Robust PACS

The satisfactory performance of PACS under normal errors has been demonstrated in [11]. However, high sensitivity to outliers is the main drawback of PACS: a single outlier can completely destroy the good performance of the PACS estimates.

Note that in (1) the least-squares criterion is used to measure the agreement between the predictors and the response, and the weighted penalty contains weights that depend on Pearson's correlation. However, neither the least-squares criterion nor Pearson's correlation is robust to outliers. To achieve robustness in estimation and to select the informative predictors robustly, we propose replacing the least-squares criterion with MM-estimation [12], since MM-estimators are efficient and have a high breakdown point. Moreover, the nonrobust weights are replaced with robust weights that depend on robust correlations such as the fast consistent high breakdown (FCH) estimator [13], the reweighted multivariate normal (RMVN) estimator [13], Spearman's correlation (SP), and Kendall's correlation (KN). The RPACS estimates minimize

$$\sum_{i=1}^{n}\rho_1\Big(\frac{y_i-\sum_{j=1}^{p}x_{ij}\beta_j}{\hat{s}_n}\Big)+\lambda\Big[\sum_{j=1}^{p}w_j^{R}|\beta_j|+\sum_{j<k}w_{jk(-)}^{R}|\beta_k-\beta_j|+\sum_{j<k}w_{jk(+)}^{R}|\beta_k+\beta_j|\Big],\tag{3}$$

where $\lambda$ is the regularization parameter and the $w^{R}$'s are the robust versions of the nonnegative weights described in (2). Here $\hat{s}_n$ is an M-estimate of the scale of the residuals $r_i(\beta)=y_i-\sum_{j}x_{ij}\beta_j$, defined as a solution of

$$\frac{1}{n}\sum_{i=1}^{n}\rho_0\Big(\frac{r_i(\beta)}{\hat{s}_n}\Big)=b,\tag{4}$$

where $b$ is a constant and the function $\rho_0$ satisfies the following conditions: (1) $\rho_0$ is symmetric and continuously differentiable, and $\rho_0(0)=0$; (2) there exists $c>0$ such that $\rho_0$ is strictly increasing on $[0,c]$ and constant on $[c,\infty)$.

The MM-estimator in the first part of (3) is defined as an M-estimator of $\beta$ that uses a redescending score function $\psi_1=\rho_1'$ and the scale estimate $\hat{s}_n$ obtained from (4). It is a solution of

$$\sum_{i=1}^{n}\psi_1\Big(\frac{y_i-\sum_{j}x_{ij}\beta_j}{\hat{s}_n}\Big)x_{ij}=0,\qquad j=1,\ldots,p,\tag{5}$$

where $\rho_1$ is another bounded function such that $\rho_1\le\rho_0$.
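As an illustration of the scale equation (4), the sketch below solves it by fixed-point iteration with Tukey's biweight as $\rho_0$. The constants $b=0.5$ and $c\approx 1.5476$ (a 50% breakdown point with consistency at the normal distribution) are standard choices in the MM literature, not values taken from this article.

```python
import numpy as np

def rho_biweight(u, c=1.5476):
    """Tukey's biweight rho, scaled so that rho -> 1 as |u| -> infinity."""
    v = np.clip(u / c, -1.0, 1.0)
    return 1.0 - (1.0 - v ** 2) ** 3

def m_scale(resid, b=0.5, c=1.5476, tol=1e-10, max_iter=200):
    """Solve (1/n) * sum rho0(r_i / s) = b for s, as in (4)."""
    s = np.median(np.abs(resid)) / 0.6745   # MAD starting value
    if s == 0.0:                            # degenerate residual vector
        return 0.0
    for _ in range(max_iter):
        s_new = s * np.sqrt(np.mean(rho_biweight(resid / s, c)) / b)
        if abs(s_new - s) < tol * s:
            break
        s = s_new
    return s_new
```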

3.2. Choosing the Robust Weights

The process of choosing suitable weights is very important in order to obtain an oracle procedure [11]. The weights described in (2) depend on Pearson's correlation. From a practical point of view, it is well known that Pearson's correlation is not resistant to outliers, so choosing the weights in (2) based on this correlation will produce uncertain and misleading results. Consequently, in order to obtain robust weights, the correlation must be estimated by robust approaches. There are two types of robust versions of Pearson's correlation. The first type consists of estimators that are robust to outliers without regard to the overall structure of the data, whereas the second type takes the overall structure of the data into account when dealing with outliers [14]. KN and the MCD (minimum covariance determinant) are examples of the first and second types, respectively. Olive and Hawkins [13] proposed the FCH and RMVN methods as practical, consistent, outlier-resistant estimators of multivariate location and dispersion. Alkenani and Yu [15] employed the FCH and RMVN estimators instead of Pearson's correlation in canonical correlation analysis (CCA) to obtain a robust CCA, and showed that these estimators perform well under different settings of outliers.

In this article, the FCH, RMVN, SP, and KN correlations have been employed instead of Pearson's correlation in order to obtain robust weights as follows:

$$w_j^{R}=|\tilde{\beta}_j^{R}|^{-1},\qquad w_{jk(-)}^{R}=(1-r_{jk}^{R})^{-1}|\tilde{\beta}_k^{R}-\tilde{\beta}_j^{R}|^{-1},\qquad w_{jk(+)}^{R}=(1+r_{jk}^{R})^{-1}|\tilde{\beta}_k^{R}+\tilde{\beta}_j^{R}|^{-1},\tag{6}$$

where $r_{jk}^{R}$ is a robust version of Pearson's correlation, such as the FCH, RMVN, SP, or KN correlation, and $\tilde{\beta}^{R}$ is a robust initial estimate of $\beta$; we suggest using robust ridge estimates as the initial estimates for the $\beta$'s.
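The sketch below assembles the robust weights of (6) for the SP and KN choices, which are available in scipy; the FCH and RMVN correlations would require the robust location-dispersion estimators of [13] and are omitted here. The function names and the eps guard are ours.

```python
import numpy as np
from scipy import stats

def robust_corr_matrix(X, method="spearman"):
    """Pairwise robust correlation matrix: Spearman (SP) or Kendall (KN)."""
    p = X.shape[1]
    pair = stats.spearmanr if method == "spearman" else stats.kendalltau
    r = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            val, _ = pair(X[:, j], X[:, k])
            r[j, k] = r[k, j] = val
    return r

def robust_pacs_weights(beta0, r, eps=1e-8):
    """Plug a robust initial estimate beta0 and a robust correlation
    matrix r into the weight formulas of (6)."""
    w = 1.0 / np.maximum(np.abs(beta0), eps)
    diff = np.abs(beta0[None, :] - beta0[:, None])
    sums = np.abs(beta0[None, :] + beta0[:, None])
    w_minus = 1.0 / (np.maximum(1.0 - r, eps) * np.maximum(diff, eps))
    w_plus = 1.0 / (np.maximum(1.0 + r, eps) * np.maximum(sums, eps))
    return w, w_minus, w_plus
```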

4. Simulation Study

In this section, five examples have been used to assess the proposed method RPACS by comparing it with PACS, which was suggested in [11]. Data have been generated from the regression model

$$y_i=\sum_{j=1}^{p}x_{ij}\beta_j+\varepsilon_i,\qquad i=1,\ldots,n.$$

In all examples, the predictors are standard normal. The distributions of the error term and the predictors are contaminated by two types of distributions: the $t$ distribution with 5 degrees of freedom and the Cauchy distribution with location 0 and scale 1. Also, different contamination ratios (5%, 10%, 15%, 20%, and 25%) were used. The performance of the methods is compared using the model error (ME) criterion for prediction accuracy, defined by

$$\mathrm{ME}(\hat{\beta})=(\hat{\beta}-\beta)^{T}V(\hat{\beta}-\beta),$$

where $V$ represents the population covariance matrix of the predictors. The sample sizes were 50 and 100, and the simulated model was replicated 1000 times.
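The sketch below shows one way to generate a contaminated replicate and to compute the ME criterion. The contamination mechanism (replacing a fraction of the rows of both the predictors and the errors) is our reading of the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta, V, contam=0.10, dist="t5"):
    """One replicate: clean rows have N(0, V) predictors and standard normal
    errors; a fraction `contam` of rows is drawn from the contaminating
    distribution (t with 5 df, or standard Cauchy)."""
    p = len(beta)
    X = rng.standard_normal((n, p)) @ np.linalg.cholesky(V).T
    eps = rng.standard_normal(n)
    m = int(round(contam * n))
    if m > 0:
        if dist == "t5":
            X[:m] = rng.standard_t(5, size=(m, p))
            eps[:m] = rng.standard_t(5, size=m)
        else:                                  # standard Cauchy
            X[:m] = rng.standard_cauchy(size=(m, p))
            eps[:m] = rng.standard_cauchy(size=m)
    return X, X @ beta + eps

def model_error(beta_hat, beta, V):
    """ME criterion: (beta_hat - beta)' V (beta_hat - beta)."""
    d = beta_hat - beta
    return d @ V @ d
```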

Example 1. In this example, the first three predictors are highly correlated, with pairwise correlation equal to 0.7, and their true coefficients are equal in magnitude, while the rest of the predictors are uncorrelated (a code sketch of this design appears after Example 5).

Example 2. In this example, the first three predictors are highly correlated, with pairwise correlation equal to 0.7, and their true coefficients differ in magnitude, while the rest of the predictors are uncorrelated.

Example 3. In this example, the first three predictors are highly correlated, with pairwise correlation equal to 0.7, and their coefficients are equal in magnitude, while the second group of three predictors has a lower pairwise correlation equal to 0.3 and coefficients of different magnitudes. The rest of the predictors are uncorrelated.

Example 4. In this example, the first three predictors are correlated, with pairwise correlation equal to 0.3, and their coefficients are equal in magnitude, while the second group of three predictors has pairwise correlation equal to 0.7 and coefficients of different magnitudes. The rest of the predictors are uncorrelated.

Example 5. In this example, the first three predictors are highly correlated, with pairwise correlation equal to 0.7, and the next two predictors also have pairwise correlation of 0.7, while the rest are uncorrelated. The groups of three and two highly correlated predictors have coefficients that are equal in magnitude.
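For instance, the design of Example 1 can be assembled with the simulate sketch above; the number of predictors and the coefficient values below are hypothetical placeholders, since the true parameter vectors are not reproduced in this text.

```python
import numpy as np

p = 8                        # hypothetical number of predictors
V = np.zeros((p, p))
V[:3, :3] = 0.7              # first three predictors pairwise correlated at 0.7
np.fill_diagonal(V, 1.0)     # unit variances: standard normal marginals

beta = np.zeros(p)
beta[:3] = 2.0               # hypothetical equal-magnitude coefficients

X, y = simulate(n=50, beta=beta, V=V, contam=0.10, dist="t5")
```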

To avoid repetition, the observations about the results in Tables 1–5 are summarized as follows.


Table 1: ME values for Example 1.

Dist.    n     Outliers%   PACS      RPACS.KN   RPACS.SP   RPACS.FCH   RPACS.RMVN
t(5)     50    0           0.02304   0.02964    0.03083    0.02979     0.02902
t(5)     50    5           0.20135   0.08124    0.08135    0.05575     0.04655
t(5)     50    10          0.25043   0.14048    0.14543    0.06579     0.05664
t(5)     50    15          0.30788   0.17578    0.18152    0.07225     0.06153
t(5)     50    20          0.34708   0.19266    0.21286    0.08195     0.06939
t(5)     50    25          0.40692   0.21584    0.22533    0.10242     0.08238
t(5)     100   0           0.02004   0.02644    0.02863    0.02772     0.02700
t(5)     100   5           0.19100   0.07100    0.08030    0.05111     0.04025
t(5)     100   10          0.23012   0.13011    0.14002    0.06116     0.05013
t(5)     100   15          0.28715   0.15523    0.17137    0.06899     0.05902
t(5)     100   20          0.32520   0.18670    0.19234    0.07115     0.06005
t(5)     100   25          0.36692   0.20522    0.21404    0.09032     0.07784
Cauchy   50    5           0.18112   0.07004    0.07237    0.04390     0.03581
Cauchy   50    10          0.23263   0.12001    0.12273    0.05472     0.04454
Cauchy   50    15          0.28368   0.15274    0.16138    0.06237     0.05079
Cauchy   50    20          0.33511   0.17162    0.18556    0.07381     0.05848
Cauchy   50    25          0.38488   0.19330    0.20405    0.09342     0.07211
Cauchy   100   5           0.17214   0.06111    0.07335    0.04277     0.03581
Cauchy   100   10          0.22263   0.11001    0.11273    0.04672     0.03854
Cauchy   100   15          0.27368   0.14274    0.15138    0.05237     0.04079
Cauchy   100   20          0.31511   0.16162    0.17556    0.06381     0.04848
Cauchy   100   25          0.35488   0.18330    0.19405    0.08342     0.06211


Table 2: ME values for Example 2.

Dist.    n     Outliers%   PACS      RPACS.KN   RPACS.SP   RPACS.FCH   RPACS.RMVN
t(5)     50    0           0.11372   0.12032    0.12151    0.12047     0.11970
t(5)     50    5           0.29201   0.17191    0.17203    0.14644     0.13725
t(5)     50    10          0.34113   0.23117    0.23611    0.15646     0.14730
t(5)     50    15          0.39857   0.26647    0.27221    0.16294     0.15222
t(5)     50    20          0.43778   0.28336    0.30355    0.17263     0.16006
t(5)     50    25          0.49761   0.30653    0.31602    0.19312     0.17308
t(5)     100   0           0.10354   0.11022    0.11131    0.10407     0.10050
t(5)     100   5           0.28171   0.16170    0.17100    0.14180     0.13094
t(5)     100   10          0.32082   0.22080    0.23072    0.15185     0.14082
t(5)     100   15          0.37783   0.24591    0.26205    0.15967     0.14970
t(5)     100   20          0.41560   0.27700    0.28300    0.16185     0.15071
t(5)     100   25          0.45762   0.29592    0.30473    0.18101     0.16854
Cauchy   50    5           0.27182   0.16072    0.16306    0.13460     0.12650
Cauchy   50    10          0.32333   0.21071    0.21342    0.14541     0.13523
Cauchy   50    15          0.37434   0.24340    0.25204    0.15303     0.14145
Cauchy   50    20          0.42581   0.26232    0.27626    0.16451     0.14918
Cauchy   50    25          0.47558   0.28400    0.29475    0.18412     0.16281
Cauchy   100   5           0.26282   0.15181    0.16405    0.13345     0.12651
Cauchy   100   10          0.31331   0.20071    0.20343    0.13742     0.12923
Cauchy   100   15          0.36435   0.23341    0.24205    0.14304     0.13149
Cauchy   100   20          0.40581   0.25232    0.26625    0.15451     0.13916
Cauchy   100   25          0.44557   0.27400    0.28473    0.17412     0.15281


Table 3: ME values for Example 3.

Dist.    n     Outliers%   PACS      RPACS.KN   RPACS.SP   RPACS.FCH   RPACS.RMVN
t(5)     50    0           0.14172   0.14831    0.14950    0.14844     0.14743
t(5)     50    5           0.32001   0.19991    0.20003    0.17441     0.16522
t(5)     50    10          0.36913   0.25915    0.26411    0.18444     0.17530
t(5)     50    15          0.42653   0.29443    0.30021    0.19094     0.18022
t(5)     50    20          0.46576   0.31135    0.33154    0.20063     0.18806
t(5)     50    25          0.52561   0.33453    0.34402    0.22112     0.20107
t(5)     100   0           0.13042   0.13501    0.13645    0.13344     0.13255
t(5)     100   5           0.30971   0.18971    0.19901    0.16982     0.15894
t(5)     100   10          0.34882   0.24883    0.25872    0.17985     0.16882
t(5)     100   15          0.40582   0.27391    0.29003    0.18765     0.17774
t(5)     100   20          0.44365   0.30501    0.31103    0.18983     0.17871
t(5)     100   25          0.48562   0.32392    0.33271    0.20901     0.19650
Cauchy   50    5           0.29982   0.18872    0.19106    0.16260     0.15450
Cauchy   50    10          0.35133   0.23871    0.24142    0.17341     0.16323
Cauchy   50    15          0.40234   0.27140    0.28004    0.18103     0.16945
Cauchy   50    20          0.45381   0.29032    0.30426    0.19251     0.17718
Cauchy   50    25          0.50358   0.31200    0.32275    0.21212     0.19081
Cauchy   100   5           0.32001   0.19991    0.20003    0.17445     0.16525
Cauchy   100   10          0.36913   0.25917    0.26411    0.18441     0.17536
Cauchy   100   15          0.42655   0.29444    0.30021    0.19093     0.18022
Cauchy   100   20          0.46575   0.31134    0.33153    0.20063     0.18804
Cauchy   100   25          0.52561   0.33453    0.34401    0.22112     0.20106


Table 4: ME values for Example 4.

Dist.    n     Outliers%   PACS      RPACS.KN   RPACS.SP   RPACS.FCH   RPACS.RMVN
t(5)     50    0           0.15251   0.15910    0.16035    0.15921     0.15823
t(5)     50    5           0.33081   0.21070    0.21082    0.18520     0.17601
t(5)     50    10          0.37991   0.26993    0.27491    0.19523     0.18612
t(5)     50    15          0.43732   0.30521    0.31101    0.20175     0.19102
t(5)     50    20          0.47653   0.32216    0.34233    0.21143     0.19887
t(5)     50    25          0.53641   0.34531    0.35482    0.23192     0.21185
t(5)     100   0           0.13342   0.13901    0.14125    0.13814     0.13713
t(5)     100   5           0.32051   0.20051    0.20981    0.18062     0.16973
t(5)     100   10          0.35962   0.25965    0.26952    0.19067     0.17962
t(5)     100   15          0.41662   0.28471    0.30083    0.19847     0.18853
t(5)     100   20          0.45446   0.31581    0.32183    0.20066     0.18951
t(5)     100   25          0.49642   0.33472    0.34351    0.21981     0.20757
Cauchy   50    5           0.31062   0.19952    0.20188    0.17340     0.16538
Cauchy   50    10          0.36216   0.24951    0.25222    0.18421     0.17404
Cauchy   50    15          0.41316   0.28220    0.29087    0.19184     0.18025
Cauchy   50    20          0.46461   0.30112    0.31507    0.20331     0.18798
Cauchy   50    25          0.51438   0.32284    0.33357    0.22294     0.20161
Cauchy   100   5           0.33083   0.21071    0.21083    0.18525     0.17606
Cauchy   100   10          0.37993   0.26995    0.27491    0.19521     0.18613
Cauchy   100   15          0.43733   0.30522    0.31101    0.20175     0.19102
Cauchy   100   20          0.47653   0.32217    0.34233    0.21143     0.19886
Cauchy   100   25          0.53641   0.34533    0.35481    0.23192     0.21188


Table 5: ME values for Example 5.

Dist.    n     Outliers%   PACS      RPACS.KN   RPACS.SP   RPACS.FCH   RPACS.RMVN
t(5)     50    0           0.06031   0.06695    0.06815    0.06701     0.06602
t(5)     50    5           0.23861   0.11851    0.11862    0.09305     0.08381
t(5)     50    10          0.28771   0.17773    0.18271    0.10303     0.09392
t(5)     50    15          0.34512   0.21301    0.21881    0.10955     0.09886
t(5)     50    20          0.38433   0.22996    0.25015    0.11923     0.10667
t(5)     50    25          0.44424   0.25315    0.26262    0.13972     0.11965
t(5)     100   0           0.04125   0.04684    0.04908    0.04597     0.04496
t(5)     100   5           0.22837   0.10831    0.11765    0.08846     0.07755
t(5)     100   10          0.26744   0.16745    0.17733    0.09846     0.08743
t(5)     100   15          0.32445   0.19256    0.20865    0.10627     0.09636
t(5)     100   20          0.36228   0.22365    0.22966    0.10844     0.09733
t(5)     100   25          0.40425   0.24257    0.25131    0.12761     0.11537
Cauchy   50    0           0.06031   0.06695    0.06815    0.06701     0.06602
Cauchy   50    5           0.21845   0.10737    0.10963    0.08125     0.07316
Cauchy   50    10          0.26997   0.15734    0.16006    0.09206     0.08183
Cauchy   50    15          0.32095   0.19007    0.19865    0.09963     0.08806
Cauchy   50    20          0.37244   0.20896    0.22289    0.11115     0.09579
Cauchy   50    25          0.42217   0.23067    0.24135    0.13073     0.10948
Cauchy   100   0           0.04125   0.04684    0.04908    0.04597     0.04496
Cauchy   100   5           0.23865   0.11854    0.11865    0.09308     0.08389
Cauchy   100   10          0.28775   0.17779    0.18274    0.10303     0.09397
Cauchy   100   15          0.34513   0.21304    0.21885    0.10958     0.09885
Cauchy   100   20          0.38435   0.22998    0.25015    0.11926     0.10667
Cauchy   100   25          0.44423   0.25314    0.26261    0.13977     0.11967

From Tables 1–5, when there is no contamination, PACS performs well compared with the proposed methods. It is clear that, as the contamination ratio of the errors or the predictors goes up, the performance of PACS deteriorates, while RPACS with all the robust weights performs stably; the best results are obtained by RPACS.RMVN and RPACS.FCH, respectively, for all the sample sizes. The variations in the ME values of the RPACS estimates with all the robust weights are close to one another under the different contamination settings and sample sizes, and they are smaller than the variations of the PACS estimates.

5. Analysis of Real Data

In this section, the RPACS methods with all the robust weights and the PACS method have been applied to real data: the NCAA sports data from Mangold et al. [16] and the pollution data from McDonald and Schwing [17].

The response variable was centered and the predictors were standardized. To verify RPACS, the two datasets have been analyzed after including outliers in the response variable and the predictors. The two datasets have been contaminated with 5%, 10%, 15%, and 20% of data from a multivariate t distribution with three degrees of freedom.

To evaluate the estimation accuracy of the RPACS methods, the correlation between the parameters estimated by each method under consideration and the parameters estimated by PACS on the data without outliers has been reported. Also, the effective model size after accounting for equality of absolute coefficient estimates has been reported.
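A small sketch of these two evaluation measures follows; the rounding tolerance used to decide when two absolute coefficient estimates are equal is our choice, not specified in the source.

```python
import numpy as np

def estimation_accuracy(beta_hat, beta_pacs_clean):
    """Correlation between a method's estimates and the PACS estimates
    computed on the outlier-free data."""
    return np.corrcoef(beta_hat, beta_pacs_clean)[0, 1]

def effective_model_size(beta_hat, decimals=6):
    """Number of distinct nonzero absolute coefficient values: estimates
    equal in magnitude (after rounding) count as a single group."""
    mags = np.round(np.abs(beta_hat), decimals)
    return len(np.unique(mags[mags > 0]))
```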

5.1. NCAA Sports Data

The NCAA sports data is taken from a study of the effects of sociodemographic indicators and sports programs on graduation rates. The dataset is available at http://www4.stat.ncsu.edu/~boos/var.select/ncaa.html and contains 19 predictors. The response variable is the average 6-year graduation rate for 1996–1999. The predictors are students in top 10% of HS class (X1), ACT composite 25th percentile (X2), living on campus (X3), first-time undergraduates (X4), total enrollment/1000 (X5), courses taught by TAs (X6), composite basketball ranking (X7), in-state tuition/1000 (X8), room and board/1000 (X9), average basketball home attendance (X10), full professor salary (X11), student-to-faculty ratio (X12), white (X13), assistant professor salary (X14), population of city where located (X15), faculty with PhD (X16), acceptance rate (X17), receiving loans (X18), and out of state (X19).

5.2. Pollution Data (PD)

The PD is taken from a study of the effects of different air pollution indicators and sociodemographic factors on mortality. The dataset is available at http://www4.stat.ncsu.edu/~boos/var.select/pollution.html and contains 60 observations and 15 predictors. The response is the total age-adjusted mortality rate (y). The predictors are mean annual precipitation (X1), mean January temperature (X2), mean July temperature (X3), % of population that is 65 years of age or over (X4), population per household (X5), median school years (X6), % of housing with facilities (X7), population per square mile (X8), % of population that is nonwhite (X9), % employment in white-collar occupations (X10), % of families with income under $3,000 (X11), relative population potential (RPP) of hydrocarbons (X12), RPP of oxides of nitrogen (X13), RPP of sulfur dioxide (X14), and % relative humidity (X15).

From Tables 6 and 7, we have the following findings in terms of estimation accuracy and the effective model size:
(1) In the case of no contamination, the RPACS methods give results comparable to PACS. In addition, RPACS.RMVN and RPACS.FCH achieve better performance than RPACS.KN and RPACS.SP.
(2) In the case of contamination, the performance of PACS is dramatically affected. Also, RPACS.RMVN and RPACS.FCH give very consistent results, even at the high contamination percentages. The performance of RPACS.KN and RPACS.SP is less efficient than that of RPACS.RMVN and RPACS.FCH across all the contamination percentages.


Table 6: Estimation accuracy and effective model size for the NCAA sports data.

                                   Outliers%
Methods                    0        5        10       15       20
Estimation accuracy
  PACS                     1        0.9033   0.8069   0.4112   0.1345
  RPACS.KN                 0.9843   0.9839   0.9530   0.9019   0.8499
  RPACS.SP                 0.9840   0.9837   0.9526   0.9006   0.8490
  RPACS.FCH                0.9850   0.9846   0.9843   0.9841   0.9839
  RPACS.RMVN               0.9856   0.9852   0.9850   0.9847   0.9845
The effective model size
  PACS                     5        6        7        9        10
  RPACS.KN                 5        5        6        6        7
  RPACS.SP                 5        5        6        6        7
  RPACS.FCH                5        5        5        5        5
  RPACS.RMVN               5        5        5        5        5


Table 7: Estimation accuracy and effective model size for the pollution data.

                                   Outliers%
Methods                    0        5        10       15       20
Estimation accuracy
  PACS                     1        0.9247   0.8259   0.7001   0.5925
  RPACS.KN                 0.9882   0.9866   0.9552   0.9044   0.8518
  RPACS.SP                 0.9877   0.9862   0.9545   0.9038   0.8511
  RPACS.FCH                0.9890   0.9887   0.9884   0.9882   0.9879
  RPACS.RMVN               0.9897   0.9895   0.9893   0.9890   0.9888
The effective model size
  PACS                     5        6        6        8        9
  RPACS.KN                 5        5        6        7        7
  RPACS.SP                 5        5        6        7        7
  RPACS.FCH                5        5        5        5        5
  RPACS.RMVN               5        5        5        5        5

6. Conclusions

In this paper, robust consistent group identification and variable selection procedures (RPACS) have been proposed, which combine robustness with the strengths of group identification and VS. The simulation studies and the analysis of real data demonstrate that the RPACS methods have better prediction accuracy and group identification than PACS when outliers exist in the response variable and the predictors. In general, the best results are given by RPACS.RMVN and RPACS.FCH, respectively, for all the sample sizes.

Abbreviations

LASSO: Least absolute shrinkage and selection operator
PACS: Pairwise Absolute Clustering and Sparsity
RPACS: Robust Pairwise Absolute Clustering and Sparsity
VS: Variable selection
SCAD: Smoothly clipped absolute deviation
Fused LASSO: Fused least absolute shrinkage and selection operator
Adaptive LASSO: Adaptive least absolute shrinkage and selection operator
Group LASSO: Group least absolute shrinkage and selection operator
OSCAR: Octagonal shrinkage and clustering algorithm for regression
MCP: Minimax concave penalty
IC: Indistinguishable coefficients
FCH: Fast consistent high breakdown
RMVN: Reweighted multivariate normal
SP: Spearman's correlation
KN: Kendall's correlation
MCD: Minimum covariance determinant
CCA: Canonical correlation analysis
NCAA: National Collegiate Athletic Association
PD: Pollution data.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

1. L. Breiman, “Heuristics of instability and stabilization in model selection,” The Annals of Statistics, vol. 24, no. 6, pp. 2350–2383, 1996.
2. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
3. J. Fan and R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
4. H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
5. R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, “Sparsity and smoothness via the fused lasso,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 1, pp. 91–108, 2005.
6. H. Zou, “The adaptive lasso and its oracle properties,” Journal of the American Statistical Association, vol. 101, no. 476, pp. 1418–1429, 2006.
7. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
8. H. D. Bondell and B. J. Reich, “Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR,” Biometrics, vol. 64, no. 1, pp. 115–123, 2008.
9. H. Zou and H. H. Zhang, “On the adaptive elastic-net with a diverging number of parameters,” The Annals of Statistics, vol. 37, no. 4, pp. 1733–1751, 2009.
10. C.-H. Zhang, “Nearly unbiased variable selection under minimax concave penalty,” The Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
11. D. B. Sharma, H. D. Bondell, and H. H. Zhang, “Consistent group identification and variable selection in regression with correlated predictors,” Journal of Computational and Graphical Statistics, vol. 22, no. 2, pp. 319–340, 2013.
12. V. J. Yohai, “High breakdown-point and high efficiency robust estimates for regression,” The Annals of Statistics, vol. 15, no. 2, pp. 642–656, 1987.
13. D. J. Olive and D. M. Hawkins, “Robust multivariate location and dispersion,” preprint, http://lagrange.math.siu.edu/Olive/pphbmld.pdf, 2010.
14. R. Wilcox, Introduction to Robust Estimation and Hypothesis Testing, Statistical Modeling and Decision Science, Academic Press, 2005.
15. A. Alkenani and K. Yu, “A comparative study for robust canonical correlation methods,” Journal of Statistical Computation and Simulation, vol. 83, no. 4, pp. 690–720, 2013.
16. W. D. Mangold, L. Bean, and D. Adams, “The impact of intercollegiate athletics on graduation rates among major NCAA Division I universities: Implications for college persistence theory and practice,” Journal of Higher Education, vol. 74, no. 5, pp. 540–563, 2003.
17. G. C. McDonald and R. C. Schwing, “Instabilities of regression estimates relating air pollution to mortality,” Technometrics, vol. 15, no. 3, pp. 463–481, 1973.

Copyright © 2017 Ali Alkenani and Tahir R. Dikheel. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
