Research Article | Open Access
A Novel Margin-Based Measure for Directed Hill Climbing Ensemble Pruning
Ensemble pruning is a technique to increase ensemble accuracy and reduce ensemble size by choosing a subset of ensemble members to form a subensemble for prediction. Many ensemble pruning algorithms based on a directed hill climbing search policy have recently been proposed. The key to the success of these algorithms is to construct an effective measure to supervise the search process. In this paper, we study the importance of individual classifiers with respect to an ensemble using the margin theory proposed by Schapire et al. and conclude that ensemble pruning via the directed hill climbing strategy should focus more on examples with small absolute margins as well as on classifiers that correctly classify more examples. Based on this principle, we propose a novel measure, called the margin-based measure, to explicitly evaluate the importance of individual classifiers. Our experiments show that using the proposed measure to prune an ensemble leads to significantly better accuracy results compared to other state-of-the-art measures.
Ensembles of multiple learning machines have been a very popular research topic in machine learning and data mining during the last decade. The basic idea is to construct multiple classifiers from the original data and then aggregate their predictions when classifying examples with unknown classes. Theoretical and empirical results show that an ensemble can potentially increase the classification accuracy beyond the level reached by an individual classifier alone. Dietterich stated “A necessary and sufficient condition for an ensemble of classifiers to be more accurate than any of its individual members is if the classifiers are accurate and diverse”.
Many approaches have been proposed to create ensemble members with both high accuracy and high diversity. They can be grouped into three main categories: manipulating the data set [3, 4], manipulating features [5–8], and manipulating algorithms. Bagging and boosting, the most widely used and successful ensemble learning methods, fall into the first category: bagging learns individual classifiers on data sets obtained by randomly sampling from the original training set, and through this random perturbation the learned classifiers achieve high accuracy and sufficient diversity. Unlike bagging, boosting is an iterative learning process: at each iteration, boosting adjusts the distribution of the training set so that the classifiers focus more on examples that are hard to classify correctly. The approaches that manipulate features try to build the individual classifiers on diverse feature spaces obtained by selecting subsets of the original features or by generating new ones. For example, random forests [5, 6] learn each tree on a feature subset obtained by randomly sampling from the original features, and COPEN learns the base classifiers on new feature spaces mapped from the original feature space using pairwise constraints projection. Individual classifiers can also be built by manipulating algorithms: by adjusting the model structure or parameter settings, diverse classifiers are learned; for example, the negative correlation method explicitly constrains the parameters of individual neural networks to be different through a regularization term.
Ensemble methods have been successfully applied to many fields such as remote sensing, time series prediction, and imbalanced learning. However, an obvious problem with ensemble learning methods is that they tend to train a very large number of classifiers, which require large storage resources and computational resources to calculate the outputs of the individual learners. Besides, it is not always true that the larger the ensemble, the better its performance. In fact, Zhou et al. proved that the generalization performance of a subset of an ensemble may be even better than that of the ensemble consisting of all the given individual learners. These observations motivate ensemble pruning, also called ensemble selection or ensemble thinning, which selects a subset of ensemble members to form subensembles that consume fewer resources and respond faster, with accuracy similar to or better than the original ensemble [14–22].
Given an ensemble with T members, searching for the best subset of ensemble members by enumerating all subensemble candidates is computationally infeasible because of the exponential size of the search space; indeed, the problem has been shown to be NP-complete. Several efficient methods that are based on a directed hill climbing search in the space of subsets report good predictive performance results [15, 16, 18, 24–27]. These methods start with an empty (or full) initial ensemble and search the space of different ensembles by iteratively expanding (or contracting) the initial ensemble by a single model. The search is guided by an evaluation measure that is based on either the predictive performance or the diversity of the alternative subsets. The evaluation measure is the main component of a directed hill climbing algorithm, and it differentiates the methods that fall into this category.
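The forward variant of this search can be sketched as a greedy loop. The sketch below is illustrative rather than any cited method's implementation: `evaluate` stands in for whichever measure (performance- or diversity-based) guides the search, and all names are ours.

```python
# Sketch of directed hill climbing ensemble pruning (forward direction).
# `evaluate(h, selected)` scores adding candidate h to the current subset.

def forward_hill_climb(classifiers, evaluate, target_size):
    """Greedily grow a subensemble, one model per step."""
    selected = []
    remaining = list(classifiers)
    while remaining and len(selected) < target_size:
        # Pick the candidate whose addition scores best under the measure.
        best = max(remaining, key=lambda h: evaluate(h, selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy run: classifiers are just ints; this toy "measure" prefers even ones.
subensemble = forward_hill_climb([3, 8, 1, 6], lambda h, s: (h % 2 == 0, h), 2)
```

The backward variant is symmetric: start from the full ensemble and repeatedly remove the member whose removal scores best.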
In this paper, we apply the concept of example margins proposed by Schapire et al. to analyse the importance of individual classifiers with respect to an ensemble and conclude that ensemble pruning via the directed hill climbing strategy should focus more on examples with small absolute margins as well as on classifiers that correctly classify more examples. Based on the gained insight, a criterion called the margin-based measure is proposed to supervise the search process of ensemble pruning via the directed hill climbing strategy. Our experiments show that using the proposed measure to prune an ensemble leads to significantly better accuracy results compared to other state-of-the-art measures.
The paper is structured as follows. Section 2 briefly describes ensemble pruning via directed hill climbing search. Section 3 analyses the importance of individual classifiers with respect to an ensemble, and Section 4 proposes the margin-based measure. Section 5 reports the experimental settings and results, and Section 6 concludes the paper.
2. Related Work
Directed hill climbing ensemble pruning (DHCEP) attempts to find the globally best subset of classifiers by taking local greedy decisions for changing the current subset [17, 28, 29]. An example of the search space for an ensemble of four models is presented in Figure 1.
The direction of search and the measure used for evaluating the search are two important parameters that differentiate one DHCEP method from the other. The following sections discuss the different options for instantiating these parameters and the particular choices of existing methods.
2.1. Direction of Search
Based on the direction of search we have two main categories of DHCEP methods: (a) forward selection and (b) backward elimination (see Figure 1).
In the forward selection algorithm, the current classifier subset is initialized to the empty set. The algorithm then continues by iteratively adding to the current subset the classifier that optimizes an evaluation function. This function evaluates the addition of a classifier to the current subset based on the pruning set (labeled data held out from training). In the past, this approach has been used in [14, 25, 26] and in reduce-error pruning methods [30, 31].
In backward elimination, the current classifier subset is initialized to the complete ensemble, and the algorithm continues by iteratively removing from the current subset the classifier that optimizes the evaluation function. This function evaluates the removal of a classifier from the current subset based on the pruning set. In the past, this approach has been used in the AID thinning and concurrency thinning algorithms.
In both cases, the traversal requires the evaluation of O(T^2) subsets, leading to a time complexity of O(T^2 · g(N, |S|)), where T is the size of the complete ensemble and g is the complexity of the evaluation function. This complexity is linear with respect to N (the size of the pruning set) and ranges from constant to quadratic with respect to |S| (the size of the current subensemble S), as we will see in the following sections.
2.2. Evaluation Measure
Evaluation measures are the main component that differentiates DHCEP methods. They can be grouped into two major categories: those based on performance and those based on diversity.
The goal of performance-based measures is to find the model that maximizes the performance of the ensemble produced by adding (or removing) a model to (or from) the current ensemble. Their calculation depends on the method used for ensemble combination, which usually is voting. Accuracy was used as an evaluation measure by Margineantu and Dietterich and by Fan et al., while Caruana et al. experimented with several measures, including accuracy, root mean squared error, mean cross-entropy, lift, precision/recall break-even point, precision/recall F-score, average precision, and ROC area. Another measure is benefit, which is based on a cost model and has been used by Fan et al. The calculation of performance-based metrics requires the decision of the ensemble on all examples of the pruning set. Therefore, the complexity of these measures is O(|S| · N). However, this complexity can be optimized to O(N), if the predictions of the current ensemble are updated incrementally each time a classifier is added to (or removed from) it.
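The incremental-update trick can be sketched for a two-class voted ensemble: keeping a running vote tally per pruning-set example makes scoring one candidate addition linear in the number of examples, with no re-querying of the current members. All names and data here are ours.

```python
import numpy as np

# Sketch: per-example vote tallies for the current subensemble let us score
# a candidate addition in O(N) rather than O(|S| * N).

def accuracy_if_added(votes, cand_pred, y):
    """Accuracy of the current ensemble plus one candidate, labels in {-1, +1}."""
    new_votes = votes + cand_pred          # O(N) update of the running tally
    return np.mean(np.sign(new_votes) == y)

y = np.array([1, -1, 1, 1])
votes = np.array([2, -1, 0, -2])           # running sum of members' predictions
cand = np.array([1, -1, 1, 1])             # a candidate that is correct everywhere
score = accuracy_if_added(votes, cand, y)
```

After the best candidate is committed, `votes += cand` keeps the tally current for the next search step.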
Ensemble diversity, that is, the difference among the individual learners, is a fundamental issue in ensemble methods. Intuitively, it is easy to understand that, in order to gain from a combination, the individual learners must be different; otherwise there would be no performance improvement from combining identical individual learners.
Let h be a classifier and let S be a subensemble. Partalas et al. [16, 18, 29] identify that the predictions of h and S on an instance (x_i, y_i) can be categorized into four cases: e_tf: h(x_i) = y_i and S(x_i) ≠ y_i; e_tt: h(x_i) = y_i and S(x_i) = y_i; e_ft: h(x_i) ≠ y_i and S(x_i) = y_i; and e_ff: h(x_i) ≠ y_i and S(x_i) ≠ y_i. They concluded that considering these four cases is crucial when designing an ensemble diversity measure. Many diversity measures are designed by considering some or all of the four cases, for example, complementariness and concurrency. The complementariness of h with respect to S and a pruning set D_p is calculated as

COM(h, S, D_p) = Σ_{(x_i, y_i) ∈ D_p} I(h(x_i) = y_i and S(x_i) ≠ y_i),  (1)

where I(true) = 1 and I(false) = 0. The complementariness is exactly the number of examples that are correctly classified by h and incorrectly classified by S. The concurrency is defined as

CON(h, S, D_p) = Σ_{(x_i, y_i) ∈ D_p} (2 · I(h(x_i) = y_i and S(x_i) ≠ y_i) + I(h(x_i) = y_i and S(x_i) = y_i) − 2 · I(h(x_i) ≠ y_i and S(x_i) ≠ y_i)),  (2)

which is similar to the complementariness, with the difference that it considers two more cases and weights them.
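The two measures can be sketched as follows, assuming two-class predictions in {−1, +1} and the case weights 2, 1, −2 commonly used for concurrency; function and variable names are ours.

```python
import numpy as np

# Sketch of complementariness (COM) and concurrency (CON) for a candidate
# classifier h against the current subensemble's voted prediction.

def com_con(h_pred, ens_pred, y):
    h_ok = h_pred == y
    e_ok = ens_pred == y
    com = np.sum(h_ok & ~e_ok)                       # h correct, ensemble wrong
    con = np.sum(2 * (h_ok & ~e_ok) + (h_ok & e_ok) - 2 * (~h_ok & ~e_ok))
    return int(com), int(con)

y = np.array([1, 1, -1, -1])
h = np.array([1, -1, -1, 1])       # correct on examples 0 and 2
e = np.array([-1, 1, -1, 1])       # ensemble correct on examples 1 and 2
com, con = com_con(h, e, y)
```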
Unlike complementariness and concurrency, Partalas et al. introduced a metric called uncertainty weighted accuracy (UWA) that considers all four cases given above. UWA is defined as a weighted sum over the pruning set in which the contribution of each of the four cases is weighted by the vote proportions of the current ensemble, where NT_i is the proportion of classifiers in the current ensemble that correctly predict x_i and NF_i = 1 − NT_i is the proportion of classifiers that incorrectly predict x_i. In addition to considering all four cases, UWA thereby takes into account the strength of the decision of the current ensemble.
In this paper, we design a new measure for ensemble pruning via directed hill climbing by considering the margins of examples. More details are discussed in the following sections.
3. Importance Assessment for Individual Classifiers
As one of the best off-the-shelf algorithms, AdaBoost demonstrates a high generalization performance. To theoretically analyse this phenomenon, the concept of the margin of examples was proposed by Schapire et al. Let {(x_1, y_1), ..., (x_m, y_m)} be the training set, where each example x_i is associated with a label y_i ∈ {−1, +1}. Suppose that H = {h_1, h_2, ..., h_T} is an ensemble with T classifiers and that each member h_t maps each example to a label; namely, h_t(x_i) ∈ {−1, +1}. Then the margin of x_i is defined as

margin(x_i) = y_i Σ_{t=1}^{T} a_t h_t(x_i),  (4)

where a_t ≥ 0 is the weight of the classifier h_t. Without loss of generality, normalizing the a_t, t = 1, 2, ..., T, such that Σ_{t=1}^{T} a_t = 1, (4) can be written as

margin(x_i) = Σ_{t: h_t(x_i) = y_i} a_t − Σ_{t: h_t(x_i) ≠ y_i} a_t.  (5)

From (5), the margin is a value in [−1, 1]; x_i is on the border if margin(x_i) = 0; the absolute value of the margin is the confidence of the ensemble prediction on x_i; and margin(x_i) > 0 (or margin(x_i) < 0) indicates that the ensemble correctly (or incorrectly) classifies x_i. Based on this concept, Schapire et al. proved that, for any θ > 0 and δ > 0, with probability at least 1 − δ, the generalization error is upper bounded by

P[margin(x) ≤ 0] ≤ P_S[margin(x) ≤ θ] + O((1/√m) (d log²(m/d)/θ² + log(1/δ))^{1/2}),  (6)

where d is the complexity of the base classifier, m is the size of the training set, and P_S denotes the empirical probability over the training set. To further examine the correctness of the margin theory, Gao and Zhou proposed the kth margin theory, which, for any δ > 0, with probability at least 1 − δ over the random choice of a training set of size m, gives a sharper upper bound on the generalization error in terms of the margin distribution. From these bounds, when the other variables are fixed, the larger the margins over the training examples, the better the generalization performance; thus, individual classifiers that correctly classify examples are more important than incorrect ones, since the former help to increase the margins of the examples. In addition, we argue that it is most important to increase the margins of examples at the boundary (margin equal to zero), since adding into (or removing from) the ensemble a single classifier can determine whether the ensemble correctly classifies such examples. Therefore, the proposed measure for ensemble pruning should focus more on correct classifiers and on the examples lying near the boundary, and the importance of individual classifiers can accordingly be ordered as shown in Figure 2.
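The normalized weighted-vote margin of (4) and (5) can be sketched numerically as follows; the predictions, weights, and function name are ours, not the paper's.

```python
import numpy as np

# Margin of each example under a weighted vote (Schapire et al.), two-class
# labels in {-1, +1}; weights are normalized so margins lie in [-1, 1].

def margins(preds, weights, y):
    """preds: (T, m) member predictions; returns the m example margins."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the weights sum to 1
    return y * (w @ preds)

preds = np.array([[1, 1, -1],
                  [1, -1, -1],
                  [1, -1, 1]])            # 3 classifiers, 3 examples
y = np.array([1, -1, 1])
m = margins(preds, [1, 1, 1], y)
```

Here the first example has margin 1 (unanimously correct), while the last has a negative margin (the majority is wrong).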
Based on the order of importance of individual classifiers, the margin-based measure is proposed in Section 4.
4. Margin-Based Measure
In this section, we propose a heuristic measure for evaluating the importance of individual classifiers based on the insight gained in Section 3: ensemble pruning via the directed hill climbing strategy should focus more on examples with small absolute margins as well as on classifiers that correctly classify more examples.
4.1. Measure for Two-Class Problem
For simplicity of presentation, this section focuses on forward ensemble pruning: given an ensemble subset S which is initialized to be empty, we iteratively add into S a classifier h. Here, the symbols are similar to the ones in Section 3. Assuming that ensembles use simple majority voting to obtain the predictions (i.e., each classifier has the same weight), the margin of an example x_i under the ensemble S is

m_S(x_i) = y_i Σ_{h_t ∈ S} (1/|S|) h_t(x_i),  (9)

where |S| is the size of the ensemble S. From (9), 1/|S| is the weight of each classifier h_t, and y_i h_t(x_i) is the margin contribution of h_t on the example x_i. Then the proposed measure, the margin-based measure (MM), of classifier h with respect to ensemble S and the pruning set D_p is defined as

MM(h, S, D_p) = Σ_{(x_i, y_i) ∈ D_p} MM(h, S, x_i),  (10)

where MM(h, S, x_i) is the margin-based measure of h with respect to the subensemble S and the current example x_i, defined as

MM(h, S, x_i) = y_i h(x_i) / (|m_S(x_i)| + θ),  (11)

where the constant parameter θ > 0 is to avoid the denominator being equal to zero. Since m_S(x_i) ∈ [−1, 1], then |m_S(x_i)| + θ ∈ [θ, 1 + θ] and therefore MM(h, S, x_i) ∈ [−1/θ, 1/θ]. From (9) and (10), y_i h(x_i) is exactly the margin contribution of h on the example x_i, and 1/(|m_S(x_i)| + θ) is the weight of x_i. The rationale of the proposed measure is as follows:

(i) If the individual classifier h correctly classifies the example x_i, h increases the margin of x_i; with the ensemble growing from size |S| to |S| + 1, the corresponding increase is

(1 − m_S(x_i)) / (|S| + 1) ≥ 0,  (12)

and thus h favors correctly classifying x_i; namely, MM(h, S, x_i) > 0 (refer to (10)). If h incorrectly classifies x_i, the prediction of h reduces the margin of x_i, and the reduction is exactly

(1 + m_S(x_i)) / (|S| + 1) ≥ 0,  (13)

and thus h is harmful to correctly classifying x_i; namely, MM(h, S, x_i) < 0 (refer to (10)).

(ii) From the discussion of Section 3, |m_S(x_i)| reflects the confidence with which S correctly (or incorrectly) classifies the example x_i. If |m_S(x_i)| is very small (e.g., equal to 0), namely, S correctly (or incorrectly) classifies x_i with low confidence, adding the classifier h into S may change the prediction of S on the example x_i, and therefore x_i's weight 1/(|m_S(x_i)| + θ) is large. On the other hand, if |m_S(x_i)| is very large (e.g., equal to 1), namely, S correctly (or incorrectly) classifies x_i with high confidence, adding the classifier h into S cannot change the prediction of S on the example x_i, and therefore x_i's weight is small.
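A small sketch of the two-class measure, following the equal-weight majority-vote margin in (9) and the sum in (10): each example is weighted by the inverse of its absolute margin plus a constant, so border examples dominate the score, and a candidate gains only on examples it classifies correctly. The function name, theta value, and toy data are ours.

```python
import numpy as np

# Sketch of the margin-based measure (MM) for the two-class case.

def mm(h_pred, ens_preds, y, theta=0.01):
    """Margin-based measure of candidate h w.r.t. subensemble predictions."""
    if len(ens_preds) == 0:
        m = np.zeros_like(y, dtype=float)        # empty subensemble: margin 0
    else:
        m = y * np.mean(ens_preds, axis=0)       # equal-weight vote margin
    return np.sum(y * h_pred / (np.abs(m) + theta))

y = np.array([1, -1])
ens = np.array([[1, -1], [1, 1]])     # current margins: [1.0, 0.0]
h = np.array([1, -1])                 # candidate correct on both examples
score = mm(h, ens, y, theta=0.5)
```

Note how the border example (margin 0) contributes weight 1/0.5 = 2, while the confidently classified one contributes only 1/1.5, which is exactly the intended focus on low-margin examples.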
The time complexity of calculating (10) or (16) is O(|S| · N_p), which can be reduced to O(N_p) by incrementally updating the margins of the examples each time a classifier is added to (or removed from) the subensemble, where N_p is the size of the pruning set. Therefore, the time complexity of ensemble pruning via the directed hill climbing strategy based on the proposed measure is not more than O(T^2 · N_p), where T is the size of the original ensemble learned from the training set.
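The constant-time-per-example update behind this claim can be sketched as follows: since each margin is a running average of member contributions, admitting a classifier into a subensemble of size k corrects every margin in place instead of recomputing it from all k + 1 members. Names are ours.

```python
import numpy as np

# Incremental margin update for a two-class majority-voted subensemble.

def update_margins(m, k, h_pred, y):
    """Margins of S (size k) -> margins of S + {h}, labels in {-1, +1}."""
    return (k * m + y * h_pred) / (k + 1)

y = np.array([1, -1])
m = np.array([1.0, 0.0])              # margins under a size-2 subensemble
m2 = update_margins(m, 2, np.array([1, 1]), y)
```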
In this way, the proposed measure focuses more on correct classifiers and the examples lying near the boundaries, which coincides with the conclusions in Section 3.
4.2. Measure for Multiclass Problem
Let each member h_t of H map an example x_i to a label h_t(x_i) ∈ {1, 2, ..., L}, and let

v_j(x_i) = Σ_{h_t ∈ S} I(h_t(x_i) = j), j = 1, 2, ..., L,  (14)

where v_j(x_i) is the number of votes on the jth label of the example x_i under an ensemble S combined by majority voting; the largest of these counts is the number of majority votes on x_i, the next largest is the second largest number of votes on x_i, and v_{y_i}(x_i) is the number of votes on the true label y_i. For the multiclass case, the margin of an example is defined as the difference between the number of correct votes and the maximum number of votes received by any incorrect label, normalized by the ensemble size; namely,

m_S(x_i) = (v_{y_i}(x_i) − max_{j ≠ y_i} v_j(x_i)) / |S|.  (15)

Combining (11) and (15) results in

MM(h, S, D_p) = Σ_{x_i ∈ D_tt} 1/(m_S(x_i) + θ) − Σ_{x_i ∈ D_ft} 1/(m_S(x_i) + θ) + Σ_{x_i ∈ D_tf} 1/(θ − m_S(x_i)) − Σ_{x_i ∈ D_ff} 1/(θ − m_S(x_i)),  (16)

where D_tt (or D_ft) is the set of examples that are correctly (or incorrectly) classified by the current classifier h and correctly classified by the ensemble S; similarly, D_tf (or D_ff) is the set of examples that are correctly (or incorrectly) classified by h and incorrectly classified by S. Formally,

D_tt = {x_i ∈ D_p : h(x_i) = y_i, S(x_i) = y_i},
D_ft = {x_i ∈ D_p : h(x_i) ≠ y_i, S(x_i) = y_i},
D_tf = {x_i ∈ D_p : h(x_i) = y_i, S(x_i) ≠ y_i},
D_ff = {x_i ∈ D_p : h(x_i) ≠ y_i, S(x_i) ≠ y_i}.  (17)

In this way, MM focuses more on correct classifiers and on the examples lying near the boundary, which coincides with the conclusions in Section 3.
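The multiclass margin of (15) — true-label votes minus the strongest wrong label, scaled by the ensemble size — can be sketched as follows; names and toy data are ours.

```python
import numpy as np

# Multiclass margin under majority voting, per (15)-style vote counting.

def multiclass_margin(preds, y_true, n_labels):
    """preds: (T, m) predicted labels; returns per-example margins in [-1, 1]."""
    T, n_examples = preds.shape
    out = np.empty(n_examples)
    for i in range(n_examples):
        votes = np.bincount(preds[:, i], minlength=n_labels)
        correct = votes[y_true[i]]
        votes[y_true[i]] = -1                   # mask the true label
        out[i] = (correct - votes.max()) / T    # minus strongest wrong label
    return out

preds = np.array([[0, 2], [0, 1], [1, 1], [0, 1]])   # 4 classifiers, 2 examples
m = multiclass_margin(preds, np.array([0, 2]), 3)
```

The first example is correctly classified by a 3-to-1 vote (margin 0.5); the second is wrongly classified (margin −0.5).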
Unlike other measures where each classifier is independently evaluated, the proposed margin-based measure uses a more global evaluation. Indeed, this criterion involves instance margin values that result from a majority voting of the whole ensemble. Thus, the proposed measure is not only based on individual properties of ensemble members (e.g., accuracy of individual learners). It also takes into account some form of complementarity of classifiers.
From (11), our margin-based measure considers both the correctness of the predictions of the current classifier and the confidence of the prediction of the ensemble. Therefore, this measure deliberately favors classifiers with a better performance in classifying low-margin examples. Thus, it is a boosting-like strategy which aims to increase the performance on low-margin instances. Our selection strategy will therefore lead to a subset of classifiers with a potentially improved capability to classify complex data in general and border data in particular. Consequently, it will induce the selection of a subset of learners that are well suited to handling minority classes.
From (16), our measure also accounts for the diversity between the candidate classifier and the ensemble members. The measure therefore considers not only the correctness of classifiers but also the diversity among ensemble members, and using it to prune an ensemble leads to significantly better accuracy results.
5. Experiments
This section first introduces the experimental setup and the characteristics of the data sets used in this paper and then reports the comparison of measures for guiding ensemble pruning.
5.1. Data Sets and Experimental Setup
We randomly selected 18 data sets from the UCI repository. Each data set was randomly divided into three subsets of equal size, used as the training set, the pruning set, and the testing set. Since the three roles can be assigned to the three subsets in 3! = 6 different ways, we conducted six trials for each division. We repeated the random division 50 times and thus conducted a total of 300 trials on each data set. The details of these data sets are summarized in Table 1, where #insts, #Attrs, and #Cls are the size, the number of attributes, and the number of classes of the corresponding data sets, respectively.
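One way to read the six trials per division is as the 3! = 6 possible assignments of the three equal-size subsets to the roles of training, pruning, and testing; a sketch with illustrative fold names:

```python
from itertools import permutations

# Each permutation of the three folds yields one (train, prune, test) trial.
folds = ["A", "B", "C"]
trials = [dict(zip(("train", "prune", "test"), p)) for p in permutations(folds)]
```

Repeating the random three-way division 50 times then gives 6 × 50 = 300 trials per data set.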
We evaluated the performance of the proposed margin-based measure (MM) using forward ensemble selection, with complementariness (COM), concurrency (CON), and uncertainty weighted accuracy (UWA) as the compared measures. In each trial, a bagging ensemble with 200 base classifiers was trained, where the base classifier was J48, a Java implementation of C4.5 from Weka. For simplicity, we use MM, COM, CON, and UWA to denote the corresponding pruning algorithms supervised by these measures.
5.2. Accuracy Performance versus the Size of Subensemble
The goal of this experiment was to evaluate the performance of MM by comparing it with UWA, CON, and COM. The experimental results on the 18 tested data sets can be classified into three cases: MM outperforms UWA, CON, and COM; MM performs comparably to one or more of them and outperforms the others; and MM is outperformed by one or more of them. The first case contains 13 data sets, the second case contains two data sets, and the last case contains three. Figures 3, 4, and 5 show representative results from the three cases.
Figure 3 reports the accuracy curves of the four compared measures for six representative data sets that fall into the first case. Results in the figure are reported as average accuracy curves with respect to the number of classifiers, where the horizontal axis is the size of the subensemble, growing gradually from 5 to 200 with step 1, and the vertical axis is the average accuracy over 300 trials. For clarity, the standard deviations are not shown in the figure. The accuracy curves for the data sets “audiology,” “autos,” “car,” “glass,” “segment,” and “wine” are reported in Figures 3(a), 3(b), 3(c), 3(d), 3(e), and 3(f), respectively. Figure 3(a) shows that, with the increase of the number of aggregated classifiers, the accuracy curves of subensembles selected by MM, UWA, CON, and COM increase rapidly, reach maximum accuracy in the intermediate steps of aggregation (higher than the accuracy of the whole original ensemble), and then drop until the accuracy is the same as that of the whole ensemble. The remaining five data sets, “autos,” “car,” “glass,” “segment,” and “wine” (shown in Figures 3(b), 3(c), 3(d), 3(e), and 3(f), respectively), have accuracy curves similar to “audiology.”
5.3. Summary of Experimental Results
Table 2 summarizes the accuracy over the 300 trials for each data set, where the value in each set of parentheses is the rank of the compared method and the last row is the average rank. The rank of an algorithm is defined as follows: on a given data set, the best performing algorithm gets the rank of 1.0, the second best gets the rank of 2.0, and so on. In the case of ties, average ranks are assigned [36, 37]. The experimental results in Section 5.2 empirically show that MM, UWA, CON, and COM generally reach maximum accuracy when the size of the subensemble is between 20 and 40 (using forward selection for ensemble pruning). Therefore, the subensembles formed by MM with 30 of the original ensemble members are compared with subensembles of the same size formed by UWA, CON, and COM.
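The tie-handling ranking scheme used in the tables can be sketched as follows; the function name and scores are illustrative.

```python
# Rank higher-is-better scores: best gets 1.0, and tied methods share the
# average of the rank positions they occupy.

def ranks(accuracies):
    order = sorted(range(len(accuracies)), key=lambda i: -accuracies[i])
    out = [0.0] * len(accuracies)
    pos = 0
    while pos < len(order):
        end = pos
        # Extend the block while successive scores are tied.
        while end + 1 < len(order) and accuracies[order[end + 1]] == accuracies[order[pos]]:
            end += 1
        avg = (pos + 1 + end + 1) / 2          # average of the tied positions
        for k in range(pos, end + 1):
            out[order[k]] = avg
        pos = end + 1
    return out

r = ranks([0.85, 0.90, 0.85, 0.80])            # two methods tied at 0.85
```

Averaging these per-data-set ranks over all 18 data sets yields the average ranks reported in the last rows of Tables 2 and 3.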
As shown in Table 2, MM outperforms bagging on all the 18 data sets, which indicates that MM efficiently performs ensemble pruning by achieving better predictive accuracies with small subensembles. Table 2 also shows that MM ranks first on 14 out of the 18 data sets and its average rank is 1.33, followed by CON with an average rank of 2.75, COM with an average rank of 2.91, UWA (3.17), and bagging (4.83).
As mentioned above, backward elimination is another directed hill climbing strategy for ensemble pruning. From the experimental results, we observe that the performance of the backward elimination strategy is similar to that of the forward selection strategy, and therefore we only present the mean accuracy and ranks of MM, UWA, CON, and COM with 30 base classifiers, and of bagging (the original ensemble). The corresponding results are illustrated in Table 3. From the table, MM ranks first on 12 data sets and its average rank is 1.42, followed by CON with an average rank of 2.69, COM (2.83), UWA (3.22), and bagging (4.78).
Table 4 shows a summary of the comparisons among the methods, where pruning methods suffixed with “-F” use forward selection to prune the ensemble and, similarly, methods suffixed with “-B” use backward elimination. The size of each subensemble selected by these ensemble pruning methods is 30. Each entry displays the number of times the method of the corresponding column has a better result than the method of the corresponding row. The number in parentheses shows how many of these differences have been statistically significant using pairwise t-tests at the 95% significance level. For example, MM-F has been better than CON-F with pruned trees in 16 of the 18 comparisons and worse in 2; in 14 cases the difference in favor of MM-F has been statistically significant; hence, the value in row 3, column 1 of the table is 16 (14).
Table 5 shows the ranking of the compared methods according to the significant differences between their performances using pairwise t-tests at the 95% significance level. Here, we use all pairwise comparisons as summarized in Table 4. For example, the sum of the numbers in the brackets in the column corresponding to MM-F in Table 4 is 94, and the sum of the numbers in the brackets in the row corresponding to MM-F is 10. These are used in Table 5 to calculate the nondominance ranking of MM-F (94 − 10 = 84).
Tables 4 and 5 demonstrate the significant advantage of MM compared with the best benchmark classifier ensemble methods: CON, COM, and bagging. Besides, compared with ensemble pruning methods using backward elimination, the ones with forward selection show better performance.
6. Conclusions
In this paper, we analysed the importance of individual classifiers with respect to the whole ensemble using margin theory and concluded that ensemble pruning via the directed hill climbing strategy should focus more on correct classifiers and on the examples lying near the boundary. Based on this general principle, we proposed a criterion called the margin-based measure to explicitly evaluate the importance of individual classifiers. Experimental comparisons on 18 UCI data sets showed that the proposed measure outperforms other state-of-the-art measures and the original ensemble.
The proposed metric can be applied not only to ensemble pruning based on directed hill climbing search but also to other ensemble pruning methods. Therefore, more experiments will be conducted in the future to evaluate the performance of the proposed measure.
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
This work is in part supported by the National Natural Science Foundation of China (Grants no. 61472370, no. 61202207, no. 61501393, no. 61402393, and no. 61572417), Project of Science and Technology Department of Henan Province (no. 162102210310, no. 152102210129, and no. 142400410486), and Science and Technology Research key Project of the Education Department of Henan Province (Grant no. 15A520026).
References
- X. Wu, V. Kumar, Q. J. Ross et al., “Top 10 algorithms in data mining,” Knowledge and Information Systems, vol. 14, no. 1, pp. 1–37, 2008.
- T. G. Dietterich, “Ensemble methods in machine learning,” in Proceedings of the 1st International Workshop on Multiple Classifier Systems, pp. 1–15, Cagliari, Italy, June 2000.
- L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
- Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, part 2, pp. 119–139, 1997.
- L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
- F. T. Liu, K. M. Ting, Y. Yu, and Z.-H. Zhou, “Spectrum of variable-random trees,” Journal of Artificial Intelligence Research, vol. 32, no. 1, pp. 355–384, 2008.
- J. J. Rodríguez, L. I. Kuncheva, and C. J. Alonso, “Rotation forest: a new classifier ensemble method,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1619–1630, 2006.
- D. Zhang, S. Chen, Z. Zhou, and Q. Yang, “Constraint projections for ensemble learning,” in Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI '08), pp. 758–763, Chicago, Ill, USA, July 2008.
- Y. Liu and X. Yao, “Ensemble learning via negative correlation,” Neural Networks, vol. 12, no. 10, pp. 1399–1404, 1999.
- L. Guo, Margin framework for ensemble classifiers. Application to remote sensing data [Ph.D. thesis], University of Bordeaux 3, Pessac, France, 2011.
- Z. Ma, Q. Dai, and N. Liu, “Several novel evaluation measures for rank-based ensemble pruning with applications to time series prediction,” Expert Systems with Applications, vol. 42, no. 1, pp. 280–292, 2015.
- W. M. Zhi, H. P. Guo, M. Fan, and Y. D. Ye, “Instance-based ensemble pruning for imbalanced learning,” Intelligent Data Analysis, vol. 19, no. 4, pp. 779–794, 2015.
- Z.-H. Zhou, J. Wu, and W. Tang, “Ensembling neural networks: many could be better than all,” Artificial Intelligence, vol. 137, no. 1-2, pp. 239–263, 2002.
- G. Martínez-Muñoz and A. Suárez, “Aggregation ordering in bagging,” in Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, pp. 258–263, Acta Press, Calgary, Canada, 2004.
- R. E. Banfield, L. O. Hall, K. W. Bowyer, and W. P. Kegelmeyer, “Ensemble diversity measures and their application to thinning,” Information Fusion, vol. 6, no. 1, pp. 49–62, 2005.
- I. Partalas, G. Tsoumakas, and I. P. Vlahavas, “Focused ensemble selection: a diversity-based method for greedy ensemble selection,” in Proceedings of the 18th European Conference on Artificial Intelligence, pp. 117–121, Patras, Greece, July 2008.
- G. Martinez-Muñoz, D. Hernández-Lobato, and A. Suarez, “An analysis of ensemble pruning techniques based on ordered aggregation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 245–259, 2009.
- I. Partalas, G. Tsoumakas, and I. P. Vlahavas, “An ensemble uncertainty aware measure for directed hill climbing ensemble pruning,” Machine Learning, vol. 81, no. 3, pp. 257–282, 2010.
- Z. Lu, X. D. Wu, X. Q. Zhu, and J. Bongard, “Ensemble pruning via individual contribution ordering,” in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10), pp. 871–880, Washington, DC, USA, July 2010.
- L. Guo and S. Boukir, “Margin-based ordered aggregation for ensemble pruning,” Pattern Recognition Letters, vol. 34, no. 6, pp. 603–609, 2013.
- C. Qian, Y. Yu, and Z. H. Zhou, “Pareto ensemble pruning,” in Proceedings of the 29th AAAI Conference on Artificial Intelligence, pp. 2935–2941, Austin, Tex, USA, January 2015.
- B. Krawczyk and M. Woźniak, “Untrained weighted classifier combination with embedded ensemble pruning,” Neurocomputing, vol. 196, pp. 14–22, 2016.
- Y. Zhang, S. Burer, and W. N. Street, “Ensemble pruning via semi-definite programming,” Journal of Machine Learning Research, vol. 7, pp. 1315–1338, 2006.
- W. M. Zhi, H. P. Guo, and M. Fan, “Energy-based metric for ensemble selection,” in Proceedings of the 14th Asia-Pacific Web Conference, pp. 306–317, Kunming, China, April 2012.
- W. Fan, F. Chu, H. Wang, and P. S. Yu, “Pruning and dynamic scheduling of cost-sensitive ensembles,” in Proceedings of the 18th National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence, Edmonton, Canada, August 2002.
- R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes, “Ensemble selection from libraries of models,” in Proceedings of the 21st International Conference on Machine Learning (ICML '04), pp. 137–144, Banff, Canada, July 2004.
- Q. Dai and M. L. Li, “Introducing randomness into greedy ensemble pruning algorithms,” Applied Intelligence, vol. 42, no. 3, pp. 406–429, 2015.
- R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, “Boosting the margin: a new explanation for the effectiveness of voting methods,” The Annals of Statistics, vol. 26, no. 5, pp. 1651–1686, 1998.
- I. Partalas, G. Tsoumakas, and I. Vlahavas, “A study on greedy algorithms for ensemble pruning,” Tech. Rep. TR-LPIS-360-12, LPIS, Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece, 2012.
- D. D. Margineantu and T. G. Dietterich, “Pruning adaptive boosting,” in Proceedings of the 14th International Conference on Machine Learning, pp. 211–218, Nashville, Tenn, USA, September 1997.
- Q. Dai, T. Zhang, and N. Liu, “A new reverse reduce-error ensemble pruning algorithm,” Applied Soft Computing, vol. 28, pp. 237–249, 2015.
- W. Gao and Z.-H. Zhou, “On the doubt about margin explanation of boosting,” Artificial Intelligence, vol. 203, pp. 1–18, 2013.
- A. Asuncion and D. Newman, “UCI Machine Learning Repository,” 2007.
- J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Francisco, Calif, USA, 1993.
- I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, San Francisco, Calif, USA, 2005.
- J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
- S. García and F. Herrera, “An extension on ‘statistical comparisons of classifiers over multiple data sets’ for all pairwise comparisons,” Journal of Machine Learning Research, vol. 9, pp. 2677–2694, 2008.
Copyright © 2016 Huaping Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.