Research Article

Two-Stage Bagging Pruning for Reducing the Ensemble Size and Improving the Classification Performance

Table 2

Comparison of classification performance on the 28 datasets among the single classifier DT, bagging with DT as the base learner, and the corresponding proposed pruning methods AP, DP, "AP+DP", and "DP+AP". The values in parentheses are the optimized parameters (ta for AP, td for DP, and ta, td for "AP+DP" and "DP+AP") at which each pruning method achieved its best classification performance.

Dataset    DT     Bagging  AP          "AP+DP"       DP          "DP+AP"
Aba        49.44  53.96    54.61 (1)   55.09 (7,5)   54.68 (7)   54.68 (2,7)
Adult      81.06  85.32    85.35 (1)   85.35 (1,10)  85.35 (6)   85.35 (1,6)
Aus        80.00  87.10    87.39 (2)   87.54 (8,4)   87.10 (10)  87.10 (0,10)
Bcw        93.99  96.14    96.28 (7)   96.42 (3,7)   96.28 (9)   96.42 (3,8)
Bld        62.32  71.30    71.88 (7)   73.33 (3,2)   73.62 (3)   73.62 (0,3)
Cmc        47.32  51.12    51.60 (8)   52.48 (8,5)   52.68 (1)   52.89 (5,1)
Col        82.88  86.96    86.96 (0)   87.23 (2,9)   87.50 (7)   87.50 (1,7)
Cre        77.68  85.65    86.23 (5)   86.67 (2,5)   86.09 (3)   86.81 (5,9)
Der        94.26  96.45    96.45 (3)   96.45 (5,7)   96.72 (1)   96.72 (5,1)
Ger        67.60  76.90    77.30 (2)   77.30 (2,10)  77.00 (8)   77.10 (3,9)
Gla        68.22  73.36    73.36 (9)   73.83 (8,8)   74.30 (4)   74.30 (5,4)
Hea        72.59  81.48    82.22 (2)   82.22 (2,10)  81.48 (10)  81.48 (9,4)
Hep        77.42  81.29    82.58 (9)   83.87 (2,7)   83.23 (2)   83.87 (5,6)
Ion        89.17  94.02    94.02 (8)   94.59 (2,3)   94.30 (3)   94.59 (2,3)
kr-vs-kp   99.28  99.53    99.53 (6)   99.62 (7,6)   99.53 (10)  99.56 (9,2)
Mam        75.23  77.11    77.52 (5)   77.94 (7,3)   77.21 (3)   78.46 (8,3)
Pid        68.79  76.20    76.46 (3)   76.72 (3,2)   76.59 (5)   76.72 (2,1)
Spe        71.54  80.90    81.27 (9)   82.40 (8,3)   82.77 (1)   82.77 (0,1)
Tel        81.65  87.88    87.94 (1)   87.94 (4,8)   87.94 (7)   87.97 (2,7)
Veh        68.68  74.00    74.11 (6)   75.89 (4,2)   76.24 (2)   76.24 (3,2)
Vot        93.56  94.71    95.40 (3)   95.86 (6,5)   95.86 (3)   95.86 (1,3)
Vow        79.90  90.30    90.40 (2)   90.81 (1,9)   90.61 (7)   90.61 (0,7)
Yea        48.58  59.16    60.44 (6)   60.78 (5,8)   60.11 (7)   60.11 (9,7)
Spambase   91.57  94.57    94.57 (9)   94.61 (9,7)   94.61 (8)   94.63 (2,6)
Tictacto   88.31  96.56    96.56 (3)   96.76 (2,6)   97.18 (6)   97.18 (1,6)
Wdbc       92.62  96.49    97.36 (8)   97.36 (8,8)   96.66 (8)   96.84 (9,9)
Wpbc       61.11  73.74    75.25 (6)   76.26 (7,2)   74.24 (1)   76.26 (8,5)
Spect      72.66  79.78    80.15 (9)   81.27 (8,4)   80.52 (3)   81.65 (3,4)
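
For orientation, the following minimal sketch reproduces the shape of the comparison behind Table 2 on a single public dataset: a lone decision tree, a bagged ensemble of 100 decision trees, and a crude accuracy-based pruning of that ensemble. It is not the authors' implementation: the dataset (scikit-learn's breast cancer data), the splits, the ensemble size, the pruning rule (keep the n_keep trees with the highest accuracy on a held-out validation set, as a stand-in for AP), and the pruning level n_keep are all illustrative assumptions, and DP and the two-stage combinations "AP+DP" and "DP+AP" are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset and splits: training / pruning-validation / test.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.3,
                                              random_state=0)

# Single decision tree (the DT column).
dt = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(f"DT      : {accuracy_score(y_te, dt.predict(X_te)):.2%}")

# Bagging with decision trees as base learners (the Bagging column);
# BaggingClassifier uses a decision tree as its default base estimator.
bag = BaggingClassifier(n_estimators=100, random_state=0).fit(X_fit, y_fit)
print(f"Bagging : {accuracy_score(y_te, bag.predict(X_te)):.2%}")

# Accuracy-based pruning sketch: score every bagged tree on the held-out
# validation set and keep only the n_keep best ones. The base trees were
# trained on label-encoded targets, so encode the labels before scoring.
y_val_enc = np.searchsorted(bag.classes_, y_val)
y_te_enc = np.searchsorted(bag.classes_, y_te)

pairs = list(zip(bag.estimators_, bag.estimators_features_))
scores = [accuracy_score(y_val_enc, t.predict(X_val[:, f])) for t, f in pairs]
n_keep = 30  # assumed pruning level; plays the role that the parameter ta tunes
kept = [pairs[i] for i in np.argsort(scores)[::-1][:n_keep]]

# Majority vote over the retained trees only (the pruned-ensemble column).
preds = np.stack([t.predict(X_te[:, f]) for t, f in kept]).astype(int)
maj = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
print(f"Pruned  : {accuracy_score(y_te_enc, maj):.2%}")
```

In the experiments summarized in Table 2, the pruning level is not fixed as in this sketch but optimized, which is what the parenthesized values of ta and td report for each dataset.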