Research Article

Global Optimization Ensemble Model for Classification Methods

Table 2

(a) Parameter optimization using grid search. (b) Parameter configurations for the classifiers.
(a)

Parameter | Operator | Grid range | Combination | Scale | Optimal value

Population size | GA-layer 1 | 2–100 | 2, 3, 6, 11, 18, 27, 37, 50, 65, 81, 100 | Quadratic | 6
Maximum number of generations | GA-layer 1 | 1–50 | 1, 6, 11, 16, 21, 26, 30, 35, 40, 45, 50 | Linear | 16
Number of iterations | CV-layer 2 | 2–50 | 2, 4, 6, 10, 14, 19, 26, 33, 41, 50 | Quadratic | 10
Sampling size | Bagging-layer 3 | 0–1.0 | 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1 | Linear | 0.6
Number of iterations | Bagging-layer 3 | 1–100 | 1, 2, 5, 10, 17, 26, 37, 50, 64, 81, 100 | Quadratic | 10
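The "Combination" column above can be reproduced by spacing a fixed number of points across the grid range either linearly or quadratically (a quadratic scale concentrates points near the lower bound) and rounding half-up. This is a sketch, not the paper's implementation; the formula lo + (hi − lo)·(i/(n−1))^k with round-half-up is an assumption, though it does reproduce the 2–100 and 1–100 quadratic rows and the 1–50 linear row exactly.

```python
import math

def grid_points(lo, hi, n, scale="linear"):
    """Space n points across [lo, hi]; 'quadratic' concentrates points near lo."""
    exp = 2 if scale == "quadratic" else 1
    pts = [lo + (hi - lo) * (i / (n - 1)) ** exp for i in range(n)]
    # For integer-valued ranges, round half-up to whole grid values.
    if isinstance(lo, int) and isinstance(hi, int):
        pts = [math.floor(p + 0.5) for p in pts]
    return pts

# Quadratic grid for population size (GA-layer 1), 2-100 with 11 points:
print(grid_points(2, 100, 11, "quadratic"))
# [2, 3, 6, 11, 18, 27, 37, 50, 65, 81, 100]

# Linear grid for maximum number of generations, 1-50 with 11 points:
print(grid_points(1, 50, 11, "linear"))
# [1, 6, 11, 16, 21, 26, 30, 35, 40, 45, 50]
```

The CV-layer row (2–50, 10 points) does not fall out of this exact formula, so that row presumably uses a slightly different spacing or rounding rule.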

(b)

Operator name / Parameter configuration

ID3
  Criterion: information_gain
  Minimal size for split: 4
  Minimal leaf size: 2
  Minimal gain: 0.1

Decision tree
  Criterion: information_gain
  Minimal size for split: 4
  Minimal leaf size: 2
  Minimal gain: 0.1
  Maximal depth: 20
  Confidence: 0.5

Random forest
  Number of trees: 10
  Criterion: information_gain
  Minimal leaf size: 2
  Minimal gain: 0.1
  Maximal depth: 20
  Confidence: 0.5

Rule induction
  Criterion: information_gain
  Sample ratio: 0.7
  Pureness: 0.6
  Minimal prune benefit: 0.6

K-NN
  K nearest neighbors: 11
  Weighted vote: true
  Measure type: nominal measures
  Nominal measure: Dice similarity

Naïve Bayes
  Laplace correction: true

W-AODE
  Frequency for super parents: 1.0

W-PART
  Confidence threshold: 0.5
  Minimum objects per leaf: 2.0

W-J48
  Confidence threshold: 0.5
  Minimum objects per leaf: 2.0
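For readers who want to reproduce these settings outside the original tool, the tree-based configurations map roughly onto scikit-learn estimators. This is a hedged sketch under my own assumptions, not the paper's setup: `criterion="entropy"` stands in for information_gain, `min_impurity_decrease` is only an approximate analogue of "minimal gain", and the confidence/pruning parameters have no direct scikit-learn equivalent.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier

# Decision-tree settings from Table 2(b), translated to scikit-learn terms.
tree = DecisionTreeClassifier(
    criterion="entropy",       # ~ information_gain
    min_samples_split=4,       # minimal size for split
    min_samples_leaf=2,        # minimal leaf size
    min_impurity_decrease=0.1, # rough stand-in for minimal gain
    max_depth=20,              # maximal depth
)

# Random-forest settings: 10 trees sharing the same split criteria.
forest = RandomForestClassifier(
    n_estimators=10,
    criterion="entropy",
    min_samples_leaf=2,
    max_depth=20,
)

# Bagging layer from Table 2(a): 10 iterations, sampling size 0.6.
# (Base estimator passed positionally so this works across sklearn versions.)
bagged = BaggingClassifier(tree, n_estimators=10, max_samples=0.6)
```

Each of these objects is then fit with `.fit(X, y)` as usual; the bagging wrapper draws 60% of the training set per iteration, matching the optimal sampling size found by the grid search.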