Credit Risk Prediction Using Fuzzy Immune Learning
Table 5
Comparing precision, recall, and F-measure of IFAIS and selected classifiers.
| Classifier | Metric | Australian pos. | Australian neg. | German pos. | German neg. |
|---|---|---|---|---|---|
| DMNBtext | Precision | 0.82 | 0.83 | 0.50 | 0.71 |
| | Recall | 0.87 | 0.77 | 0.09 | 0.96 |
| | F-measure | 0.85 | 0.80 | 0.16 | 0.82 |
| DTNB | Precision | 0.87 | 0.84 | 0.77 | 0.53 |
| | Recall | 0.87 | 0.84 | 0.41 | 0.84 |
| | F-measure | 0.87 | 0.84 | 0.47 | 0.81 |
| IFAIS | Precision | 0.93 | 0.79 | 0.64 | 0.80 |
| | Recall | 0.80 | 0.93 | 0.47 | 0.89 |
| | F-measure | 0.86 | 0.85 | 0.54 | 0.84 |
| LibSVM | Precision | 0.93 | 0.79 | 0.78 | 0.71 |
| | Recall | 0.80 | 0.93 | 0.05 | 0.99 |
| | F-measure | 0.86 | 0.85 | 0.09 | 0.83 |
| LWL | Precision | 0.93 | 0.79 | ∞ | 0.70 |
| | Recall | 0.80 | 0.93 | 0.00 | 1.00 |
| | F-measure | 0.86 | 0.85 | ∞ | 0.82 |
| Kstar | Precision | 0.76 | 0.85 | 0.50 | 0.76 |
| | Recall | 0.91 | 0.64 | 0.39 | 0.83 |
| | F-measure | 0.83 | 0.73 | 0.44 | 0.79 |
| PART | Precision | 0.85 | 0.82 | 0.50 | 0.78 |
| | Recall | 0.85 | 0.81 | 0.49 | 0.79 |
| | F-measure | 0.85 | 0.81 | 0.50 | 0.79 |
| SMO with RBFKernel | Precision | 0.93 | 0.79 | ∞ | 0.70 |
| | Recall | 0.80 | 0.93 | 0.00 | 1.00 |
| | F-measure | 0.86 | 0.85 | ∞ | 0.82 |
The results were obtained with the Weka machine learning software. Classifiers are listed alphabetically. The best result in each column is shown in bold and the second best in italic. The ∞ entries mark undefined precision on the German positive class: LWL and SMO with RBFKernel predict no positive instances (recall 0.00 on the positive class, 1.00 on the negative class), so precision is 0/0.
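As a quick consistency check, each F-measure in Table 5 is the harmonic mean of the corresponding precision and recall. A minimal sketch in Python (the function name is illustrative, not from the paper):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        # Undefined (0/0), as in the LWL / SMO rows for the German positive class.
        return float("nan")
    return 2 * precision * recall / (precision + recall)

# IFAIS, Australian data set, positive class (Table 5):
print(round(f_measure(0.93, 0.80), 2))  # → 0.86

# IFAIS, German data set, positive class:
print(round(f_measure(0.64, 0.47), 2))  # → 0.54
```

Applying this to the IFAIS rows reproduces the tabulated F-measures to two decimal places.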