Research Article

Credit Risk Prediction Using Fuzzy Immune Learning

Table 5

Comparing precision, recall, and F-measure of IFAIS and selected classifiers.

Classifier            Measure      Australian        German
                                   pos.     neg.     pos.     neg.

DMNBtext              Precision    0.82     0.83     0.50     0.71
                      Recall       0.87     0.77     0.09     0.96
                      F-measure    0.85     0.80     0.16     0.82

DTNB                  Precision    0.87     0.84     0.77     0.53
                      Recall       0.87     0.84     0.41     0.84
                      F-measure    0.87     0.84     0.47     0.81

IFAIS                 Precision    0.93     0.79     0.64     0.80
                      Recall       0.80     0.93     0.47     0.89
                      F-measure    0.86     0.85     0.54     0.84

LibSVM                Precision    0.93     0.79     0.78     0.71
                      Recall       0.80     0.93     0.05     0.99
                      F-measure    0.86     0.85     0.09     0.83

LWL                   Precision    0.93     0.79     –        0.70
                      Recall       0.80     0.93     0.00     1.00
                      F-measure    0.86     0.85     –        0.82

Kstar                 Precision    0.76     0.85     0.50     0.76
                      Recall       0.91     0.64     0.39     0.83
                      F-measure    0.83     0.73     0.44     0.79

PART                  Precision    0.85     0.82     0.50     0.78
                      Recall       0.85     0.81     0.49     0.79
                      F-measure    0.85     0.81     0.50     0.79

SMO with RBFKernel    Precision    0.93     0.79     –        0.70
                      Recall       0.80     0.93     0.00     1.00
                      F-measure    0.86     0.85     –        0.82

The results were obtained with the Weka machine learning software. Classifiers are listed in alphabetical order. The best result in each column is shown in bold and the second best in italic.
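Assuming the F-measure reported in Table 5 is the standard balanced F1 score output by Weka, i.e. the harmonic mean of precision and recall, the tabulated values can be reproduced directly from the precision and recall columns. A minimal sketch under that assumption:

def f_measure(precision: float, recall: float) -> float:
    """Balanced F1 score: the harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0  # degenerate case (no positive predictions or no positives recalled)
    return 2 * precision * recall / (precision + recall)

# Example check: IFAIS on the Australian data set (Table 5).
print(round(f_measure(0.93, 0.80), 2))  # positive class -> 0.86
print(round(f_measure(0.79, 0.93), 2))  # negative class -> 0.85

Because the harmonic mean heavily penalizes an imbalance between precision and recall, LibSVM's 0.78 precision but 0.05 recall on the German positive class yields an F-measure of only 0.09.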