Nonlinear Dynamics in Financial Systems: Advances and Perspectives
Research Article | Open Access
Credit Risk Evaluation with a Least Squares Fuzzy Support Vector Machines Classifier
Abstract
A least squares fuzzy support vector machine (LSFSVM) model that integrates the advantages of the fuzzy support vector machine (FSVM) and the least squares method is proposed for credit risk evaluation. In the proposed LSFSVM model, the concepts of fuzzy sets are incorporated to add generalization capability and outlier insensitivity, while the least squares method is adopted to reduce the computational complexity. For illustrative purposes, a real-world credit risk dataset is used to test the effectiveness and robustness of the proposed LSFSVM methodology.
1. Introduction
Credit risk evaluation has been a major area of focus for the financial and banking industries due to recent financial crises as well as the Basel III regulations. Since the seminal work of Altman [1] was published, many different techniques, such as discriminant analysis [1], logit analysis [2], probit analysis [3], linear programming [4], integer programming [5], k-nearest neighbor (KNN) classifiers [6], and classification trees [7], have been widely applied to credit risk assessment tasks. With the advance of modern computing technology, artificial intelligence (AI) tools such as artificial neural networks (ANNs) [8, 9], genetic algorithms (GAs) [10, 11], self-organizing learning [12], support vector machines (SVMs) [13–18], and some variants of SVMs [19–22] have also been employed for credit risk evaluation. Empirical results have revealed that these AI techniques offer advantages over traditional statistical models and optimization techniques in credit risk evaluation.
From this literature, it is clear that almost all classification methods can be used for credit risk assessment. However, some hybrid and combined (or ensemble) classifiers, which integrate two or more single classification methods, have shown higher predictive power than individual methods, and research on combined or ensemble classifiers is currently flourishing in credit risk evaluation. Recent examples are the neural discriminant technique [23], neuro-fuzzy systems [16, 24], fuzzy SVM [25], rough-set-based SVM [26], evolving neural networks [27], neural network ensembles [28, 29], support vector machine based multi-agent ensemble learning [30], and an AI-based fuzzy group decision making (GDM) model [31]. Two recent surveys [32, 33] and one monograph [34] cover credit risk analysis in more detail.
In this study, a new credit classification technique, the least squares fuzzy SVM (LSFSVM), is proposed to discriminate good customers from bad ones in customer credit evaluation. In the existing studies, the fuzzy SVM (FSVM) proposed by Lin and Wang [35] has been shown to be suitable for customer credit assessment [34]. The main reason is that in credit risk evaluation one usually cannot label a customer as an absolutely good one who is sure to repay on time or an absolutely bad one who will certainly default, whereas the FSVM treats every sample as belonging to both the good and bad classes to some extent. This enables the FSVM to offer a higher generalization capability without losing the merit of insensitivity to outliers. Although the FSVM has good generalization capability and outlier insensitivity, its computational complexity makes its use rather difficult because the final solution of the FSVM is derived from a quadratic programming (QP) problem [35]. To reduce this computational complexity, this study applies the least squares method to the FSVM and formulates a new classification method, the least squares FSVM (LSFSVM). In the proposed LSFSVM model, equality constraints are used instead of inequality constraints. As a result, the solution is obtained from a set of linear equations instead of the QP problem present in the classical FSVM approach [35], thereby reducing the computational complexity relative to the FSVM.
From the above description, the main advantages of the proposed LSFSVM can be summarized in two aspects. On the one hand, fuzzification can increase the generalization capability and improve the suitability of SVM because data uncertainties can be well treated by fuzzy memberships. On the other hand, the least squares method can reduce the computational complexity of FSVM because the solution of the least squares FSVM is obtained from a set of linear equations instead of a QP problem, thus increasing the computational speed of FSVM, which is attractive for solving fuzzy information engineering problems. In the existing literature, Tsujinishi and Abe [36] proposed a fuzzy LSSVM method to solve multi-class problems; regrettably, the performance of their fuzzy LSSVM model is inferior to that of fuzzy SVMs. By contrast, the LSFSVM model proposed in this paper achieves good performance on two-class problems owing to the above features.
The main motivation of this study is to formulate the least squares version of FSVM for binary classification problems and to compare its performance with some typical classification techniques in the area of credit risk evaluation. The rest of this study is organized as follows. Section 2 presents the formulation of the LSFSVM methodology. In Section 3, a real-world credit dataset is used to test the ability of the LSFSVM to classify different samples. Section 4 concludes the paper.
2. Methodology Formulation
In this section, a brief introduction of SVM classifiers [37] is first presented. Then a fuzzy SVM (FSVM) model [35] is briefly reviewed. Finally, the least squares FSVM (LSFSVM) model is formulated.
2.1. SVM for Binary Classification (by Vapnik [37])
Consider a training dataset $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^{n}$ is the $i$th input pattern and $y_i \in \{-1, +1\}$ is its corresponding observed result, which is a binary variable. In credit risk evaluation models, the $x_i$ denote the attributes of customers and $y_i$ is the observed outcome of repayment obligations: if the customer defaults, $y_i = -1$, or else $y_i = +1$. The generic idea of SVM is first to map the input data into a high-dimensional feature space through a mapping function $\varphi(\cdot)$ and then to find the optimal separating hyperplane with minimal classification errors. The separating hyperplane can be represented as
$$w^{T}\varphi(x) + b = 0, \qquad (1)$$
where $w$ is the normal vector of the hyperplane and $b$ is the bias.
Suppose $\varphi(\cdot)$ is a nonlinear function that maps the input space into a higher-dimensional feature space. If the dataset is linearly separable in this feature space, the classifier is constructed as
$$w^{T}\varphi(x_i)+b \ge +1 \ \text{ if } y_i=+1, \qquad w^{T}\varphi(x_i)+b \le -1 \ \text{ if } y_i=-1, \qquad (2)$$
which is equivalent to
$$y_i\left(w^{T}\varphi(x_i)+b\right) \ge 1, \quad i=1,\ldots,N. \qquad (3)$$
In order to deal with a dataset that is not linearly separable, the previous analysis can be generalized by introducing some nonnegative slack variables $\xi_i \ge 0$, such that (3) can be modified as follows:
$$y_i\left(w^{T}\varphi(x_i)+b\right) \ge 1-\xi_i, \quad \xi_i \ge 0, \quad i=1,\ldots,N. \qquad (4)$$
The nonzero $\xi_i$ in (4) are those for which the data point $x_i$ does not satisfy (3). Thus, the term $\sum_{i=1}^{N}\xi_i$ can be considered as a measure of the amount of misclassification, that is, of the tolerable misclassification errors.
According to the structural risk minimization principle [37], the risk bound is minimized by solving the following optimization problem:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{N}\xi_i \quad \text{subject to } y_i\left(w^{T}\varphi(x_i)+b\right)\ge 1-\xi_i,\ \ \xi_i\ge 0, \qquad (5)$$
where $C$ is a free regularization parameter controlling the trade-off between margin maximization and tolerable misclassification errors.
Searching for the optimal hyperplane in (5) is a quadratic programming (QP) problem. After introducing a set of Lagrange multipliers $\alpha_i \ge 0$ and $\beta_i \ge 0$ for the constraints in (5), the primal problem in (5) becomes one of finding the saddle point of the Lagrangian function; that is,
$$L(w,b,\xi;\alpha,\beta)=\frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{N}\xi_i-\sum_{i=1}^{N}\alpha_i\left[y_i\left(w^{T}\varphi(x_i)+b\right)-1+\xi_i\right]-\sum_{i=1}^{N}\beta_i\xi_i. \qquad (6)$$
Differentiating (6) with respect to $w$, $b$, and $\xi_i$, one obtains
$$\frac{\partial L}{\partial w}=0 \Rightarrow w=\sum_{i=1}^{N}\alpha_iy_i\varphi(x_i), \qquad \frac{\partial L}{\partial b}=0 \Rightarrow \sum_{i=1}^{N}\alpha_iy_i=0, \qquad \frac{\partial L}{\partial \xi_i}=0 \Rightarrow \alpha_i+\beta_i=C. \qquad (7)$$
From (7) one has $w=\sum_{i=1}^{N}\alpha_iy_i\varphi(x_i)$. The key issue is how to determine the values of $\alpha_i$. To obtain a solution, the dual problem of the primal problem (6) becomes
$$\max_{\alpha}\ \sum_{i=1}^{N}\alpha_i-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_jy_iy_j\varphi(x_i)^{T}\varphi(x_j) \quad \text{subject to } \sum_{i=1}^{N}\alpha_iy_i=0,\ \ 0\le\alpha_i\le C. \qquad (8)$$
The function $\varphi(\cdot)$ in (8) is then related to a kernel $K(\cdot,\cdot)$ by imposing
$$K(x_i,x_j)=\varphi(x_i)^{T}\varphi(x_j), \qquad (9)$$
which is motivated by the Mercer theorem [37]. $K(x_i,x_j)$ is the kernel function in the input space that computes the inner product of two data points in the feature space. According to the Karush-Kuhn-Tucker (KKT) theorem [38], the KKT conditions are defined as
$$\alpha_i\left[y_i\left(w^{T}\varphi(x_i)+b\right)-1+\xi_i\right]=0, \qquad \left(C-\alpha_i\right)\xi_i=0, \quad i=1,\ldots,N. \qquad (10)$$
From these equalities, it can be deduced that the only nonzero values $\alpha_i$ in (10) are those for which the constraints in (4) are satisfied with the equality sign. Data points $x_i$ corresponding to $\alpha_i>0$ are called support vectors (SVs), and there are two types of SVs in a nonseparable case. In the case of $0<\alpha_i<C$, the corresponding support vector satisfies the equalities $y_i(w^{T}\varphi(x_i)+b)=1$ and $\xi_i=0$. In the case of $\alpha_i=C$, the corresponding $\xi_i$ is nonzero and the corresponding support vector does not satisfy (2); such SVs are counted as misclassification errors. By this reasoning, data points $x_i$ corresponding to $\alpha_i=0$ are classified correctly.
Using the support vectors, the optimal solution for the weight vector in (7) can be given by
$$w=\sum_{i=1}^{N_{s}}\alpha_iy_i\varphi(x_i), \qquad (11)$$
where $N_{s}$ is the number of SVs. Moreover, in the case of $0<\alpha_i<C$, the condition $\xi_i=0$ applies to (11) in terms of the KKT theorem [38]. Thus, one may determine the optimal bias by taking any such data point $(x_j,y_j)$ in the dataset and computing
$$b=y_j-\sum_{i=1}^{N_{s}}\alpha_iy_iK(x_i,x_j). \qquad (12)$$
However, from the numerical perspective, it is better to take the mean value of $b$ over all such data points. Once the optimal parameter pair $(w,b)$ is determined, the decision function of the SVM classifier can be represented as
$$f(x)=\operatorname{sign}\left(\sum_{i=1}^{N_{s}}\alpha_iy_iK(x,x_i)+b\right). \qquad (13)$$
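As a small illustration of the kernel construction motivated by the Mercer theorem, the sketch below (not from the paper; Python with numpy is assumed) builds a kernel matrix from the RBF kernel used later in the experiments and checks the symmetry and positive semidefiniteness that a valid Mercer kernel must exhibit:

```python
import numpy as np

# Illustrative sketch: the RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
# is one admissible choice of kernel; a Mercer kernel yields a symmetric,
# positive semidefinite kernel matrix on any finite sample.
def rbf_kernel(xi, xj, gamma=1.0):
    return float(np.exp(-gamma * np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)))

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])

assert np.allclose(K, K.T)                     # symmetry
assert np.allclose(np.diag(K), 1.0)            # K(x, x) = 1 for the RBF kernel
assert np.all(np.linalg.eigvalsh(K) > -1e-10)  # positive semidefinite
```

Any kernel satisfying these properties can replace the explicit feature map, so the classifier never needs to evaluate the mapping into the feature space directly.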
2.2. FSVM (by Lin and Wang [35])
SVM has been proved to be a powerful tool for solving classification problems [37], but it has some inherent limitations. In the formulation discussed above, each training point in the training dataset belongs to either one class or the other. But in many real-world applications, a training sample does not exactly belong to one of the two classes; it may belong to one class to the extent of, say, 80 percent, while the remaining 20 percent is meaningless. That is to say, there is a membership grade $s_i$ associated with each training data point $x_i$. In this sense, FSVM is an extension of SVM that takes into consideration the varying significance of the training samples. In FSVM, each training sample is associated with a membership value $s_i \in (0, 1]$, which reflects the confidence degree of the data point: the higher the value, the higher the confidence in its class label. Similar to SVM, the optimization problem of the FSVM [35] is formulated as follows:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{N}s_i\xi_i \quad \text{subject to } y_i\left(w^{T}\varphi(x_i)+b\right)\ge 1-\xi_i,\ \ \xi_i\ge 0. \qquad (14)$$
Similar to SVM, the solution of FSVM is obtained from the above quadratic programming (QP) problem. Note that the error term $\xi_i$ is scaled by the membership value $s_i$. The membership values used to weight the soft penalty term reflect the relative confidence degrees of the training samples during training: important samples with larger membership values have more impact on FSVM training than those with smaller values.
Similar to Vapnik's SVM [37], the optimization problem of the FSVM can be transformed into the following dual problem:
$$\max_{\alpha}\ \sum_{i=1}^{N}\alpha_i-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_jy_iy_jK(x_i,x_j) \quad \text{subject to } \sum_{i=1}^{N}\alpha_iy_i=0,\ \ 0\le\alpha_i\le s_iC. \qquad (15)$$
In the same way, the KKT conditions are defined as
$$\alpha_i\left[y_i\left(w^{T}\varphi(x_i)+b\right)-1+\xi_i\right]=0, \qquad \left(s_iC-\alpha_i\right)\xi_i=0, \quad i=1,\ldots,N. \qquad (16)$$
A data point $x_i$ with $\alpha_i>0$ is called a support vector. There are two types of SVs: the one corresponding to $0<\alpha_i<s_iC$ lies on the margin of the hyperplane, and the other, corresponding to $\alpha_i=s_iC$, is treated as misclassified.
Solving (15) leads to a decision function similar to (13), but with different support vectors and corresponding weights $\alpha_i$. An important difference between SVM and FSVM is that data points with the same value of $\alpha_i$ may be indicated as different types of SVs in FSVM due to the membership factor $s_i$. Interested readers can refer to [35] for more details.
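As a small numeric illustration of how the memberships reshape the dual problem (the numbers below are illustrative, not from [35]): the upper bound on each multiplier becomes $s_iC$ rather than $C$, so a sample suspected to be an outlier is allowed only a small share of the misclassification budget.

```python
# Illustrative values only: membership s_i rescales each sample's
# box constraint in the FSVM dual from [0, C] to [0, s_i * C].
C = 10.0
memberships = {"confident sample": 1.0, "average sample": 0.8, "likely outlier": 0.2}
upper_bounds = {name: s * C for name, s in memberships.items()}
print(upper_bounds)  # {'confident sample': 10.0, 'average sample': 8.0, 'likely outlier': 2.0}
```

A suspected outlier thus cannot pull the separating hyperplane as strongly as a fully trusted sample, which is the source of FSVM's outlier insensitivity.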
2.3. Least Squares FSVM
In both SVM and FSVM, the final solution is obtained from a quadratic programming (QP) problem. The main drawback of the QP method is that finding the solution is time-consuming when handling large-scale real-world problems. Motivated by Lai et al. [14] and Suykens and Vandewalle [39], the least squares FSVM (LSFSVM) model is introduced by formulating the following optimization problem:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2}+\frac{C}{2}\sum_{i=1}^{N}s_i\xi_i^{2} \quad \text{subject to } y_i\left(w^{T}\varphi(x_i)+b\right)=1-\xi_i,\ \ i=1,\ldots,N. \qquad (17)$$
One can define the Lagrangian function as
$$L(w,b,\xi;\alpha)=\frac{1}{2}\|w\|^{2}+\frac{C}{2}\sum_{i=1}^{N}s_i\xi_i^{2}-\sum_{i=1}^{N}\alpha_i\left[y_i\left(w^{T}\varphi(x_i)+b\right)-1+\xi_i\right], \qquad (18)$$
where $\alpha_i$ is the $i$th Lagrange multiplier, which can be either positive or negative due to the equality constraints, in accordance with the KKT conditions [38].
The optimality conditions are obtained by differentiating (18):
$$\frac{\partial L}{\partial w}=0 \Rightarrow w=\sum_{i=1}^{N}\alpha_iy_i\varphi(x_i), \quad \frac{\partial L}{\partial b}=0 \Rightarrow \sum_{i=1}^{N}\alpha_iy_i=0, \quad \frac{\partial L}{\partial \xi_i}=0 \Rightarrow \alpha_i=Cs_i\xi_i, \quad \frac{\partial L}{\partial \alpha_i}=0 \Rightarrow y_i\left(w^{T}\varphi(x_i)+b\right)-1+\xi_i=0. \qquad (19)$$
Eliminating $w$ and $\xi_i$ from (19), one can obtain the following equations:
$$\sum_{j=1}^{N}\alpha_jy_iy_jK(x_i,x_j)+y_ib+\frac{\alpha_i}{Cs_i}=1, \quad i=1,\ldots,N, \qquad \sum_{i=1}^{N}\alpha_iy_i=0. \qquad (20)$$
Using a matrix form, the optimality conditions in (20) can be expressed as
$$\begin{bmatrix} H & Y \\ Y^{T} & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ b \end{bmatrix}=\begin{bmatrix} \mathbf{1} \\ 0 \end{bmatrix}, \qquad (21)$$
where $H$, $Y$, and $\mathbf{1}$ are given by (22), (23), and (24), respectively:
$$H=\Omega+(CS)^{-1}, \quad \Omega_{ij}=y_iy_jK(x_i,x_j), \quad S=\operatorname{diag}(s_1,\ldots,s_N), \qquad (22)$$
$$Y=(y_1,\ldots,y_N)^{T}, \qquad (23)$$
$$\mathbf{1}=(1,\ldots,1)^{T}, \quad \alpha=(\alpha_1,\ldots,\alpha_N)^{T}. \qquad (24)$$
From (22), $H$ is positive definite. Thus, $\alpha$ can be obtained from (21); that is,
$$\alpha=H^{-1}\left(\mathbf{1}-Yb\right). \qquad (25)$$
Substituting (25) into the second matrix equation in (21), we can obtain
$$b=\frac{Y^{T}H^{-1}\mathbf{1}}{Y^{T}H^{-1}Y}. \qquad (26)$$
Here, since $H$ is positive definite, $H^{-1}$ is also positive definite. In addition, since $Y$ is a nonzero vector, $Y^{T}H^{-1}Y>0$. Thus, $b$ can always be obtained. Substituting (26) into (25), $\alpha$ can then be obtained.
Hence, the separating hyperplane of LSFSVM can be found by solving the set of linear equations (21)–(24) instead of a quadratic programming (QP) problem, thereby reducing the computational complexity, especially for large-scale problems.
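For concreteness, the training procedure just derived reduces to a single linear solve. The following is a minimal sketch in Python with numpy; the RBF kernel, toy data, membership values, and parameter settings are illustrative assumptions for the sketch, not the paper's experimental setup:

```python
import numpy as np

def lsfsvm_train(X, y, s, C=10.0, gamma=0.5):
    """Solve the LSFSVM optimality conditions as the linear system
        [ H    Y ] [ alpha ]   [ 1 ]
        [ Y^T  0 ] [   b   ] = [ 0 ],
    where H_ij = y_i*y_j*K(x_i,x_j) + delta_ij/(C*s_i)."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-gamma * d2)                       # RBF kernel matrix
    H = (y[:, None] * y[None, :]) * K + np.diag(1.0 / (C * s))
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = H
    A[:n, n] = y
    A[n, :n] = y
    rhs = np.concatenate((np.ones(n), [0.0]))
    sol = np.linalg.solve(A, rhs)                 # one linear solve, no QP
    return sol[:n], sol[n]                        # alpha, b

def lsfsvm_predict(X_tr, y_tr, alpha, b, X_new, gamma=0.5):
    d2 = np.sum((X_new[:, None, :] - X_tr[None, :, :]) ** 2, axis=2)
    return np.sign(np.exp(-gamma * d2) @ (alpha * y_tr) + b)

# Toy two-class data with a lower-membership point in each class
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
s = np.array([1.0, 0.8, 1.0, 0.8])
alpha, b = lsfsvm_train(X, y, s)
assert np.allclose(alpha @ y, 0.0)                # constraint sum(alpha_i * y_i) = 0
assert np.array_equal(lsfsvm_predict(X, y, alpha, b, X), y)
```

Note that the whole bordered system is solved at once here, which is equivalent to the two-step elimination in the derivation above; the membership values enter only through the diagonal term.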
The main advantages of the LSFSVM can be summarized in the following five aspects. First of all, the LSFSVM requires fewer prior assumptions about the input data, such as normal distribution and continuity, than statistical approaches do. Second, it can perform a nonlinear mapping from the original input space into a high-dimensional feature space, in which it constructs a linear discriminant function to replace the nonlinear function in the original input space. This characteristic also alleviates the curse of dimensionality because the computational complexity does not depend on the dimension of the samples. Third, it learns the separating hyperplane that maximizes the classification margin, thereby implementing structural risk minimization and achieving good generalization capability. Fourth, a distinct trait of LSFSVM is that it further lowers the computational complexity by transforming a quadratic programming problem into a set of linear equations. Finally, an important advantage of LSFSVM is that the support values are proportional to the membership degrees as well as to the misclassification errors at the data points, making LSFSVM more suitable for some real-world problems; this is the main difference between the proposed LSFSVM and the traditional SVM and LSSVM models. These characteristics make LSFSVM preferable in many practical applications. In the following section, some experiments are presented for verification purposes.
3. Experiment Analysis
In this section, a real-world credit dataset is used to test the performance of LSFSVM. For comparison purposes, linear regression (LinR) [14], logistic regression (LogR) [2], an artificial neural network (ANN) [8, 9], Vapnik's SVM [37], Lin and Wang's FSVM [35], and LSSVM [39] are also used.
The dataset in this study comes from a financial services company in England and was obtained from a CD-ROM published with Thomas et al. [40]. Each applicant is described by 14 characteristics or variables, listed in Table 1. The dataset includes detailed information on 1225 applicants, of whom 323 are observed to be bad customers.

In this experiment, the LSFSVM, FSVM, LSSVM, and SVM models use the RBF kernel for classification. In the ANN model, a three-layer back-propagation neural network with 10 sigmoidal neurons in the hidden layer and one linear neuron in the output layer is used. The network training algorithm is the Levenberg-Marquardt (LM) algorithm. Besides, the learning and momentum rates are set to 0.1 and 0.15, respectively. The accepted average squared error is 0.05, and the number of training epochs is 1600. The above parameters are obtained by the trial-and-error method. The experiment is run in MATLAB 6.1 with the statistical toolbox, the NNET toolbox, and the LSSVM toolbox. In addition, three evaluation criteria are used to measure the efficiency of the classification:
$$\text{Type I accuracy}=\frac{\text{number of correctly classified bad customers}}{\text{number of observed bad customers}},$$
$$\text{Type II accuracy}=\frac{\text{number of correctly classified good customers}}{\text{number of observed good customers}},$$
$$\text{Total accuracy}=\frac{\text{number of correctly classified customers}}{\text{total number of customers}}.$$
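The three criteria can be computed as in the following sketch (the coding of bad customers as −1 and good customers as +1, and the sample predictions, are assumptions of this illustration):

```python
def classification_accuracies(y_true, y_pred, bad_label=-1):
    # Type I accuracy: share of observed bad customers classified as bad.
    # Type II accuracy: share of observed good customers classified as good.
    # Total accuracy: share of all customers classified correctly.
    bad = [p == t for t, p in zip(y_true, y_pred) if t == bad_label]
    good = [p == t for t, p in zip(y_true, y_pred) if t != bad_label]
    type1 = sum(bad) / len(bad)
    type2 = sum(good) / len(good)
    total = (sum(bad) + sum(good)) / len(y_true)
    return type1, type2, total

# Illustrative labels: 4 observed bad (3 caught), 6 observed good (5 caught)
y_true = [-1, -1, -1, -1, 1, 1, 1, 1, 1, 1]
y_pred = [-1, -1, -1, 1, 1, 1, 1, 1, 1, -1]
print(classification_accuracies(y_true, y_pred))  # (0.75, 5/6, 0.8)
```

Type I accuracy is the more important criterion for lenders, since a bad customer classified as good (a type I error) is usually far more costly than the reverse.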
To show the classification capability of LSFSVM in distinguishing potentially insolvent customers from good ones, we first perform the test with LSFSVM. This testing process includes five steps.
First, the number of observed bad customers is tripled so that it is nearly equal to the number of observed good customers. The main purpose of this processing is to avoid the performance impact of imbalanced samples. A similar processing method can be found in Wang et al. [25]. Of course, other imbalanced-data processing methods, for example, the information-granulation-based method [41], can also be used.
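This first step amounts to simple replication of the minority class, as in the sketch below (plain replication is an assumption; with the dataset's 323 bad and 902 good customers, tripling the bad class gives 969 versus 902 samples, 1871 in total):

```python
def triple_minority(samples, labels, minority=-1):
    # Replicate each minority-class sample so it appears three times,
    # leaving majority-class samples untouched.
    out_x, out_y = [], []
    for x, t in zip(samples, labels):
        reps = 3 if t == minority else 1
        out_x += [x] * reps
        out_y += [t] * reps
    return out_x, out_y

# Reproduce the dataset's class counts: 323 bad (-1), 902 good (+1)
labels = [-1] * 323 + [1] * 902
xs, ys = triple_minority(list(range(1225)), labels)
assert len(ys) == 1871 and ys.count(-1) == 969
```

The resulting 1871 samples match the training/testing split sizes reported below.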
Second, the original data are preprocessed to impute missing values and transform categorical data. In this study, an interpolation method is used to impute the missing data, and a numerical coding is used to transform the categorical data.
Third, the resulting dataset is randomly separated into two parts: training samples and testing samples. In this study, 1500 samples are used for training, and the remaining 371 samples are used for holdout testing and performance evaluation.
Fourth, membership grades are generated by the linear transformation function proposed by Wang et al. [25], based on an initial score obtained from experts' experience and opinions. Of course, newer membership generation methods such as that in [42] can also be adopted.
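A linear transformation of this kind can be sketched as follows (the score range [0, 100] and the lower membership bound 0.1 are illustrative assumptions, not the actual values of Wang et al. [25]):

```python
def linear_membership(score, low=0.0, high=100.0, floor=0.1):
    # Map an expert-assigned credit score in [low, high] linearly onto
    # the membership interval [floor, 1.0]; a positive floor keeps every
    # training sample at least marginally influential during training.
    m = floor + (score - low) * (1.0 - floor) / (high - low)
    return min(1.0, max(floor, m))

print(linear_membership(0.0), linear_membership(50.0), linear_membership(100.0))
```

The resulting values play the role of the $s_i$ in the LSFSVM training problem, weighting each sample's squared error term.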
Finally, the LSFSVM classifier is trained, and the results are evaluated accordingly. The above five steps are repeated 20 times to confirm the robustness of the proposed method. The efficiency and robustness of credit risk evaluation using the LSFSVM model are shown in Table 2.

As can be seen from Table 2, the proposed LSFSVM model exhibits strong classification capability. Over the 20 experiments, the mean type I accuracy, type II accuracy, and total accuracy are 81.34%, 93.41%, and 89.21%, respectively. Furthermore, the standard deviations are rather small, revealing that the robustness of the LSFSVM classifier is good. These results imply that the LSFSVM model is a promising credit risk evaluation technique.
For further illustration, the classification power of the LSFSVM is also compared with six other commonly used classifiers: linear regression (LinR) [14], logistic regression (LogR) [2], artificial neural network (ANN) [8, 9], Vapnik's SVM [37], FSVM [35], and LSSVM [39]. The results of the comparison are reported in Table 3.

From Table 3, several important results can be observed.
(a) For type I accuracy, LSFSVM is the best of all the listed approaches, followed by FSVM, LSSVM, Vapnik's SVM, logistic regression, the artificial neural network model, and the linear regression model, implying that LSFSVM is a very promising technique for credit risk assessment. In particular, the two fuzzy SVM techniques (Lin and Wang's FSVM [35] and LSFSVM) perform better than the other classifiers listed in this study, implying that fuzzy SVM classifiers may be more suitable for credit risk assessment tasks than deterministic classifiers such as linear regression (LinR) and logit regression (LogR).
(b) From the viewpoint of type II accuracy, LSFSVM and LSSVM outperform the other five models, implying the strong capability of the least squares versions of the SVM model in credit risk evaluation. Meanwhile, the proposed LSFSVM model seems to be slightly better than LSSVM, revealing that LSFSVM is a feasible way to improve the accuracy of credit risk evaluation. Interestingly, the performance of FSVM is slightly worse than that of LSSVM; the reasons for this phenomenon are worth exploring further.
(c) According to the total accuracy, the performances of the two statistical models (LinR and LogR) are much worse than those of the other five models. The main reason is that the latter can effectively capture the nonlinear patterns hidden in the credit data. As is known, many factors affect customer credit, and the relationships between customer default and these factors are usually subtle and complex. Besides some linear relationships, nonlinear relationships often exist in credit data. Therefore, nonlinear intelligent models can offer an advantage over traditional linear models.
(d) From the perspective of computational time, the two traditional classification models (i.e., LinR and LogR) are faster than all the intelligent models due to their simplicity. Among the intelligent models, LSSVM and LSFSVM are faster than the others due to the least squares principle. LSFSVM is slightly slower than LSSVM because the fuzzification requires some processing time, but it is faster than SVM and FSVM, indicating that the proposed LSFSVM model is a very efficient model for credit risk evaluation.
(e) Among the five intelligent models, the performance of the ANN and SVM models is much worse than that of the other three. The main reason is that the standard ANN and SVM models have their own shortcomings, such as sensitivity to parameters and outliers, which affect their classification performance. For example, ANN models often get trapped in local minima and suffer from overfitting, while SVM models occasionally encounter overfitting as well [43]; moreover, fuzzy information is not handled well by standard SVM models.
(f) The LSSVM, FSVM, and LSFSVM models outperform the other four models listed in this study, implying that the variants of SVM have strong classification potential for credit risk evaluation. A possible reason is that these variants overcome some inherent limitations of the standard SVM noted by Van Gestel et al. [44], thereby increasing the generalization capability. Overall, LSFSVM outperforms the other six classifiers on the above three measurements, revealing that LSFSVM can serve as an effective tool for credit risk evaluation.
In terms of Table 3 and the three measurements, it is easy to judge which model is the best and which is the worst. However, whether the differences between the models are statistically significant remains unclear. For this purpose, McNemar's test [45] is conducted to examine whether the proposed LSFSVM classifier significantly outperforms the other six classifiers listed in this study.
As a nonparametric test for two related samples, McNemar's test is particularly useful for before-after measurements on the same subjects [46]. Taking the total accuracy results from Table 3, Table 4 shows the results of McNemar's test on the credit dataset, statistically comparing the performance of the seven classifiers on the testing data. It should be noted that the entries in Table 4 are Chi-squared values, with p values in brackets.
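McNemar's statistic depends only on the discordant pairs, that is, the samples on which exactly one of the two classifiers is correct. A minimal sketch (using Yates' continuity correction, as commonly applied; the counts below are illustrative, not Table 4's values):

```python
def mcnemar_chi2(n01, n10):
    # n01: samples classifier A got right and classifier B got wrong;
    # n10: the reverse. With Yates' continuity correction, the statistic
    # follows a Chi-squared distribution with one degree of freedom
    # under the null hypothesis of equal classifier performance.
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

chi2 = mcnemar_chi2(30, 12)
print(round(chi2, 3))  # 6.881, which exceeds 6.635, the 1% critical value
```

A statistic above the critical value of the Chi-squared distribution with one degree of freedom rejects the null hypothesis that the two classifiers perform equally well.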

According to the results reported in Table 4, some important conclusions can be drawn from McNemar's statistical test.
(1) The proposed LSFSVM classifier outperforms the standard SVM, ANN, logit regression (LogR), and linear regression (LinR) models at the 1% statistical significance level. However, the proposed LSFSVM model does not significantly outperform the LSSVM and FSVM models. These results are consistent with those of Table 3.
(2) Similar to the LSFSVM model, the LSSVM and FSVM models outperform the other four individual models (i.e., the individual SVM, ANN, LogR, and LinR models) at the 1% significance level. However, McNemar's test does not conclude that the LSSVM model performs better than the FSVM model.
(3) The SVM and ANN models perform much better than the two statistical models (i.e., the LogR and LinR models) at the 1% significance level. Interestingly, the SVM model does not outperform the ANN model even at the 10% significance level, although many applications have reported that SVM performed much better than ANN. A possible reason lies in the data samples used in this study.
(4) Comparing the LogR and LinR models, it is easy to find that the LogR model performs better than the LinR model at the 5% significance level. All findings are consistent with the results reported in Table 3.
In summary, according to the above experimental results and statistical tests, the LSFSVM model significantly outperforms some standard intelligent models (e.g., SVM and ANN) and some statistical models (e.g., LogR and LinR), revealing that LSFSVM can be used as a competitive solution for credit risk evaluation.
4. Conclusions
In this paper, a powerful classification method, the least squares fuzzy support vector machine (LSFSVM), is proposed to evaluate credit risk. Through the least squares method, the quadratic programming (QP) problem of SVM is successfully transformed into a set of linear equations, thereby reducing the computational complexity. Furthermore, the fuzzification processing in the proposed LSFSVM model adds generalization capability and insensitivity to outliers. Experiments with a real-world dataset have produced good classification results with high computational efficiency and have demonstrated that the proposed LSFSVM model provides a feasible alternative for credit risk assessment. Besides credit risk evaluation, the proposed LSFSVM model can also be extended to other applications, such as consumer credit rating and corporate failure prediction, which will be investigated in future research.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The author would like to express his sincere appreciation to the two independent referees for their valuable comments and suggestions, which have improved the quality of the paper immensely. This work is partially supported by grants from the National Science Fund for Distinguished Young Scholars (NSFC no. 71025005), the National Natural Science Foundation of China (NSFC nos. 90924024 and 91224001), Hangzhou Normal University, and the Fundamental Research Funds for the Central Universities in BUCT.
References
E. I. Altman, "Financial ratios, discriminant analysis and the prediction of corporate bankruptcy," Journal of Finance, vol. 23, pp. 589–609, 1968.
J. C. Wiginton, "A note on the comparison of logit and discriminant models of consumer credit behaviour," Journal of Financial and Quantitative Analysis, vol. 15, pp. 757–770, 1980.
B. J. Grablowsky and W. K. Talley, "Probit and discriminant functions for classifying credit applicants: a comparison," Journal of Economics and Business, vol. 33, pp. 254–261, 1981.
F. Glover, "Improved linear programming models for discriminant analysis," Decision Sciences, vol. 21, pp. 771–785, 1990.
O. L. Mangasarian, "Linear and nonlinear separation of patterns by linear programming," Operations Research, vol. 13, pp. 444–452, 1965.
W. E. Henley and D. J. Hand, "A k-nearest-neighbour classifier for assessing consumer credit risk," Journal of the Royal Statistical Society Series D: The Statistician, vol. 45, no. 1, pp. 77–95, 1996.
P. Makowski, "Credit scoring branches out," Credit World, vol. 75, pp. 30–37, 1985.
K. K. Lai, L. Yu, S. Y. Wang, and L. G. Zhou, "Neural network metalearning for credit scoring," in Intelligent Computing, vol. 4113 of Lecture Notes in Computer Science, pp. 403–408, 2006.
R. Malhotra and D. K. Malhotra, "Evaluating consumer loans using neural networks," Omega, vol. 31, no. 2, pp. 83–96, 2003.
M.-C. Chen and S.-H. Huang, "Credit scoring and rejected instances reassigning through evolutionary computation techniques," Expert Systems with Applications, vol. 24, no. 4, pp. 433–441, 2003.
F. Varetto, "Genetic algorithms applications in the analysis of insolvency risk," Journal of Banking and Finance, vol. 22, no. 10-11, pp. 1421–1439, 1998.
Z. Zhu, H. He, J. A. Starzyk, and C. Tseng, "Self-organizing learning array and its application to economic and financial problems," Information Sciences, vol. 177, no. 5, pp. 1180–1192, 2007.
Z. Huang, H. Chen, C.-J. Hsu, W.-H. Chen, and S. Wu, "Credit rating analysis with support vector machines and neural networks: a market comparative study," Decision Support Systems, vol. 37, no. 4, pp. 543–558, 2004.
K. K. Lai, L. Yu, L. G. Zhou, and S. Y. Wang, "Credit risk evaluation with least square support vector machine," in Rough Sets and Knowledge Technology, vol. 4062 of Lecture Notes in Artificial Intelligence, pp. 490–495, 2006.
K. K. Lai, L. Yu, W. Huang, and S. Y. Wang, "A novel support vector machine metamodel for business risk identification," in PRICAI 2006: Trends in Artificial Intelligence, vol. 4099 of Lecture Notes in Artificial Intelligence, pp. 480–484, 2006.
R. Malhotra and D. K. Malhotra, "Differentiating between good credits and bad credits using neuro-fuzzy systems," European Journal of Operational Research, vol. 136, no. 1, pp. 190–211, 2002.
L. Zhou, K. K. Lai, and L. Yu, "Least squares support vector machines ensemble models for credit scoring," Expert Systems with Applications, vol. 37, no. 1, pp. 127–133, 2010.
L. Zhou, K. K. Lai, and L. Yu, "Credit scoring using support vector machines with direct search for parameters selection," Soft Computing, vol. 13, no. 2, pp. 149–155, 2009.
L. Yu and X. Yao, "A total least squares proximal support vector classifier for credit risk evaluation," Soft Computing, vol. 17, no. 4, pp. 643–650, 2013.
L. Yu, X. Yao, S. Wang, and K. K. Lai, "Credit risk evaluation using a weighted least squares SVM classifier with design of experiment for parameter selection," Expert Systems with Applications, vol. 38, no. 12, pp. 15392–15399, 2011.
L. Yu, S. Wang, and J. Cao, "A modified least squares support vector machine classifier with application to credit risk analysis," International Journal of Information Technology and Decision Making, vol. 8, no. 4, pp. 697–710, 2009.
L. Yu, S. Wang, and K. K. Lai, "Credit risk evaluation using a C-variable least squares support vector classification model," Communications in Computer and Information Science, vol. 35, pp. 573–579, 2009.
T.-S. Lee, C.-C. Chiu, C.-J. Lu, and I.-F. Chen, "Credit scoring using the hybrid neural discriminant technique," Expert Systems with Applications, vol. 23, no. 3, pp. 245–254, 2002.
S. Piramuthu, "Financial credit-risk evaluation with neural and neurofuzzy systems," European Journal of Operational Research, vol. 112, no. 2, pp. 310–321, 1999.
Y. Wang, S. Wang, and K. K. Lai, "A new fuzzy support vector machine to evaluate credit risk," IEEE Transactions on Fuzzy Systems, vol. 13, no. 6, pp. 820–831, 2005.
L. Yu, S. Wang, F. Wen, K. K. Lai, and S. He, "Designing a hybrid intelligent mining system for credit risk evaluation," Journal of Systems Science & Complexity, vol. 21, no. 4, pp. 527–539, 2008.
R. Smalz and M. Conrad, "Combining evolution with credit apportionment: a new learning algorithm for neural nets," Neural Networks, vol. 7, no. 2, pp. 341–351, 1994.
K. K. Lai, L. Yu, S. Y. Wang, and L. G. Zhou, "Credit risk analysis using a reliability-based neural network ensemble model," in Proceedings of the International Conference on Artificial Neural Networks (ICANN '06), vol. 4132 of Lecture Notes in Computer Science, pp. 682–690, 2006.
L. Yu, S. Wang, and K. K. Lai, "Credit risk assessment with a multistage neural network ensemble learning approach," Expert Systems with Applications, vol. 34, no. 2, pp. 1434–1444, 2008.
L. Yu, W. Yue, S. Wang, and K. K. Lai, "Support vector machine based multiagent ensemble learning for credit risk evaluation," Expert Systems with Applications, vol. 37, no. 2, pp. 1351–1360, 2010.
L. Yu, S. Wang, and K. K. Lai, "An intelligent-agent-based fuzzy group decision making model for financial multicriteria decision support: the case of credit scoring," European Journal of Operational Research, vol. 195, no. 3, pp. 942–959, 2009.
L. C. Thomas, "A survey of credit and behavioural scoring: forecasting financial risk of lending to consumers," International Journal of Forecasting, vol. 16, no. 2, pp. 149–172, 2000.
L. C. Thomas, R. W. Oliver, and D. J. Hand, "A survey of the issues in consumer credit modelling research," Journal of the Operational Research Society, vol. 56, no. 9, pp. 1006–1015, 2005.
L. Yu, S. Y. Wang, K. K. Lai, and L. G. Zhou, Bio-Inspired Credit Risk Analysis: Computational Intelligence with Support Vector Machines, Springer, Berlin, Germany, 2008.
C.-F. Lin and S.-D. Wang, "Fuzzy support vector machines," IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 464–471, 2002.
D. Tsujinishi and S. Abe, "Fuzzy least squares support vector machines for multiclass problems," Neural Networks, vol. 16, no. 5-6, pp. 785–792, 2003.
V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, 2nd edition, 1987.
J. A. K. Suykens and J. Vandewalle, "Least squares support vector machine classifiers," Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.
L. C. Thomas, D. B. Edelman, and J. N. Crook, Credit Scoring and Its Applications, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2002.
M.-C. Chen, L.-S. Chen, C.-C. Hsu, and W.-R. Zeng, "An information granulation based data mining approach for classifying imbalanced data," Information Sciences, vol. 178, no. 16, pp. 3214–3227, 2008.
A. Celikyilmaz and I. B. Turksen, "Fuzzy functions with support vector machines," Information Sciences, vol. 177, no. 23, pp. 5163–5177, 2007.
L. Cao and F. E. H. Tay, "Financial forecasting using support vector machines," Neural Computing and Applications, vol. 10, no. 2, pp. 184–192, 2001.
T. Van Gestel, B. Baesens, J. Garcia, and P. Van Dijcke, "A support vector machine approach to credit scoring," Bank en Financiewezen, vol. 2, pp. 73–82, 2003.
Q. McNemar, "Note on the sampling error of the difference between correlated proportions or percentages," Psychometrika, vol. 12, no. 2, pp. 153–157, 1947.
D. R. Cooper and C. W. Emory, Business Research Methods, Irwin, Chicago, Ill, USA, 1995.
Copyright
Copyright © 2014 Lean Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.