Abstract

Bankruptcy prediction has been extensively investigated with data mining techniques since it is a critical issue in the accounting and finance field. In this paper, a new hybrid algorithm combining switching particle swarm optimization (SPSO) and the support vector machine (SVM) is proposed to solve the bankruptcy prediction problem. In particular, a recently developed SPSO algorithm is exploited to search for the optimal parameter values of the radial basis function (RBF) kernel of the SVM. The new algorithm can largely improve the explanatory power and the stability of the SVM. The proposed algorithm is successfully applied to the bankruptcy prediction problem, where the experimental data sets are taken from the UCI Machine Learning Repository. The simulation results show the superiority of the proposed algorithm over traditional SVM-based methods combined with the genetic algorithm (GA) or the particle swarm optimization (PSO) algorithm alone.

1. Introduction

In the past few years, bankruptcy prediction has been extensively investigated since it is a critical issue in the accounting and finance field [1, 2]. Bankruptcy prediction is very important for banks, since they may lose billions of dollars each year due to the bankruptcy of companies. Therefore, banks need to decide on the amount and the interest rate of a loan in a way that reflects the creditworthiness of the counterparty. It is also of great importance for accounting and financial researchers, credit rating agencies, and other organizations to predict this risk when dealing with counterparties in business transactions. Thus, there has been a growing research interest in applying statistical methods, expert systems, and artificial intelligence technologies to bankruptcy early warning models so as to obtain higher prediction accuracy. Bankruptcy prediction is a typical binary classification problem since there are only two possible outcomes, bankruptcy and nonbankruptcy. Up to now, researchers have proposed several classical bankruptcy prediction models based on statistical methods, such as Beaver's univariate model and Altman's multivariate model (see, e.g., [1]). However, the validity of these traditional statistical methods depends mainly on the subjective judgments of human financial experts when selecting certain parameters, which, in turn, inevitably introduces feature selection bias.

With the development of data mining techniques, machine learning methods have been exploited by many researchers for the bankruptcy prediction problem since these methods can provide an unbiased feature selection and decision making mechanism. Let us now provide a brief review of the latest literature on the bankruptcy prediction problem based on intelligent methods. In [3, 4], artificial neural networks (ANNs) have been used for the bankruptcy prediction problem. The genetic algorithm (GA) has been applied for feature selection in [5]. On the other hand, a hybrid of ANNs and the GA has been proposed in [6] for bankruptcy prediction. Recently, data mining methods [7] have been widely investigated for the bankruptcy prediction problem. As is well known, ANN methods suffer from multiple local minima, which significantly limits their applications. In the context of bankruptcy prediction, the issue to be investigated is a typical binary classification problem, and there is a great need for a more appropriate approach that is capable of handling such pattern classification. In search of such a candidate, the support vector machine (SVM) appears to be an ideal one because of its advantages: the solution to the SVM is global and unique, it has a simple geometric interpretation, and it gives a sparse solution. In particular, SVM models have already been successfully applied to many financial problems including bankruptcy prediction [8]. However, a limitation of the SVM approach lies in the selection of the parameter values of the kernel function. In order to cope with this limitation, intelligent optimization algorithms, such as the GA and particle swarm optimization (PSO), have been exploited to optimize both the feature subset and the parameters of the SVM; see, for example, [9].

Compared with the GA, PSO does not have genetic operators such as crossover and mutation, so it is easy to implement with very few parameters to adjust. The PSO algorithm has been successfully applied in a variety of areas because of its advantages, such as its effectiveness in performing difficult optimization tasks and its ease of implementation with fast convergence to a reasonably good solution [10, 11]. In particular, PSO has been applied to optimize the parameters of kernel functions in the SVM; see, for example, [9] and the references therein. Up to now, several variants of the PSO algorithm have been proposed in order to improve its search ability. However, the PSO algorithm still has some disadvantages, such as easily falling into local minima, slow convergence speed, and poor accuracy. In order to overcome these drawbacks, a switching PSO (SPSO) algorithm has been proposed in [12], which introduces a mode-dependent velocity updating equation with Markovian switching parameters so as to resolve the conflict between local search and global search. The SPSO algorithm can not only prevent the search from stagnating in a local area and wasting time on invalid searches but also quickly lead the swarm toward more promising regions. Therefore, the SPSO algorithm can greatly improve the performance of the SVM when used for parameter tuning. Because of these advantages, the SPSO has been successfully applied to constrained optimization problems and parameter estimation problems; see, for example, [11, 12]. Unfortunately, up to now, little research has been done on the integration of SPSO and the SVM method, even though such a combination has great potential in the bankruptcy prediction problem. Inspired by the above discussion, in this paper, we apply the SPSO algorithm to optimize the parameters of the kernel function in the SVM method for the bankruptcy prediction problem.

In this paper, we aim to develop a hybrid SPSO and SVM method for the bankruptcy prediction problem. The proposed method integrates the merits of SPSO and the SVM and can optimize the kernel parameters of the SVM and the penalty parameter simultaneously. The experimental data sets are taken from the UCI Machine Learning Repository, donated on February 9, 2014 [13]. The main contribution of this paper is twofold. (1) A new hybrid approach called switching particle swarm optimization-support vector machine (SPSO-SVM) is proposed for the bankruptcy prediction problem. Note that the proposed algorithm can correctly and effectively predict bankruptcy. (2) Experimental results show that the proposed algorithm can not only largely improve the explanatory power and the stability of the SVM but also obtain higher prediction accuracy than (1) an SVM optimized by the standard PSO algorithm, (2) an SVM optimized by the genetic algorithm (GA), and (3) an SVM whose parameters are all randomly initialized.

The rest of this paper is organized as follows: The relevant background is introduced in Section 2. In Section 3, the SPSO algorithm for parameters optimization of the SVM model is described. The results of bankruptcy prediction by the proposed method are discussed in Section 4 and the overall performance is also demonstrated. Finally, concluding remarks are given in Section 5.

2. Research Background

2.1. Support Vector Machine

The SVM, first developed by Vapnik in 1995 [14], is a supervised learning model with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis [15]. It is constructed on the basis of the VC dimension theory and the structural risk minimization principle of statistical learning theory. The main features of the SVM are the use of kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin [16]. Compared with other algorithms, the SVM has many unique advantages when applied to small-sample, nonlinear, and high-dimensional pattern recognition problems. In order to obtain the best generalization ability, the SVM seeks the best compromise between model complexity and learning ability.

The SVM has two kinds of models when applied to classification, namely, the linear SVM and the nonlinear SVM. First, let us provide a brief introduction of the linear SVM. Given a data set of points $(\mathbf{x}_i, y_i)$, $i = 1, 2, \ldots, n$, where $y_i$ is either $1$ or $-1$, indicating the class to which the point $\mathbf{x}_i$ belongs, each $\mathbf{x}_i$ is a $d$-dimensional real vector. We define a linear discriminant function $g(\mathbf{x}) = \mathbf{w}^{T}\mathbf{x} + b$ produced by a linear combination of the individual components. The decision rule for the two-class ($C_1$ and $C_2$) problem is as follows: if $g(\mathbf{x}) > 0$, then $\mathbf{x}$ belongs to $C_1$; if $g(\mathbf{x}) < 0$, then $\mathbf{x}$ belongs to $C_2$. In particular, the equation $g(\mathbf{x}) = 0$ defines the separating hyperplane, which can be rewritten as
$$\mathbf{w}^{T}\mathbf{x} + b = 0,$$
where $b$ is the offset in the linear space.

Remark 1. The weight vector $\mathbf{w}$ would be zero only when all points have the same class. When the points are linearly separable, the length of $\mathbf{w}$ determines a constant margin of $2/\|\mathbf{w}\|$ between the classes.
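As a simple illustration of the decision rule above, the following Matlab fragment classifies a single point with a hypothetical weight vector and offset; the numerical values are placeholders rather than quantities learned from the bankruptcy data.

% Illustration of the linear decision rule g(x) = w'*x + b (placeholder values)
w = [2; -1];               % hypothetical weight vector
b = 0.5;                   % hypothetical offset
x = [1.0; 0.3];            % a sample point to classify
g = w' * x + b;            % value of the discriminant function
if g > 0
    fprintf('x belongs to class C1\n');
elseif g < 0
    fprintf('x belongs to class C2\n');
else
    fprintf('x lies on the separating hyperplane\n');
end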

In the case that the samples are essentially not linearly separable, we need to use the nonlinear SVM, which constructs a hyperplane in a high-dimensional feature space. The classification function is represented by
$$f(\mathbf{x}) = \operatorname{sgn}\!\left(\sum_{i=1}^{n} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b^{*}\right),$$
where $\operatorname{sgn}(\cdot)$ stands for the sign function, $\alpha_i$ are the Lagrange multipliers of the dual problem, $K(\cdot,\cdot)$ is the kernel function defined below, and $b^{*}$ represents the optimal hyperplane offset in the nonlinear (feature) space.

Remark 2. The SVM maps the input vectors into a high-dimensional feature space through some nonlinear mapping chosen a priori. In this space, an optimal separating hyperplane is constructed. Strictly speaking, the so-called linear SVM is itself such an optimal separating hyperplane.

The kernel function is defined as follows:
$$K(\mathbf{x}_i, \mathbf{x}_j) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle,$$
where $\phi(\cdot)$ is the mapping from the input space to the feature space (a Hilbert space). In this paper, the Gaussian RBF is selected as the kernel function of the SVM due to the fact that only a single parameter needs to be tuned. The RBF can be expressed as follows:
$$K(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left(-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^{2}}{2\sigma^{2}}\right),$$
where $\exp(\cdot)$ stands for the exponential function and $\sigma > 0$ is the kernel width. In this situation, the corresponding feature space is a Hilbert space of infinite dimension.
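For concreteness, the RBF kernel above can be evaluated directly in Matlab as follows. Note that the LIBSVM library used later in this paper parameterizes the same kernel as exp(-gamma*||x - z||^2), so gamma = 1/(2*sigma^2) gives the equivalent setting; the vectors and the kernel width below are purely illustrative.

% Gaussian RBF kernel K(x, z) = exp(-||x - z||^2 / (2*sigma^2)) (illustrative inputs)
x = [1; 2; 0];
z = [0; 1; 1];
sigma = 0.5;                                % kernel width, the single RBF parameter
K = exp(-norm(x - z)^2 / (2*sigma^2));      % kernel value in (0, 1]
gamma = 1/(2*sigma^2);                      % equivalent LIBSVM '-g' value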

However, the performance of the SVM depends largely on the selection of its parameters, so setting appropriate values for the kernel parameters is a key step toward accurate SVM prediction. How to find the best values is difficult and remains an active research topic for the SVM algorithm. In this paper, a novel switching particle swarm optimization algorithm is used for the parameter optimization of the SVM. In particular, the position of a particle encodes the parameters to be optimized, and the fitness of a particle is calculated by using 10-fold cross validation. Therefore, the data set is divided into 10 groups in this paper; one of the 10 groups is used as the test sample and the other 9 groups are exploited to train the SVM. The fitness function is taken as the weighted sum of the resulting test (cross-validation) accuracy and training accuracy,
$$\mathrm{fitness} = w_1 \times \mathrm{ACC}_{\mathrm{test}} + w_2 \times \mathrm{ACC}_{\mathrm{train}},$$
where the weight values $w_1$ and $w_2$ are set as 0.8 and 0.2, respectively.
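A possible Matlab realization of this fitness evaluation with the LIBSVM interface is sketched below. The random training data, the candidate parameter values, and the exact blend of cross-validation and training accuracy are illustrative assumptions; only the 10-fold scheme and the 0.8/0.2 weights are fixed by the text above.

% Hedged sketch: fitness of one particle position (C, sigma) via LIBSVM 10-fold CV.
% Requires the LIBSVM Matlab interface (svmtrain/svmpredict) on the path.
Xtrain = rand(100, 6);                          % placeholder features (6 attributes)
ytrain = double(rand(100, 1) > 0.5) + 1;        % placeholder labels in {1, 2}
C = 10; sigma = 0.5;                            % candidate parameters encoded by the particle
gamma = 1/(2*sigma^2);                          % LIBSVM RBF parameterization
cv_acc = svmtrain(ytrain, Xtrain, sprintf('-t 2 -c %g -g %g -v 10', C, gamma));  % CV accuracy (%)
model = svmtrain(ytrain, Xtrain, sprintf('-t 2 -c %g -g %g', C, gamma));
[~, train_stats, ~] = svmpredict(ytrain, Xtrain, model);
fitness = 0.8*cv_acc + 0.2*train_stats(1);      % assumed weighted combination (w1 = 0.8, w2 = 0.2)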

2.2. Particle Swarm Optimization

PSO is an effective global optimization algorithm developed by Kennedy and Eberhart in 1995 [10]. It is inspired by the movement of organisms in a bird flock or fish school and optimizes a problem by having a population of candidate solutions, represented by particles, and moving these particles around the search space according to simple mathematical equations over each particle's position and velocity. Each particle's movement is influenced by its local best-known position and is also guided toward the best-known position in the search space, both of which are updated as better positions are found by other particles. Therefore, the algorithm is expected to move the swarm toward the best solutions. Compared with other optimization algorithms, the PSO algorithm retains a global search strategy, avoids complex genetic manipulation, and can often find a good solution within relatively few iterations.

Assume that the particle swarm searches in a $D$-dimensional space and that the population is composed of $N$ particles; the position of each particle in the population represents a solution to the problem. Particles constantly adjust their positions to search for new solutions. Each particle remembers its own best position, denoted by $\mathbf{p}_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, as well as the best position experienced by the swarm, that is, the currently optimal solution, denoted by $\mathbf{p}_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$. The velocity of the $i$th particle is represented by $\mathbf{v}_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. Therefore, the velocity and position of the particles at the $k$th iteration are updated according to the following equations:
$$v_{id}(k+1) = w\,v_{id}(k) + c_1 r_1 \bigl(p_{id}(k) - x_{id}(k)\bigr) + c_2 r_2 \bigl(p_{gd}(k) - x_{id}(k)\bigr),$$
$$x_{id}(k+1) = x_{id}(k) + v_{id}(k+1),$$
where $v_{id}(k)$ represents the velocity of the $i$th particle on the $d$th dimension at the $k$th iteration, $w$ is the inertia weight, $c_1$ and $c_2$ are constant acceleration coefficients, and $r_1$ and $r_2$ are random numbers uniformly distributed between 0 and 1.
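The update equations above translate directly into Matlab. The following sketch performs a single iteration for a small swarm with placeholder best positions; in practice, pbest and gbest are maintained from the fitness evaluations at every iteration.

% One iteration of the standard PSO update (placeholder swarm data)
N = 20; D = 2;                                % swarm size and problem dimension
X = rand(N, D); V = zeros(N, D);              % particle positions and velocities
pbest = X;                                    % each particle's best-known position
gbest = X(1, :);                              % swarm's best-known position
w = 0.7; c1 = 2.0; c2 = 2.0;                  % inertia weight and acceleration coefficients
r1 = rand(N, D); r2 = rand(N, D);             % uniform random numbers in [0, 1]
V = w*V + c1*r1.*(pbest - X) + c2*r2.*(repmat(gbest, N, 1) - X);   % velocity update
X = X + V;                                    % position update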

The original form of the PSO algorithm described above has been successfully applied in a variety of fields because of its advantages, such as its effectiveness in performing difficult optimization tasks and its ease of implementation with fast convergence to a reasonably good solution. However, the PSO algorithm suffers from premature convergence and does not consistently converge to the global optimum. Various methods have been developed to overcome this limitation; see, for example, [11, 12, 17] and the references therein.

3. Switching Particle Swarm Optimization for the Parameter Optimization of SVM

In this section, a modified version of the traditional PSO algorithm, namely, the SPSO algorithm [11, 12], is employed to optimize the parameters of the SVM for the bankruptcy prediction problem.

In the SPSO algorithm, a mode-dependent velocity updating equation with Markovian switching parameters is introduced to overcome the contradiction between local search and global search. Generally speaking, in the early search stage, each particle in the swarm should keep its independence and preserve the swarm's diversity, which helps to enlarge the search scope and avoid premature convergence to a local optimum. In the later stage of the search process, the whole swarm may converge toward the best particle in order to obtain a more accurate solution. The velocity and position of the particles are updated according to the following equations:
$$v_{id}(k+1) = w(\xi(k))\, v_{id}(k) + c_1(\xi(k))\, r_1 \bigl(p_{id}(k) - x_{id}(k)\bigr) + c_2(\xi(k))\, r_2 \bigl(p_{gd}(k) - x_{id}(k)\bigr),$$
$$x_{id}(k+1) = x_{id}(k) + v_{id}(k+1),$$
where $w(\xi(k))$, $c_1(\xi(k))$, and $c_2(\xi(k))$ are the inertia weight and the acceleration coefficients, respectively, all of which depend on the mode of a Markov chain. Let $\xi(k)$ be a Markov chain taking values in a finite state space $S = \{1, 2, \ldots, N_s\}$ with probability transition matrix $\Pi = [\pi_{uv}]_{N_s \times N_s}$, where $\pi_{uv}$ is the transition probability from mode $u$ to mode $v$ and $\sum_{v=1}^{N_s} \pi_{uv} = 1$. The transition probabilities can be adjusted according to the current search information so as to balance the global and local search abilities, based on the following swarm diversity measure:
$$\mathrm{Div}(k) = \frac{1}{N\,|L|} \sum_{i=1}^{N} \sqrt{\sum_{j=1}^{D} \bigl(x_{ij}(k) - \bar{x}_j(k)\bigr)^{2}},$$
where $N$ is the swarm size, $|L|$ is the length of the longest diagonal in the search space, $D$ is the dimension of the object problem, $x_{ij}(k)$ is the $j$th value of the $i$th particle, and $\bar{x}_j(k)$ is the $j$th value of the average point in the whole swarm, which can be computed by
$$\bar{x}_j(k) = \frac{1}{N} \sum_{i=1}^{N} x_{ij}(k).$$
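A minimal Matlab sketch of one SPSO iteration is given below. The number of modes, the mode-dependent parameter values, and the transition probability matrix are illustrative assumptions; only the structure of the update (Markovian switching of the coefficients plus the swarm diversity measure) follows the description above.

% Hedged sketch of one SPSO iteration with mode-dependent parameters (illustrative settings)
N = 20; D = 2;                                   % swarm size and problem dimension
X = rand(N, D); V = zeros(N, D);                 % positions and velocities
pbest = X; gbest = X(1, :);                      % best positions, maintained elsewhere
modes = struct('w', {0.9, 0.4}, 'c1', {2.5, 1.5}, 'c2', {1.5, 2.5});  % two example modes
PI = [0.8 0.2; 0.3 0.7];                         % illustrative transition probability matrix
xi = 1;                                          % current mode of the Markov chain
xi = find(rand <= cumsum(PI(xi, :)), 1);         % sample the next mode from row xi of PI
m = modes(xi);                                   % mode-dependent w, c1, c2
V = m.w*V + m.c1*rand(N, D).*(pbest - X) + m.c2*rand(N, D).*(repmat(gbest, N, 1) - X);
X = X + V;                                       % position update
L = sqrt(D);                                     % longest diagonal of the unit search space
Div = mean(sqrt(sum((X - repmat(mean(X, 1), N, 1)).^2, 2))) / L;   % swarm diversity measure
% Div would then be used to adjust the transition probabilities toward global or local search.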

Based on the above discussion, the pseudocode of the SPSO algorithm is illustrated in Algorithm 1.

(1) Parameter and swarm initialization: SwarmParaInitialization()
(2) while not satisfying the terminal condition do
(3)   for each particle i in the swarm do
(4)     evaluate Fitness(i)
(5)     update ParticleBest(i)
(6)     update SwarmBest()
(7)     if … then
(8)       increase the … of …
(9)     end if
(10)    if … then
(11)      increase the … of …
(12)    end if
(13)    if … then
(14)      …
(15)    end if
(16)    calculate the new Velocity(i) of the particle by the SPSO velocity updating equation
(17)    calculate the new Position(i) of the particle by the SPSO position updating equation
(18)  end for
(19) end while

Finally, the flowchart of using the SPSO algorithm to optimize the parameters of SVM is presented in Figure 1.

4. An Illustrative Example

In this section, we apply the hybrid SPSO and SVM algorithm to the bankruptcy prediction problem. The experimental data sets are taken from the UCI Machine Learning Repository, donated on February 9, 2014, with URL http://archive.ics.uci.edu/ml/datasets.html. The data sets consist of 143 nonbankruptcy samples and 107 bankruptcy samples; that is, there are 250 samples in total. In particular, each sample includes 6 attributes corresponding to the qualitative parameters in the bankruptcy problem. Each attribute can take one of three qualitative values, namely, "P," "A," and "N." There are only two prediction results for the bankruptcy problem, which are "B" (bankruptcy) and "NB" (nonbankruptcy).

The SPSO algorithm is implemented in the Matlab environment, and the LIBSVM library [18] is exploited for the SVM classifier. Firstly, the qualitative attribute values "P," "A," and "N" are encoded as 1, 0, and −1, respectively. Secondly, the class labels "NB" and "B" are encoded as 1 and 2, respectively, which represent the prediction results of the SVM belonging to the first class and the second class. At the same time, a uniformly distributed random noise is added to the sample data sets in order to represent the different weights that experts would assign when predicting bankruptcy. Finally, samples 1 to 71 of the nonbankruptcy class and samples 144 to 196 of the bankruptcy class are selected as the training data sets for the SVM. In particular, we use 10-fold cross validation to find the optimal parameter values of the kernel function.
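For completeness, the following Matlab fragment sketches the encoding and the final training and prediction steps described above using the LIBSVM interface. The two hand-written records, the noise magnitude, and the chosen (C, sigma) values are illustrative placeholders rather than the values actually used or obtained by the SPSO search.

% Hedged sketch: qualitative encoding, noise injection, and final SVM training/prediction
attrmap = containers.Map({'P', 'A', 'N'}, {1, 0, -1});      % attribute value encoding
classmap = containers.Map({'NB', 'B'}, {1, 2});             % class label encoding
raw = {'P','P','A','A','P','N','NB'; 'N','N','A','N','N','P','B'};   % two example records
X = zeros(size(raw, 1), 6); y = zeros(size(raw, 1), 1);
for i = 1:size(raw, 1)
    for j = 1:6
        X(i, j) = attrmap(raw{i, j});                        % P/A/N -> 1/0/-1
    end
    y(i) = classmap(raw{i, 7});                              % NB/B -> 1/2
end
X = X + 0.05*(rand(size(X)) - 0.5);                          % small uniform noise (magnitude assumed)
C = 10; sigma = 0.5;                                         % stand-ins for the SPSO-optimized values
gamma = 1/(2*sigma^2);
model = svmtrain(y, X, sprintf('-t 2 -c %g -g %g', C, gamma));
[pred, acc, ~] = svmpredict(y, X, model);                    % predicted labels and accuracy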

The classification results of the SVM model and the proposed hybrid SPSO and SVM model are shown in Figures 2 and 6, respectively. For the 126 testing samples, we can see from Figure 2 that the basic SVM model, whose parameters are set according to experience, correctly predicts only 72 testing samples; the average prediction accuracy is 57.1429%. However, it can be seen from Figure 6 that the optimized SVM model correctly predicts 125 testing samples; the average prediction accuracy reaches 99.2063%. On the other hand, the fitness curves of the SVM models optimized by the GA, PSO, and the SPSO algorithm are shown in Figures 3–5, respectively. It is observed from Figures 3–5 that the GA-SVM algorithm has better prediction accuracy than the PSO-SVM method; however, the GA-SVM algorithm takes more time. In particular, we can also see from these figures that the presented hybrid SPSO and SVM method outperforms the other three methods when the time and the prediction accuracy are considered simultaneously.

In order to further evaluate the performance of the proposed method in a quantitative way, we calculate the prediction accuracy and the simulation time for the above four SVM models. The results are shown in Table 1. Compared with the SVM, GA-SVM, and PSO-SVM methods, the experimental results show the superiority of the proposed method, which can largely improve the explanatory power and the stability of the SVM.

5. Conclusions

In this paper, we have presented a hybrid SPSO and SVM algorithm for the bankruptcy prediction problem. Note that the bankruptcy prediction problem is a critical issue in the accounting and finance field. In particular, a recently developed SPSO algorithm has been exploited to search for the optimal parameter values of the RBF kernel of the SVM. Compared with the SVM, GA-SVM, and PSO-SVM methods, the experimental results show the superiority of the proposed method, which can largely improve the explanatory power and the stability of the SVM. Finally, the proposed algorithm has been successfully applied to the bankruptcy prediction model based on data sets taken from the UCI Machine Learning Repository, which provides a new method for bankruptcy prediction. Future research directions include the application of the developed hybrid SPSO and SVM algorithm to filtering and control problems [19–33].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61422301, 61374127, and 61104041, the Natural Science Foundation of Heilongjiang Province of China under Grant F201428, the Scientific and Technology Research Foundation of Heilongjiang Education Department under Grants 12541061 and 12541592, the 12th Five-Year-Plan in Key Science and Technology Research of agricultural bureau in Heilongjiang province of China under Grant HNK125B-04-03, the Doctoral Scientific Research Foundation of Heilongjiang Bayi Agricultural University under Grant XDB2014-12, the Foundation for Studying Abroad of Heilongjiang Bayi Agricultural University, the Natural Science Foundation for Distinguished Young Scholars of Heilongjiang Province under Grant JC2015016, Jiangsu Provincial Key Laboratory of E-business, Nanjing University of Finance and Economics, under Grant JSEB201301, and the Major Program of Fujian under Grant 2012I01010428.