Mathematical Problems in Engineering


Research Article | Open Access

Volume 2019 |Article ID 2039872 | 16 pages | https://doi.org/10.1155/2019/2039872

Predict the Entrepreneurial Intention of Fresh Graduate Students Based on an Adaptive Support Vector Machine Framework

Academic Editor: Erik Cuevas
Received 21 Aug 2018
Revised 16 Dec 2018
Accepted 01 Jan 2019
Published 20 Jan 2019

Abstract

Against the background of “innovation and entrepreneurship,” how fresh graduates can scientifically and rationally choose between employment and independent entrepreneurship according to their overall circumstances is of great significance both to the planning and development of their own careers and to the social adaptation of university personnel training. This study aims to develop an adaptive support vector machine framework, called RF-CSCA-SVM, for predicting college students' entrepreneurial intention in advance; that is, whether students choose to start a business or find a job after graduation. RF-CSCA-SVM combines random forest (RF), support vector machine (SVM), the sine cosine algorithm (SCA), and chaotic local search. In this framework, RF is used to select the most important factors; SVM is employed to establish the relationship model between the factors and the students’ decision to start their own business or look for a job; SCA is used to tune the optimal parameters for SVM; and chaotic local search is utilized to enhance the search capability of SCA. Data from a total of 300 students were collected to develop the predictive model. To validate the developed method, four other metaheuristic-based SVM methods were used for comparison in terms of classification accuracy, Matthews Correlation Coefficient (MCC), sensitivity, and specificity. The experimental results demonstrate that the proposed method achieves excellent predictive performance. Promisingly, the established adaptive SVM framework might serve as a new candidate among powerful tools for entrepreneurial intention prediction.

1. Introduction

In recent years, with the expansion of college enrollment in China, the number of college graduates has been increasing every year, and the employment situation has become increasingly tense. To address the employment problem, the government and universities are trying all kinds of solutions. For example, encouraging some college students to carry out their own entrepreneurial activities and implementing the strategy of “employment driven by entrepreneurship” is currently an important way to alleviate the employment problem of college students. However, statistics show that the proportion of college students starting a business after graduation is still low: less than 5% of graduates choose to start their own businesses. At the same time, the satisfaction rate with entrepreneurship three years after graduation is not high. Although many students have received relevant training in entrepreneurship and innovation education at school, they are still confused about the choice between entrepreneurship and employment when they graduate. Therefore, it is necessary to analyze in depth the rational choice behavior of college students between employment and entrepreneurship, in order to find an effective way for students to choose their post-graduation direction scientifically. At present, colleges and universities have accumulated large amounts of data. By deeply mining and analyzing these data, we can establish an intelligent prediction model, identify the factors that affect whether college students choose entrepreneurship or employment after graduation, and further analyze the potential correlations between these factors to guide students towards a better choice.

Up to now, data mining technology has been applied to build models analyzing issues related to the employment of college graduates. The decision tree (DT) is one of the most commonly used models. Zang et al. [1] constructed a DT model with the ID3 algorithm in order to find, in huge amounts of data, information helpful to the employment of graduates. To analyze the relationship between college students’ employment and academic performance, reading status, and other factors, Liu et al. [2] proposed a novel method, the Information Gain with Weight Based Decision Tree (IGWDT). The information gain weight (IGW) defined in IGWDT improves the information gain method; a genetic algorithm is used to determine the IGW values reflecting the degree of influence of different factors, and the most relevant factors are then obtained. Experiments show that the method achieves good results. Li et al. [3] applied a DT model with the C4.5 algorithm to trace useful information for improving the employment rate of college graduates. The experimental results show that the proposed method classifies employment data more quickly and correctly and provides valuable results for analysis and decision making. Wang [4] conducted a more in-depth study of the methods and patterns of college students’ employment data mining, mainly studying the establishment of employment data mining models based on uniformly distributed, nonuniformly distributed, increasingly distributed, and decreasingly distributed fuzzy decision trees (FDT). Apart from DT models, many other models are often mentioned in the literature. For example, Bayesian methods, neural networks (NN), sequential minimal optimization (SMO), and ensemble methods were utilized by Mishra et al. [5] to predict whether students would be at risk of unemployment and to help schools take timely measures to train students and improve their employment rate. In addition, Xu et al. [6] used a Bayesian algorithm to establish a classification model of graduate employment choice. In this model, the inclusion degree is calculated using fuzzy mathematics to determine the category of each student; the categories are set according to graduates’ degree of job satisfaction, namely satisfactory, generally satisfactory, and unsatisfactory. Experiments proved that this method achieves high accuracy. In [7], Rahman et al. tried to determine the model with the highest accuracy for predicting whether graduates would be employed or unemployed six months after graduation. Thakar et al. [8] proposed constructing a unified forecasting model integrating clustering and classification techniques to identify whether students face unemployment risk. In this model, a K-means kernel with chi-square analysis is applied in the data preprocessing stage, and a combination of K-star, random tree, simple CART, and random forest is then used for prediction. Tan et al. [9] used k-nearest neighbors (KNN), naïve Bayes (NB), DT, NN, logistic regression, and support vector machines (SVM) to predict and evaluate the attributes of student data sets in order to determine which graduate qualifications are needed by enterprises. Lanka et al. [10] applied KNN and NB to predict whether students would be recognized by enterprises; experiments showed that both produced good prediction results. Addressing the unsatisfactory evaluation systems for college students’ entrepreneurship, literature [11] put forward an evaluation system based on backpropagation NN theory to promote the development of college students’ entrepreneurship education.

To sum up, there is a great deal of work on predicting students’ employability, but there is no report on predicting whether college students will choose entrepreneurship or employment after graduation. This paper attempts to use a new machine learning framework to predict students' entrepreneurial intention after graduation, that is, whether they choose to start a business or not. The proposed framework consists of three main parts. The first part uses the random forest (RF) method to select the key features in the data; the second part uses the chaotic local search based sine cosine algorithm (CSCA) to optimize an SVM model; and the third part uses the CSCA-SVM obtained from the previous stage to predict new samples. The sine cosine algorithm (SCA) is a new swarm intelligence method that was proposed recently by Mirjalili [12]. Since its introduction, SCA has successfully found applications to many practical problems [13–18]. However, like other intelligent algorithms [19–25], the original SCA easily falls into local optima when solving practical problems. In this study, we use the chaotic local search (CLS) strategy to enhance the local search capability of SCA, and we term the result CSCA. The CSCA was then used to determine the optimal parameters for SVM. The experimental results show that the CLS strategy indeed improves the solution quality and convergence speed of SCA. At the same time, it is also observed that CSCA boosts the performance of SVM compared with SVM models based on many other nature-inspired metaheuristic algorithms. The efficacy of the proposed RF-CSCA-SVM framework was rigorously compared against four other SVM models based on different swarm intelligence algorithms, including SCA, the moth-flame optimization (MFO) algorithm [26], the bat algorithm (BA) [27], and the grasshopper optimization algorithm (GOA) [28], on a real-life dataset collected from Wenzhou University. The classifiers were compared in terms of four common performance metrics: classification accuracy (ACC), sensitivity, specificity, and the Matthews Correlation Coefficient (MCC). The experimental results demonstrate that the proposed RF-CSCA-SVM approach achieves much better performance than the other competitive counterparts.

The rest of this paper is structured as follows. Section 2 offers brief description on the methodology including random forest, support vector machine, sine cosine algorithm, and chaotic local search strategy. The experimental design is given in Section 3. Section 4 presents the detailed simulation results. The discussion on the experimental results is delivered in Section 5. Finally, Section 6 summarizes the conclusions and recommendations for future work.

2. Methods

In the SVM model, a Gaussian kernel function is needed to compute high-dimensional inner products. The gamma value of the Gaussian kernel determines the kernel width of the SVM. Meanwhile, the soft-margin SVM needs to introduce a penalty factor to reduce noise interference. The values of gamma and the penalty factor have a great impact on the classification results. Currently, grid search and gradient descent are commonly used for SVM parameter optimization, but their main disadvantage is that they easily fall into local optima. SCA with a chaotic local search mechanism can strike a good balance between local and global search ability, so searching for these two key SVM parameters with this method can better determine their optimal values.

This section gives a detailed introduction to the proposed framework for the entrepreneurial intention prediction of college students, named RF-CSCA-SVM. The main flow of the framework is shown in Figure 1. The whole process is divided into three parts. The first part normalizes the data and evaluates each feature with the random forest (RF) method. The second part constructs an effective SVM model with optimal parameters based on the improved SCA; at the same time, this optimal model is used to evaluate different feature sets in an incremental way and obtain the best feature set. The third part uses the optimal model constructed in the previous stage to predict new data samples.

The main steps conducted by the RF-CSCA-SVM are described in detail as follows:
(i) Step 1: Normalize the data.
(ii) Step 2: Evaluate each feature of the data using the RF algorithm and select the optimal subset in an incremental manner based on the importance of each feature.
(iii) Step 3: Initialize a population randomly based on the upper and lower bounds of the variables.
(iv) Step 4: Evaluate the fitness of all search agents by the SVM, taking each agent as the parameter pair, and update the best solution obtained so far.
(v) Step 5: Update the position of each search agent according to the chaotic local search enhanced SCA.
(vi) Step 6: Check whether any search agent has gone beyond the search space and, if so, amend it.
(vii) Step 7: Evaluate the fitness of all search agents by the SVM, taking each agent as the parameter pair, and update the best solution obtained so far.
(viii) Step 8: Update the iteration counter, t = t + 1. If t is less than the maximum number of iterations, go to Step 5.
(ix) Step 9: Return the best solution as the optimal SVM parameter pair (C, γ).

The computational complexity of the CSCA-SVM method depends on the number of samples (S), the dimension of the problem (D), the population size (p), and the number of iterations (g). The overall cost is the cost of initialization plus g iterations of fitness evaluation, best-solution updating, SCA position updating, and the chaotic local search around the best solution. Training an SVM on S samples costs O(S³), so evaluating the fitness of all agents costs O(p·S³); updating the positions of the search agents costs O(p·D); and the chaotic local search on the best solution adds only a lower-order term per iteration. Therefore, the final computational complexity of the CSCA-SVM method is O(p·D + g·p·(D + S³)), which is dominated by O(g·p·S³).

2.1. Feature Selection Method: Random Forest (RF)

Random forest (RF) [29] is an ensemble machine learning method that uses bootstrap sampling and random node splitting to construct multiple decision trees and obtains the final classification result by voting. RF can analyze classification problems with complex feature interactions, is robust to noisy data and data with missing values, and learns quickly. Its variable importance measure can be used as a feature selection tool [30]. In recent years, it has been widely applied to classification, feature selection, and outlier detection problems. This paper mainly uses the mean decrease in accuracy to measure the importance of each feature. This method directly measures the impact of each feature on the accuracy of the model: the values of one feature at a time are randomly permuted, and the resulting change in model accuracy is recorded. For unimportant features, permuting the values hardly affects the accuracy of the model, whereas for important features this operation clearly reduces it.
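The mean-decrease-accuracy idea can be sketched in a few lines. The toy data and the stand-in model below are illustrative only and are not the paper's random forest:

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Mean decrease in accuracy: shuffle one feature column at a time
    and record the drop in accuracy relative to the intact data."""
    rng = np.random.default_rng(seed)
    base = np.mean(model_fn(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # permute column j in place
            drops[j] += base - np.mean(model_fn(Xp) == y)
    return drops / n_repeats

# toy data: the label depends only on feature 0; feature 1 is pure noise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
model = lambda Z: (Z[:, 0] > 0).astype(int)   # stand-in for a trained model
importance = permutation_importance(model, X, y)
```

As expected, permuting the informative column causes a large accuracy drop, while permuting the noise column changes nothing.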

2.2. Prediction Engine: Support Vector Machine (SVM)

Support vector machine (SVM) is an advanced artificial intelligence technique [31]. It is mainly based on VC dimension theory and the structural risk minimization principle, and it tries to find a compromise between minimizing the training error and maximizing the classification margin so as to obtain the best generalization ability. At present, SVM has been widely used to solve various practical problems [25, 32–42].

SVM is a learning algorithm designed mainly for small samples, which makes it well suited to the predictive modeling of the present case. In this experiment, the proposed CSCA is adopted to obtain the optimal parameters of the SVM model, which is then used to construct the optimal classification function, as shown in the following:

f(x) = sgn( Σ_{i=1}^{n} a_i · y_i · K(x_i, x) + b )     (1)

In Eq. (1), K(x_i, x) = exp(−γ‖x_i − x‖²) is the Gaussian kernel function, a_i is the Lagrange coefficient, b is the threshold value, x_i (i = 1…n) are the training samples, and x is the sample to be tested. y_j (j = 1…n) indicates the label corresponding to the jth training sample and takes the value 1 or 2, where 1 and 2 represent the students who choose to start their own business and those who look for jobs, respectively; T indicates the target vector, T = (y_1, y_2, …, y_n).
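As a concrete illustration, the SVM decision function with a Gaussian kernel can be evaluated directly once the support vectors, Lagrange coefficients, and threshold are known. The sketch below uses the standard ±1 label convention rather than the paper's 1/2 class codes, and the hand-set support vectors and weights in the usage lines are purely illustrative:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    # Gaussian kernel K(a, b) = exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def svm_decision(x, support_x, support_y, alphas, b, gamma):
    """Evaluate sign(sum_i alpha_i * y_i * K(x_i, x) + b), labels in {-1, +1}."""
    s = sum(a * yi * rbf_kernel(xi, x, gamma)
            for a, yi, xi in zip(alphas, support_y, support_x))
    return 1 if s + b >= 0 else -1

# two hand-set "support vectors": one per class, equal weights
sx, sy, alphas = [[0.0], [2.0]], [1, -1], [1.0, 1.0]
pred_near_pos = svm_decision([0.1], sx, sy, alphas, b=0.0, gamma=1.0)
pred_near_neg = svm_decision([1.9], sx, sy, alphas, b=0.0, gamma=1.0)
```

Because the Gaussian kernel decays with distance, a test point close to the positive support vector is assigned +1 and a point close to the negative one is assigned −1.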

2.3. Chaotic Local Search Enhanced Sine Cosine Algorithm (CSCA)

SCA was first put forward by Mirjalili [12]. It initializes a set of random positions (agents) and makes them move outwards or towards the best position. This movement is realized by a mathematical means whose core is sine and cosine formulas. When the core function returns a value less than −1 or greater than 1, this movement mechanism enables the random agents to explore different regions of the search space. When the value returned by the core function is between −1 and 1, the mechanism drives the agents to exploit the most promising region. In order to make the search shift gradually from exploration to exploitation, the SCA algorithm adopts a very effective mechanism that applies an adaptive range to the core function.

The position-update formula used in the SCA algorithm is presented as follows:

X_i^{t+1} = X_i^t + r1 · sin(r2) · |r3 · P^t − X_i^t|,  if r4 < 0.5
X_i^{t+1} = X_i^t + r1 · cos(r2) · |r3 · P^t − X_i^t|,  if r4 ≥ 0.5     (4)

where X_i^t is the position of the ith agent at iteration t, P^t is the best position obtained so far, r1 = a − t(a/T) decreases linearly over the T iterations, and r2 ∈ [0, 2π], r3 ∈ [0, 2], and r4 ∈ [0, 1] are random numbers.

The general framework of SCA is as in Algorithm 1.

Begin
  Initialize a set of search agents (solution) (X)
  Do
   Evaluate each of the search agents by the objective function
   Update the best solution obtained so far (P)
   Update r1, r2, r3, and r4
   Update the position of search agents using Eq. (4)
  While (t < maximum number of generations)
  Return the best solution obtained so far as the global optimum
End
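A minimal Python sketch of this loop, assuming the standard SCA update with destination point P and the linearly decreasing amplitude r1 = a − t(a/T); the sphere objective and all parameter values below are illustrative:

```python
import numpy as np

def sca_minimize(obj, dim, lb, ub, n_agents=20, max_iter=300, a=2.0, seed=0):
    """Plain SCA: agents move around the best position P found so far
    using sine/cosine steps whose amplitude r1 shrinks linearly."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    best = min(X, key=obj).copy()
    best_f = obj(best)
    for t in range(max_iter):
        r1 = a - t * (a / max_iter)                 # shrinking amplitude
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(size=dim)
            step = np.abs(r3 * best - X[i])
            X[i] = X[i] + np.where(r4 < 0.5,
                                   r1 * np.sin(r2) * step,
                                   r1 * np.cos(r2) * step)
            X[i] = np.clip(X[i], lb, ub)            # keep agents inside bounds
            f = obj(X[i])
            if f < best_f:
                best_f, best = f, X[i].copy()
    return best, best_f

best, best_f = sca_minimize(lambda v: float(np.sum(v ** 2)), dim=5,
                            lb=-10.0, ub=10.0)
```

On the 5-dimensional sphere function the returned best fitness is driven close to the optimum of 0.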

Chaos is a classic nonlinear natural phenomenon that has long attracted attention, and it has many unique characteristics. Chaos is very sensitive to its initial conditions, and it also exhibits randomness and ergodicity [43]. Randomness means that the variation of a chaotic system is random; ergodicity means that, given enough time, a chaotic system can cover all regions of its space. These characteristics, especially randomness and ergodicity, coincide with those required of a good search operator, so the optimization of an objective function can exploit them. The time spent on chaos optimization is acceptable on a small scale, but the time cost becomes hard to bear once the search space grows large. To solve this problem, chaotic local search (CLS) can be integrated into other intelligent optimization algorithms; for example, CLS has been introduced into GA, PSO, and BFO [23, 44–46]. As in other intelligent optimization algorithms, both the initial solutions of SCA and the solutions generated during its iterations are highly random, which means that any solution of SCA has great potential to be further optimized. An improvement strategy based on this characteristic can be regarded as an overall mutation of the SCA population. If we repeatedly add the chaotic perturbation factor to the current solution during the iterations of SCA, many new solutions with higher fitness may be obtained. Consequently, we have reason to believe that the convergence speed of SCA will be greatly improved by continually replacing the original solution with the best solution obtained by chaotic local search.

Integrating the CLS mechanism into the SCA optimization algorithm can not only enhance its search ability but also help it avoid falling into local optima. In this paper, the well-known logistic formula is used to generate chaotic factors and construct the chaotic system. The logistic formula [47] is very sensitive to initial conditions. The chaotic system constructed from it is integrated into the SCA algorithm, thus constructing a new hybrid algorithm. The logistic formula is as follows:

x_{k+1} = μ · x_k · (1 − x_k)

where x_k ∈ (0, 1) is the variable quantity and μ is the adjustment factor. Judging from its expression, the logistic formula is deterministic. However, when we set μ to 4 and make x_0 not equal to any of 0, 0.25, 0.5, 0.75, and 1, it generates a series of chaotic factors. This is the sensitivity to initial conditions mentioned above, which is also an important feature of chaos: any slight difference in the initial conditions results in a huge difference in the subsequently generated chaotic sequence. The traces of the chaotic factors generated by the chaotic system can be found throughout the whole region that needs to be searched, and they exhibit both irregularity and randomness.
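A few lines suffice to generate such a chaotic sequence and to observe the sensitivity to initial conditions; the specific starting values are illustrative:

```python
def logistic_sequence(x0, n, mu=4.0):
    """Iterate x_{k+1} = mu * x_k * (1 - x_k).  With mu = 4 and x0 not in
    {0, 0.25, 0.5, 0.75, 1}, the orbit is chaotic and stays within [0, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

orbit_a = logistic_sequence(0.2, 50)
orbit_b = logistic_sequence(0.2000001, 50)  # tiny change in the initial value
```

Although the two starting points differ by only 1e-7, the orbits diverge completely within a few dozen iterations, which is exactly the sensitivity the text describes.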

In this paper, chaotic local search is described as follows:

X_new^k = (1 − λ) · X_best + λ · (LB + c_k · (UB − LB))

where X_new^k is the kth new position vector produced by chaotic local search, X_best is the position vector of the best solution obtained so far, c_k is the kth chaotic value in the chaotic sequence, and LB and UB are the lower and upper bounds of the search space, respectively. λ is the shrinking scale given by the following:

λ = (Max_iter − t + 1) / Max_iter

where Max_iter is the maximum number of iterations and t is the current iteration number.

For the SCA algorithm, the CLS strategy accelerates the convergence speed. This acceleration is achieved by continually generating new positions with chaotic factors during the iterations and continually adopting a greedy method to preserve the optimal solution. In this way, the solutions of the whole population are optimized, and the optimal solution naturally moves towards the global optimum. The general framework of CSCA is shown in Figure 2.
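The CLS pass can be sketched as follows. The blend of the best position with the chaotic point and the form of the shrinking scale follow common CLS variants and are assumptions here, as are the toy objective and starting position:

```python
import numpy as np

def chaotic_local_search(obj, best, lb, ub, t, max_iter, n_steps=20, x0=0.7):
    """One CLS pass: map logistic values into the search box, blend them with
    the current best position using a shrinking scale, and keep improvements."""
    lam = (max_iter - t + 1) / max_iter          # assumed shrinking-scale form
    best = np.asarray(best, dtype=float).copy()
    best_f = obj(best)
    x = x0
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)                  # logistic chaotic factor
        candidate = (1.0 - lam) * best + lam * (lb + x * (ub - lb))
        f = obj(candidate)
        if f < best_f:                           # greedy: keep only improvements
            best, best_f = candidate, f
    return best, best_f

start = np.array([3.0, -2.0])                    # toy current-best position
refined, refined_f = chaotic_local_search(lambda v: float(np.sum(v ** 2)),
                                          start, -10.0, 10.0, t=1, max_iter=100)
```

The greedy acceptance guarantees the refined solution is never worse than the input, which is what lets CLS be applied every iteration without risk.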

3. Experimental Designs

3.1. Data Collection and Preparation

The data in this paper is mainly from the 2016 and 2017 graduates of Wenzhou University. From these graduates, 300 students were selected as research subjects, of which 136 were self-employed and 164 were employed. Through the analysis of the subjects’ gender, political status, level of education, major, education year, type of normal school students, type of poor students, grade point average (GPA), and total credits, this study intends to investigate the importance and interrelationships of these nine attributes so as to establish a predictive model for decision support. Table 1 is a detailed description of the nine attributes.


Attribute | Name | Description

F1 | Gender | Male and female students are represented by 1 and 2, respectively.
F2 | Political Status | It is divided into four categories: members of the Communist Party of China, probationary members of the CPC, members of the Communist Youth League, and the masses, represented by 1, 2, 3, and 4, respectively.
F3 | Level of Education | It is divided into master’s degree, undergraduate degree, and college degree, represented by 1, 2, and 3, respectively.
F4 | Major | It is divided into liberal arts and science, represented by 1 and 2, respectively.
F5 | Education Year | It is divided into 2 years, 3 years, 4 years, and 5 years, represented by 1, 2, 3, and 4, respectively.
F6 | Type of Normal School Students | It is divided into normal students and non-normal students, represented by 1 and 2, respectively.
F7 | Type of Poor Students | It is divided into four categories: non-difficult students, employment difficulties, family difficulties, and both employment and family difficulties, represented by 1, 2, 3, and 4, respectively.
F8 | Grade Point Average (GPA) | GPA is a way for the school to assess students’ learning quality. The score lies within the interval 0–4.
F9 | Total Credits | A unit of measurement used to calculate students’ learning volume. The more credits students receive, the more they have learned.

3.2. Experimental Setup

To validate the proposed approach, we conducted a comparative study between the proposed method and several SVM methods based on other nature-inspired algorithms, including PSO, GA, and SCA. LIBSVM [48] was utilized for the SVM implementation; PSO, GA, SCA, and CSCA were implemented from scratch. The empirical experiments were conducted on a Windows Server 2008 R2 operating system with an Intel(R) Xeon(R) CPU E5-2660 v3 (2.60 GHz) and 16 GB of RAM.

Data were first scaled before classification. To obtain an unbiased estimate of the generalization accuracy, k-fold cross-validation (CV) was used to evaluate the classification accuracy [49]. The two SVM parameters, penalty factor C and kernel width γ, were both tuned within the same search range. The parameters of PSO, GA, and SCA were set as follows: the two constant factors c1 and c2 in PSO were set to 2, and the inertia weight w in PSO was set to 1; the crossover fraction and mutation probability in GA were set to 0.8 and 0.01, respectively; and the constant a in SCA and CSCA was set to 2.
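Assuming the usual column-wise min-max normalization for the scaling step, it can be written as:

```python
import numpy as np

def min_max_scale(X):
    """Column-wise min-max normalization of a feature matrix."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (X - lo) / span

scaled = min_max_scale([[1, 10], [2, 20], [3, 30]])
```

Each column is mapped onto [0, 1] independently, so features measured on very different scales (e.g., GPA versus total credits) contribute comparably to the kernel distances.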

To evaluate the proposed method, commonly used evaluation criteria, namely classification accuracy (ACC), sensitivity, specificity, and the Matthews Correlation Coefficient (MCC), were analyzed. They are defined as follows:

ACC = (TP + TN) / (TP + FP + TN + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
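These four criteria can be computed directly from confusion-matrix counts; the counts in the usage line below are made-up numbers for illustration:

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """ACC, sensitivity, specificity, and MCC from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)             # true-positive rate
    specificity = tn / (tn + fp)             # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sensitivity, specificity, mcc

acc, sens, spec, mcc = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```

Unlike plain accuracy, MCC accounts for all four cells of the confusion matrix, which is why it is reported alongside ACC for this moderately imbalanced dataset.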

4. Experimental Results and Discussion

4.1. Benchmark Function Validation

In order to evaluate the performance of the CSCA algorithm, a series of experiments on classical benchmark functions was conducted in this section. The benchmark functions can be divided into three groups: unimodal (see Table 2), multimodal (see Table 3), and fixed-dimension multimodal (see Table 4). Tables 2–4 include the following for each function: the mathematical equation, the dimension of the solution space, the range of the optimization variables, and the theoretical optimal value.


[Table 2: the seven unimodal benchmark functions f1–f7, each of dimension 30 with a theoretical optimum of 0.]

[Table 3: the six multimodal benchmark functions f8–f13, each of dimension 30.]

[Table 4: the ten fixed-dimension multimodal benchmark functions f14–f23, with dimensions between 2 and 6 and theoretical optima of approximately 1, 0.0003, −1.0316, 0.398, 3, −3.86, −3.32, −10.1532, −10.4028, and −10.5363, respectively.]

The performance of CSCA is compared with that of MFO, BA, DA, FPA, GOA, SSA, and the original SCA. The simulation results of CSCA on the benchmark functions are presented in Table 5, which records the average value (mean) and the standard deviation (std) of the best solution found by each algorithm over 30 independent runs. For a fair comparison, all algorithms were implemented in the same environment on the same computing platform and used the same global settings: the maximum number of iterations and the population size were set to 1000 and 30, respectively. In addition, the specific parameter values of the above algorithms, such as MFO, BA, and the original SCA, were set according to their original papers.


Function | Metric | CSCA | MFO | BA | DA | FPA | GOA | SSA | SCA

f1 | mean | 1.85E−09 | 2.12E−01 | 1.59E+01 | 1.57E+03 | 5.04E+02 | 3.76E+01 | 1.60E−08 | 2.89E−02
f1 | std | 3.34E−09 | 6.31E−01 | 1.41E+00 | 5.17E+02 | 2.01E+02 | 1.98E+01 | 2.75E−09 | 3.48E−02
f1 | rank | 1 | 4 | 5 | 8 | 7 | 6 | 2 | 3

f2 | mean | 3.04E−08 | 3.61E+01 | 2.71E+01 | 2.32E+01 | 1.57E+01 | 1.62E+01 | 2.30E+00 | 3.94E−05
f2 | std | 6.29E−08 | 2.36E+01 | 2.87E+01 | 5.70E+00 | 3.51E+00 | 9.91E+00 | 2.36E+00 | 9.20E−05
f2 | rank | 1 | 8 | 7 | 6 | 4 | 5 | 3 | 2

f3 | mean | 7.94E−07 | 2.61E+04 | 9.55E+01 | 1.78E+04 | 4.73E+02 | 2.67E+03 | 8.15E+02 | 4.44E+03
f3 | std | 1.09E−06 | 1.42E+04 | 2.75E+01 | 7.37E+03 | 2.01E+02 | 1.24E+03 | 4.57E+02 | 4.51E+03
f3 | rank | 1 | 8 | 2 | 7 | 3 | 5 | 4 | 6

f4 | mean | 1.99E−06 | 7.87E+01 | 2.63E+00 | 2.71E+01 | 1.85E+01 | 1.40E+01 | 1.30E+01 | 2.63E+01
f4 | std | 1.53E−06 | 5.89E+00 | 1.42E+00 | 5.95E+00 | 3.64E+00 | 5.05E+00 | 4.28E+00 | 8.68E+00
f4 | rank | 1 | 8 | 2 | 7 | 5 | 4 | 3 | 6

f5 | mean | 7.32E−10 | 9.69E+03 | 4.73E+03 | 1.96E+05 | 3.45E+04 | 4.59E+03 | 1.97E+02 | 3.10E+02
f5 | std | 9.69E−10 | 2.83E+04 | 1.40E+03 | 1.29E+05 | 1.88E+04 | 5.86E+03 | 2.32E+02 | 5.51E+02
f5 | rank | 1 | 6 | 5 | 8 | 7 | 4 | 2 | 3

f6 | mean | 9.97E−12 | 9.90E+02 | 1.53E+01 | 1.54E+03 | 5.14E+02 | 3.63E+01 | 1.64E−08 | 5.99E+00
f6 | std | 1.05E−11 | 3.13E+03 | 1.80E+00 | 6.43E+02 | 1.85E+02 | 1.11E+01 | 5.02E−09 | 2.13E+00
f6 | rank | 1 | 7 | 4 | 8 | 6 | 5 | 2 | 3

f7 | mean | 3.21E−06 | 5.34E+00 | 3.23E+01 | 6.37E−01 | 3.10E−01 | 1.21E+00 | 1.51E−01 | 2.45E−02
f7 | std | 2.74E−06 | 1.35E+01 | 2.22E+01 | 3.48E−01 | 1.42E−01 | 8.11E−01 | 5.34E−02 | 2.81E−02
f7 | rank | 1 | 7 | 8 | 5 | 4 | 6 | 3 | 2

f8 | mean | −1.26E+04 | −7.83E+03 | −7.19E+03 | −5.39E+03 | −7.77E+03 | −7.63E+03 | −7.82E+03 | −3.92E+03
f8 | std | 3.10E−10 | 8.14E+02 | 9.45E+02 | 5.64E+02 | 1.86E+02 | 3.64E+02 | 6.89E+02 | 2.49E+02
f8 | rank | 1 | 2 | 6 | 7 | 4 | 5 | 3 | 8

f9 | mean | 1.31E−10 | 1.78E+02 | 2.87E+02 | 1.84E+02 | 9.84E+01 | 1.06E+02 | 7.65E+01 | 1.84E+01
f9 | std | 2.67E−10 | 3.90E+01 | 3.75E+01 | 3.00E+01 | 2.21E+01 | 3.70E+01 | 1.80E+01 | 2.03E+01
f9 | rank | 1 | 6 | 8 | 7 | 4 | 5 | 3 | 2

f10 | mean | 3.78E−06 | 1.73E+01 | 1.04E+01 | 9.77E+00 | 9.19E+00 | 6.92E+00 | 2.50E+00 | 1.15E+01
f10 | std | 2.23E−06 | 4.72E+00 | 7.59E+00 | 9.61E−01 | 1.12E+00 | 1.26E+00 | 2.81E−01 | 1.01E+01
f10 | rank | 1 | 8 | 6 | 5 | 4 | 3 | 2 | 7

f11 | mean | 2.23E−09 | 3.65E+01 | 6.41E−01 | 1.54E+01 | 5.03E+00 | 1.08E+00 | 8.61E−03 | 3.41E−01
f11 | std | 5.07E−09 | 6.37E+01 | 5.62E−02 | 6.87E+00 | 1.55E+00 | 4.87E−02 | 9.29E−03 | 3.20E−01
f11 | rank | 1 | 8 | 4 | 7 | 6 | 5 | 2 | 3

f12 | mean | 3.26E−12 | 6.32E+03 | 1.40E+01 | 1.16E+03 | 7.33E+00 | 9.51E+00 | 8.54E+00 | 4.84E+01
f12 | std | 7.88E−12 | 2.00E+04 | 5.91E+00 | 2.16E+03 | 4.07E+00 | 4.35E+00 | 5.66E+00 | 1.34E+02
f12 | rank | 1 | 8 | 5 | 7 | 2 | 4 | 3 | 6

f13 | mean | 1.38E−11 | 7.28E+00 | 2.42E+00 | 1.58E+05 | 5.93E+03 | 3.64E+01 | 1.40E+01 | 8.21E+03
f13 | std | 1.75E−11 | 1.15E+01 | 4.34E−01 | 1.41E+05 | 1.08E+04 | 1.56E+01 | 1.38E+01 | 2.59E+04
f13 | rank | 1 | 3 | 2 | 8 | 6 | 5 | 4 | 7

f14 | mean | 9.98E−01 | 3.73E+00 | 8.30E+00 | 9.98E−01 | 9.98E−01 | 5.39E+00 | 9.98E−01 | 2.20E+00
f14 | std | 7.24E−15 | 4.51E+00 | 5.60E+00 | 7.54E−10 | 7.11E−09 | 5.01E+00 | 1.81E−16 | 1.01E+00
f14 | rank | 2 | 6 | 8 | 3 | 4 | 7 | 1 | 5

f15 | mean | 3.49E−04 | 3.03E−03 | 8.76E−03 | 7.11E−03 | 4.90E−04 | 1.98E−02 | 7.57E−04 | 1.04E−03
f15 | std | 1.44E−05 | 6.11E−03 | 9.99E−03 | 9.15E−03 | 2.36E−04 | 2.70E−02 | 2.32E−04 | 3.90E−04
f15 | rank | 1 | 5 | 7 | 6 | 2 | 8 | 3 | 4

f16 | mean | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | std | 1.87E−05 | 0.00E+00 | 7.90E−04 | 1.11E−05 | 3.24E−11 | 9.56E−15 | 1.14E−14 | 3.82E−05
f16 | rank | 6 | 1 | 8 | 5 | 4 | 3 | 2 | 7

f17 | mean | 3.99E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.99E−01
f17 | std | 7.87E−04 | 0.00E+00 | 5.13E−04 | 1.36E−06 | 6.71E−15 | 1.41E−14 | 7.41E−15 | 2.00E−03
f17 | rank | 7 | 1 | 6 | 5 | 2 | 4 | 3 | 8

f18 | mean | 3.00E+00 | 3.00E+00 | 3.04E+00 | 3.00E+00 | 3.00E+00 | 5.70E+00 | 3.00E+00 | 3.00E+00
f18 | std | 2.31E−05 | 1.75E−15 | 3.09E−02 | 4.13E−05 | 8.27E−13 | 8.54E+00 | 1.29E−13 | 4.81E−05
f18 | rank | 4 | 1 | 7 | 5 | 3 | 8 | 2 | 6

f19 | mean | −3.85E+00 | −3.86E+00 | −3.83E+00 | −3.86E+00 | −3.86E+00 | −3.58E+00 | −3.86E+00 | −3.86E+00
f19 | std | 2.12E−03 | 2.49E−03 | 1.25E−02 | 1.37E−04 | 5.98E−12 | 5.48E−01 | 5.84E−14 | 3.13E−03
f19 | rank | 6 | 4 | 7 | 3 | 2 | 8 | 1 | 5

f20 | mean | −3.08E+00 | −3.23E+00 | −2.86E+00 | −3.26E+00 | −3.32E+00 | −3.25E+00 | −3.22E+00 | −3.05E+00
f20 | std | 5.03E−01 | 4.24E−01 | 5.40E−02 | 1.29E−01 | 1.22E−01 | 4.20E−03 | 1.74E−01 | 3.53E−01
f20 | rank | 6 | 4 | 8 | 2 | 1 | 3 | 5 | 7

f21 | mean | −1.02E+01 | −6.90E+00 | −3.24E+00 | −6.05E+00 | −1.02E+01 | −4.40E+00 | −6.63E+00 | −2.98E+00
f21 | std | 2.21E−10 | 3.55E+00 | 1.64E+00 | 2.86E+00 | 2.00E−04 | 3.12E+00 | 3.17E+00 | 2.53E+00
f21 | rank | 1 | 3 | 7 | 5 | 2 | 6 | 4 | 8

f22 | mean | −1.04E+01 | −9.64E+00 | −6.87E+00 | −5.13E+00 | −1.04E+01 | −6.52E+00 | −8.58E+00 | −2.51E+00
f22 | std | 2.71E−10 | 2.42E+00 | 3.02E+00 | 3.03E+00 | 9.23E−04 | 3.48E+00 | 3.00E+00 | 1.78E+00
f22 | rank | 1 | 3 | 5 | 7 | 2 | 6 | 4 | 8

f23 | mean | −1.05E+01 | −7.57E+00 | −5.04E+00 | −6.47E+00 | −1.05E+01 | −5.46E+00 | −7.65E+00 | −4.56E+00
f23 | std | 1.41E−10 | 3.84E+00 | 2.85E+00 | 3.55E+00 | 2.38E−02 | 3.67E+00 | 3.80E+00 | 1.49E+00
f23 | rank | 1 | 4 | 7 | 5 | 2 | 6 | 3 | 8

Sum of ranks | 48 | 115 | 134 | 136 | 86 | 121 | 64 | 124
Average rank | 2.087 | 5.000 | 5.826 | 5.913 | 3.739 | 5.261 | 2.783 | 5.391
Overall rank | 1 | 4 | 7 | 8 | 3 | 5 | 2 | 6

Inspecting the detailed results of the algorithms on the 23 benchmark functions in Table 5, CSCA achieves the best results on 17 out of 23 test functions. For all seven unimodal functions (f1–f7) and all six multimodal functions (f8–f13), the proposed CSCA outperforms all the other algorithms; comparing the CSCA and SCA entries shows that CSCA clearly improves on the basic SCA. The ranking results also prove that CSCA provides the best solution among all algorithms in terms of the mean index. For the ten fixed-dimension multimodal functions (f14–f23), CSCA attains the best solutions for f15, f21, f22, and f23. For the other six functions (f14, f16, f17, f18, f19, f20), although CSCA does not always beat the other optimizers, it still obtains better results than the basic SCA in 90% of the fixed-dimension multimodal cases. These results show that the chaotic local search employed in CSCA effectively enhances the efficacy of SCA. Moreover, based on the rankings, the developed CSCA achieves first place overall, and the overall ranks show that SSA, FPA, MFO, GOA, SCA, BA, and DA follow, in that order.

Moreover, to visually show the performance of CSCA, convergence curves of CSCA, MFO, BA, DA, FPA, GOA, SSA, and the original SCA on some typical benchmark functions are provided in Figure 3. For f1, f4, and f7, the proposed CSCA reaches the best solution, and even the worst results of CSCA are much better than the best values of the classical SCA and the other six algorithms, which clearly shows that CSCA's performance on unimodal cases has improved substantially. Regarding the convergence curves in Figure 3, the convergence speed of CSCA is better than that of the other optimizers on the multimodal cases. On f8, the proposed technique converges very quickly during the early steps. On f9, the fastest convergence also belongs to CSCA, while MFO, BA, DA, FPA, GOA, SSA, and SCA do not show a better trend. On f12, the proposed CSCA attains the best function value in the early stages, while the other optimizers all fall into local optima because of their weaker search capability. On f15, the proposed CSCA reaches the best solution, and its worst result is much better than the best solutions of the other methods. On f22 and f23, CSCA has the best performance among all algorithms in terms of the std index and provides better results on these fixed-dimension cases.

To sum up, we can conclude that the proposed CSCA algorithm achieves better search performance than all other competitors and is better at escaping local optima.
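To make the CSCA procedure concrete, the following is a minimal sketch of SCA with an embedded logistic-map chaotic local search. All concrete choices here (the logistic-map seed, the shrinking perturbation radius, the linear decay of r1) are illustrative assumptions for exposition, not the authors' exact settings.

```python
import numpy as np

def csca(obj, dim, bounds, n_agents=30, max_iter=500, seed=0):
    """Sketch of SCA with a logistic-map chaotic local search (CSCA)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_agents, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best, best_f = X[fit.argmin()].copy(), fit.min()
    z = 0.7  # chaotic variable; avoid fixed points 0, 0.25, 0.5, 0.75, 1
    for t in range(max_iter):
        r1 = 2.0 * (1 - t / max_iter)  # amplitude shrinks over iterations
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            step = r1 * np.abs(r3 * best - X[i])
            # per-dimension choice between the sine and cosine update
            X[i] = np.clip(X[i] + np.where(rng.random(dim) < 0.5,
                                           np.sin(r2) * step,
                                           np.cos(r2) * step), lo, hi)
            f = obj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
        # chaotic local search: perturb the current best with the logistic map
        z = 4.0 * z * (1 - z)
        cand = np.clip(best + (2 * z - 1) * r1 * 0.1 * (hi - lo), lo, hi)
        fc = obj(cand)
        if fc < best_f:
            best, best_f = cand.copy(), fc
    return best, best_f
```

For instance, minimizing the sphere function f1 with `csca(lambda x: float(np.sum(x ** 2)), dim=2, bounds=(-10.0, 10.0))` should drive the objective close to zero.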

4.2. Prediction Results of Entrepreneurial Intention

In this experiment, we first used the random forest (RF) to evaluate the importance of each feature of the data set. The evaluation results are shown in Figure 4. From the figure, we can see that the features differ considerably in importance. Some features, such as major (F4) and total credits (F9), are highly important, while others, such as political status (F3), education year (F5), and type of poor students (F7), are essentially redundant. The optimal feature subset is then formed incrementally according to these importance rankings. Table 6 lists the results of the CSCA-SVM model on each incremental feature subset as mean values with the standard deviation in parentheses. From the table, the subset consisting of the five most important features gives the proposed model its best performance. With these five top-ranked features, CSCA-SVM achieves good prediction results, with an ACC of 84%, sensitivity of 85.97%, specificity of 83.47%, and MCC of 0.685. It is also interesting to see that RF-CSCA-SVM attains its smallest standard deviation with these five features, which validates the robustness and stability of the proposed model. Therefore, these five best features were used to build the predictive model in the subsequent experiments.
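The RF ranking and incremental subset evaluation described above can be sketched as follows. The data here are random placeholders standing in for the 300-student, nine-feature (F1-F9) data set, and a plain RBF-kernel SVM scores each subset in place of the tuned CSCA-SVM, so the numbers it produces are meaningless; only the pipeline shape is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in data: 300 students, 9 features (F1-F9), binary label
# (entrepreneurship vs. employment).
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 9))
y = rng.integers(0, 2, size=300)

# Step 1: rank the features by random-forest importance.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]  # most important first

# Step 2: grow the feature subset incrementally and score each subset
# with cross-validation (the paper uses the tuned CSCA-SVM here).
scores = []
for k in range(1, X.shape[1] + 1):
    subset = X[:, order[:k]]
    scores.append(cross_val_score(SVC(kernel="rbf"), subset, y, cv=10).mean())
best_k = int(np.argmax(scores)) + 1  # size of the best-performing subset
```

On the real data, `best_k` would correspond to the five-feature subset reported in Table 6.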


Size of feature subset   ACC               Sensitivity       Specificity       MCC

1                        0.6600 (0.0625)   0.6357 (0.0473)   0.7605 (0.1534)   0.3265 (0.1420)
2                        0.7300 (0.0637)   0.7399 (0.0889)   0.7323 (0.0719)   0.4600 (0.1313)
3                        0.7767 (0.0802)   0.7815 (0.1025)   0.7862 (0.0850)   0.5560 (0.1621)
4                        0.7567 (0.0649)   0.7701 (0.0750)   0.7553 (0.1014)   0.5136 (0.1398)
5                        0.8400 (0.0439)   0.8597 (0.0570)   0.8347 (0.0862)   0.6850 (0.0873)
6                        0.7733 (0.0872)   0.8168 (0.1084)   0.7413 (0.1080)   0.5529 (0.1831)
7                        0.7867 (0.0849)   0.8482 (0.1187)   0.7381 (0.0786)   0.5819 (0.1791)
8                        0.7800 (0.0706)   0.8307 (0.0634)   0.7464 (0.1101)   0.5710 (0.1360)
9                        0.6967 (0.1024)   0.7282 (0.1066)   0.6688 (0.1140)   0.3928 (0.2073)

In order to verify the effectiveness of the proposed method, we conducted a comparative study between RF-CSCA-SVM and four other SVM models based on different nature-inspired metaheuristic algorithms: RF-SCA-SVM, RF-MFO-SVM, RF-GOA-SVM, and RF-BA-SVM. The detailed comparison of the five methods is shown in Figure 5. The RF-CSCA-SVM model outperforms all other models on the four evaluation indexes, and its standard deviation is also the smallest, which means that RF-CSCA-SVM has better performance and stability than the other models. On the most important evaluation indicator, ACC, RF-CSCA-SVM achieves the best result and the smallest standard deviation; RF-GOA-SVM takes second place, followed by RF-MFO-SVM and RF-SCA-SVM, while the result obtained by RF-BA-SVM is the worst. On the other three evaluation indicators, sensitivity, specificity, and MCC, the results obtained by the RF-CSCA-SVM model are also much better than those of the comparative counterparts. The differences between the models are evident, and the main reason lies in the different parameter configuration each optimization algorithm finds for SVM. This also reveals that the parameters of SVM have a significant impact on its classification performance. In sum, the proposed RF-CSCA-SVM achieves better or very competitive results compared with the other counterparts on all four performance metrics.
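The common structure of all five models is an optimizer minimizing the cross-validation error of an SVM over its penalty C and kernel width gamma. A hedged sketch of that objective follows; for brevity, a plain random search over the same log2 search space stands in for the metaheuristic loop, and the data set and the exponent range [-5, 5] are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for the five selected student features.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

def svm_cv_error(params):
    """Objective the optimizer minimizes: 1 - mean CV accuracy of the SVM."""
    C, gamma = 2.0 ** params[0], 2.0 ** params[1]  # search in log2 space
    acc = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    return 1.0 - acc

# Stand-in for SCA/CSCA/MFO/GOA/BA: sample candidate (log2 C, log2 gamma)
# pairs from the search space and keep the one with the lowest CV error.
rng = np.random.default_rng(0)
cands = rng.uniform(-5, 5, size=(20, 2))
best = min(cands, key=svm_cv_error)
```

In the paper's framework, `svm_cv_error` is exactly what CSCA evaluates at each candidate position, so a stronger search over this landscape directly translates into a better-tuned SVM.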

Figure 6 shows the evolutionary curves of RF-CSCA-SVM and the other four models during the training stage. These curves are the averages of the 10 curves obtained by these models during 10-fold cross-validation. As shown, the RF-CSCA-SVM model quickly achieves good performance during training, which reveals that CSCA has a strong search capability and can find suitable parameters for SVM efficiently. RF-BA-SVM obtains the worst result among the models; the likely reason is that BA's global search ability is not strong enough to jump out of local optima. Based on the analysis of the experimental results above, the proposed CSCA strategy has substantially improved the performance of SVM compared with the original SCA, mainly because the embedded chaotic local search strategy has greatly enhanced the search capability of SCA.

To further evaluate the generalization capability of the proposed method, a hold-out validation was conducted. The whole dataset was split into 80% for training and 20% for testing. Owing to the randomness, the method was run 10 times. The detailed results and the confusion matrix of each run are recorded in Table 7. As shown, a test accuracy of 83.50% is achieved, and the sensitivity, specificity, and MCC of the proposed RF-CSCA-SVM on the test set are 91.25%, 72.22%, and 0.6602, respectively. The confusion matrices show that most entrepreneurship students are accurately predicted as entrepreneurs; the recognition errors mainly occur when students who seek employment are misclassified as entrepreneurs.
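The four reported metrics all derive from the 2x2 confusion matrix of each run. As a self-contained illustration (the function name `metrics` is ours, not the paper's), the values for fold 1 of Table 7 can be reproduced from its matrix entries:

```python
import math

def metrics(tp, fn, fp, tn):
    """ACC, sensitivity, specificity, and MCC from a 2x2 confusion matrix."""
    acc = (tp + tn) / (tp + fn + fp + tn)
    sens = tp / (tp + fn)  # true positive rate
    spec = tn / (tn + fp)  # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sens, spec, mcc

# Fold 1 of Table 7: 36 entrepreneurship students correctly predicted,
# 2 missed, 8 employment students misclassified, 14 correctly predicted.
acc, sens, spec, mcc = metrics(36, 2, 8, 14)
# → 0.8333, 0.9474, 0.6364, 0.6361, matching the first row of Table 7
```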


Fold No.   RF-CSCA-SVM
           Confusion matrix   Accuracy   Sensitivity   Specificity   MCC

1          36   2             0.8333     0.9474        0.6364        0.6361
            8  14
2          34   3             0.8500     0.9189        0.7391        0.6787
            6  17
3          30   4             0.8333     0.8824        0.7692        0.6591
            6  20
4          29   3             0.8167     0.9063        0.7143        0.6367
            8  20
5          34   1             0.8167     0.9714        0.6000        0.6371
           10  15
6          32   2             0.8333     0.9412        0.6923        0.6659
            8  18
7          28   5             0.8333     0.8485        0.8148        0.6633
            5  22
8          30   0             0.8500     0.9189        0.7391        0.6787
           14  16
9          34   3             0.8500     0.9459        0.6957        0.6807
            6  17
10         27   1             0.8333     0.8438        0.8214        0.6652
           12  20

Avg.                          0.8350     0.9125        0.7222        0.6602
Dev.                          0.0123     0.0428        0.0708        0.0178

5. Discussions

The study discovered some interesting results. From the experimental results, the most important features are major (F4), gender (F1), type of normal school students (F6), grade point average (F8), and total credits (F9); the influence of these features on entrepreneurial intention is relatively prominent. The data show that entrepreneurial intention differs clearly across majors. On the whole, arts students have stronger entrepreneurial intention than science and engineering students because they think more actively, face lower employment rates, and are more highly motivated to start their own businesses. Gender differences also have a significant impact on entrepreneurial intentions: the proportion of male students choosing to start a business is much higher than that of female students, perhaps because male students are more adventurous while female students prefer stable jobs.

Academic achievement also has a clear impact on entrepreneurial choice. Academic achievement here mainly covers the grade point average and the total credits, and it is generally divided into three grades: good, average, and poor. Students whose academic achievement is in the middle do not have an obvious advantage in the job market and actively think about how to improve their employment prospects through various channels, so their intention to start a business is obviously stronger than that of the other two groups. Students with better academic performance tend to choose stable, high-paying, or prestigious careers, while students with poor academic performance do not have a clear plan.

Family situation also has a significant impact on the entrepreneurial willingness and behavior of college students. Students with lower socioeconomic status have higher entrepreneurial willingness, hoping to change their social class through personal effort. At the same time, a family's financial experience, personal connections, and other resources can provide a student with the ability and convenience to start a business, thereby improving the success rate of entrepreneurship. Due to the influence of traditional ideas, normal school students' posts are considered more stable and decent than other professions, and their concept of entrepreneurship is weak, so the entrepreneurial intention and choice of normal school students are more indifferent than those of non-normal school students.

It should be noted that the present study has several limitations that require further discussion. Firstly, the samples involved in this study were limited; to obtain more accurate results, a larger number of consecutive samples should be collected to train a less biased learning model. Secondly, the study was conducted at a single university; confirmation of the model in multicenter studies would make it more reliable for decision support. Furthermore, the attributes involved are limited; future studies should investigate more attributes that may influence students' entrepreneurial intention.

6. Conclusions and Future Work

In this study, we established an improved SVM framework to predict the employment intentions of college graduates. To improve prediction accuracy, this paper first uses RF to screen the key features in the data, then proposes an improved SCA strategy to tune the optimal parameters of SVM, and finally uses the established CSCA-SVM model to predict new samples. Experimental results show that the proposed method has better classification performance in terms of ACC, MCC, sensitivity, and specificity than SVM methods based on other swarm intelligence optimization methods. Therefore, we can draw the preliminary conclusion that the proposed prediction framework can effectively predict students' employment intention. In future work, we plan to build a decision support system based on the proposed framework to help school departments predict students' employment intention. In addition, we plan to collect more data samples to further improve the predictive performance of the proposed method.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This research is supported by the Zhejiang Provincial Natural Science Foundation of China (LY17F020012), Science and Technology Plan Project of Wenzhou, China (ZG2017019), and the Medical and Health Technology Projects of Zhejiang Province, China (2019315504).


Copyright © 2019 Jixia Tu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

