
Wei-Chang Yeh, Yunzhi Jiang, Shi-Yi Tan, and Chih-Yen Yeh
Complexity, vol. 2021, Article ID 9932292, 19 pages, 2021. https://doi.org/10.1155/2021/9932292

A New Support Vector Machine Based on Convolution Product

Academic Editor: Hocine Cherifi
Received: 06 Mar 2021
Accepted: 26 May 2021
Published: 12 Jun 2021

Abstract

The support vector machine (SVM) and deep learning (e.g., convolutional neural networks (CNNs)) are the two most famous algorithms for small and big data, respectively. Nonetheless, smaller datasets may be very important, costly, and not easy to obtain in a short time. This paper proposes a novel convolutional SVM (CSVM) that has the advantages of both CNN and SVM to improve the accuracy and effectiveness of mining smaller datasets. The proposed CSVM adapts the convolution product from CNN to learn new information hidden deeply in the datasets. In addition, it uses a modified simplified swarm optimization (SSO) to train the CSVM and update its classifiers, with the traditional SVM implemented as the fitness function of the SSO to estimate the accuracy. To evaluate the performance of the proposed CSVM, experiments were conducted on five well-known benchmark databases for the classification problem. The numerical results compare favorably with those obtained using the SVM, a 3-layer artificial NN (ANN), and a 4-layer ANN. These experiments verify that the proposed CSVM with the proposed SSO can effectively increase classification accuracy.

1. Introduction

Data mining is an effective method for examining and learning from extensive compound datasets of varying quality [1] and has been broadly applied to numerous practical problems in medicine [2–4], engineering [5], time series data [6], image classification [7], speech recognition [8], handwriting recognition [9], management [10], and social sciences [11], with classification being one of the most popular topics in data mining. Numerous classifiers for data mining have been established, such as support vector machines (SVMs) [3, 4, 7] and deep learning algorithms [8, 9, 12].

Deep learning based on artificial neural networks (ANNs) is made up of neurons with learnable weights and biases, such that the neural network, a special mathematical function, fits the data in the dataset as closely as possible [12, 13]. Deep learning techniques include convolutional neural networks (CNNs) for continuous-space data types (e.g., image and speech recognition) [7, 8, 14], recurrent neural networks (RNNs) for time-series data types (e.g., stock markets and language modeling) [12], and generative adversarial networks (GANs) for generating new examples and classifying examples [15]. Deep learning is an adequate and straightforward data-mining method for big data [12, 13]. However, since deep learning techniques need big data to learn the classification rules, that is, they only work well for large datasets, obtaining large enough datasets poses an enormous challenge for many applications [12, 13]. Furthermore, deep learning relies on good hardware, especially the graphics processing unit (GPU), to achieve better performance, but such hardware is still expensive [16, 17].

The SVM is another well-known and effective supervised learning model for selecting attributes and classifying data. Before the rise of deep learning, the SVM outperformed ANNs in various real-life applications in medicine [3, 4], the semiconductor industry [18], online analysis [19], spectral unmixing resolution [20], imbalanced datasets [21], mining financial distress [22], data classification [23], and so forth [24–26]. In comparison with deep learning techniques that try to fit the data in terms of ANNs, the SVM separates (rather than connects) different classes of data based on kernels through mathematical optimization [27, 28]. In addition, an SVM achieves high accuracy with less computation power and small data, which are two shortcomings of deep learning [24–26]. Therefore, besides the original SVM, various enhanced SVMs had been developed before the development of deep learning [21, 24–26]. SVMs are discussed in detail in Section 5.1.

Small data are well-formatted data with small volumes that are accessible, understandable, and actionable for decision makers [29]. The value of data lies in the information content, not the volume of data [30]. For some cases, such as the marketing strategies of targeted campaigns or delivering personalized experiences, big data might not be appropriate because such tasks do not require full-on big data [31]. Conversely, small data extract an individual’s data and provide valuable information to help decision makers formulate strategies. Moreover, the occurrence of small data is rare, and the process of collecting them is expensive and strenuous [4, 21]. Hence, improving the data mining of small data will aid in making useful, cost-efficient, and timely decisions in small data applications.

Deep learning techniques and SVMs belong to a broader family of machine learning algorithms. Deep learning techniques (e.g., convolutional neural networks (CNNs)) based on neural networks are powerful for mining big data but less effective on smaller datasets. On the contrary, SVMs outperform all neural network types on smaller datasets but are less effective in mining big data. This paper proposes a novel convolutional SVM (CSVM) that has the advantages of both SVM and deep learning to enhance the SVM by maximizing its prediction accuracy, and tests it on the classification of two-class datasets.

The proposed CSVM employs a supervised learning technique that is based on simplified swarm optimization (SSO), which is another powerful machine learning algorithm [2, 6, 32–35]. Numerical experiments and comparative research with ANNs and the traditional SVM show the accuracy and effectiveness of the proposed CSVM tested on five two-class datasets.

To summarize the above, the theoretical contribution of this paper is twofold: a novel convolutional SVM (CSVM) that combines the advantages of both SVM and deep learning, that is, the use of SVM together with the vital operation techniques of CNNs, including stride and convolution, to enhance the SVM; and a one-solution, one-filter, one-variable greedy SSO update mechanism that prevents solutions that are near optima from being driven away from their current positions and that reduces the runtime.

2. Proposed and Traditional Convolution Products

The major difference between the proposed CSVM and the traditional SVM is the convolution product. Hence, the traditional and the proposed new convolution products are introduced and discussed in Section 2.

2.1. Convolution-Related Concept

CNNs represent some of the most significant models of deep learning, and their performance has been verified in numerous recognition research areas. Among the vital operation techniques of CNNs, we introduce those that are used in this paper [12, 13], with a short illustrative sketch given after the list.
(1) Padding: to prevent the reduction in data size generated by the convolution process in the next layer, zeros are added around the input image; such action is called padding.
(2) Stride: the horizontal or vertical distance that the kernel moves each time is called the stride. The greater the stride is, the more independent the neighboring values in the convolution process are.
(3) Convolution: in each convolution operation, the values of the input and the kernel (filter), which moves over the input by the given stride after padding, are multiplied element by element. These products are then summed and filled in the corresponding position of the next layer.
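The following short Python sketch illustrates these three operations on a one-dimensional input (the function name, the example signal, and the kernel values are purely illustrative and are not taken from the paper):

```python
import numpy as np

def conv1d(signal, kernel, stride=1, pad=0):
    """1-D convolution: slide the kernel over the padded signal by `stride`
    and sum the element-wise products at each position."""
    x = np.pad(signal, pad)                       # zero padding on both sides
    k = len(kernel)
    out = []
    for start in range(0, len(x) - k + 1, stride):
        out.append(float(np.dot(x[start:start + k], kernel)))
    return np.array(out)

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([-1.0, 0.0, 1.0])

print(conv1d(signal, kernel))              # no padding, stride 1 -> 3 outputs
print(conv1d(signal, kernel, stride=2))    # larger stride -> fewer outputs
print(conv1d(signal, kernel, pad=1))       # padding keeps the output length at 5
```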

2.2. Proposed Convolution Product with Repeated Attributes

Suppose that Natt, Nsol, Nfilter, and Nvar are the numbers of attributes, solutions, filters constructed in each solution, and variables contained in each filter, respectively. Let vr, a be the value of the ath attribute in the rth record and let v(f)r, a be the value of the ath attribute in the rth record after using the fth filter, where a = 1, 2, …, Natt, s = 1, 2, …, Nsol, and f = 1, 2, …, Nfilter. For example, nine attributes are used in the breast cancer dataset of the University of California Irvine (UCI) [36], and the vector representing the first normalized record is listed as follows:

v(f)r, a is calculated using the convolution product in terms of Ir and the filter Xs, f as follows:

v(f)r, a = xs, f, 1 · v(f−1)r, a + xs, f, 2 · v(f−1)r, a+1 + … + xs, f, Nvar · v(f−1)r, a+Nvar−1,  (2)

where v(0)r, i = vr, i.

From equation (2), Nvar attributes are included in computing the ath new attribute if a + Nvar − 1 ≤ Natt. However, no attributes vr, Natt+1, vr, Natt+2, …, vr, Natt+Nvar−1 exist when we need to use equation (2) to update the last Nvar − 1 attributes with a + Nvar − 1 > Natt.

Let filter Xs, 1 = [xs, 1, 1, xs, 1, 2, xs, 1, 3] = [−1, 0, 1]. The procedures for generating the new attributes using the convolution product are listed as follows:

From the above, the first and second old attributes (i.e., v1, 1 and v1, 2) are used only once, as shown in equation (3), and twice, as shown in equations (3) and (4), respectively, for generating new attributes. Similarly, the last and the second-last attributes in I1 (i.e., v1, 9 and v1, 8) appear only in equations (8) and (9). Moreover, there are no new attributes v(1)1, 8 and v(1)1, 9 based on equation (2).

There is no padding in the proposed CSVM. However, we need to guarantee that the following two conditions are satisfied to fix the above problems:
(1) Each attribute is included in the same number (i.e., Nvar) of convolution products.
(2) The last j attributes still exist after each convolution product.

The first (Nvar − 1) attributes are repeated and appended after the last attribute of the same record such that the total number of attributes is an integer multiple of Nvar; that is, v(f−1)r, Natt+k = v(f−1)r, k for k = 1, 2, …, Nvar − 1 and f = 1, 2, …, Nfilter. Hence, following the same example discussed above, we have

Thus, each new attribute is generated by three convolution products, and we have new I1 accordingly.

Let Ii, j be the updated ith record after using the jth filter, Ii = Ii, 0. The next example demonstrates updated I1 after two filters are used, with each having Nvar = 3 variables; Xs, 2 = [xs, 2, 1, xs, 2, 2, xs, 2, 3] = [1.8, −0.9, 0.7].

Thus, after using the two filters Xs,1 and Xs,2, we have

The basic idea of the proposed convolution product with repeated attributes is that the first (Nvar − 1) attributes are repeated and appended after the last attribute of each (updated) record such that the total number of attributes is an integer multiple of Nvar; that is, v(f−1)r, Natt+k = v(f−1)r, k for k = 1, 2, …, Nvar − 1. The pseudocode of the proposed convolution product with repeated attributes is listed in Algorithm 1.

Input: The rth record Ir and the sth solution Xs.
Output: The convolved record IrXs.
STEP C0. Let f = 1 and v(0)r, i = vr, i for i = 1, 2, …, Natt.
STEP C1. Let a = 1 and v(f−1)r, i = v(f−1)r, k for i = Natt + 1, Natt + 2, …, Natt + Nvar − 1, where k = i − Natt.
STEP C2. Let b = 0, i = a, and j = 1.
STEP C3. Let b = b + v(f−1)r, i · xs, f, j.
STEP C4. If j < Nvar, let i = i + 1, j = j + 1, and go to STEP C3.
STEP C5. Let v(f)r, a = b. If a < Natt, let a = a + 1 and go to STEP C2.
STEP C6. If f < Nfilter, let f = f + 1 and go to STEP C1.
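A minimal Python reading of Algorithm 1 is sketched below; the function name, the array layout (one 2-D array of filter values per solution), and the example record values are our own illustrative choices, not part of the original pseudocode:

```python
import numpy as np

def convolve_record(record, filters):
    """Apply the repeated-attribute convolution product of Section 2.2.

    record  : 1-D array with the Natt attribute values of one record.
    filters : 2-D array of shape (Nfilter, Nvar) holding one solution Xs.
    The number of attributes stays Natt because the first Nvar - 1
    attributes are repeated and appended before each convolution.
    """
    values = np.asarray(record, dtype=float)
    n_att = len(values)
    for filt in filters:                                   # one pass per filter Xs,f
        n_var = len(filt)
        extended = np.concatenate([values, values[:n_var - 1]])   # repeat first Nvar - 1
        values = np.array([np.dot(extended[a:a + n_var], filt)
                           for a in range(n_att)])                # one product per attribute
    return values

# Nine attributes and the filter [-1, 0, 1], as in the example of Section 2.2
# (the record values below are illustrative only).
record = np.array([0.5, 0.1, 0.1, 0.1, 0.2, 0.1, 0.3, 0.1, 0.1])
filters = np.array([[-1.0, 0.0, 1.0]])
print(convolve_record(record, filters))    # still nine attributes
```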

Additionally, we obtain the following properties after employing the proposed convolution product with repeated attributes.

Property 1. If xs, f, 1 = α and xs, f, k = 0 for all k = 2, …, Nvar and all f = 1, …, Nfilter, then v(f)r, a = α^f · vr, a for all a = 2, …, Natt and f = 1, …, Nfilter.

3. Proposed and Traditional SSO

In the proposed CSVM, all values in the filters of the proposed convolution product with repeated attributes are updated based on the proposed new SSO. The traditional SSO is introduced briefly, and then the proposed SSO, including the new self-adaptive solution structure with pFilter, the novel one-solution, one-filter, one-variable greedy update mechanism, and the fitness function, is presented in Section 3.

3.1. Traditional SSO

The SSO is one of the simplest machine-learning methods [2, 6, 32–35] in terms of its update mechanism. It was first proposed by Yeh and has been shown to be a very useful and efficient algorithm for optimization problems [33, 34], including data mining [2, 6]. Owing to its simplicity and efficiency, SSO is used here to find the best values in the filters of the proposed CSVM.

The basic idea of SSO is that each variable, such as the jth variable xi, j in the ith solution, needs to be updated based on the following stepwise function [2, 6, 32–35]:

xi, j = gj if ρ ∈ [0, cg); xi, j = pi, j if ρ ∈ [cg, cg + cp); xi, j = xi, j if ρ ∈ [cg + cp, cg + cp + cw); and xi, j = x if ρ ∈ [cg + cp + cw, 1],  (15)

where the value ρ ∈ [0, 1] is generated randomly, gj and pi, j denote the jth variables of the best of all solutions and of the best ith solution, x is a randomly generated feasible value, and the parameters cg, cp, cw, and 1 − cg − cp − cw are all in [0, 1] and are the probabilities that the current variable is copied and pasted from the best of all solutions, the best ith solution, the current solution, and a randomly generated feasible value, respectively.

There are different variants of the traditional SSO that are customized to different problems in light of the no-free-lunch theorem; for example, the four items in equation (15) can be reduced to three items to increase efficiency; the parameters cg, cp, and cw can all be self-adapted; special values or equations can be implemented to replace gj, pi, j, xi, j, and x; or only a certain number of variables are selected to be updated, and so forth. However, the SSO update mechanism is always based on the stepwise function.
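For illustration, one common form of this stepwise update can be sketched in Python as follows; the names gbest_j, pbest_ij, and the probability triple cg, cp, cw are written out explicitly here, and the sketch mirrors the generic SSO rule rather than the exact variant used later in this paper:

```python
import random

def sso_update_variable(x_ij, pbest_ij, gbest_j, cg, cp, cw, lb=-2.0, ub=2.0):
    """Generic SSO stepwise update for a single variable x[i][j]."""
    rho = random.random()                 # rho drawn uniformly from [0, 1]
    if rho < cg:                          # copy from the best of all solutions
        return gbest_j
    if rho < cg + cp:                     # copy from the best ith solution
        return pbest_ij
    if rho < cg + cp + cw:                # keep the current value
        return x_ij
    return random.uniform(lb, ub)         # otherwise, a randomly generated feasible value
```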

3.2. Fitness Function

Fitness functions help solutions learn toward optimization to attain goals in artificial intelligence, such as in the proposed CSVM, the traditional SVM, and the CNN. The accuracy obtained by the SVM, based on the records transformed by the proposed convolutions, is adopted here as the fitness to be maximized in the CSVM:
Input: all records Ir for r = 1, 2, …, Nrec and the sth solution Xs.
Output: the F(Xs).
STEP F0: calculate Ir = IrXs based on the pseudocode provided in Section 2.2 for r = 1, 2, …, Nrec.
STEP F1: classify {I1, I2, …, INrec} using the SVM and let the accuracy be F(Xs).
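A sketch of this fitness evaluation is given below. It reuses the convolve_record sketch from Section 2.2 and assumes scikit-learn's SVC (with default settings) as a stand-in for the libsvm classifier used in the paper; the use of cross-validation inside the fitness is likewise an assumption, since the text only states that the SVM accuracy on the convolved records is F(Xs):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(solution_filters, records, labels, folds=5):
    """F(Xs): SVM accuracy on the records transformed by the solution's filters.

    solution_filters : array of shape (Nfilter, Nvar), one candidate solution Xs.
    records, labels  : the raw records (one row each) and their class labels.
    """
    # STEP F0: convolve every record with the solution (see the Section 2.2 sketch).
    convolved = np.array([convolve_record(r, solution_filters) for r in records])
    # STEP F1: the SVM accuracy on the convolved records is the fitness F(Xs).
    clf = SVC()  # default RBF kernel, as a stand-in for the libsvm defaults
    return cross_val_score(clf, convolved, labels, cv=folds, scoring="accuracy").mean()
```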

3.3. Self-Adaptive Solution Structure and pFilter

In the proposed CSVM, each variable of all filters in each solution is initialized randomly from [−2, 2]. Each filter and each solution are represented by an Nvar × 1 vector and an Nfilter × Nvar array, respectively, since the number of filters may be more than one. For example, the sth solution Xs and the fth filter Xs, f in Xs are denoted as follows:

Xs = [Xs, 1, Xs, 2, …, Xs, Nfilter], where Xs, f = [xs, f, 1, xs, f, 2, …, xs, f, Nvar].

Overall, the number of filters is the same, that is, Nfilter, for each solution and all generations. However, a greater number of filters does not always guarantee a better fitness value. Hence, we need to record the best number of filters for each solution. Let filter j be the best filter of solution s = 1, 2, …, Nsol, and define pFilter[s] = j if F[Xs, k] ≤ F[Xs, j] for all k = 1, 2, …, Nfilter. Note that Xh, i is the best solution, with pFilter[h] = i, among all existing solutions if F[Xs, f] ≤ F[Xh, i] for all s = 1, 2, …, Nsol and f = 1, 2, …, Nfilter.

In the end, only the best solution (e.g., Xs) and its best number of filters, namely, Xs, 1, Xs, 2, …, Xs, j, where pFilter[s] = j, are reported. In addition, the update mechanism is based on the best filter in the proposed CSVM. Hence, the solution is self-adapted by the best number of filters.

3.4. One-Solution, One-Filter, One-Variable Greedy SSO Update Mechanism

The proposed new one-solution, one-filter, one-variable greedy SSO update mechanism is discussed in this subsection.

3.4.1. One-Solution Is Selected Randomly to be Updated in Each Generation

In the proposed CSVM, all values in filters are variables that must be determined to implement the convolution products. Without help from the GPU, it takes a long time to update the variables that deepen the SVM. Hence, unlike traditional algorithms, including SSO, the genetic algorithm (GA), and particle swarm optimization (PSO), in which all solutions need to be updated, only one solution is selected randomly for updating in each generation of the proposed new SSO update mechanism. Let solution s be selected to be updated based on the following equations:where ρ[0, 1] is a random floating-point number generated from the interval [0, 1], ρ[1, Nsol] ∈ {1, 2, …, Nsol} is the index of a solution selected randomly, gBest is the index of the best solution found, and the index 0 denotes a new solution generated randomly. The new updated solution Xs will either be discarded or replace the old Xs based on the process described next.

3.4.2. One-Filter One-Variable Greedy Update Mechanism

In the traditional SSO, all variables need to be updated (the all-variable update mechanism), which gives a higher probability of escaping local traps than updating only some of the variables. However, the all-variable update mechanism may drive solutions that are near optima away from their current positions. Additionally, its runtime is Nsol times that of the one-variable update, which selects only one variable randomly to be updated. Hence, to reduce the runtime, only one variable in one filter of the solution selected in Section 3.4.1 is updated.

Let s be the solution selected to be updated. In the proposed new SSO, only one filter, say f, where f ∈ {1, 2, …, pFilter[s]}, in solution s is chosen randomly. Moreover, one variable, say xs, f, k, where k ∈ {1, 2, …, Nvar}, in such a filter Xs, f is also selected randomly to be updated based on the following simple process:where ρ[lower, upper] is the random number generated in the update mechanism, and the subscripts denote the lower and upper bounds of this random number. The interval of the random number is derived from multiple randomized trial-and-error results, and 0.05 is the step size of the local search, chosen to ensure that the local search finds a sufficiently fine solution. After resetting all variables in the filters Xs, h to random numbers generated from [−2, 2] for all h > f, we have

Also, F[Xs, l] = F[Xs, f − 1] for all l < f.

Moreover, the updated solution Xs, including the newly updated variables and filters, is discarded if its fitness value is not better than that of the old Xs; that is, the update is kept only when it strictly improves the fitness.
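The following Python sketch summarizes this greedy step. Because equations (21) and (22) are not reproduced above, the exact perturbation (a local move of step size 0.05, clamped to [−2, 2]) and the strict-improvement acceptance test are written here as plausible readings of the text, not as the authors' exact formulas:

```python
import copy
import random

def greedy_one_variable_update(solution, p_filter, fitness_fn, step=0.05,
                               lb=-2.0, ub=2.0):
    """One-solution, one-filter, one-variable greedy update (a sketch).

    solution  : one solution Xs, a list of filters, each a list of Nvar floats.
    p_filter  : pFilter[s], the best number of filters recorded for this solution.
    fitness_fn: callable mapping a solution to its fitness F(Xs).
    """
    candidate = copy.deepcopy(solution)
    f = random.randrange(p_filter)              # pick one filter among the first pFilter[s]
    k = random.randrange(len(candidate[f]))     # pick one variable in that filter
    # Assumed local move of step size 0.05, clamped to the initialization range.
    candidate[f][k] = min(ub, max(lb, candidate[f][k] + random.choice([-step, step])))
    # Filters after the updated one are re-randomized in [-2, 2], as stated in the text.
    for h in range(f + 1, len(candidate)):
        candidate[h] = [random.uniform(lb, ub) for _ in candidate[h]]
    # Greedy acceptance: keep the candidate only if it strictly improves the fitness.
    if fitness_fn(candidate) > fitness_fn(solution):
        # In the full algorithm, pFilter[s] is then reset to the number of leading
        # filters that maximizes the fitness (STEP U4 in Algorithm 2).
        return candidate
    return solution
```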

3.5. Pseudocode of the Proposed SSO

The pseudocode of the proposed SSO based on the new self-adaptive solution structure, pFilter, and the new update mechanism are listed in Algorithm 2.

Input: A random selected solution (e.g., Xs) with its pFilter.
Output: The updated Xs.
STEP U0. Generate a random number ρ[0, 1] from [0, 1] and select a solution, say Xs, where s ∈ {1, 2, …, Nsol}, based on equation (19).
STEP U1. Select a filter, say Xs, j, where j ∈ {1, 2, …, pFilter[s]}.
STEP U2. Update Xs to X based on equation (21).
STEP U3. Based on equation (22), decide whether to let Xs = X or discard X.
STEP U4. If Xs = X, let pFilter[s] = f, where F(Xs, i) ≤ F(Xs, f) for all i = 1, 2, …, Nfilter. Otherwise, halt.
STEP U5. If F(XgBest, pFilter[gBest]) ≤ F(Xs, pFilter[s]), let gBest = s.

4. Proposed Small-Sample OA to Tune Parameters

It is important to select the most representative combination of parameters to find good results for all algorithms, such as the three parameters cg, cp, and cw in SSO. To reduce the computation burden, a novel concept called the small-sample orthogonal array (OA) is proposed in terms of the OA test to tune the parameters in Section 4.

4.1. OA

The design of experiments (DOE) adopts an array design that arranges the tests and factors in rows and columns, respectively, such that rows and columns are independent of each other and there is only one test level in each factor level [37]. The DOE is able to select better parameters from some representative predefined combinations to reduce the number of tests [2, 38].

The Taguchi OA test, first developed by Taguchi [37], is a DOE that is implemented to achieve the objective of this study. An OA is denoted by Ln(a^b), where n, a, and b are the numbers of tries, levels of each factor, and factors, respectively. For example, Table 1 represents an OA denoted by L9(3^4).


Try ID   Factor 1   Factor 2   Factor 3   Factor 4

1        1          1          1          1
2        1          2          2          2
3        1          3          3          3
4        2          1          2          3
5        2          2          3          1
6        2          3          1          2
7        3          1          3          2
8        3          2          1          3
9        3          3          2          1

From Table 1, we can see that the characteristics of the OA are orthogonal, as follows (and as verified in the short sketch below):
(1) The number of occurrences of each level in each column is equal; for example, the numbers 1, 2, and 3 each appear three times in every column of Table 1.
(2) All ordered pairs of levels of any two factors appear exactly once, for example, (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), and (3, 3) in columns 1 and 2, 1 and 3, 1 and 4, 2 and 3, 2 and 4, and 3 and 4, which ensures that each level is dispersed evenly over the complete combination of the levels of the factors.
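These two characteristics can be checked directly in a few lines of Python; the array below is the L9(3^4) layout of Table 1:

```python
from itertools import combinations

# L9(3^4) array from Table 1: nine tries (rows), four factors (columns).
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# Each level appears three times in every column ...
for col in range(4):
    counts = {lvl: sum(1 for row in L9 if row[col] == lvl) for lvl in (1, 2, 3)}
    assert counts == {1: 3, 2: 3, 3: 3}

# ... and every ordered level pair appears exactly once for any two columns.
for c1, c2 in combinations(range(4), 2):
    pairs = [(row[c1], row[c2]) for row in L9]
    assert len(set(pairs)) == 9

print("L9(3^4) is orthogonal")
```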

4.2. Proposed Small-Sample OA

There are three major methods for tuning parameters:
(1) The trial-and-error method: it implements the tests exhaustively by trying all possible cases to find the one with the better results. It is the simplest and the most inefficient method.
(2) The parameter-adapted method: it selects and tests some set of parameters from existing parameters that are already used in some applications. This method may have issues with respect to identifying the characteristics of new problems.
(3) The DOE: it selects the parameters from the experiment design. Compared to the two aforementioned methods, this method is the most efficient and effective one. However, it faces an efficiency problem on large datasets or when it needs to be repeated very often.

Hence, to overcome these problems, a novel method called the small-sample OA test is proposed to improve the OA method for tuning parameters. To reduce the runtime, the proposed small-sample OA test samples only a few records randomly from the dataset and conducts the OA test on the subsets of such small-sample data to find the best parameters, that is, the parameters that result in the highest accuracy, the shortest runtime, and/or the largest number of solutions attaining the highest accuracy, based on the following three rules:
Rule 1. The one with the highest accuracy among all others.
Rule 2. The one with the shortest runtime, with a big gap between such runtime and the others, if there is a tie based on Rule 1.
Rule 3. The one with the largest number of solutions that have the highest accuracy if there is a tie based on Rule 2.

Then, this selected parameter set is applied to the rest of the unsampled dataset. The example for this proposed test is provided in Section 6.
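The three rules amount to a lexicographic selection over the small-sample results. A short Python sketch is given below; the dictionary field names are ours, and the "big gap" qualifier of Rule 2 is ignored for simplicity:

```python
def select_try(results):
    """Pick the best parameter setting from the small-sample OA results.

    results: list of dicts, one per try, e.g.
        {"try_id": 5, "accuracy": 90.0, "runtime": 43.4, "n_best_solutions": 3}
    Rule 1: highest accuracy; Rule 2: shortest runtime on a tie;
    Rule 3: most solutions reaching that accuracy on a further tie.
    """
    return max(results,
               key=lambda r: (r["accuracy"], -r["runtime"], r["n_best_solutions"]))
```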

5. Proposed CSVM and Traditional SVM

The proposed CSVM is a convolutional SVM modified by employing a new convolution product, which is updated based on the proposed new SSO. The traditional SVM is introduced briefly, and then the proposed pseudocode of the proposed CSVM is presented.

5.1. Traditional SVM

SVMs are excellent machine learning tools for binary classification cases [27, 28]. The purpose of an SVM is to maximize the margin between two support hyperplanes that separate two classes of data. Let X = {z1 = (x1, y1), z2 = (x2, y2), …, zn = (xn, yn)} be a two-class dataset for training. For example, in a linear SVM, a hyperplane is a line, and we want to find the best hyperplane WTX + b = 0 to separate these two classes of data in X, where W is the weight vector (normal to the hyperplane) and b is the bias, such that the margin 2/||W|| is as large as possible, that is, ||W|| is as small as possible. The above linear SVM is a constrained optimization model [27, 28].
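In the standard hard-margin form (stated here for reference), this model reads:

\[
\begin{aligned}
\min_{W,\,b}\quad & \tfrac{1}{2}\lVert W\rVert^{2} \\
\text{s.t.}\quad & y_{i}\left(W^{T}x_{i}+b\right)\ge 1,\qquad i=1,2,\ldots,n.
\end{aligned}
\]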

After applying the Lagrange multiplier method to the constrained optimization model, the SVM problem becomes a convex quadratic programming problem that can be presented as the following dual, where the λi are the Lagrange multipliers [27].
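In its usual form, this dual problem is:

\[
\begin{aligned}
\max_{\lambda}\quad & \sum_{i=1}^{n}\lambda_{i}-\tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lambda_{i}\lambda_{j}\,y_{i}y_{j}\,x_{i}^{T}x_{j} \\
\text{s.t.}\quad & \sum_{i=1}^{n}\lambda_{i}y_{i}=0,\qquad \lambda_{i}\ge 0,\; i=1,\ldots,n.
\end{aligned}
\]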

For high-dimensional data, it is very difficult to find a single linear hyperplane to separate two different sets. Hence, these data are mapped into a higher-dimensional space using a function that is called the kernel in the SVM. Then, a hyperplane can be found to separate the mapped data. Some popular kernel functions are listed below [27, 28].
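In their usual form (with γ, c, and d denoting kernel parameters), these kernels read:

\[
\begin{aligned}
\text{linear:}\quad & K(x_{i},x_{j}) = x_{i}^{T}x_{j}\\
\text{polynomial:}\quad & K(x_{i},x_{j}) = \left(\gamma\,x_{i}^{T}x_{j}+c\right)^{d}\\
\text{radial basis function (RBF):}\quad & K(x_{i},x_{j}) = \exp\!\left(-\gamma\,\lVert x_{i}-x_{j}\rVert^{2}\right)\\
\text{sigmoid:}\quad & K(x_{i},x_{j}) = \tanh\!\left(\gamma\,x_{i}^{T}x_{j}+c\right)
\end{aligned}
\]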

For more details of SVM and its development, the reader is referred to [25, 26].

5.2. Pseudocode of the Proposed CSVM

The pseudocode of the proposed CSVM is described below together with the integration of the proposed convolution product discussed in Subsection 2.2, the proposed SSO introduced in Section 3, and the proposed small-sample OA presented in Section 4.2 (Algorithm 3).

 PROCEDURE CSVM0
Input: A dataset.
Output: The accuracy of the classifier CSVM.
STEP 0. Separate the dataset into k folds randomly, and then select one fold (e.g., the kth fold of the dataset); the small-sample OA has N tries.
STEP 1. Implement CSVM0 (i, k) using the ith parameter setting on the kth fold of the dataset for i = 1, 2, …, N, and then adopt the parameter setting of the try (say i) with the highest accuracy among all N tries.
STEP 2. Implement CSVM0 (i, j) on the jth fold of the dataset using the parameter setting of the ith try for j = 1, 2, …, k.
PROCEDURE CSVM0 (α, β)
Input: The parameter setting in the αth try of the small-sample OA and the βth fold of the dataset.
Output: The accuracy.
STEP W0. Generate solutions Xs randomly, then calculate F (Xs, f) based on the proposed convolution product and the SVM. Find pFilter[s] and gBest such that F (XgBest, pFilter[gBest]) ≥ F (Xs, f), where s = 1, 2, …, Nsol and f = 1, 2, …, Nfilter.
STEP W1. Let t = 1.
STEP W2. Update a randomly selected solution based on the pseudocode of the new SSO provided in Subsection 3.5 and the parameter setting in the αth try of OA.
STEP W3. Increase the value of t by 1, that is, let t = t + 1, and then go to STEP W2 if t < Ngen.
STEP W4. Halt, F (XgBest, pFilter[gBest]) is the accuracy, and XgBest, pFilter[gBest] is the classifier.
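A compact Python sketch of PROCEDURE CSVM0 is given below. It reuses the greedy_one_variable_update and fitness sketches from Section 3, omits the pFilter bookkeeping and the OA-selected parameter setting for brevity, and selects the solution to update uniformly at random rather than via equation (19), so it should be read as an outline rather than the authors' exact procedure:

```python
import random

def csvm0(n_sol, n_filter, n_var, n_gen, fitness_fn):
    """One run of the CSVM training loop on a single fold (an outline).

    fitness_fn(solution) should return the SVM accuracy of the records
    convolved with `solution` (see the fitness sketch in Section 3.2).
    """
    # STEP W0: random solutions, each with n_filter filters of n_var values in [-2, 2].
    solutions = [[[random.uniform(-2.0, 2.0) for _ in range(n_var)]
                  for _ in range(n_filter)] for _ in range(n_sol)]
    scores = [fitness_fn(s) for s in solutions]
    g_best = max(range(n_sol), key=scores.__getitem__)

    # STEPs W1-W3: in each generation, update one randomly selected solution.
    for _ in range(n_gen):
        s = random.randrange(n_sol)      # uniform here; the paper selects via equation (19)
        solutions[s] = greedy_one_variable_update(solutions[s], len(solutions[s]), fitness_fn)
        scores[s] = fitness_fn(solutions[s])
        if scores[s] > scores[g_best]:   # STEP U5: track the best solution found so far
            g_best = s

    # STEP W4: the best solution is the classifier; its fitness is the reported accuracy.
    return solutions[g_best], scores[g_best]
```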

6. Experimental Results and Summary

There are two experiments, Ex1 and Ex2, in this study. Ex1 is based on the proposed small-sample OA concept to find the parameters Cg, Cp, Cw, Ngen, Nfilter, and Nvar of the proposed CSVM. These parameters are then employed in Ex2 to conduct an extended test to compare the results of the proposed CSVM with those obtained from the SVM, the 3-layer ANN, and the 4-layer ANN, respectively.

6.1. Simulation Environment

Four algorithms are developed and adapted in this study including the proposed CSVM, SVM, the 3-layer ANN, and the 4-layer ANN. The proposed CSVM is implemented using Dev C++ Version 5.11 C/C++, and the SVM part is integrated by calling the libsvm library [28] with all default setting parameters. The codes of both the 3-layer and 4-layer ANNs are modified using the source code provided in [39], which is coded in Python and run in Anaconda with epochs = 150, batch_size = 10, loss = “binary_crossentropy,” optimizer = “Adam,” activation = “ReLU” and 12 neurons in the first hidden layer, and activation = “sigmoid” in the second hidden layer of the 4-layer ANN. The test environment is Intel (R) Core (TM) i9-9900K CPU @ 3.60 GHz, 32.0 GB memory, and 64-bit Windows 10.
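For reproducibility, a minimal Keras sketch of the 4-layer ANN baseline with the settings quoted above is shown below; the size of the second hidden layer and the single sigmoid output unit are assumptions, since only the first hidden layer (12 ReLU neurons), the sigmoid activation of the second hidden layer, the loss, the optimizer, the epochs, and the batch size are stated in the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_4layer_ann(n_features, n_hidden2=8):
    """4-layer ANN baseline with the settings quoted in Section 6.1 (a sketch)."""
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(12, activation="relu"),             # first hidden layer: 12 ReLU neurons
        layers.Dense(n_hidden2, activation="sigmoid"),   # second hidden layer (size assumed)
        layers.Dense(1, activation="sigmoid"),           # binary output unit (assumed)
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model

# Training as quoted in the text:
# model.fit(X_train, y_train, epochs=150, batch_size=10)
```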

To validate the proposed CSVM, it was compared with the traditional SVM and the 3-layer and 4-layer ANNs on five well-known datasets: “Australian Credit Approval” (A), “breast-cancer” (B), “diabetes” (D), “fourclass” (F), and “Heart Disease” (H) [36], based on a tenfold cross-validation in Ex2. A summary of the five datasets is provided in Table 2. A brief introduction of the datasets is as follows:
“Australian Credit Approval” (A): this file concerns credit card applications. This database also exists elsewhere in the repository (Credit Screening Database) in a slightly different form. This dataset is interesting because there is a good mix of attributes: continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values.
“breast-cancer” (B): the term “breast cancer” refers to a malignant tumor that has developed from cells in the breast. It is the most common cancer among women in almost all parts of the world. The dataset used consists of 699 instances that are classified as benign or malignant, and it has 11 integer-valued attributes.
“diabetes” (D): diabetes mellitus is one of the most serious health challenges in both developing and developed countries. The diabetes dataset used here contains 8 categories and 768 instances and records on diabetes patients (several weeks’ to months’ worth of glucose, insulin, and lifestyle data per patient and a description of the problem domain), gathered from larger databases belonging to the National Institute of Diabetes and Digestive and Kidney Diseases.
“fourclass” (F): the dataset has irregular spreads over the space, including disconnected regions, and the classes are not linearly separable. This four-class nonlinearly separable dataset consists of 862 pieces of data in 2 dimensions.
“Heart Disease” (H): heart attack diseases remain the main cause of death worldwide, including in South Africa, and possible detection at an earlier stage will prevent the attacks. This database contains 76 attributes, but all published experiments refer to using a subset of 14 of them. In particular, the Cleveland database is the only one that has been used by ML researchers to this date.


ID   Full name                    Record number   Attribute number   Attribute characteristics

A    Australian Credit Approval   690             14                 Integer, real
B    breast-cancer                699             10                 Integer, real
D    Diabetes                     768             8                  Integer, real
F    fourclass                    862             2                  Integer, real
H    Heart Disease                270             13                 Integer, real

Let F, T, G, f, and N be the highest accuracy level obtained in the end, the runtime, the earliest generation that obtained F, the number of filters generating F, and the number of solutions that attain F, respectively. For ease of recognition, the subscripts 25, 50, 75, 100, avg, max, min, and std represent the related values obtained at the end of the 25th, 50th, 75th, and 100th generations, the average, the maximum, the minimum, and the standard deviation, respectively.

6.2. Ex1: Small-Sample OA Test

The orthogonal array used in this study is L9(3^4), as shown in Table 3.


Try   Nfilter   Nsol   Nvar   C = (cg, cp, cw, cr)

1     1         1      1      1
2     1         2      2      2
3     1         3      3      3
4     2         3      2      1
5     2         3      3      2
6     2         1      1      3
7     3         2      3      1
8     3         3      1      2
9     3         1      2      3

In L9(3^4), there are nine tries and four factors: C = (cg, cp, cw, cr), Nsol, Nvar, and Nfilter; each factor has three levels, as shown in Table 4. The higher the level, the larger the related values, with the exception of C; for example, Nsol = 25 in level 1 is smaller than Nsol = 50 in level 2. The most distinguishable difference among the three levels of C in Table 4 is that level 2 has a higher cr, which increases the global search ability, whereas level 3 has a lower value of cr to enhance the local search ability.


Level code   Nfilter   Nsol   Nvar   C = (cg, cp, cw, cr)

1            1         25            (0.40, 0.30, 0.20, 0.10)
2            3         50            (0.35, 0.25, 0.15, 0.25)
3            4         75            (0.45, 0.30, 0.20, 0.05)

The results obtained from the proposed CSVM in terms of the proposed small-sample OA test are listed in Table 5, in which each try is run fifteen times; the larger Nfilter, Nsol, Nvar, and/or Ngen are, the longer the runtime. However, a longer runtime does not necessarily yield better fitness values in Table 5. For example, the best fitness value has already been found by G25, namely, F25 = F50 = F75, in all datasets except Dataset D, whose best fitness value is found in G75.


ID   Try   T25   G25   f25   N25   100F25   T50   G50   f50   N50   100F50   T75   G75   f75   N75   100F75

A2113.09711285.0000026.11191186.6666639.07191186.66666
227.6891188.3333455.33131188.3333482.59281288.33334
342.8141488.3333486.04111588.33334128.39261688.33334
4130.54631388.33334260.181131588.33334389.331131588.33334
543.4463190.0000087.39143190.00000131.20143190.00000
684.35731586.66666168.38133288.33334252.73273388.33334
7149.2885490.00000298.47155690.00000447.50245790.00000
8215.83951488.33334427.65951488.33334637.65215188.33334
9113.3295888.33334225.841851188.33334337.702551588.33334

B2116.03611397.0588231.99611397.0588247.832611497.05882
233.41511597.0588267.26511597.05882101.45511597.05882
351.63611597.05882103.01611597.05882155.17611597.05882
4160.82031597.05882324.22031597.05882485.32031597.05882
553.08231597.05882106.13231597.05882159.76231597.05882
6103.51031597.05882209.98031597.05882316.64031597.05882
7185.49051597.05882372.27175198.52941558.38175198.52941
8271.5195198.52941542.9395198.52941815.9695198.52941
991.8855198.52941184.50175298.52941276.67175298.52941

D1122.70011578.0821945.41011578.0821968.09011578.08219
245.3681880.8219190.931911180.82191136.652111280.82191
368.4541282.19178137.29131282.19178206.65271482.19178
4207.1583282.19178416.77193582.19178624.94293682.19178
568.4293382.19178137.33173582.19178206.86293682.19178
6136.6193780.82191274.811531080.82191412.702331280.82191
7232.5565682.19178464.951841082.19178696.72255183.56165
8345.07451580.82191691.15451580.821911037.06451580.82191
9114.5895182.19178230.88185282.19178347.32295582.19178

F3119.82011580.2325639.71011580.2325659.61011580.23256
239.97011580.2325680.54011580.23256121.10011580.23256
361.63911283.72093123.971911483.72093186.522211583.72093
4181.25011580.23256364.80011580.23256548.34011580.23256
562.05631583.72093124.67631583.72093187.85631583.72093
6120.07011580.23256242.21011580.23256364.72011580.23256
7211.59251583.72093424.27251583.72093637.24251583.72093
8303.38011580.23256610.26011580.23256918.33011580.23256
9100.15011580.23256202.20011580.23256304.68011580.23256

H2112.58811088.0000026.111911188.0000039.672411388.00000
226.09911088.0000054.121911488.0000081.412611588.00000
340.9891588.0000082.38161788.00000124.00281988.00000
4122.50631488.00000244.67631488.00000366.86631488.00000
539.94831088.0000080.13831088.00000122.232431288.00000
679.05531288.00000160.391631588.00000241.891631588.00000
7136.33851288.00000273.141551388.00000409.732151488.00000
8203.64751588.00000407.86751588.00000612.67751588.00000
968.0895888.00000135.69175988.00000203.972551288.00000

1Select the setting based on Rule 1. 2Select the setting based on Rule 2. 3Select the setting based on Rule 3.

Adhering to Rule 1 listed in Section 4, only the try with the highest accuracy is selected to be used for the rest of the unsampled dataset. In this case, Try 7 is selected for Dataset D, since the greatest accuracy is obtained from Try 7 in G75. From Rule 2, the runtime T must be considered if there are two tries tied in accuracy. For example, both Try 5 and Try 9 have the highest accuracy in Dataset A, but Try 5 is selected, since its runtime is only 43.43, which is considerably less than the runtime (149.28) of Try 9. Similarly, Try 7 and Try 1 are selected for Datasets B and H, respectively. The parameter setting for the rest of datasets, namely, Dataset F, is based on Rule 3, and Try 5 is selected in accordance with Rule 2.

Hence, we obtain the parameter settings listed in Table 6.


ID   Try ID   Nrec   Natt   Nfilter   Nsol   Nvar   C = (cg, cp, cw, cr)      Ngen   100FSVM     100FNgen   T

A    5        690    14     3         75     11     (0.35, 0.25, 0.15, 0.25)   25     81.666664   90.00000   43.44
B    9        699    10     4         25     5      (0.45, 0.30, 0.20, 0.05)   25     95.588234   98.52941   91.88
D    7        768    8      4         50     6      (0.40, 0.30, 0.20, 0.10)   75     76.712326   83.56165   696.72
F    5        862    2      3         75     2      (0.35, 0.25, 0.15, 0.25)   25     80.232559   83.72093   62.05
H    1        270    13     1         25     4      (0.40, 0.30, 0.20, 0.10)   25     80.000000   88.00000   12.58

In Dataset F, there are only two attributes, which results in only two variables in each filter in Table 6. Another observation is that, for Dataset H, the values of Nfilter, Nvar, Nsol, and Ngen are always the smallest, since all the best final fitness values are equal to 88.00000 regardless of the generation number. In that case, the parameter setting with the shortest runtime is selected, which is reasonable. This is similar to Dataset B, whose solution number is only 25, implying less local search ability.

In Table 6, the accuracy levels obtained from the SVM for the first fold of each dataset are listed in the column named 100FSVM. All values of FNgen are better than the corresponding values of FSVM. Moreover, from Table 5, all fitness values obtained by G25, namely, F25, are already at least equal to FSVM; that is, FSVM ≤ F25 ≤ F50 ≤ F75 ≤ F100. Hence, the proposed CSVM outperforms the traditional SVM in the small-sample OA, and the wide discrepancy between the final performances of the CSVM and the SVM is further reinforced in Subsection 6.3 using the parameter settings selected by the proposed small-sample OA.

6.3. Ex2

The results for G100 are collected to evaluate the effectiveness of the concept of the proposed small-sample OA and to verify any possible effects of higher generation numbers on the average and best fitness values. The complete data, including the average, best, worst, and standard deviation of the fitness of each fold for each dataset, are listed in Tables 7–11.


Fold   Index   SVM   F25   F50   F75   F100

1AVG81.66666490.22222290.72222190.94444491.277776
MAX81.66666491.66666493.33333693.33333693.333336
MIN81.66666488.33333688.33333690.00000090.000000
STDEV0.0000001.0480151.0434361.0434370.947201

2AVG92.15686096.73202597.25490197.51633997.647058
MAX92.15686098.03921598.03921598.03921598.039215
MIN92.15686096.07843096.07843096.07843096.078430
STDEV0.0000000.9401240.9770060.8819150.797722

3AVG83.11688292.98701493.46320493.98268594.112555
MAX83.11688296.10389796.10389796.10389796.103897
MIN83.11688290.90908890.90908890.90908890.909088
STDEV0.0000001.4308101.4260601.5435681.552959

4AVG92.18750093.75000093.80208393.80208393.958333
MAX92.18750093.75000095.31250095.31250095.312500
MIN92.18750093.75000093.75000093.75000093.750000
STDEV0.0000000.0000000.2852720.2852720.540228

5AVG87.87879293.78787794.09090794.24242294.292928
MAX87.87879295.45454495.45454495.45454495.454544
MIN87.87879292.42424092.42424093.93939293.939392
STDEV0.0000000.7282740.7282740.6164220.651793

6AVG80.70175287.07602487.60234087.66082087.719299
MAX80.70175287.71929987.71929987.71929987.719299
MIN80.70175284.21052685.96491285.96491287.719299
STDEV0.0000000.9755330.4451020.3203060.000000

7AVG82.27848191.30801891.85654091.98312192.151897
MAX82.27848192.40506093.67088393.67088393.670883
MIN82.27848188.60759791.13924491.13924491.139244
STDEV0.0000001.0370950.7193900.6919870.697290

8AVG85.13513290.40540690.72072290.81081291.126127
MAX85.13513291.89189191.89189191.89189193.243240
MIN85.13513289.18918689.18918690.54054390.540543
STDEV0.0000000.7401670.5867190.5497800.768000

9AVG86.41975493.33333493.74485893.99177294.156381
MAX86.41975495.06172995.06172995.06172995.061729
MIN86.41975492.59259092.59259092.59259093.827164
STDEV0.0000000.6953630.5552810.5360150.555279

10AVG85.18518890.37037190.61728590.82304691.028807
MAX85.18518891.35802591.35802592.59259092.592590
MIN85.18518888.88888588.88888590.12345990.123459
STDEV0.0000000.7534040.6953600.7016290.720113


Fold   Index   SVM   F25   F50   F75   F100

1AVG95.58823498.52941198.52941198.52941198.578431
MAX95.58823498.52941198.52941198.529411100.000000
MIN95.58823498.52941198.52941198.52941198.529411
STDEV0.0000000.0000000.0000000.0000000.268492

2AVG100.000000100.000000100.000000100.000000100.000000
MAX100.000000100.000000100.000000100.000000100.000000
MIN100.000000100.000000100.000000100.000000100.000000
STDEV0.0000000.0000000.0000000.0000000.000000

3AVG95.58823497.30392197.49999997.59803897.745097
MAX95.58823498.52941198.52941198.52941198.529411
MIN95.58823497.05882397.05882397.05882397.058823
STDEV0.0000000.5574250.6854290.7207830.746201

4AVG100.000000100.000000100.000000100.000000100.000000
MAX100.000000100.000000100.000000100.000000100.000000
MIN100.000000100.000000100.000000100.000000100.000000
STDEV0.0000000.0000000.0000000.0000000.000000

5AVG100.000000100.000000100.000000100.000000100.000000
MAX100.000000100.000000100.000000100.000000100.000000
MIN100.000000100.000000100.000000100.000000100.000000
STDEV0.0000000.0000000.0000000.0000000.000000

6AVG94.11764597.05882397.05882397.05882397.058823
MAX94.11764597.05882397.05882397.05882397.058823
MIN94.11764597.05882397.05882397.05882397.058823
STDEV0.0000000.0000000.0000000.0000000.000000

7AVG95.58823497.89215698.03921598.28431398.333333
MAX95.588234100.000000100.000000100.000000100.000000
MIN95.58823497.05882397.05882397.05882397.058823
STDEV0.0000000.9206800.8918800.8707260.840216

8AVG98.52941198.52941198.52941198.52941198.529411
MAX98.52941198.52941198.52941198.52941198.529411
MIN98.52941198.52941198.52941198.52941198.529411
STDEV0.0000000.0000000.0000000.0000000.000000

9AVG94.11764598.52941198.52941198.52941198.529411
MAX94.11764598.52941198.52941198.52941198.529411
MIN94.11764598.52941198.52941198.52941198.529411
STDEV0.0000000.0000000.0000000.0000000.000000

10AVG97.18309897.18309897.18309897.18309897.230046
MAX97.18309897.18309897.18309897.18309898.591553
MIN97.18309897.18309897.18309897.18309897.183098
STDEV0.0000000.0000000.0000000.0000000.257148


Fold   Index   SVM   F25   F50   F75   F100

1AVG76.71232683.51598183.92694084.01826384.018263
MAX76.71232684.93150384.93150384.93150384.931503
MIN76.71232680.82191580.82191580.82191580.821915
STDEV0.0000001.2710381.1338101.1564131.156413

2AVG76.81159282.51207682.94685983.14009683.236714
MAX76.81159285.50724885.50724885.50724885.507248
MIN76.81159279.71014481.15941681.15941681.159416
STDEV0.0000001.3147641.0549741.0411530.983928

3AVG82.02246986.40449486.66666686.89138586.891385
MAX82.02246987.64045087.64045087.64045087.640450
MIN82.02246985.39325785.39325785.39325785.393257
STDEV0.0000000.7435550.7656690.6814370.681437

4AVG76.47058979.55882579.70588479.80392379.852943
MAX76.47058980.88235580.88235580.88235580.882355
MIN76.47058979.41176679.41176679.41176679.411766
STDEV0.0000000.4487190.5982920.6614360.685429

5AVG71.23288071.27854271.32420371.36986571.369865
MAX71.23288072.60273772.60273772.60273772.602737
MIN71.23288071.23288071.23288071.23288071.232880
STDEV0.0000000.2501010.3475440.4179830.417983

6AVG68.29268671.13820971.91056772.15447072.154470
MAX68.29268673.17073173.17073173.17073173.170731
MIN68.29268669.51219270.73170570.73170570.731705
STDEV0.0000000.9245100.8154580.8523560.852356

7AVG82.55814482.55814482.59690382.59690382.635663
MAX82.55814482.55814483.72093283.72093283.720932
MIN82.55814482.55814482.5581447.00000082.558144
STDEV0.0000000.0000000.2122950.2122950.295009

8AVG71.42857474.02597274.32900374.63203474.805194
MAX71.42857475.32467775.32467775.32467775.324677
MIN71.42857471.42857472.72727272.72727272.727272
STDEV0.0000001.0231670.8817030.7420100.731485

9AVG80.51947883.24675283.67965283.85281183.939391
MAX80.51947884.41558184.41558184.41558184.415581
MIN80.51947881.81818481.81818481.81818483.116882
STDEV0.0000000.8594310.7380770.7380770.636534

10AVG78.37838084.00900984.59459484.90991085.000000
MAX78.37838087.83783787.83783787.83783787.837837
MIN78.37838082.43243482.43243482.43243482.432434
STDEV0.0000001.3782651.6107121.5903851.677734


Fold   Index   SVM   F25   F50   F75   F100

1AVG80.23255986.24031087.01550487.24806287.403101
MAX80.23255988.37209388.37209388.37209388.372093
MIN80.23255983.72093283.72093284.88372084.883720
STDEV0.0000001.6748681.2621301.2015731.104529

2AVG79.00000081.13333381.33333381.43333381.533333
MAX79.00000082.00000082.00000082.00000082.000000
MIN79.00000081.00000081.00000081.00000081.000000
STDEV0.0000000.3457460.4794630.5040070.507416

3AVG75.86206881.07279581.41762281.49425081.570878
MAX75.86206882.75862182.75862182.75862182.758621
MIN75.86206878.16091978.16091978.16091980.459770
STDEV0.0000000.8394200.7445040.6981900.367634

4AVG81.81818483.86363784.16666884.35606284.659091
MAX81.81818486.36363286.36363286.36363286.363632
MIN81.81818482.95454482.95454482.95454482.954544
STDEV0.0000000.8118010.8919490.8793790.882748

5AVG82.85714084.28571384.28571384.28571384.333332
MAX82.85714084.28571384.28571384.28571385.714287
MIN82.85714084.28571384.28571384.28571384.285713
STDEV0.0000000.0000000.0000000.0000000.260821

6AVG81.91489482.97872283.01418383.04964483.085105
MAX81.91489482.97872284.04255784.04255784.042557
MIN81.91489482.97872282.97872282.97872282.978722
STDEV0.0000000.0000000.1942290.2699040.324607

7AVG85.89743885.89743885.89743885.89743885.897438
MAX85.89743885.89743885.89743885.89743885.897438
MIN85.89743885.89743885.89743885.89743885.897438
STDEV0.0000000.0000000.0000000.0000000.000000

8AVG71.42857477.75281177.94007774.63203478.127343
MAX71.42857478.65168878.65168875.32467778.651688
MIN71.42857477.52809177.52809172.72727277.528091
STDEV0.0000000.4571220.5507110.7420100.570131

9AVG80.51947884.57364385.15503883.85281185.465115
MAX80.51947886.04650986.04650984.41558186.046509
MIN80.51947881.39534882.55814481.81818483.720932
STDEV0.0000001.3633491.0437590.7380770.732235

10AVG78.37838089.28571389.28571384.90991089.285713
MAX78.37838089.28571389.28571387.83783789.285713
MIN78.37838089.28571389.28571382.43243489.285713
STDEV0.0000000.0000000.0000001.5903850.000000


Fold   Index   SVM   F25   F50   F75   F100

1AVG80.00000088.40000088.53333388.93333389.066667
MAX80.00000092.00000092.00000092.00000092.000000
MIN80.00000088.00000088.00000088.00000088.000000
STDEV0.0000001.2205141.3829841.7207321.799106

2AVG82.75862196.32184296.43678596.55172796.551727
MAX82.75862196.55172796.55172796.55172796.551727
MIN82.75862193.10344793.10344796.55172796.551727
STDEV0.0000000.8748570.6295670.0000000.000000

3AVG82.75862189.65517489.77011789.77011789.770117
MAX82.75862189.65517493.10344793.10344793.103447
MIN82.75862189.65517489.65517489.65517489.655174
STDEV0.0000000.0000000.6295660.6295660.629566

4AVG88.23529192.74509693.23529293.43137193.431371
MAX88.23529194.11764594.11764594.11764594.117645
MIN88.23529191.17646891.17646891.17646891.176468
STDEV0.0000001.4924011.3708581.2652451.265245

5AVG100.00000100.000000100.000000100.000000100.000000
MAX100.00000100.000000100.000000100.000000100.000000
MIN100.00000100.000000100.000000100.000000100.000000
STDEV0.0000000.0000000.0000000.0000000.000000

6AVG80.00000093.50000094.66666794.83333395.000000
MAX80.00000095.00000095.00000095.00000095.000000
MIN80.00000090.00000090.00000090.00000095.000000
STDEV0.0000002.3304581.2685410.9128710.000000

7AVG82.60869695.65217695.79710395.79710395.797103
MAX82.608696100.000000100.000000100.000000100.000000
MIN82.60869691.30435295.65217695.65217695.652176
STDEV0.0000001.1417950.7938000.7938000.793800

8AVG89.28571389.88095190.47618990.71428491.071426
MAX89.28571392.85714092.85714092.85714092.857140
MIN89.28571389.28571389.28571389.28571389.285713
STDEV0.0000001.3537461.7123681.7795451.816240

9AVG75.00000086.20370786.38889286.48148486.481484
MAX75.00000088.88888588.88888588.88888588.888885
MIN75.00000086.11111586.11111586.11111586.111115
STDEV0.0000000.5071490.8475770.9604030.960403

10AVG84.61538788.71795088.97436189.23077189.615386
MAX84.61538792.30769392.30769392.30769392.307693
MIN84.61538788.46154088.46154088.46154088.461540
STDEV0.0000000.9758001.3297921.5647621.792660

6.3.1. Boxplots of the Experimental Results from Ex2

The results obtained from both the 3-layer and 4-layer ANNs are the least favorable, with a big gap between them and both the proposed CSVM and the traditional SVM. Hence, these two ANN-based methods are not discussed further, and we focus only on the proposed CSVM and the traditional SVM.

We determined that the higher the generation number, the better the average fitness value. However, it can be observed that the best fitness value remains unchanged from G75 to G100 except for the 8th fold in Dataset A, the 1st and 10th folds in Dataset B, and the 5th fold in Dataset F. Therefore, Ngen = 75 is acceptable, and there is no need for Ngen = 100 to increase the fitness value of the best solution. The position (the fitness values obtained) and the length (the range of the fitness values) of the box for G100 are frequently higher and shorter, respectively, than those for G25 in most boxplots. Hence, a larger generation number has a higher probability of enhancing the average solution quality at the cost of a longer runtime but ultimately does little to improve the best fitness value.

6.3.2. Number of Folds for Finding the Final Best Fitness Values

Table 12 lists the number of folds that found the final best fitness values. The subscripts of the dataset IDs in the first column of Table 12 indicate the generation number used in Ex2; for example, B25 indicates that 25 generations are used for Dataset B based on the parameters obtained with the small-sample OA in Ex1. Seven, eight, ten, eight, and ten folds (see the bold numbers in Table 12) under G25, G25, G75, G25, and G25 in Datasets A, B, D, F, and H, respectively, found the best final fitness values.


Dataset   G25           G50         G75         G100

A25       7 (1, 4, 7)   8 (8, 10)   9 (8)       10
B25       8 (1, 10)     8 (1, 10)   8 (1, 10)   10
D75       9 (7)         10          10          10
F25       8 (5, 6)      9 (5)       9 (5)       10
H25       10            10          10          10

The fold numbers in parentheses indicate the folds in which the final best fitness value has not yet been found. For example, 7 (1, 4, 7) under G25 for A25 represents that seven of the ten folds have already found the best fitness values after 25 generations, with the remaining three folds, 1, 4, and 7, failing to do so in Dataset A.

To calculate the probability of finding the best final fitness value in Table 12, we add the folds (7 + 8 + 10 + 8 + 10 = 43) and divide the sum by the total number of folds (50) to get 86%, which informs us that the probability of finding the best final fitness value without reaching G100, which entails a significantly longer runtime, is 86%.

Hence, the proposed small-sample OA is effective in setting parameters to increase the efficiency and solution quality of the proposed CSVM. The above observation further confirms that having better parameters ultimately negates the need for a greater generation number to increase the fitness of the best solution.

6.3.3. ANOVA of the Experimental Results

To investigate the small-sample OA, the Analysis of Variance (ANOVA) is carried out to test the average fitness obtained from the proposed CSVM in terms of the parameters set by the small-sample OA, as shown in Table 13. The cells marked with “v” indicate that there is a significant gap between the pair of distinctive generation numbers listed in their respective rows in the fold denoted by the column. This is reinforced through the distinct difference between the average fitness values obtained from G25 and G75 in all folds of Dataset A.


Fold           1   2   3   4   5   6   7   8   9   10

A (25, 50)
A (25, 75)     v   v   v   v   v   v   v   v   v   v
A (25, 100)    v   v   v   v   v   v   v   v   v   v
A (50, 75)
A (50, 100)    v   v   v   v   v   v   v   v   v   v
A (75, 100)

B (25, 50)
B (25, 75)
B (25, 100)
B (50, 75)
B (50, 100)
B (75, 100)

D (25, 50)
D (25, 75)
D (25, 100)
D (50, 75)
D (50, 100)
D (75, 100)

F (25, 50)     v   v   v   v   v   v   v   v   v   v
F (25, 75)     v   v   v   v   v   v   v   v   v   v
F (25, 100)    v   v   v   v   v   v   v   v   v   v
F (50, 75)
F (50, 100)
F (75, 100)

H (25, 50)
H (25, 75)
H (25, 100)
H (50, 75)
H (50, 100)
H (75, 100)

From Table 13, the minimal generation numbers should be 75 and 50 for only Datasets A and F, respectively, to leave an insignificant gap between the fitness values in each fold. Hence, the proposed small-sample OA is still effective in determining a generation number that reduces the significant difference among all fitness values, even though it focuses only on the best fitness value and not on the average fitness value.

6.3.4. MPI of the Experimental Results

To further investigate the development of the proposed CSVM, two other indices, the average maximum possible improvement (MPIavg%) and the best maximum possible improvement (MPImax%), are introduced.
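Written in the usual maximum-possible-improvement form, and consistent with the 100% cells described below, these indices can be expressed as:

\[
\mathrm{MPI}_{\mathrm{avg}}\% = \frac{F_{\mathrm{avg}}-F_{\mathrm{SVM}}}{100-F_{\mathrm{SVM}}}\times 100,
\qquad
\mathrm{MPI}_{\mathrm{max}}\% = \frac{F_{\mathrm{max}}-F_{\mathrm{SVM}}}{100-F_{\mathrm{SVM}}}\times 100 .
\]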

The MPIavg% and MPImax% results are listed in Tables 14 and 15, respectively, where the blank cells indicate that both the related FSVM and the average and/or the best fitness obtained are 100% correct, for example, the 2nd, 4th, and 5th folds of Dataset B in Table 14. The bold numbers denote the best values among all folds for each dataset under the same generation number. Note that a value of 100, as in the 7th fold of Dataset B in Table 15, indicates that the related accuracy is 100%.


1   2   3   4   5   6   7   8   9   10   Avg

AF2546.6758.3358.4620.0048.7533.0350.9535.4650.9135.0043.76
F5049.3965.0061.2820.6751.2535.7654.0537.5853.9436.6746.56
F7550.6168.3364.3620.6752.5036.0654.7638.1855.7638.0647.93
F10052.4270.0065.1322.6752.9236.3655.7140.3056.9739.4449.19

BF2566.6738.8950.0052.220.0075.000.0040.40
F5066.6743.3350.0055.560.0075.000.0041.51
F7566.6745.5650.0061.110.0075.000.0042.62
F10067.7848.8950.0062.220.0075.001.6743.65

DF2529.2224.5824.3713.120.168.970.009.0914.0026.0414.96
F5030.9826.4625.8313.750.3211.410.2210.1516.2228.7516.41
F7531.3727.2927.0814.170.4812.180.2211.2117.1130.2117.13
F10031.3727.7127.0814.370.4812.180.4411.8217.5630.6217.36

FF2530.3910.1621.5911.258.335.880.0013.9136.8318.1815.65
F5034.3111.1123.0212.928.336.080.0014.6439.2118.1816.78
F7535.4911.5923.3313.968.336.270.0015.0740.0018.1817.22
F10036.2712.0623.6515.638.616.470.0015.3640.4818.1817.67

HF2542.0078.6740.0038.3367.5075.005.5644.8126.6746.50
F5042.6779.3340.6742.5073.3375.8311.1145.5628.3348.81
F7544.6780.0040.6744.1774.1775.8313.3345.9330.0049.86
F10045.3380.0040.6744.1775.0075.8316.6745.9332.5050.68


1   2   3   4   5   6   7   8   9   10   Avg

AF2554.5575.0076.9220.0062.5036.3657.1445.4563.6441.6753.32
F5063.6475.0076.9240.0062.5036.3664.2945.4563.6441.6756.95
F7563.6475.0076.9240.0062.5036.3664.2945.4563.6450.0057.78
F10063.6475.0076.9240.0062.5036.3664.2954.5563.6450.0058.69

BF2566.6766.6750.00100.000.0075.000.0051.19
F5066.6766.6750.00100.000.0075.000.0051.19
F7566.6766.6750.00100.000.0075.000.0051.19
F100100.0066.6750.00100.000.0075.0050.0063.10

DF2535.2937.5031.2518.754.7615.380.0013.6420.0043.7522.03
F5035.2937.5031.2518.754.7615.386.6713.6420.0043.7522.70
F7535.2937.5031.2518.754.7615.386.6713.6420.0043.7522.70
F10035.2937.5031.2518.754.7615.386.6713.6420.0043.7522.70

FF2541.1814.2928.5725.008.335.880.0017.3942.8618.1820.17
F5041.1814.2928.5725.008.3311.760.0017.3942.8618.1820.76
F7541.1814.2928.5725.008.3311.760.0017.3942.8618.1820.76
F10041.1814.2928.5725.0016.6711.760.0017.3942.8618.1821.59

HF2560.0080.0040.0050.0075.00100.0033.3355.5650.0060.43
F5060.0080.0060.0050.0075.00100.0033.3355.5650.0062.65
F7560.0080.0060.0050.0075.00100.0033.3355.5650.0062.65
F10060.0080.0060.0050.0075.00100.0033.3355.5650.0062.65

As shown in Tables 14 and 15, the results obtained from the proposed CSVM represent improvements of at least 14.96% and 20.17% and at most 50.68% and 63.10% in MPIavg% and MPImax%, respectively. The results shed light on the effectiveness of the proposed CSVM in comparison with the traditional SVM. It can also be observed that the more attributes a dataset has, the greater the improvement obtained from the proposed CSVM, regardless of the number of records. Ultimately, compared to the traditional SVM, the proposed CSVM is more suitable for small data.

7. Conclusions and Future Work

Classification is of utmost importance in data mining. The proposed new classifier, CSVM, is a convolutional SVM modified with a new repeated-attribute convolution product, in which all variables in each filter are updated and trained based on the proposed novel SSO. Equipped with a self-adaptive structure and pFilter, this greedy SSO is a one-solution, one-filter, one-variable type and its parameters are delineated by the proposed small-sample OA.

According to the experimental results for the five UCI datasets, namely, Australian Credit Approval, breast-cancer, Diabetes, fourclass, and Heart Disease [36], from Ex2 in Section 6, the proposed CSVM with the parameter setting selected from Ex1 outperforms the traditional SVM, the 3-layer ANN, and the 4-layer ANN, with an improved accuracy of at least 14.96% and up to 50.68% in MPIavg%. Hence, the proposed small-sample OA discussed in Subsection 4.2 enables the CSVM to improve its overall performance, while the proposed CSVM ultimately combines the advantages of the SVM, the convolution product, and SSO.

The classifier design method is a crucial element in the provision of useful information in the modern world. Based on comparisons of the experimental results, further research on the proposed CSVM will be conducted: it will be applied to multiclass datasets, based on several references [40, 41], with more attributes, classes, and records, and it will be combined with particular feature selection methods.

Data Availability

To validate the proposed CSVM, it was compared with the traditional SVM and the 3-layer and 4-layer ANNs on five well-known datasets, “Australian Credit Approval” (A), “breast-cancer” (B), “Diabetes” (D), “fourclass” (F), and “Heart Disease” (H), at http://archive.ics.uci.edu/ml/.

Disclosure

This article was once submitted to arXiv as a temporary submission that was just for reference and did not provide the copyright.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported in part by the Ministry of Science and Technology, R.O.C., under Grants MOST 102-2221-E-007-086-MY3 and MOST 104-2221-E-007-061-MY3 and the National Natural Science Foundation of China under Grant 61702118.

References

  1. D. Hand, H. Mannila, and P. Smyth, Principles of Data Mining, The MIT Press, Cambridge, MA, USA, 2001.
  2. W.-C. Yeh, “Novel swarm optimization for mining classification rules on thyroid gland data,” Information Sciences, vol. 197, pp. 65–76, 2012.
  3. B. Liu, Y. Xiao, and L. Cao, “SVM-based multi-state-mapping approach for multi-class classification,” Knowledge-Based Systems, vol. 129, pp. 79–96, 2017.
  4. J. Spilka, J. Frecon, R. Leonarduzzi, N. Pustelnik, P. Abry, and M. Doret, “Sparse support vector machine for intrapartum fetal heart rate classification,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 3, pp. 664–671, 2017.
  5. W.-C. Yeh, “A squeezed artificial neural network for the symbolic network reliability functions of binary-state networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 11, pp. 2822–2825, 2017.
  6. W.-C. Yeh, “New parameter-free simplified swarm optimization for artificial neural network training and its application in the prediction of time series,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 4, pp. 661–665, 2013.
  7. L. Liu, W. Huang, and C. Wang, “Hyperspectral image classification with kernel-based least-squares support vector machines in sum space,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 4, pp. 1144–1157, 2018.
  8. O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533–1545, 2014.
  9. X. Liu, B. Hu, Q. Chen, X. Wu, and J. You, “Stroke sequence-dependent deep convolutional neural network for online handwritten Chinese character recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 11, pp. 4637–4648, 2020.
  10. R. Dechter, Learning while Searching in Constraint-Satisfaction Problems, University of California, Los Angeles, CA, USA, 1986.
  11. C. C. Aggarwal, “An introduction to social network data analytics,” in Social Network Data Analytics, Springer, Berlin, Germany, 2011.
  12. Z. Cui, F. Xue, X. Cai, Y. Cao, G.-g. Wang, and J. Chen, “Detection of malicious code variants based on deep learning,” IEEE Transactions on Industrial Informatics, vol. 14, no. 7, pp. 3187–3196, 2018.
  13. J. Schmidhuber, “Deep learning,” Scholarpedia, vol. 10, no. 11, p. 32832, 2015.
  14. D. Li, L. Deng, B. Bhooshan Gupta, H. Wang, and C. Choi, “A novel CNN based security guaranteed image watermarking generation scenario for smart city applications,” Information Sciences, vol. 479, pp. 432–447, 2019.
  15. I. Goodfellow, “Generative adversarial networks,” in Proceedings of the International Conference on Neural Information Processing Systems, pp. 2672–2680, 2014.
  16. S. Mittal and S. Vaishay, “A survey of techniques for optimizing deep learning on GPUs,” Journal of Systems Architecture, vol. 99, p. 101635, 2019.
  17. Y. Sun, B. Xue, M. Zhang, and G. Yen, “Completely automated CNN architecture design based on blocks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 4, pp. 1242–1254, 2019.
  18. S. Khandelwal, L. Garg, and D. Boolchandani, “Reliability-aware support vector machine-based high-level surrogate model for analog circuits,” IEEE Transactions on Device and Materials Reliability, vol. 15, no. 3, pp. 461–463, 2015.
  19. B. Gu and V. S. Sheng, “Feasibility and finite convergence analysis for accurate on-line ν-support vector machine,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 8, pp. 1304–1315, 2013.
  20. X. Li, X. Jia, L. Wang, and K. Zhao, “On spectral unmixing resolution using extended support vector machines,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 9, pp. 4985–4996, 2015.
  21. Y. Xu, “Maximum margin of twin spheres support vector machine for imbalanced data classification,” IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1540–1550, 2017.
  22. T.-J. Hsieh, H.-F. Hsiao, and W.-C. Yeh, “Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm,” Neurocomputing, vol. 82, pp. 196–206, 2012.
  23. T.-J. Hsieh and W.-C. Yeh, “Knowledge discovery employing grid scheme least squares support vector machines based on orthogonal design bee colony algorithm,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 5, pp. 1198–1212, 2011.
  24. L. Tang, Y. Tian, and P. M. Pardalos, “A novel perspective on multiclass classification: regular simplex support vector machine,” Information Sciences, vol. 480, pp. 324–338, 2019.
  25. Y. Rahulamathavan, R. C.-W. Phan, S. Veluru, K. Cumanan, and M. Rajarajan, “Privacy-preserving multi-class support vector machine for outsourcing the data classification in cloud,” IEEE Transactions on Dependable and Secure Computing, vol. 11, no. 5, pp. 467–479, 2014.
  26. W. Wang, J. Xi, A. Chong, and L. Li, “Driving style classification using a semisupervised support vector machine,” IEEE Transactions on Human-Machine Systems, vol. 47, no. 5, pp. 650–660, 2017.
  27. B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory – COLT ’92, p. 144, Pittsburgh, PA, USA, July 1992.
  28. LIBSVM – A Library for Support Vector Machines, 2020, http://www.csie.ntu.edu.tw/%7Ecjlin/libsvm/.
  29. EDUCBA, Find Out the 10 Differences between Small Data vs Big Data, EDUCBA, Maharashtra, India, 2018, https://www.educba.com/small-data-vs-big-data/.
  30. Banafa, Small Data vs. Big Data: Back to the Basics, Banafa, Dubai, UAE, 2020, https://datafloq.com/read/small-data-vs-big-data-back-to-the-basic/706.
  31. L. Blackman, “Social media and the politics of small data: post publication peer review and academic value,” Theory, Culture & Society, vol. 33, no. 4, pp. 3–26, 2016.
  32. W.-C. Yeh, “A two-stage discrete particle swarm optimization for the problem of multiple multi-level redundancy allocation in series systems,” Expert Systems with Applications, vol. 36, no. 5, pp. 9192–9200, 2009.
  33. W.-C. Yeh, “Optimization of the disassembly sequencing problem on the basis of self-adaptive simplified swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 42, no. 1, pp. 250–261, 2012.
  34. W.-C. Yeh, “An improved simplified swarm optimization,” Knowledge-Based Systems, vol. 82, pp. 60–69, 2012.
  35. W.-C. Yeh, “A new exact solution algorithm for a novel generalized redundancy allocation problem,” Information Sciences, vol. 408, no. 10, pp. 182–197, 2017.
  36. Archive.ics.uci.edu, 2020, http://archive.ics.uci.edu/ml/.
  37. D. C. Montgomery, Design and Analysis of Experiments, Wiley, Hoboken, NJ, USA, 10th edition, 2019.
  38. W.-C. Yeh, “Orthogonal simplified swarm optimization for the series-parallel redundancy allocation problem with a mix of components,” Knowledge-Based Systems, vol. 64, pp. 1–12, 2014.
  39. J. Brownlee, Your First Deep Learning Project in Python with Keras Step-by-step, Machine Learning Mastery, Vermont, Victoria, 2020, https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/.
  40. C. Demirkesen and H. Cherifi, “A comparison of multiclass SVM methods for real world natural scenes,” Advanced Concepts for Intelligent Vision Systems, vol. 5259, pp. 752–763, 2008.
  41. V. Blanco, A. Japón, and J. Puerto, “Optimal arrangements of hyperplanes for SVM-based multiclass classification,” Advances in Data Analysis and Classification, vol. 14, no. 1, pp. 175–199, 2020.

Copyright © 2021 Wei-Chang Yeh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
