Research Article | Open Access

Adeeb Noor, Muhammed Kürşad Uçar, Kemal Polat, Abdullah Assiri, Redhwan Nour, "A Novel Approach to Ensemble Classifiers: FsBoost-Based Subspace Method", Mathematical Problems in Engineering, vol. 2020, Article ID 8571712, 11 pages, 2020. https://doi.org/10.1155/2020/8571712

A Novel Approach to Ensemble Classifiers: FsBoost-Based Subspace Method

Academic Editor: Elio Masciari
Received: 10 Apr 2020
Revised: 05 Jun 2020
Accepted: 18 Jun 2020
Published: 15 Jul 2020

Abstract

In this article, an algorithm is proposed for creating an ensemble classifier, named the F-score subspace method (FsBoost). In this method, the features are selected with the F-score and classified with the same or different classifiers; the ensemble classifier is then created. Two versions, FsBoost.V1 and FsBoost.V2, have been developed based on classification by the same or different classifiers. The results obtained are consistent with the literature, and a higher accuracy rate is achieved than with many algorithms in the literature. The algorithm is also fast because it has few steps. It is thought that the algorithm will be successful owing to these advantages.

1. Introduction

An ensemble classifier is a method in which multiple classifiers are used together to improve classification performance [1, 2]. For example, when three classifiers are used to classify an object, the ensemble works as follows: if the first classifier labels the object a cat, the second a dog, and the third a cat, the ensemble classifier produces the result by combining these decisions (here, a majority vote for cat). There are many ways to create an ensemble classifier. Some of the most commonly used are (1) adaptive resampling and combining (boosting) [3], including AdaBoost (adaptive boosting) [4], (2) bagging (bootstrap aggregating) [5], and (3) the random subspace method [6].

The boosting method creates powerful classifiers by combining and training weak classifiers [3]. The most commonly used boosting method is AdaBoost [4], which tries to improve performance by focusing on misclassified instances [4]. In the bagging method, classifiers trained on different training sets randomly drawn from the dataset (random sampling) are combined [5], and their outputs are merged by majority or weighted voting [5]. In the random subspace method, subsets are generated by randomly selecting features rather than samples [6]: each subset contains all the instances but only some of the features [6]. In this way, the training process is accelerated. Classifiers are trained on these subsets to form the ensemble, and their outputs are combined by majority or weighted voting.
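As a concrete illustration of the random subspace idea described above, the following is a minimal Python sketch (not the paper's code; the toy 1-NN base learner in the usage example and all names are assumptions): each base learner sees every training sample but only a random subset of the features, and predictions are combined by majority vote.

```python
import random
from collections import Counter

def train_subspace_ensemble(X, y, train_fn, n_learners=3, subspace_size=2, seed=0):
    """Train one base learner per randomly chosen feature subset."""
    rng = random.Random(seed)
    n_features = len(X[0])
    models = []
    for _ in range(n_learners):
        cols = rng.sample(range(n_features), subspace_size)  # random feature subset
        proj = [[row[c] for c in cols] for row in X]         # all samples, fewer features
        models.append((cols, train_fn(proj, y)))
    return models

def predict_subspace_ensemble(models, predict_fn, x):
    """Majority vote over the base learners' outputs."""
    votes = [predict_fn(model, [x[c] for c in cols]) for cols, model in models]
    return Counter(votes).most_common(1)[0][0]
```

Any base learner exposing a `train_fn(X, y) -> model` and `predict_fn(model, x) -> label` pair can be plugged in.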

These algorithms have disadvantages. As the number of training stages in AdaBoost increases, the number of samples decreases and training becomes more difficult, so more samples are needed for training. It is also quite slow because there are many training stages [7, 8]. The bagging method involves complex calculations [7]. Both methods require many iterations [1], so their success rate is usually lower than that of the random forest method [1]. These models cannot explain the dataset by modeling it as decision trees [7]. Considering these disadvantages, these methods still need improvement. In this study, a new ensemble algorithm based on the F-score feature selection algorithm has been developed to reduce the processing load of existing ensemble algorithms and to increase the accuracy rate.

Feature selection algorithms are often used in machine learning to improve system performance [9–11]. Datasets of various sizes and types are used in this field [12–15], and large datasets lengthen the classifier's training time. Feature selection algorithms were developed to solve this problem [9, 16, 17]: they discard irrelevant features while retaining relevant ones [9]. Thus, data size, processing load, and training time decrease while classification accuracy increases [9, 18]. Many feature selection algorithms exist in the literature [1, 16]; in this study, the F-score feature selection algorithm is used because it is fast and performs well [16]. Feature selection algorithms can be applied in many areas, such as health [19–22].

In this study, two methods, FsBoost.V1 and FsBoost.V2, have been developed based on the F-score feature selection algorithm to enhance the training performance of ensemble classifiers. FsBoost.V1 is similar to the random subspace method; however, the features are chosen with respect to the class label, not at random. The selected datasets are classified with a single classifier, and ensemble classifier 1 is created; this process is repeated for three or more different classifiers, and finally the ensemble classifiers of the three classifiers are merged. In this way, unnecessary data are removed from the training process, and the procedure can be stopped after the first ensemble classifier if desired. In FsBoost.V2, all data are first classified with different classifiers. In the second step, the feature subspace is created by the F-score feature selection algorithm and reclassified, and an ensemble classifier is created from these classifications. The process is repeated a second time, and the resulting ensemble classifiers are merged. Using a single classifier reduces the cost, and retrieving only the relevant features with the F-score feature selection algorithm accelerates training; the complexity is lower than that of other algorithms.

2. Materials and Methods

The operations were performed according to the flow in Figure 1. First, the records used in the study were collected. Then, features were selected with the F-score feature selection algorithm. Finally, the data were classified with different classifiers, and their performances were calculated. Ensemble classifiers were then created, and their performances were calculated at different levels and configurations.

2.1. Collection of Data

The data used in the study were downloaded from the Machine Learning Repository website of the University of California, Irvine (UCI) [23, 24]. The data consist of four groups (A/B/C/D) belonging to epilepsy patients (Table 1) and include EEG recordings of individuals; each record is 23.6 seconds long. 2300 EEG recordings were taken during epileptic seizures, and another 2300 (nonepilepsy) were recorded in a seizure-free condition; however, all records belong to epileptic patients. The epilepsy data in each set are the same, whereas the nonepilepsy records differ. The database contains 178 features for each EEG recording.


Table 1: Distribution of the EEG datasets.

Dataset | Epilepsy | Nonepilepsy | Total | Number of features
A       | 1150     | 1150        | 2300  | 178
B       | 1150     | 1150        | 2300  | 178
C       | 1150     | 1150        | 2300  | 178
D       | 1150     | 1150        | 2300  | 178

2.2. F-Score Feature Selection Algorithm

The F-score is one of the feature selection algorithms that helps distinguish classes from each other [25]. To select features, an F-score value F(i) is calculated for each feature (equation (1)). The F-score threshold F_t is determined by taking the average of all F-score values. The i-th feature is selected if F(i) ≥ F_t. This step is repeated for each feature:

$$F(i)=\frac{\left(\bar{x}_i^{(+)}-\bar{x}_i\right)^2+\left(\bar{x}_i^{(-)}-\bar{x}_i\right)^2}{\frac{1}{n_{+}-1}\sum_{k=1}^{n_{+}}\left(x_{k,i}^{(+)}-\bar{x}_i^{(+)}\right)^2+\frac{1}{n_{-}-1}\sum_{k=1}^{n_{-}}\left(x_{k,i}^{(-)}-\bar{x}_i^{(-)}\right)^2}. \qquad (1)$$

The variables in equation (1) are as follows: (1) $x_k$ are the feature vectors, $k = 1, \ldots, m$. (2) $n_{+}$ and $n_{-}$ are, respectively, the total numbers of elements in the positive (+) and negative (−) classes. (3) $i$ is the feature index. (4) $\bar{x}_i$, $\bar{x}_i^{(-)}$, and $\bar{x}_i^{(+)}$ are the average value over the whole dataset, the mean value in the negative class, and the mean value in the positive class of the $i$-th feature, respectively. (5) $x_{k,i}^{(+)}$ represents the $k$-th positive example of the $i$-th feature, and $x_{k,i}^{(-)}$ represents the $k$-th negative example of the $i$-th feature.
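The selection rule above can be sketched in Python. This is an illustrative implementation of equation (1) and the mean-threshold rule, not the authors' code; the function names are assumptions.

```python
def f_score(pos, neg):
    """F-score of one feature, given its values in the positive and negative class."""
    all_vals = pos + neg
    m = sum(all_vals) / len(all_vals)   # overall mean of the feature
    mp = sum(pos) / len(pos)            # mean in the positive class
    mn = sum(neg) / len(neg)            # mean in the negative class
    num = (mp - m) ** 2 + (mn - m) ** 2
    den = (sum((v - mp) ** 2 for v in pos) / (len(pos) - 1)
           + sum((v - mn) ** 2 for v in neg) / (len(neg) - 1))
    return num / den

def select_features(X, y):
    """Keep the features whose F-score is at or above the mean F-score."""
    n_feat = len(X[0])
    scores = []
    for i in range(n_feat):
        pos = [row[i] for row, label in zip(X, y) if label == 1]
        neg = [row[i] for row, label in zip(X, y) if label == 0]
        scores.append(f_score(pos, neg))
    threshold = sum(scores) / len(scores)
    return [i for i, s in enumerate(scores) if s >= threshold]
```

`select_features` returns the indices of the retained features; running it on the retained columns again gives the second feature selection used in the paper.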

In the study, the features of datasets A, B, C, and D were selected with the F-score (Table 2). Feature selection was applied twice.


Table 2: Number of features selected with the F-score.

Dataset               | A   | B   | C   | D
Number of features    | 178 | 178 | 178 | 178
1st feature selection | 60  | 68  | 62  | 54
2nd feature selection | 28  | 25  | 23  | 25

2.3. Ensemble Classifier

The ensemble classifier is a system created by combining different classifiers to produce safer and more stable estimates [26]. The system is built with n classifiers, where n can be odd or even. For each feature vector, each classifier generates an output value; the outputs are tallied, and the output of the ensemble classifier is determined by the number of votes. If the number of classifiers is even, the average of the classifiers' decision values is rounded to give the ensemble decision. This process is applied to all feature vectors. The ensemble classifier was prepared in MATLAB using three different classifiers: kNN, PNN, and SVMs [27].
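The voting rule described above can be sketched as follows. Note that the paper does not specify how an exact tie (half the votes each way) is broken; rounding ties toward the positive class here is an assumption.

```python
def ensemble_decision(votes):
    """Combine binary (0/1) classifier outputs.

    With an odd number of classifiers this reduces to a plain majority vote;
    with an even number, the mean decision is rounded, as described in the text.
    Ties are broken toward class 1 (an assumption, not specified in the paper).
    """
    return int(sum(votes) / len(votes) + 0.5)  # round-half-up of the mean decision
```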

The kNN is a supervised machine learning classification method [28, 29]. Based on the training dataset, a new instance is classified according to its k nearest neighbors. In this study, the k value was selected for each dataset, and ten distance calculation formulas were used: Spearman, Seuclidean, Minkowski, Mahalanobis, Jaccard, Hamming, Euclidean, Cosine, Correlation, and Cityblock.
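A minimal kNN sketch with two of the listed distance metrics (Euclidean and Cityblock) illustrates the decision rule; this is illustrative only, not the MATLAB implementation used in the study.

```python
import math
from collections import Counter

def knn_predict(train, x, k=2, dist="euclidean"):
    """Classify x by majority label among its k nearest training samples.

    `train` is a list of (feature_vector, label) pairs.
    """
    metrics = {
        "euclidean": lambda a, b: math.dist(a, b),
        "cityblock": lambda a, b: sum(abs(u - v) for u, v in zip(a, b)),
    }
    d = metrics[dist]
    neighbors = sorted(train, key=lambda p: d(p[0], x))[:k]  # k closest samples
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```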

PNN is a kernel-based statistical classification algorithm built on Bayesian decision theory [30]. The method was developed from feedforward networks [30]. The classifier takes all class elements into account during processing [31]. A radial basis kernel function computes the distance between class samples. The user can manipulate the spread parameter of the PNN classifier: as the spread approaches zero, the network behaves like a nearest neighbor classifier [32]; as it moves away from zero, the classifier takes several nearby design vectors into account when separating the data [32]. In this study, PNN networks were designed with 500 different spread values ranging from 0.01 to 5 in steps of 0.01. The best-performing network parameters and performance criteria were then calculated.
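A PNN decision can be sketched as a Gaussian (radial basis) kernel sum per class, with the spread parameter controlling the kernel width. This is a simplified Parzen-window illustration of the idea, not the network used in the study.

```python
import math

def pnn_predict(train, x, spread=0.1):
    """Sum a Gaussian kernel over each class's training samples and pick the
    class with the larger average activation. Small spread -> behaves like a
    nearest-neighbor rule; larger spread -> many samples contribute."""
    scores, counts = {}, {}
    for sample, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(sample, x))   # squared distance
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * spread ** 2))
        counts[label] = counts.get(label, 0) + 1
    # average the activations so class sizes do not bias the decision
    return max(scores, key=lambda c: scores[c] / counts[c])
```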

SVMs are among the best machine learning algorithms [33]. They can be used for regression analysis as well as classification [33]. SVMs try to separate datasets from each other with a linear or nonlinear decision boundary; the purpose of the SVM algorithm is to distinguish between the data with minimum error [34]. A Gaussian radial basis function (RBF) kernel was used in this study, and the BoxConstraint limit was set between 1 and 100 to achieve the best performance.

2.4. Ensemble Classifier Powered by the F-Score

In this study, two different ensemble classifiers were developed: the classifier-based FsBoost.V1 and the feature-based FsBoost.V2.

2.4.1. Classifier-Based Ensemble Classifier: FsBoost.V1

The implementation steps of this method are shown in detail in Figure 2. Accordingly, a dataset (A) is first classified with one classifier (kNN). In the second step, the first feature selection is performed, and the data are again classified with the same classifier (kNN). In the third step, the first and second feature selections are performed, and the data are again classified with the same classifier (kNN). Thus, classification is carried out in three different steps but with a single classifier (kNN). These three results are combined to form the kNN ensemble. The same process is repeated with PNN and SVMs. Eventually, the kNN ensemble, the PNN ensemble, and the SVM ensemble are combined into a single ensemble classifier.
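The steps above can be sketched for one base classifier as follows: the same learner is trained on the full feature set, then on the first selection, then on the second, and the three decisions are combined by vote. This is a simplified sketch under the assumption of binary 0/1 labels and pluggable `train_fn`/`predict_fn` hooks; it is not the authors' MATLAB code.

```python
def fsboost_v1_single(train_fn, predict_fn, X, y, X_test, feature_stages):
    """FsBoost.V1 for one base learner: one model per feature-selection stage
    (e.g. full set, 1st F-score selection, 2nd F-score selection), combined by
    majority vote. `feature_stages` is a list of feature-index lists."""
    stage_preds = []
    for cols in feature_stages:
        proj = [[row[c] for c in cols] for row in X]          # keep selected columns
        model = train_fn(proj, y)
        stage_preds.append([predict_fn(model, [row[c] for c in cols])
                            for row in X_test])
    # vote across the stages for each test sample (ties -> class 1, an assumption)
    return [int(sum(v) / len(v) + 0.5) for v in zip(*stage_preds)]
```

Repeating this for kNN, PNN, and SVM and voting over the three per-classifier ensembles gives the Level 2 ensemble of Figure 2.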

2.4.2. Feature-Based Ensemble Classifier: FsBoost.V2

The steps of this method are shown in Figure 3. Accordingly, a dataset (A) is first classified by each classifier (kNN, PNN, and SVMs), and these three classifiers are combined to obtain ensemble classifier 1. In the second step, the first feature selection is performed, and the process of the first step is repeated. In the third step, the first and second feature selections are performed together, and the process is repeated again. Ensemble classifiers 1, 2, and 3 are then combined to create the final ensemble classifier.
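The feature-based variant can be sketched in the same style: at each feature-selection stage, every base learner votes to form a stage ensemble, and the stage ensembles then vote to give the final decision. Again, this is an illustrative sketch with assumed `(train_fn, predict_fn)` learner pairs and 0/1 labels, not the authors' implementation.

```python
def fsboost_v2(learners, X, y, X_test, feature_stages):
    """FsBoost.V2 sketch: `learners` is a list of (train_fn, predict_fn) pairs
    (e.g. kNN, PNN, SVM); `feature_stages` is a list of feature-index lists,
    one per selection stage."""
    stage_outputs = []
    for cols in feature_stages:
        proj = [[row[c] for c in cols] for row in X]
        preds = []
        for train_fn, predict_fn in learners:
            model = train_fn(proj, y)
            preds.append([predict_fn(model, [row[c] for c in cols])
                          for row in X_test])
        # stage ensemble: vote across the base learners
        stage_outputs.append([int(sum(v) / len(v) + 0.5) for v in zip(*preds)])
    # final ensemble: vote across the stage ensembles
    return [int(sum(v) / len(v) + 0.5) for v in zip(*stage_outputs)]
```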

2.5. Performance Evaluation Criteria and Distribution of Data for Classification

Different performance evaluation criteria were used to test the accuracy of the proposed systems: accuracy rate, sensitivity, specificity, kappa value, the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), and k-fold (10-fold) cross-validation accuracy.
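For a two-class problem, most of these criteria follow directly from the confusion-matrix counts; the following sketch computes sensitivity, specificity, accuracy, and Cohen's kappa for 0/1 labels (the function name is an assumption).

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy (%), and Cohen's kappa for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    # expected agreement by chance, used by the kappa statistic
    p_e = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy_pct": 100 * acc,
        "kappa": (acc - p_e) / (1 - p_e),
    }
```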

For classification, the datasets were divided into two groups: training (50%) and test (50%) (Table 3).


Table 3: Training/test distribution of the datasets.

Class       | Training (50%) | Test (50%) | Total
Epilepsy    | 1150           | 1150       | 2300
Nonepilepsy | 1150           | 1150       | 2300
Total       | 2300           | 2300       | 4600

The values are identical for datasets A, B, C, and D.

3. Results

This work aims to develop a new algorithm to improve ensemble classifier performance. We have developed an algorithm (FsBoost) that is similar to the random subspace method but with less workload, faster running, and better performance. The method, based on the F-score feature selection algorithm, has two versions (FsBoost.V1 and FsBoost.V2). The ensemble classifier is created with a single classifier in FsBoost.V1 (Figure 2, Level 1) and with at least three different classifiers in FsBoost.V2 (Figure 3, Level 1). The developed algorithms were tested with four two-class datasets (A, B, C, and D) (Table 3).

According to the FsBoost algorithm, the dataset features were selected twice using the F-score feature selection algorithm. For example, in FsBoost.V1, the dataset (A) is classified with the same classifier after each feature selection (Figure 2, Level 1—kNN1, kNN2, and kNN3) (Table 4). The kNN ensemble was formed by combining the three kNN classifiers (Figure 2, Level 1) (Table 5). This process was repeated with two other classifiers to create the PNN ensemble and the SVM ensemble (Figure 2, Level 1) (Table 5). Then, the kNN ensemble, PNN ensemble, and SVM ensemble were combined to form the final ensemble classifier (Figure 2, Level 2) (Table 5). This process was repeated for each dataset (Tables 4–8).


Table 4: kNN, PNN, and SVM classification results for the A dataset.

k-nearest neighbor classification algorithm

FS              | 0               | 1               | 2
NP              | k = 2           | k = 2           | k = 2
DF              | Euclidean       | Seuclidean      | Euclidean
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.84/1.00/92.17 | 0.87/0.89/93.26 | 0.89/1.00/94.57
N-E (Sen/Spe)   | 1.00/0.84       | 1.00/1.00       | 1.00/0.89
AUC             | 0.92            | 0.87            | 0.94
Kappa           | 0.84            | 0.93            | 0.88
F-measure       | 0.92            | 93.26           | 0.93
10-fold (%)     | 88.85           | 91.22           | 92.54

Probabilistic neural networks

FS              | 0               | 1               | 2
NP (spread)     | 0.11            | 0.11            | 0.21
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.87/1.00/93.26 | 0.94/0.75/93.09 | 0.75/1.00/87.70
N-E (Sen/Spe)   | 1.00/0.87       | 0.92/1.00       | 1.00/0.75
AUC             | 0.93            | 0.86            | 0.94
Kappa           | 0.87            | 0.93            | 0.89
F-measure       | 0.93            | 93.09           | 0.94
10-fold (%)     | 0.11            | 0.11            | 0.21

Support vector machines

FS              | 0               | 1               | 2
NP              | BoxConstraint = 3212
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.99/1.00/99.65 | 0.99/0.98/99.39 | 0.98/0.99/98.83
N-E (Sen/Spe)   | 1.00/0.99       | 1.00/0.99       | 0.99/0.98
AUC             | 1.00            | 0.99            | 0.99
Kappa           | 0.99            | 0.99            | 0.99
F-measure       | 1.00            | 99.39           | 0.99
10-fold (%)     | 99.76           | 99.54           | 99.02

DF: distance function, Sen: sensitivity, Spe: specificity, Acc: accuracy (%), NP: network parameters, FS: feature selection, NF: number of features, EC: ensemble classifier, E: epilepsy, and N-E: nonepilepsy.

Table 5: FsBoost.V1 results.

Classifier      | kNN ensemble (Level 1) | PNN ensemble (Level 1) | SVM ensemble (Level 1) | Ensemble (Level 2)

For A dataset
E (Sen/Spe/Acc) | 0.88/1.00/93.87 | 0.89/1.00/94.26 | 0.99/1.00/99.48 | 0.93/1.00/96.43
N-E (Sen/Spe)   | 1.00/0.88       | 1.00/0.89       | 1.00/0.99       | 1.00/0.93
AUC             | 0.94            | 0.94            | 0.99            | 0.96
Kappa           | 0.88            | 0.89            | 0.99            | 0.93
F-measure       | 0.93            | 0.94            | 0.99            | 0.96

For B dataset
E (Sen/Spe/Acc) | 0.85/1.00/92.48 | 0.76/1.00/87.83 | 0.96/0.99/97.48 | 0.87/1.00/93.52
N-E (Sen/Spe)   | 1.00/0.85       | 1.00/0.76       | 0.99/0.96       | 1.00/0.87
AUC             | 0.92            | 0.88            | 0.97            | 0.94
Kappa           | 0.85            | 0.76            | 0.95            | 0.87
F-measure       | 0.92            | 0.86            | 0.97            | 0.93

For C dataset
E (Sen/Spe/Acc) | 0.84/0.99/91.91 | 0.80/0.99/89.17 | 0.97/0.98/97.43 | 0.88/0.99/93.78
N-E (Sen/Spe)   | 0.99/0.84       | 0.99/0.80       | 0.98/0.97       | 0.99/0.88
AUC             | 0.92            | 0.89            | 0.97            | 0.94
Kappa           | 0.84            | 0.78            | 0.95            | 0.88
F-measure       | 0.91            | 0.88            | 0.97            | 0.93

For D dataset
E (Sen/Spe/Acc) | 0.83/0.97/89.87 | 0.64/0.96/80.13 | 0.95/0.93/94.04 | 0.86/0.96/91.30
N-E (Sen/Spe)   | 0.97/0.83       | 0.96/0.64       | 0.93/0.95       | 0.96/0.86
AUC             | 0.90            | 0.80            | 0.94            | 0.91
Kappa           | 0.80            | 0.60            | 0.88            | 0.83
F-measure       | 0.89            | 0.77            | 0.94            | 0.91

Sen: sensitivity, Spe: specificity, Acc: accuracy (%), E: epilepsy, and N-E: nonepilepsy.

Table 6: kNN, PNN, and SVM classification results for the B dataset.

k-nearest neighbor classification algorithm

FS              | 0               | 1               | 2
NP              | k = 2           | k = 2           | k = 2
DF              | Euclidean       | Euclidean       | Minkowski
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.80/1.00/90.00 | 0.83/0.88/90.96 | 0.88/0.96/92.04
N-E (Sen/Spe)   | 1.00/0.80       | 0.99/0.96       | 0.96/0.88
AUC             | 0.90            | 0.82            | 0.92
Kappa           | 0.80            | 0.90            | 0.84
F-measure       | 0.89            | 90.96           | 0.91
10-fold (%)     | 85.39           | 87.54           | 90.41

Probabilistic neural networks

FS              | 0               | 1               | 2
NP (spread)     | 0.11            | 0.21            | 0.21
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.80/0.98/89.04 | 0.79/0.78/88.91 | 0.78/0.88/82.96
N-E (Sen/Spe)   | 0.98/0.80       | 0.99/0.88       | 0.88/0.78
AUC             | 0.89            | 0.78            | 0.89
Kappa           | 0.78            | 0.88            | 0.78
F-measure       | 0.88            | 88.91           | 0.88
10-fold (%)     | 0.11            | 0.21            | 0.21

Support vector machines

FS              | 0               | 1               | 2
NP              | BoxConstraint = 4884
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.98/0.99/98.13 | 0.97/0.95/97.17 | 0.95/0.96/95.57
N-E (Sen/Spe)   | 0.99/0.98       | 0.97/0.96       | 0.96/0.95
AUC             | 0.98            | 0.94            | 0.97
Kappa           | 0.96            | 0.97            | 0.95
F-measure       | 0.98            | 97.17           | 0.97
10-fold (%)     | 98.48           | 97.37           | 95.41

DF: distance function, Sen: sensitivity, Spe: specificity, Acc: accuracy (%), NP: network parameters, FS: feature selection, NF: number of features, EC: ensemble classifier, E: epilepsy, and N-E: nonepilepsy.

Table 7: kNN, PNN, and SVM classification results for the C dataset.

k-nearest neighbor classification algorithm

FS              | 0               | 1               | 2
NP              | k = 2           | k = 1           | k = 5
DF              | Euclidean       | Minkowski       | Euclidean
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.79/0.97/88.04 | 0.82/0.84/89.17 | 0.84/0.92/88.13
N-E (Sen/Spe)   | 0.97/0.79       | 0.97/0.92       | 0.92/0.84
AUC             | 0.88            | 0.78            | 0.90
Kappa           | 0.76            | 0.89            | 0.80
F-measure       | 0.87            | 89.17           | 0.89
10-fold (%)     | 84.24           | 86.30           | 87.17

Probabilistic neural networks

FS              | 0               | 1               | 2
NP (spread)     | 0.41            | 0.41            | 0.41
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.68/0.97/82.43 | 0.64/0.53/79.52 | 0.53/0.95/74.09
N-E (Sen/Spe)   | 0.97/0.68       | 0.95/0.95       | 0.95/0.53
AUC             | 0.82            | 0.59            | 0.80
Kappa           | 0.65            | 0.76            | 0.60
F-measure       | 0.80            | 79.52           | 0.77
10-fold (%)     | 0.41            | 0.41            | 0.41

Support vector machines

FS              | 0               | 1               | 2
NP              | BoxConstraint = 1212
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.97/0.93/94.61 | 0.93/0.93/93.78 | 0.93/0.92/92.30
N-E (Sen/Spe)   | 0.93/0.97       | 0.94/0.92       | 0.92/0.93
AUC             | 0.95            | 0.88            | 0.94
Kappa           | 0.89            | 0.94            | 0.88
F-measure       | 0.95            | 93.78           | 0.94
10-fold (%)     | 95.43           | 94.54           | 92.33

DF: distance function, Sen: sensitivity, Spe: specificity, Acc: accuracy (%), NP: network parameters, FS: feature selection, NF: number of features, EC: ensemble classifier, E: epilepsy, and N-E: nonepilepsy.

Table 8: kNN, PNN, and SVM classification results for the D dataset.

k-nearest neighbor classification algorithm

FS              | 0               | 1               | 2
NP              | k = 2           | k = 2           | k = 4
DF              | Euclidean       | Euclidean       | Euclidean
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.84/1.00/91.96 | 0.85/0.83/92.30 | 0.83/0.98/90.52
N-E (Sen/Spe)   | 1.00/0.84       | 0.99/0.98       | 0.98/0.83
AUC             | 0.92            | 0.85            | 0.92
Kappa           | 0.84            | 0.92            | 0.85
F-measure       | 0.91            | 92.30           | 0.92
10-fold (%)     | 88.13           | 89.72           | 90.50

Probabilistic neural networks

FS              | 0               | 1               | 2
NP (spread)     | 0.11            | 0.21            | 0.31
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.84/0.98/91.00 | 0.79/0.58/89.17 | 0.58/1.00/78.65
N-E (Sen/Spe)   | 0.98/0.84       | 0.99/1.00       | 1.00/0.58
AUC             | 0.91            | 0.78            | 0.88
Kappa           | 0.82            | 0.88            | 0.76
F-measure       | 0.90            | 89.17           | 0.86
10-fold (%)     | 0.11            | 0.21            | 0.31

Support vector machines

FS              | 0               | 1               | 2
NP              | BoxConstraint = 4153
NF              | 68              | 25              | 15
E (Sen/Spe/Acc) | 0.97/1.00/98.48 | 0.96/0.92/97.22 | 0.92/0.97/94.48
N-E (Sen/Spe)   | 1.00/0.97       | 0.99/0.97       | 0.97/0.92
AUC             | 0.98            | 0.94            | 0.97
Kappa           | 0.97            | 0.97            | 0.95
F-measure       | 0.98            | 97.22           | 0.97
10-fold (%)     | 99.22           | 97.35           | 94.07

DF: distance function, Sen: sensitivity, Spe: specificity, Acc: accuracy (%), NP: network parameters, FS: feature selection, NF: number of features, EC: ensemble classifier, E: epilepsy, and N-E: nonepilepsy.

In FsBoost.V2, the dataset (A) is classified with different classifiers after each feature selection (Figure 3, Level 1—kNN1, PNN1, and SVM1) (Table 4). These three classifiers were combined to create ensemble 1 (Figure 3, Level 1—ensemble 1) (Table 9). Then, ensembles 1, 2, and 3 were combined to form the final ensemble classifier (Figure 3, Level 2) (Table 9). This process was repeated for each dataset (Tables 4–7 and 9). Finally, the FsBoost ensemble algorithm was compared with the ensemble algorithms available in the literature (Table 10).


Table 9: FsBoost.V2 results.

Classifier      | Ensemble 1 (Level 1) | Ensemble 2 (Level 1) | Ensemble 3 (Level 1) | Ensemble (Level 2)
FS              | 0                    | 1                    | 2                    | 0/1/2
NF              | 68                   | 25                   | 15                   | 68/25/15

For A dataset
E (Sen/Spe/Acc) | 0.92/1.00/95.91 | 0.96/1.00/98.00 | 0.91/1.00/95.52 | 0.94/1.00/97.22
N-E (Sen/Spe)   | 1.00/0.92       | 1.00/0.96       | 1.00/0.91       | 1.00/0.94
AUC             | 0.96            | 0.98            | 0.96            | 0.97
Kappa           | 0.92            | 0.96            | 0.91            | 0.94
F-measure       | 0.96            | 0.98            | 0.95            | 0.97

For B dataset
E (Sen/Spe/Acc) | 0.90/1.00/94.83 | 0.89/0.99/94.17 | 0.83/0.99/91.09 | 0.88/1.00/94.04
N-E (Sen/Spe)   | 1.00/0.90       | 0.99/0.89       | 0.99/0.83       | 1.00/0.88
AUC             | 0.95            | 0.94            | 0.91            | 0.94
Kappa           | 0.90            | 0.88            | 0.82            | 0.88
F-measure       | 0.95            | 0.94            | 0.90            | 0.94

For C dataset
E (Sen/Spe/Acc) | 0.87/1.00/93.30 | 0.88/0.99/93.39 | 0.90/0.96/93.35 | 0.90/0.99/94.52
N-E (Sen/Spe)   | 1.00/0.87       | 0.99/0.88       | 0.96/0.90       | 0.99/0.90
AUC             | 0.93            | 0.93            | 0.93            | 0.95
Kappa           | 0.87            | 0.87            | 0.87            | 0.89
F-measure       | 0.93            | 0.93            | 0.93            | 0.94

For D dataset
E (Sen/Spe/Acc) | 0.84/0.97/90.39 | 0.85/0.97/91.09 | 0.86/0.94/89.57 | 0.87/0.97/91.65
N-E (Sen/Spe)   | 0.97/0.84       | 0.97/0.85       | 0.94/0.86       | 0.97/0.87
AUC             | 0.90            | 0.91            | 0.90            | 0.92
Kappa           | 0.81            | 0.82            | 0.79            | 0.83
F-measure       | 0.90            | 0.91            | 0.89            | 0.91

Sen: sensitivity, Spe: specificity, Acc: accuracy (%), FS: feature selection, NF: number of features, EC: ensemble classifier, E: epilepsy, and N-E: nonepilepsy.

Table 10: Comparison with ensemble algorithms in the literature (Rank, Acc) for datasets A–D.

Method                  | A         | B         | C         | D
AdaBoostM1              | 3, 99.04  | 2, 97.22  | 4, 96.48  | 3, 94.35
Bag                     | 2, 99.43  | 4, 96.87  | 2, 96.83  | 1, 94.87
GentleBoost             | 4, 98.96  | 3, 97.04  | 3, 96.78  | 2, 94.65
LogitBoost              | 5, 98.96  | 5, 96.35  | 5, 96.35  | 4, 94.22
LPBoost                 | 6, 98.57  | 6, 96.35  | 7, 94.74  | 7, 92.83
RobustBoost             | 12, 95.74 | 16, 87.78 | 17, 86.35 | 15, 87.17
RUSBoost                | 17, 90.83 | 17, 84.48 | 16, 86.43 | 16, 84.39
Subspace                | 15, 94.20 | 10, 94.15 | 13, 92.02 | 11, 90.54
TotalBoost              | 7, 98.35  | 7, 95.78  | 6, 95.87  | 6, 93.30

FsBoost.V1
Level 1—kNN ensemble    | 16, 93.87 | 13, 92.48 | 14, 91.91 | 13, 89.87
Level 1—PNN ensemble    | 14, 94.26 | 15, 87.83 | 15, 89.17 | 17, 80.13
Level 1—SVM ensemble    | 1, 99.48  | 1, 97.48  | 1, 97.43  | 5, 94.04
Level 2—ensemble        | 9, 97.22  | 11, 94.04 | 8, 94.52  | 8, 91.65

FsBoost.V2
Level 1—ensemble 1      | 11, 95.91 | 8, 94.83  | 12, 93.30 | 12, 90.39
Level 1—ensemble 2      | 8, 98.00  | 9, 94.17  | 10, 93.39 | 10, 91.09
Level 1—ensemble 3      | 13, 95.52 | 14, 91.09 | 11, 93.35 | 14, 89.57
Level 2—ensemble        | 10, 96.43 | 12, 93.52 | 9, 93.78  | 9, 91.30
kNN                     | 92.17     | 91.96     | 90.00     | 88.04
PNN                     | 93.26     | 91.00     | 89.04     | 82.43
SVMs                    | 99.65     | 98.48     | 98.13     | 94.61

Acc: accuracy (%).

Accuracy rates for FsBoost.V1 and FsBoost.V2 are higher than those of the single classifiers (Table 10). The FsBoost algorithm also ranks well compared with other boosting algorithms in the literature (Table 10); the FsBoost.V1 Level 1 SVM ensemble is the best method in this comparison (Table 10, Rank).

Three additional datasets were used to reconfirm the results obtained. The distribution of these datasets is shown in Table 11.


Table 11: Distribution of the additional datasets.

Datasets | NF | Training: Class 1 | Class 2 | Total | Test: Class 1 | Class 2 | Total
Basehock | 89 | 497               | 500     | 997   | 497           | 499     | 996
Madelon  | 95 | 680               | 620     | 1300  | 620           | 680     | 1300
PCMAC    | 87 | 491               | 481     | 972   | 491           | 480     | 971

NF: number of features. Training and test sets each contain 50% of the data.

To compare the FsBoost algorithm with the boosting algorithms, the three additional datasets were reanalyzed. The results are summarized in Table 12. According to these results, the algorithm with the best average performance is the FsBoost.V1 Level 2 ensemble.


Table 12: Comparison on the Basehock, Madelon, and PCMAC datasets (Rank, Acc).

Method                  | Basehock  | Madelon   | PCMAC     | Mean Acc | Mean Rank
AdaBoostM1              | 5, 59.34  | 17, 52.92 | 2, 63.75  | 58.67    | 5
Bag                     | 17, 50.90 | 8, 55.00  | 17, 52.73 | 52.88    | 17
GentleBoost             | 13, 58.23 | 15, 53.54 | 5, 62.82  | 58.20    | 9
LogitBoost              | 12, 58.43 | 14, 53.69 | 3, 63.54  | 58.56    | 6
LPBoost                 | 16, 56.22 | 7, 55.15  | 16, 52.83 | 54.74    | 16
RobustBoost             | 4, 59.54  | 13, 53.77 | 1, 63.95  | 59.09    | 2
RUSBoost                | 15, 56.33 | 6, 56.00  | 12, 56.02 | 56.12    | 14
Subspace                | 11, 58.55 | 16, 53.31 | 15, 54.04 | 55.30    | 15
TotalBoost              | 6, 59.04  | 12, 54.38 | 4, 62.82  | 58.75    | 4

FsBoost.V1
Level 1—kNN ensemble    | 10, 58.73 | 3, 56.85  | 14, 54.27 | 56.62    | 12
Level 1—PNN ensemble    | 2, 59.74  | 2, 56.92  | 11, 58.81 | 58.49    | 8
Level 1—SVM ensemble    | 7, 58.84  | 11, 54.54 | 10, 60.14 | 57.84    | 11
Level 2—ensemble        | 1, 60.14  | 4, 56.62  | 9, 60.56  | 59.10    | 1

FsBoost.V2
Level 1—ensemble 1      | 9, 58.84  | 10, 54.62 | 6, 62.20  | 58.55    | 7
Level 1—ensemble 2      | 8, 58.84  | 9, 54.69  | 7, 60.87  | 58.13    | 10
Level 1—ensemble 3      | 14, 57.33 | 1, 57.38  | 13, 54.89 | 56.54    | 13
Level 2—ensemble        | 3, 59.74  | 5, 56.54  | 8, 60.76  | 59.01    | 3
kNN                     | 59.04     | 54.23     | 54.58     |          |
PNN                     | 57.23     | 53.00     | 58.39     |          |
SVMs                    | 56.22     | 54.69     | 61.89     |          |

Acc: accuracy (%).

4. Discussion and Conclusion

FsBoost is one of the best algorithms developed to date [4–7]. The method has very few steps and therefore provides results quickly. Its high accuracy rate is a distinct advantage. Algorithms with high accuracy and fast results are preferred in medical data classification, so FsBoost may be preferred in this area.

FsBoost involves fewer calculations and steps than the algorithms in the literature [4–7], and its accuracy rate compares very well with other algorithms (Table 10) [4]. Considering these advantages, FsBoost may become a commonly used algorithm.

FsBoost algorithms are also suitable for use in biomedical signal processing, deep learning, and communication [35–37].

FsBoost can be used with three or more classifiers. In addition, FsBoost.V1 is a version that can be used with a single classifier; achieving high performance with a single classifier is a distinct advantage of FsBoost.V1, and the F-score feature selection algorithm creates this advantage. By combining different feature subsets, the same data can be interpreted differently. FsBoost's performance increases when the base classifiers are strong, so it is recommended to use the algorithm with robust classifiers. Ensemble classifiers usually build a strong classifier by combining weak ones; this reliance on strong base classifiers is the weakness of FsBoost.

In conclusion, FsBoost is an alternative method for creating an ensemble classifier. A high-performance ensemble classifier can be created with a powerful base classifier and the F-score feature selection algorithm.

Data Availability

The datasets used in this paper can be downloaded from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/index.html). The authors will also provide the datasets upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This project was funded by the Deanship of Science Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. RG-2-611-40. The authors therefore gratefully acknowledge DSR for the technical and financial support.

References

  1. L. Rokach, “Ensemble-based classifiers,” Artificial Intelligence Review, vol. 33, no. 1-2, pp. 1–39, 2010. View at: Publisher Site | Google Scholar
  2. M. K. Uçar, “Classification performance-based feature selection algorithm for machine learning: P-score,” Innovation and Research in BioMedical Engineering, 2020. View at: Publisher Site | Google Scholar
  3. Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” in Proceedings of the ICML ’96: 13th International Conference on Machine Learning, pp. 148–156, Bari, Italy, July 1996. View at: Google Scholar
  4. R. Rojas, AdaBoost and the Super Bowl of Classifiers A Tutorial Introduction to Adaptive Boosting, 2009.
  5. L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996. View at: Publisher Site | Google Scholar
  6. T. K. Ho, “The random subspace method for constructing decision forests,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–844, 1998. View at: Publisher Site | Google Scholar
  7. B. Peter, “Bagging, boosting and ensemble learning,” in Handbook of Computational Statistics: Concepts and Methods, J. E. Gentle, W. Karl Härdle, and Y. Mori, Eds., pp. 1–38, Springer-Verlag Berlin Heidelberg, Heidelberg, Germany, 2012. View at: Google Scholar
  8. M. Toğaçar, B. Ergen, and Z. Cömert, “A deep feature learning model for pneumonia detection applying a combination of mRMR feature selection and machine learning models,” Innovation and Research in BioMedical Engineering, 2019. View at: Publisher Site | Google Scholar
  9. D. Guan, W. Yuan, Y.-K. Lee, K. Najeebullah, and M. K. Rasel, “A review of ensemble learning based feature selection,” IETE Technical Review, vol. 31, no. 3, pp. 190–198, 2014. View at: Publisher Site | Google Scholar
  10. N. Daldal, K. Polat, and Y. Guo, “Classification of multi-carrier digital modulation signals using NCM clustering based feature-weighting method,” Computers in Industry, vol. 109, p. 45, 2019. View at: Publisher Site | Google Scholar
  11. M. K. Uçar, M. Nour, H. Sindi, and K. Polat, “The effect of training and testing process on machine learning in biomedical datasets,” Mathematical Problems in Engineering, vol. 2020, Article ID 2836236, 17 pages, 2020. View at: Publisher Site | Google Scholar
  12. K. Polat, S. Şahan, H. Kodaz, and S. Güneş, “Breast cancer and liver disorders classification using artificial immune recognition system (AIRS) with performance evaluation by fuzzy resource allocation mechanism,” Expert Systems with Applications, vol. 32, no. 1, pp. 172–183, 2007. View at: Publisher Site | Google Scholar
  13. S. AlMuhaideb and M. E. B. Menai, “An individualized preprocessing for medical data classification,” Procedia Computer Science, vol. 82, pp. 35–42, 2016. View at: Publisher Site | Google Scholar
  14. N. Daldal, M. Nour, and K. Polat, “A novel demodulation structure for quadrate modulation signals using the segmentary neural network modelling,” Applied Acoustics, vol. 164, Article ID 107251, 2020. View at: Publisher Site | Google Scholar
  15. N. Daldal, “A novel demodulation method for quadrate type modulations using a hybrid signal processing method,” Physica A: Statistical Mechanics and Its Applications, vol. 540, Article ID 122836, 2020. View at: Publisher Site | Google Scholar
  16. K. Polat and S. Güneş, “A new feature selection method on classification of medical datasets: kernel F-score feature selection,” Expert Systems with Applications, vol. 36, no. 7, pp. 10367–10373, 2009. View at: Publisher Site | Google Scholar
  17. K. Polat and K. Onur Koc, “Detection of skin diseases from dermoscopy image using the combination of convolutional neural network and one-versus-all,” Journal of Artificial Intelligence and Systems, vol. 2, no. 1, pp. 80–97, 2020. View at: Publisher Site | Google Scholar
  18. J. Cai, J. Luo, S. Wang, and S. Yang, “Feature selection in machine learning: a new perspective,” Neurocomputing, vol. 300, pp. 70–79, 2018. View at: Publisher Site | Google Scholar
  19. A. Noor, “The utilization of E-health in the kingdom of Saudi Arabia,” International Research Journal of Engineering and Technology (IRJET), vol. 6, no. 9, 2019. View at: Google Scholar
  20. A. Noor, L. Wang, B. Ahmed et al., “D4: deep drug-drug interaction discovery and demystification,” Bioinformatics, 2020. View at: Publisher Site | Google Scholar
  21. A. Noor, “Discovering gaps in Saudi education for digital health transformation,” International Journal of Advanced Computer Science and Applications, vol. 10, no. 10, pp. 105–109, 2019. View at: Publisher Site | Google Scholar
  22. A. Noor, A. Assiri, S. Ayvaz, C. Clark, and M. Dumontier, “Drug-drug interaction discovery and demystification using Semantic Web technologies,” Journal of the American Medical Informatics Association, vol. 24, no. 3, pp. 556–564, 2017. View at: Publisher Site | Google Scholar
  23. R. G. Andrzejak, K. Lehnertz, F. Mormann et al., “Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state,” Physical Review E—Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, vol. 64, no. 6, 2001. View at: Publisher Site | Google Scholar
  24. R. G. Andrzejak, K. Lehnertz, C. Rieke, F. Mormann, P. David, and C. E. Elger, UCI Machine Learning Repository: Epileptic Seizure Recognition Data Set, University of California, Oakland, CA, USA, 2001.
  25. M. K. Uçar, M. R. Bozkurt, C. Bilgin, and K. Polat, “Automatic detection of respiratory arrests in OSA patients using PPG and machine learning techniques,” Neural Computing and Applications, vol. 28, no. 10, pp. 2931–2945, 2017. View at: Publisher Site | Google Scholar
  26. L. Rokach, A. Schclar, and E. Itach, “Ensemble methods for multi-label classification,” Expert Systems with Applications, vol. 41, no. 16, pp. 7507–7523, 2014. View at: Publisher Site | Google Scholar
  27. P. Wallisch, M. E. Lusignan, M. D. Benayoun, T. I. Baker, A. S. Dickey, and N. G. Hatsopoulos, “MATLAB for neuroscientists: an introduction to scientific computing in MATLAB,” Journal of Undergraduate Neuroscience Education, vol. 13, no. 1, 2014. View at: Google Scholar
  28. S. Şahan, K. Polat, H. Kodaz, and S. Güneş, “A new hybrid method based on fuzzy-artificial immune system and k-nn algorithm for breast cancer diagnosis,” Computers in Biology and Medicine, vol. 37, no. 3, pp. 415–423, 2007. View at: Publisher Site | Google Scholar
  29. M. Khan, Q. Ding, and W. Perrizo, “k-nearest neighbor classification on spatial data streams using P-trees,” in Advances in Knowledge Discovery and Data Mining, pp. 517–528, Springer Berlin Heidelberg, Heidelberg, Germany, 2002 (Lecture Notes in Computer Science). View at: Google Scholar
  30. A. Khamis, S. Hussain, A. Mohamed, and E. Bizkevelci, “Islanding detection in a distributed generation integrated power system using phase space technique and probabilistic neural network,” Neurocomputing, vol. 148, pp. 587–599, 2015. View at: Publisher Site | Google Scholar
  31. E. Parzen, “On estimation of a probability density function and mode,” The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 1065–1076, 1962. View at: Publisher Site | Google Scholar
  32. P. D. Wasserman, Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York, NY USA, 1993.
  33. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995. View at: Publisher Site | Google Scholar
  34. V. N. Mandhala, V. Sujatha, and B. Renuka Devi, “Scene classification using support vector machines,” in Proceedings of the 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies, pp. 1807–1810, Ramanathapuram, India, May 2014. View at: Publisher Site | Google Scholar
  35. M. Arican, K. Polat, and K. Polat, “Binary particle swarm optimization (BPSO) based channel selection in the EEG signals and its application to speller systems,” Journal of Artificial Intelligence and Systems, vol. 2, no. 1, pp. 27–37, 2020. View at: Publisher Site | Google Scholar
  36. A. Ozdemir and K. Polat, “Deep learning applications for hyperspectral imaging: a systematic review,” Journal of the Institute of Electronics and Computer, vol. 2, no. 1, pp. 39–56, 2020. View at: Publisher Site | Google Scholar
  37. N. Daldal, Z. Cömert, and K. Polat, “Automatic determination of digital modulation types with different noises using convolutional neural network based on time–frequency information,” Applied Soft Computing Journal, vol. 86, Article ID 105834, 2020. View at: Publisher Site | Google Scholar

Copyright © 2020 Adeeb Noor et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

